public class ZippedPartitionsRDD3<A,B,C,V> extends ZippedPartitionsBaseRDD<V>
| Constructor and Description |
| --- |
| `ZippedPartitionsRDD3(SparkContext sc, scala.Function3<scala.collection.Iterator<A>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<V>> f, RDD<A> rdd1, RDD<B> rdd2, RDD<C> rdd3, boolean preservesPartitioning, scala.reflect.ClassTag<A> evidence$5, scala.reflect.ClassTag<B> evidence$6, scala.reflect.ClassTag<C> evidence$7, scala.reflect.ClassTag<V> evidence$8)` |
| Modifier and Type | Method and Description |
| --- | --- |
| `void` | `clearDependencies()` Clears the dependencies of this RDD. |
| `scala.collection.Iterator<V>` | `compute(Partition s, TaskContext context)` :: DeveloperApi :: Implemented by subclasses to compute a given partition. |
| `scala.Function3<scala.collection.Iterator<A>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<V>>` | `f()` |
| `RDD<A>` | `rdd1()` |
| `RDD<B>` | `rdd2()` |
| `RDD<C>` | `rdd3()` |
Methods inherited from class org.apache.spark.rdd.ZippedPartitionsBaseRDD:
getPartitions, getPreferredLocations, partitioner, rdds
Methods inherited from class org.apache.spark.rdd.RDD:
aggregate, cache, cartesian, checkpoint, checkpointData, coalesce, collect, collect, collectPartitions, computeOrReadCheckpoint, conf, context, count, countApprox, countApproxDistinct, countApproxDistinct, countByValue, countByValueApprox, creationSite, dependencies, distinct, distinct, doCheckpoint, doubleRDDToDoubleRDDFunctions, elementClassTag, filter, filterWith, first, flatMap, flatMapWith, fold, foreach, foreachPartition, foreachWith, getCheckpointFile, getCreationSite, getNarrowAncestors, getStorageLevel, glom, groupBy, groupBy, groupBy, id, intersection, intersection, intersection, isCheckpointed, isEmpty, iterator, keyBy, map, mapPartitions, mapPartitionsWithContext, mapPartitionsWithIndex, mapPartitionsWithSplit, mapWith, markCheckpointed, max, min, name, numericRDDToDoubleRDDFunctions, partitions, persist, persist, pipe, pipe, pipe, preferredLocations, randomSplit, rddToAsyncRDDActions, rddToOrderedRDDFunctions, rddToPairRDDFunctions, rddToSequenceFileRDDFunctions, reduce, repartition, retag, retag, sample, saveAsObjectFile, saveAsTextFile, saveAsTextFile, setName, sortBy, sparkContext, subtract, subtract, subtract, take, takeOrdered, takeSample, toArray, toDebugString, toJavaRDD, toLocalIterator, top, toString, treeAggregate, treeReduce, union, unpersist, zip, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipWithIndex, zipWithUniqueId
Methods inherited from interface org.apache.spark.Logging:
initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public ZippedPartitionsRDD3(SparkContext sc, scala.Function3<scala.collection.Iterator<A>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<V>> f, RDD<A> rdd1, RDD<B> rdd2, RDD<C> rdd3, boolean preservesPartitioning, scala.reflect.ClassTag<A> evidence$5, scala.reflect.ClassTag<B> evidence$6, scala.reflect.ClassTag<C> evidence$7, scala.reflect.ClassTag<V> evidence$8)
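Applications normally do not construct this class directly; it is created internally when `RDD.zipPartitions` is called with two other RDDs. A minimal sketch of that usage, assuming an already-running `SparkContext` named `sc` (the RDD names and contents below are illustrative only):

```scala
// Three RDDs with the same number of partitions; zipPartitions over
// three inputs builds a ZippedPartitionsRDD3 under the hood.
val a = sc.parallelize(Seq(1, 2, 3, 4), 2)            // RDD[Int]
val b = sc.parallelize(Seq(10, 20, 30, 40), 2)        // RDD[Int]
val c = sc.parallelize(Seq("x", "y", "z", "w"), 2)    // RDD[String]

// The Function3 receives one iterator per input for each partition
// and must return an iterator of results for that partition.
val zipped = a.zipPartitions(b, c, preservesPartitioning = false) {
  (ia, ib, ic) =>
    ia.zip(ib).zip(ic).map { case ((x, y), s) => s"$s:${x + y}" }
}

zipped.collect()  // e.g. Array("x:11", "y:22", "z:33", "w:44")
```

The `ClassTag` evidence parameters in the constructor are supplied implicitly by the Scala compiler at the `zipPartitions` call site; callers never pass them explicitly.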
public scala.Function3<scala.collection.Iterator<A>,scala.collection.Iterator<B>,scala.collection.Iterator<C>,scala.collection.Iterator<V>> f()
public scala.collection.Iterator<V> compute(Partition s, TaskContext context)
Specified by: compute in class RDD<V>
public void clearDependencies()
Clears the dependencies of this RDD. Subclasses of RDD may override this method; see UnionRDD for an example.
Overrides: clearDependencies in class ZippedPartitionsBaseRDD<V>