public class Exchange extends SparkPlan implements UnaryNode, scala.Product, scala.Serializable
| Constructor and Description |
|---|
| Exchange(org.apache.spark.sql.catalyst.plans.physical.Partitioning newPartitioning, SparkPlan child) |
| Modifier and Type | Method and Description |
|---|---|
| SparkPlan | child() |
| RDD&lt;org.apache.spark.sql.catalyst.expressions.Row&gt; | execute() Runs this query returning the result as an RDD (see the sketch after this table). |
| org.apache.spark.sql.catalyst.plans.physical.Partitioning | newPartitioning() |
| scala.collection.Seq&lt;org.apache.spark.sql.catalyst.expressions.Attribute&gt; | output() |
| org.apache.spark.sql.catalyst.plans.physical.Partitioning | outputPartitioning() Specifies how data is partitioned across different nodes in the cluster. |
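Since Exchange is a physical operator that redistributes rows, it normally appears only when inspecting an executed plan. The following is a minimal sketch, assuming a Spark 1.x SQLContext with a table t (with a column key) already registered; the table name and query are illustrative, not part of this API. It walks the physical plan with the inherited TreeNode foreach and reports any Exchange nodes together with the partitioning they carry.

```scala
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.execution.{Exchange, SparkPlan}

// Sketch: find Exchange operators in the physical plan of a grouped aggregation.
// Assumes a table "t" with a column "key" has been registered on this SQLContext.
def findExchanges(sqlContext: SQLContext): Unit = {
  val physicalPlan: SparkPlan =
    sqlContext.sql("SELECT key, COUNT(*) FROM t GROUP BY key").queryExecution.executedPlan

  // TreeNode.foreach (inherited, see above) visits every node in the plan tree.
  physicalPlan.foreach {
    case e: Exchange => println("Exchange repartitioning by: " + e.newPartitioning)
    case _           => ()
  }
}
```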
Methods inherited from class org.apache.spark.sql.execution.SparkPlan:
codegenEnabled, executeCollect, makeCopy, requiredChildDistribution

Methods inherited from class org.apache.spark.sql.catalyst.plans.QueryPlan:
expressions, inputSet, missingInput, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionDown$1, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionUp$1, outputSet, printSchema, references, schema, schemaString, simpleString, statePrefix, transformAllExpressions, transformExpressions, transformExpressionsDown, transformExpressionsUp

Methods inherited from class org.apache.spark.sql.catalyst.trees.TreeNode:
apply, argString, asCode, children, collect, fastEquals, flatMap, foreach, generateTreeString, getNodeNumbered, map, mapChildren, nodeName, numberedTreeString, otherCopyArgs, stringArgs, toString, transform, transformChildrenDown, transformChildrenUp, transformDown, transformUp, treeString, withNewChildren

Methods inherited from interface scala.Product:
productArity, productElement, productIterator, productPrefix

Methods inherited from interface org.apache.spark.Logging:
initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public Exchange(org.apache.spark.sql.catalyst.plans.physical.Partitioning newPartitioning, SparkPlan child)
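As a usage sketch (an assumption about direct use of this internal API, not something this page prescribes), an Exchange can be constructed by hand to impose a particular partitioning on a child plan; in normal operation the query planner inserts Exchange nodes automatically when a child's output partitioning does not satisfy the required distribution.

```scala
import org.apache.spark.sql.catalyst.plans.physical.SinglePartition
import org.apache.spark.sql.execution.{Exchange, SparkPlan}

// Sketch: wrap a child plan in an Exchange that shuffles all rows into one
// partition. SinglePartition is one of the built-in Partitioning cases.
def collapseToOnePartition(child: SparkPlan): SparkPlan =
  new Exchange(SinglePartition, child)
```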
public org.apache.spark.sql.catalyst.plans.physical.Partitioning newPartitioning()
public SparkPlan child()

Specified by:
child in interface org.apache.spark.sql.catalyst.trees.UnaryNode&lt;SparkPlan&gt;
public org.apache.spark.sql.catalyst.plans.physical.Partitioning outputPartitioning()

Description copied from class: SparkPlan
Specifies how data is partitioned across different nodes in the cluster.

Overrides:
outputPartitioning in class SparkPlan
public scala.collection.Seq&lt;org.apache.spark.sql.catalyst.expressions.Attribute&gt; output()

Specified by:
output in class org.apache.spark.sql.catalyst.plans.QueryPlan&lt;SparkPlan&gt;
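The accessors above suggest the following relationships, shown here as a hedged sketch rather than guarantees documented on this page: an Exchange moves rows between partitions without changing the schema, so its output is expected to mirror the child's attributes, while outputPartitioning reflects the newPartitioning it was constructed with.

```scala
import org.apache.spark.sql.catalyst.plans.physical.Partitioning
import org.apache.spark.sql.execution.{Exchange, SparkPlan}

// Sketch of assumed invariants (not asserted by this page):
def describeExchange(partitioning: Partitioning, child: SparkPlan): Unit = {
  val exchange = new Exchange(partitioning, child)
  // The node reports the partitioning it was constructed with...
  println(exchange.outputPartitioning == partitioning)
  // ...and exposes the same output attributes as its child.
  println(exchange.output == child.output)
}
```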