public class AnalyzeTable extends SparkPlan implements LeafNode, Command, scala.Product, scala.Serializable
Analyzes the given table in the current database to generate statistics, which will be used in query optimizations. Right now, it only supports Hive tables and only updates the size of a Hive table in the Hive metastore.
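In practice this physical plan node is produced by the Hive query planner rather than constructed by hand; the usual entry point is an ANALYZE TABLE statement issued through a HiveContext. A minimal sketch, assuming an already-running SparkContext named `sc` and an existing Hive table named `src` (both hypothetical):

```scala
import org.apache.spark.sql.hive.HiveContext

// Assumes an existing SparkContext `sc` and a Hive table `src`.
val hiveContext = new HiveContext(sc)

// The NOSCAN variant only updates the table's size in the Hive metastore,
// which matches the behaviour described for AnalyzeTable above.
hiveContext.sql("ANALYZE TABLE src COMPUTE STATISTICS noscan")
```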
| Constructor and Description |
| --- |
| AnalyzeTable(String tableName) |
| Modifier and Type | Method and Description |
| --- | --- |
| HiveContext | hiveContext() |
| scala.collection.Seq&lt;scala.runtime.Nothing$&gt; | output() |
| String | tableName() |
Methods inherited from class org.apache.spark.sql.execution.SparkPlan:
codegenEnabled, execute, executeCollect, makeCopy, outputPartitioning, requiredChildDistribution
Methods inherited from class org.apache.spark.sql.catalyst.plans.QueryPlan:
expressions, inputSet, missingInput, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionDown$1, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionUp$1, outputSet, printSchema, references, schema, schemaString, simpleString, statePrefix, transformAllExpressions, transformExpressions, transformExpressionsDown, transformExpressionsUp
Methods inherited from class org.apache.spark.sql.catalyst.trees.TreeNode:
apply, argString, asCode, children, collect, fastEquals, flatMap, foreach, generateTreeString, getNodeNumbered, map, mapChildren, nodeName, numberedTreeString, otherCopyArgs, stringArgs, toString, transform, transformChildrenDown, transformChildrenUp, transformDown, transformUp, treeString, withNewChildren
Methods inherited from interface org.apache.spark.sql.execution.Command:
execute, executeCollect
Methods inherited from interface scala.Product:
productArity, productElement, productIterator, productPrefix
Methods inherited from interface org.apache.spark.Logging:
initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public String tableName()
public HiveContext hiveContext()
public scala.collection.Seq<scala.runtime.Nothing$> output()
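Because output() is typed as Seq&lt;Nothing&gt;, the command defines no output columns and returns no result rows; tableName() simply echoes the constructor argument. A hedged sketch of what a caller would observe, assuming direct construction works outside the planner in this version (in a real query the planner builds this node while translating an ANALYZE TABLE statement):

```scala
import org.apache.spark.sql.hive.execution.AnalyzeTable

// Hypothetical direct construction, for illustration only.
val analyze = new AnalyzeTable("src")

// tableName echoes the constructor argument.
assert(analyze.tableName == "src")

// output is Seq[Nothing]: the command contributes no columns, so it is empty.
assert(analyze.output.isEmpty)
```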