```java
public class AnalyzeTable
extends org.apache.spark.sql.catalyst.plans.logical.Command
implements org.apache.spark.sql.execution.RunnableCommand, scala.Product, scala.Serializable
```
Analyzes the given table in the current database to generate statistics, which will be used in query optimizations. Right now, it supports only Hive tables, and it updates only the size of a Hive table in the Hive metastore.
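A minimal usage sketch, assuming a Spark 1.x `HiveContext` and an existing Hive table named `src` (both placeholders). The command is normally produced by parsing an `ANALYZE TABLE ... COMPUTE STATISTICS noscan` statement rather than being constructed directly:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object AnalyzeTableExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("AnalyzeTableExample"))
    val hiveContext = new HiveContext(sc)

    // Parsing this statement plans an AnalyzeTable command; the NOSCAN
    // variant updates only the table's total size in the Hive metastore.
    hiveContext.sql("ANALYZE TABLE src COMPUTE STATISTICS noscan")
  }
}
```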
## Constructor Summary

| Constructor and Description |
|---|
| `AnalyzeTable(String tableName)` |
## Method Summary

| Modifier and Type | Method and Description |
|---|---|
| `scala.collection.Seq<Row>` | `run(SQLContext sqlContext)` |
| `String` | `tableName()` |
### Methods inherited from class org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

childrenResolved, cleanArgs, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$Logging$$log__$eq, org$apache$spark$Logging$$log_, org$apache$spark$sql$catalyst$plans$logical$LogicalPlan$$resolveAsColumn, org$apache$spark$sql$catalyst$plans$logical$LogicalPlan$$resolveAsTableColumn, resolve, resolve, resolve$default$3, resolveChildren, resolveChildren$default$3, resolved, resolveGetField, sameResult, statePrefix, statistics

### Methods inherited from class org.apache.spark.sql.catalyst.plans.QueryPlan

expressions, inputSet, missingInput, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionDown$1, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionUp$1, outputSet, printSchema, references, schema, schemaString, simpleString, transformAllExpressions, transformExpressions, transformExpressionsDown, transformExpressionsUp

### Methods inherited from class org.apache.spark.sql.catalyst.trees.TreeNode

apply, argString, asCode, collect, fastEquals, flatMap, foreach, foreachUp, generateTreeString, getNodeNumbered, makeCopy, map, mapChildren, nodeName, numberedTreeString, origin, otherCopyArgs, stringArgs, toString, transform, transformChildrenDown, transformChildrenUp, transformDown, transformUp, treeString, withNewChildren

### Methods inherited from interface scala.Product

productArity, productElement, productIterator, productPrefix

### Methods inherited from interface org.apache.spark.Logging

initializeIfNecessary, initializeLogging, log_
## Method Detail

### tableName

```java
public String tableName()
```

### run

```java
public scala.collection.Seq<Row> run(SQLContext sqlContext)
```

Specified by: `run` in interface `org.apache.spark.sql.execution.RunnableCommand`
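For illustration only, a hedged sketch of invoking the command object directly. In normal use the query execution engine calls `run` for you; the `SQLContext` passed in must in practice be a `HiveContext`, since the command updates the Hive metastore, and `src` is a placeholder table name:

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.execution.AnalyzeTable

// Assumes a live HiveContext, e.g. the one built in the sketch above.
def analyzeDirectly(hiveContext: HiveContext): Seq[Row] = {
  val cmd = new AnalyzeTable("src") // "src" is a placeholder table name
  // run(...) performs the metastore size update as a side effect; the
  // returned Seq of rows is expected to be empty for this command.
  cmd.run(hiveContext)
}
```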