public class AnalyzeTable extends SparkPlan implements LeafNode, Command, scala.Product, scala.Serializable
Currently it supports only Hive tables, and it updates only the table's size in the Hive metastore.
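For context, this command is what runs when table statistics are collected through `HiveContext`. A minimal usage sketch (Spark 1.x API), assuming an existing `SparkContext` named `sc`, a working Hive metastore, and a hypothetical Hive table `src`:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.hive.HiveContext

// Assumes `sc` is an existing SparkContext and `src` is a Hive table.
val hiveContext = new HiveContext(sc)

// Invoke the command through SQL; `noscan` updates size statistics
// without scanning the table's contents ...
hiveContext.sql("ANALYZE TABLE src COMPUTE STATISTICS noscan")

// ... or call HiveContext.analyze, which plans an AnalyzeTable node and
// writes the table's size statistic back to the Hive metastore.
hiveContext.analyze("src")
```

Because the command only records the table's size, it is cheap to run, but queries relying on richer statistics gain nothing beyond size-based optimizations (such as broadcast-join selection).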
| Constructor and Description |
|---|
| `AnalyzeTable(String tableName)` |
| Modifier and Type | Method and Description |
|---|---|
| `HiveContext` | `hiveContext()` |
| `scala.collection.Seq<scala.runtime.Nothing$>` | `output()` |
| `String` | `tableName()` |
Methods inherited from class org.apache.spark.sql.execution.SparkPlan:
codegenEnabled, execute, executeCollect, makeCopy, outputPartitioning, requiredChildDistribution

Methods inherited from class org.apache.spark.sql.catalyst.plans.QueryPlan:
expressions, inputSet, missingInput, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionDown$1, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionUp$1, outputSet, printSchema, references, schema, schemaString, simpleString, statePrefix, transformAllExpressions, transformExpressions, transformExpressionsDown, transformExpressionsUp

Methods inherited from class org.apache.spark.sql.catalyst.trees.TreeNode:
apply, argString, asCode, children, collect, fastEquals, flatMap, foreach, generateTreeString, getNodeNumbered, map, mapChildren, nodeName, numberedTreeString, otherCopyArgs, stringArgs, toString, transform, transformChildrenDown, transformChildrenUp, transformDown, transformUp, treeString, withNewChildren

Methods inherited from interface org.apache.spark.sql.execution.Command:
execute, executeCollect

Methods inherited from interface scala.Product:
productArity, productElement, productIterator, productPrefix

Methods inherited from interface org.apache.spark.Logging:
initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning

public String tableName()
public HiveContext hiveContext()
public scala.collection.Seq<scala.runtime.Nothing$> output()