Class DataFrameWriterV2<T>
- All Implemented Interfaces:
CreateTableWriter<T>, WriteConfigMethods<CreateTableWriter<T>>

Interface used to write a Dataset to external storage using the v2 API.
- Since:
- 3.0.0
-
Method Summary
void append()
Append the contents of the data frame to the output table.
void create()
Create a new table from the contents of the data frame.
void createOrReplace()
Create a new table or replace an existing table with the contents of the data frame.
CreateTableWriter<T> option(String key, String value)
Add a write option.
CreateTableWriter<T> options(java.util.Map<String,String> options)
Add write options from a Java Map.
CreateTableWriter<T> options(scala.collection.Map<String,String> options)
Add write options from a Scala Map.
void overwrite(Column condition)
Overwrite rows matching the given filter condition with the contents of the data frame in the output table.
void overwritePartitions()
Overwrite all partitions for which the data frame contains at least one row with the contents of the data frame in the output table.
CreateTableWriter<T> partitionedBy(Column column, Column... columns)
Partition the output table created by create, createOrReplace, or replace using the given columns or transforms.
CreateTableWriter<T> partitionedBy(Column column, scala.collection.immutable.Seq<Column> columns)
Partition the output table created by create, createOrReplace, or replace using the given columns or transforms.
void replace()
Replace an existing table with the contents of the data frame.
CreateTableWriter<T> tableProperty(String property, String value)
Add a table property.
CreateTableWriter<T> using(String provider)
Specifies a provider for the underlying output data source.
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.sql.WriteConfigMethods
option, option, option
-
Method Details
-
append
public void append() throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException
Append the contents of the data frame to the output table.
If the output table does not exist, this operation will fail with NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.
- Throws:
org.apache.spark.sql.catalyst.analysis.NoSuchTableException - If the table does not exist
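A minimal Scala sketch of an append, assuming a running SparkSession named spark and an existing target table catalog.db.events (both names are hypothetical):

import spark.implicits._

// Hypothetical data; the target table must already exist,
// otherwise append() throws NoSuchTableException.
val df = Seq((1L, "click"), (2L, "view")).toDF("id", "event")
df.writeTo("catalog.db.events").append()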
-
create
public void create()
Description copied from interface: CreateTableWriter
Create a new table from the contents of the data frame.
The new table's schema, partition layout, properties, and other configuration will be based on the configuration set on this writer.
If the output table exists, this operation will fail with TableAlreadyExistsException.
- Specified by:
create in interface CreateTableWriter<T>
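A sketch of creating a table, using the same hypothetical session and table name; the provider and column name are illustrative:

val df = spark.range(10).toDF("id")
// Fails with TableAlreadyExistsException if catalog.db.events already exists.
df.writeTo("catalog.db.events")
  .using("parquet")
  .create()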
-
createOrReplace
public void createOrReplace()
Description copied from interface: CreateTableWriter
Create a new table or replace an existing table with the contents of the data frame.
The output table's schema, partition layout, properties, and other configuration will be based on the contents of the data frame and the configuration set on this writer. If the table exists, its configuration and data will be replaced.
- Specified by:
createOrReplace in interface CreateTableWriter<T>
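Unlike create(), createOrReplace() succeeds whether or not the table exists; a sketch under the same hypothetical names:

val df = spark.range(10).toDF("id")
// Creates catalog.db.events, or replaces its data and configuration if it exists.
df.writeTo("catalog.db.events").createOrReplace()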
-
option
public CreateTableWriter<T> option(String key, String value)
Description copied from interface: WriteConfigMethods
Add a write option.
- Specified by:
option in interface WriteConfigMethods<T>
- Parameters:
key - (undocumented)
value - (undocumented)
- Returns:
(undocumented)
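A sketch of passing a single write option; treat the key and value as assumptions, since supported options depend on the underlying data source:

val df = spark.range(10).toDF("id")
df.writeTo("catalog.db.events")
  .using("parquet")
  .option("compression", "zstd") // hypothetical key/value; source-specific
  .create()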
-
options
public CreateTableWriter<T> options(scala.collection.Map<String,String> options)
Description copied from interface: WriteConfigMethods
Add write options from a Scala Map.
- Specified by:
options in interface WriteConfigMethods<T>
- Parameters:
options - (undocumented)
- Returns:
(undocumented)
-
options
public CreateTableWriter<T> options(java.util.Map<String,String> options)
Description copied from interface: WriteConfigMethods
Add write options from a Java Map.
- Specified by:
options in interface WriteConfigMethods<T>
- Parameters:
options - (undocumented)
- Returns:
(undocumented)
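Both overloads behave like repeated option() calls; a sketch using the Scala Map overload, again with a hypothetical, source-specific key:

val df = spark.range(10).toDF("id")
df.writeTo("catalog.db.events")
  .options(Map("compression" -> "snappy")) // hypothetical key/value
  .createOrReplace()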
-
overwrite
public void overwrite(Column condition) throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException
Overwrite rows matching the given filter condition with the contents of the data frame in the output table.
If the output table does not exist, this operation will fail with NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.
- Parameters:
condition - (undocumented)
- Throws:
org.apache.spark.sql.catalyst.analysis.NoSuchTableException - If the table does not exist
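A sketch of a conditional overwrite, assuming the hypothetical table has id and day columns that the replacement rows must match:

import org.apache.spark.sql.functions.col
import spark.implicits._

// Rows in the table matching the condition are replaced by the rows of df.
val df = Seq((1L, "2019-06-01"), (2L, "2019-06-01")).toDF("id", "day")
df.writeTo("catalog.db.events").overwrite(col("day") === "2019-06-01")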
-
overwritePartitions
public void overwritePartitions() throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException
Overwrite all partitions for which the data frame contains at least one row with the contents of the data frame in the output table.
This operation is equivalent to Hive's INSERT OVERWRITE ... PARTITION, which replaces partitions dynamically depending on the contents of the data frame.
If the output table does not exist, this operation will fail with NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.
- Throws:
org.apache.spark.sql.catalyst.analysis.NoSuchTableException - If the table does not exist
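A sketch of a dynamic partition overwrite, assuming catalog.db.events is partitioned by a day column (hypothetical):

import spark.implicits._

// Only the partitions present in df (here day=2019-06-01) are replaced;
// all other partitions are left untouched.
val df = Seq((1L, "2019-06-01"), (2L, "2019-06-01")).toDF("id", "day")
df.writeTo("catalog.db.events").overwritePartitions()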
-
partitionedBy
public CreateTableWriter<T> partitionedBy(Column column, Column... columns)
-
partitionedBy
public CreateTableWriter<T> partitionedBy(Column column, scala.collection.immutable.Seq<Column> columns)
Description copied from interface: CreateTableWriter
Partition the output table created by create, createOrReplace, or replace using the given columns or transforms.
When specified, the table data will be stored by these values for efficient reads.
For example, when a table is partitioned by day, it may be stored in a directory layout like:
table/day=2019-06-01/
table/day=2019-06-02/
Partitioning is one of the most widely used techniques to optimize physical data layout. It provides a coarse-grained index for skipping unnecessary data reads when queries have predicates on the partitioned columns. In order for partitioning to work well, the number of distinct values in each column should typically be less than tens of thousands.
- Specified by:
partitionedBy in interface CreateTableWriter<T>
- Parameters:
column - (undocumented)
columns - (undocumented)
- Returns:
(undocumented)
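A sketch combining a date transform with bucketing; the days and bucket helpers in org.apache.spark.sql.functions were added alongside this API in 3.0, and the table and column names are hypothetical:

import org.apache.spark.sql.functions.{bucket, col, days}

val df = spark.range(10).selectExpr("id", "current_timestamp() as ts")
df.writeTo("catalog.db.events")
  .using("parquet")
  .partitionedBy(days(col("ts")), bucket(8, col("id")))
  .create()

Whether transform partitioning is accepted depends on the target catalog; a catalog that only supports identity partitioning will reject the transforms at analysis time.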
-
replace
public void replace()
Description copied from interface: CreateTableWriter
Replace an existing table with the contents of the data frame.
The existing table's schema, partition layout, properties, and other configuration will be replaced with the contents of the data frame and the configuration set on this writer.
If the output table does not exist, this operation will fail with CannotReplaceMissingTableException.
- Specified by:
replace in interface CreateTableWriter<T>
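In contrast to createOrReplace(), replace() requires the table to already exist; a sketch with the same hypothetical names:

val df = spark.range(10).toDF("id")
// Throws CannotReplaceMissingTableException if catalog.db.events does not exist.
df.writeTo("catalog.db.events").replace()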
-
tableProperty
public CreateTableWriter<T> tableProperty(String property, String value)
Description copied from interface: CreateTableWriter
Add a table property.
- Specified by:
tableProperty in interface CreateTableWriter<T>
- Parameters:
property - (undocumented)
value - (undocumented)
- Returns:
(undocumented)
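Table properties are recorded on the created table; the property key below is purely illustrative, since meaningful keys are catalog-specific:

val df = spark.range(10).toDF("id")
df.writeTo("catalog.db.events")
  .tableProperty("owner", "data-eng") // hypothetical property key and value
  .create()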
-
using
public CreateTableWriter<T> using(String provider)
Description copied from interface: CreateTableWriter
Specifies a provider for the underlying output data source. Spark's default catalog supports "parquet", "json", etc.
- Specified by:
using in interface CreateTableWriter<T>
- Parameters:
provider - (undocumented)
- Returns:
(undocumented)
-
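A sketch selecting a provider; "json" is one of the built-in formats named above, and the table name remains hypothetical:

val df = spark.range(10).toDF("id")
df.writeTo("catalog.db.events")
  .using("json")
  .create()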