Class DataFrameWriterV2<T>

Object
org.apache.spark.sql.DataFrameWriterV2<T>
All Implemented Interfaces:
CreateTableWriter<T>, WriteConfigMethods<CreateTableWriter<T>>

public final class DataFrameWriterV2<T> extends Object implements CreateTableWriter<T>
Class used to write a Dataset to external storage using the v2 API. An instance is obtained by calling Dataset.writeTo(String).

Since:
3.0.0
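
For example (a minimal sketch in Scala; the session settings, the table name catalog.db.events, and the data are illustrative, not part of this API):

    import org.apache.spark.sql.{DataFrame, SparkSession}

    val spark = SparkSession.builder().appName("v2-writer-example").getOrCreate()
    val df: DataFrame = spark.range(10).toDF("id")

    // Dataset.writeTo(String) returns a DataFrameWriterV2 bound to the target table
    df.writeTo("catalog.db.events")
      .using("parquet")
      .createOrReplace()

The sketches under each method below assume the same spark session and df.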
  • Method Details

    • append

      public void append() throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException
      Append the contents of the data frame to the output table.

      If the output table does not exist, this operation will fail with NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.

      Throws:
      org.apache.spark.sql.catalyst.analysis.NoSuchTableException - If the table does not exist
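
      For example (a sketch; catalog.db.events is assumed to be an existing table with a schema compatible with df):

        // Fails with NoSuchTableException if catalog.db.events does not exist
        df.writeTo("catalog.db.events").append()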
    • create

      public void create()
      Description copied from interface: CreateTableWriter
      Create a new table from the contents of the data frame.

      The new table's schema, partition layout, properties, and other configuration will be based on the configuration set on this writer.

      If the output table exists, this operation will fail with TableAlreadyExistsException.

      Specified by:
      create in interface CreateTableWriter<T>
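
      For example (a sketch; the table name and provider are illustrative):

        df.writeTo("catalog.db.events")
          .using("parquet")
          .create() // fails with TableAlreadyExistsException if the table already exists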
    • createOrReplace

      public void createOrReplace()
      Description copied from interface: CreateTableWriter
      Create a new table or replace an existing table with the contents of the data frame.

      The output table's schema, partition layout, properties, and other configuration will be based on the contents of the data frame and the configuration set on this writer. If the table exists, its configuration and data will be replaced.

      Specified by:
      createOrReplace in interface CreateTableWriter<T>
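
      For example (a sketch with an illustrative table name):

        // Creates catalog.db.events, or drops and recreates it if it already exists
        df.writeTo("catalog.db.events")
          .using("parquet")
          .createOrReplace()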
    • option

      public DataFrameWriterV2<T> option(String key, String value)
      Description copied from interface: WriteConfigMethods
      Add a write option.

      Specified by:
      option in interface WriteConfigMethods<T>
      Parameters:
      key - the option key
      value - the option value
      Returns:
      this writer, for chaining
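
      For example (a sketch; option keys are interpreted by the underlying data source, and "compression" is an illustrative key, not defined by this API):

        df.writeTo("catalog.db.events")
          .option("compression", "snappy")
          .append()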
    • options

      public DataFrameWriterV2<T> options(scala.collection.Map<String,String> options)
      Description copied from interface: WriteConfigMethods
      Add write options from a Scala Map.

      Specified by:
      options in interface WriteConfigMethods<T>
      Parameters:
      options - the options to add, as a Scala Map
      Returns:
      this writer, for chaining
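
      For example (a sketch; the option keys shown are illustrative):

        val writeOpts = Map("compression" -> "snappy", "maxRecordsPerFile" -> "1000000")
        df.writeTo("catalog.db.events")
          .options(writeOpts)
          .append()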
    • options

      public DataFrameWriterV2<T> options(Map<String,String> options)
      Description copied from interface: WriteConfigMethods
      Add write options from a Java Map.

      Specified by:
      options in interface WriteConfigMethods<T>
      Parameters:
      options - the options to add, as a Java Map
      Returns:
      this writer, for chaining
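
      For example, the Java Map overload can also be called from Scala (a sketch; the option key is illustrative):

        import java.util.{HashMap => JHashMap}

        val javaOpts = new JHashMap[String, String]()
        javaOpts.put("compression", "snappy")
        df.writeTo("catalog.db.events")
          .options(javaOpts)
          .append()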
    • overwrite

      public void overwrite(Column condition) throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException
      Overwrite rows matching the given filter condition with the contents of the data frame in the output table.

      If the output table does not exist, this operation will fail with NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.

      Parameters:
      condition - the filter condition that selects the rows to be replaced
      Throws:
      org.apache.spark.sql.catalyst.analysis.NoSuchTableException - If the table does not exist
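
      For example (a sketch; the table and the day column are illustrative):

        import org.apache.spark.sql.functions.{col, lit}

        // Replaces only the rows where day = '2019-06-01'; all other rows are kept
        df.writeTo("catalog.db.events")
          .overwrite(col("day") === lit("2019-06-01"))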
    • overwritePartitions

      public void overwritePartitions() throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException
      Overwrite all partitions for which the data frame contains at least one row with the contents of the data frame in the output table.

      This operation is equivalent to Hive's INSERT OVERWRITE ... PARTITION, which replaces partitions dynamically depending on the contents of the data frame.

      If the output table does not exist, this operation will fail with NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.

      Throws:
      org.apache.spark.sql.catalyst.analysis.NoSuchTableException - If the table does not exist
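
      For example (a sketch; catalog.db.events is assumed to be an existing partitioned table):

        // Replaces exactly those partitions that receive at least one row from df,
        // like Hive's dynamic INSERT OVERWRITE ... PARTITION
        df.writeTo("catalog.db.events")
          .overwritePartitions()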
    • partitionedBy

      public CreateTableWriter<T> partitionedBy(Column column, Column... columns)
      Varargs variant of partitionedBy(Column, scala.collection.immutable.Seq<Column>) below.
    • partitionedBy

      public CreateTableWriter<T> partitionedBy(Column column, scala.collection.immutable.Seq<Column> columns)
      Description copied from interface: CreateTableWriter
      Partition the output table created by create, createOrReplace, or replace using the given columns or transforms.

      When specified, the table data will be stored by these values for efficient reads.

      For example, when a table is partitioned by day, it may be stored in a directory layout like:

      • table/day=2019-06-01/
      • table/day=2019-06-02/

      Partitioning is one of the most widely used techniques to optimize physical data layout. It provides a coarse-grained index for skipping unnecessary data reads when queries have predicates on the partitioned columns. In order for partitioning to work well, the number of distinct values in each column should typically be less than tens of thousands.

      Specified by:
      partitionedBy in interface CreateTableWriter<T>
      Parameters:
      column - the first partition column or transform expression
      columns - any additional partition columns or transform expressions
      Returns:
      this writer, for chaining
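
      For example (a sketch; the ts and id columns are illustrative, and whether a given transform is supported depends on the target catalog):

        import org.apache.spark.sql.functions.{bucket, col, days}

        // Plain columns and the transform functions from org.apache.spark.sql.functions
        // (days, months, years, hours, bucket) are both accepted here
        df.writeTo("catalog.db.events")
          .using("parquet")
          .partitionedBy(days(col("ts")), bucket(4, col("id")))
          .create()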
    • replace

      public void replace()
      Description copied from interface: CreateTableWriter
      Replace an existing table with the contents of the data frame.

      The existing table's schema, partition layout, properties, and other configuration will be replaced with the contents of the data frame and the configuration set on this writer.

      If the output table does not exist, this operation will fail with CannotReplaceMissingTableException.

      Specified by:
      replace in interface CreateTableWriter<T>
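
      For example (a sketch with an illustrative table name):

        // Fails with CannotReplaceMissingTableException if the table does not exist
        df.writeTo("catalog.db.events")
          .using("parquet")
          .replace()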
    • tableProperty

      public CreateTableWriter<T> tableProperty(String property, String value)
      Description copied from interface: CreateTableWriter
      Add a table property.
      Specified by:
      tableProperty in interface CreateTableWriter<T>
      Parameters:
      property - the property key
      value - the property value
      Returns:
      this writer, for chaining
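
      For example (a sketch; property keys are interpreted by the target catalog, and "comment" is illustrative):

        df.writeTo("catalog.db.events")
          .using("parquet")
          .tableProperty("comment", "event log written via the v2 API")
          .create()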
    • using

      public CreateTableWriter<T> using(String provider)
      Description copied from interface: CreateTableWriter
      Specifies a provider for the underlying output data source. Spark's default catalog supports "parquet", "json", etc.

      Specified by:
      using in interface CreateTableWriter<T>
      Parameters:
      provider - the provider short name (e.g. "parquet") or a fully qualified data source class name
      Returns:
      this writer, for chaining
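
      For example (a sketch; the table name is illustrative):

        // A built-in short name such as "parquet" or "json" may be passed,
        // or a fully qualified data source class name
        df.writeTo("catalog.db.events")
          .using("json")
          .create()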