public class DeltaTable
extends Object
implements io.delta.tables.execution.DeltaTableOperations, scala.Serializable
Main class for programmatically interacting with Delta tables. You can create DeltaTable instances for an existing Delta table, for example by its path:

    DeltaTable.forPath(sparkSession, pathToTheDeltaTable)
Modifier and Type | Method and Description
---|---
DeltaTable | alias(String alias): Apply an alias to the DeltaTable.
DeltaTable | as(String alias): Apply an alias to the DeltaTable.
static DeltaColumnBuilder | columnBuilder(org.apache.spark.sql.SparkSession spark, String colName): :: Evolving ::
static DeltaColumnBuilder | columnBuilder(String colName): :: Evolving ::
static DeltaTable | convertToDelta(org.apache.spark.sql.SparkSession spark, String identifier): Create a DeltaTable from the given parquet table.
static DeltaTable | convertToDelta(org.apache.spark.sql.SparkSession spark, String identifier, String partitionSchema): Create a DeltaTable from the given parquet table and partition schema.
static DeltaTable | convertToDelta(org.apache.spark.sql.SparkSession spark, String identifier, org.apache.spark.sql.types.StructType partitionSchema): Create a DeltaTable from the given parquet table and partition schema.
static DeltaTableBuilder | create(): :: Evolving ::
static DeltaTableBuilder | create(org.apache.spark.sql.SparkSession spark): :: Evolving ::
static DeltaTableBuilder | createIfNotExists(): :: Evolving ::
static DeltaTableBuilder | createIfNotExists(org.apache.spark.sql.SparkSession spark): :: Evolving ::
static DeltaTableBuilder | createOrReplace(): :: Evolving ::
static DeltaTableBuilder | createOrReplace(org.apache.spark.sql.SparkSession spark): :: Evolving ::
void | delete(): Delete data from the table.
void | delete(org.apache.spark.sql.Column condition): Delete data from the table that matches the given condition.
void | delete(String condition): Delete data from the table that matches the given condition.
static DeltaTable | forName(org.apache.spark.sql.SparkSession sparkSession, String tableName): Create a DeltaTable using the given table or view name and the given SparkSession.
static DeltaTable | forName(String tableOrViewName): Create a DeltaTable using the given table or view name and the active SparkSession.
static DeltaTable | forPath(org.apache.spark.sql.SparkSession sparkSession, String path): Create a DeltaTable for the data at the given path using the given SparkSession.
static DeltaTable | forPath(String path): Create a DeltaTable for the data at the given path.
void | generate(String mode): Generate a manifest for the given Delta table.
org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> | history(): Get the information of available commits on this table as a Spark DataFrame.
org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> | history(int limit): Get the information of the latest limit commits on this table as a Spark DataFrame.
static boolean | isDeltaTable(org.apache.spark.sql.SparkSession sparkSession, String identifier): Check if the provided identifier string, in this case a file path, is the root of a Delta table using the given SparkSession.
static boolean | isDeltaTable(String identifier): Check if the provided identifier string, in this case a file path, is the root of a Delta table.
DeltaMergeBuilder | merge(org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> source, org.apache.spark.sql.Column condition): Merge data from the source DataFrame based on the given merge condition.
DeltaMergeBuilder | merge(org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> source, String condition): Merge data from the source DataFrame based on the given merge condition.
static DeltaTableBuilder | replace(): :: Evolving ::
static DeltaTableBuilder | replace(org.apache.spark.sql.SparkSession spark): :: Evolving ::
org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> | toDF(): Get a DataFrame (that is, Dataset[Row]) representation of this Delta table.
void | update(org.apache.spark.sql.Column condition, scala.collection.immutable.Map<String,org.apache.spark.sql.Column> set): Update data in the table on the rows that match the given condition based on the rules defined by set.
void | update(org.apache.spark.sql.Column condition, java.util.Map<String,org.apache.spark.sql.Column> set): Update data in the table on the rows that match the given condition based on the rules defined by set.
void | update(scala.collection.immutable.Map<String,org.apache.spark.sql.Column> set): Update rows in the table based on the rules defined by set.
void | update(java.util.Map<String,org.apache.spark.sql.Column> set): Update rows in the table based on the rules defined by set.
void | updateExpr(scala.collection.immutable.Map<String,String> set): Update rows in the table based on the rules defined by set.
void | updateExpr(java.util.Map<String,String> set): Update rows in the table based on the rules defined by set.
void | updateExpr(String condition, scala.collection.immutable.Map<String,String> set): Update data in the table on the rows that match the given condition, applying the rules defined by set.
void | updateExpr(String condition, java.util.Map<String,String> set): Update data in the table on the rows that match the given condition, applying the rules defined by set.
void | upgradeTableProtocol(int readerVersion, int writerVersion): Updates the protocol version of the table to leverage new features.
org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> | vacuum(): Recursively delete files and directories in the table that are not needed by the table for maintaining older versions up to the default retention threshold.
org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> | vacuum(double retentionHours): Recursively delete files and directories in the table that are not needed by the table for maintaining older versions up to the given retention threshold.
Methods inherited from class java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public static DeltaTable convertToDelta(org.apache.spark.sql.SparkSession spark, String identifier, org.apache.spark.sql.types.StructType partitionSchema)

Create a DeltaTable from the given parquet table and partition schema.

Note: Any changes to the table during the conversion process may not result in a consistent state at the end of the conversion. Users should stop any changes to the table before the conversion is started.

An example usage would be

    io.delta.tables.DeltaTable.convertToDelta(
      spark,
      "parquet.`/path`",
      new StructType().add(StructField("key1", LongType)).add(StructField("key2", StringType)))

Parameters:
- spark - (undocumented)
- identifier - (undocumented)
- partitionSchema - (undocumented)
public static DeltaTable convertToDelta(org.apache.spark.sql.SparkSession spark, String identifier, String partitionSchema)

Create a DeltaTable from the given parquet table and partition schema.

Note: Any changes to the table during the conversion process may not result in a consistent state at the end of the conversion. Users should stop any changes to the table before the conversion is started.

An example usage would be

    io.delta.tables.DeltaTable.convertToDelta(
      spark,
      "parquet.`/path`",
      "key1 long, key2 string")

Parameters:
- spark - (undocumented)
- identifier - (undocumented)
- partitionSchema - (undocumented)
public static DeltaTable convertToDelta(org.apache.spark.sql.SparkSession spark, String identifier)

Create a DeltaTable from the given parquet table.

Note: Any changes to the table during the conversion process may not result in a consistent state at the end of the conversion. Users should stop any changes to the table before the conversion is started.

An example would be

    io.delta.tables.DeltaTable.convertToDelta(
      spark,
      "parquet.`/path`")

Parameters:
- spark - (undocumented)
- identifier - (undocumented)
public static DeltaTable forPath(String path)

Create a DeltaTable for the data at the given path.

Note: This uses the active SparkSession in the current thread to read the table data. Hence, this throws an error if the active SparkSession has not been set, that is, SparkSession.getActiveSession() is empty.

Parameters:
- path - (undocumented)
public static DeltaTable forPath(org.apache.spark.sql.SparkSession sparkSession, String path)

Create a DeltaTable for the data at the given path using the given SparkSession.

Parameters:
- sparkSession - (undocumented)
- path - (undocumented)
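A minimal usage sketch; the path below is hypothetical:

    // Load an existing Delta table by path.
    val deltaTable = io.delta.tables.DeltaTable.forPath(spark, "/tmp/delta/events")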
public static DeltaTable forName(String tableOrViewName)

Create a DeltaTable using the given table or view name and the active SparkSession.

Note: This uses the active SparkSession in the current thread to read the table data. Hence, this throws an error if the active SparkSession has not been set, that is, SparkSession.getActiveSession() is empty.

Parameters:
- tableOrViewName - (undocumented)
public static DeltaTable forName(org.apache.spark.sql.SparkSession sparkSession, String tableName)

Create a DeltaTable using the given table or view name and the given SparkSession.

Parameters:
- sparkSession - (undocumented)
- tableName - (undocumented)
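A minimal usage sketch; "events" is a hypothetical table name assumed to be registered in the metastore:

    val deltaTable = io.delta.tables.DeltaTable.forName(spark, "events")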
public static boolean isDeltaTable(org.apache.spark.sql.SparkSession sparkSession, String identifier)

Check if the provided identifier string, in this case a file path, is the root of a Delta table using the given SparkSession.

An example would be

    DeltaTable.isDeltaTable(spark, "path/to/table")

Parameters:
- sparkSession - (undocumented)
- identifier - (undocumented)
public static boolean isDeltaTable(String identifier)

Check if the provided identifier string, in this case a file path, is the root of a Delta table.

Note: This uses the active SparkSession in the current thread to search for the table. Hence, this throws an error if the active SparkSession has not been set, that is, SparkSession.getActiveSession() is empty.

An example would be

    DeltaTable.isDeltaTable("/path/to/table")

Parameters:
- identifier - (undocumented)
public static DeltaTableBuilder create()

Return an instance of DeltaTableBuilder to create a Delta table, erroring if the table exists (the same as SQL CREATE TABLE). Refer to DeltaTableBuilder for more details.

Note: This uses the active SparkSession in the current thread to read the table data. Hence, this throws an error if the active SparkSession has not been set, that is, SparkSession.getActiveSession() is empty.
public static DeltaTableBuilder create(org.apache.spark.sql.SparkSession spark)

Return an instance of DeltaTableBuilder to create a Delta table, erroring if the table exists (the same as SQL CREATE TABLE). Refer to DeltaTableBuilder for more details.

Parameters:
- spark - sparkSession passed by the user
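A minimal sketch of the builder flow; the table name and columns here are hypothetical, and the same pattern applies to createIfNotExists, createOrReplace, and replace below:

    val table = io.delta.tables.DeltaTable.create(spark)
      .tableName("events")              // hypothetical table name
      .addColumn("id", "LONG")          // column name and data type as strings
      .addColumn("eventType", "STRING")
      .execute()                        // returns the created DeltaTable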
public static DeltaTableBuilder createIfNotExists()

Return an instance of DeltaTableBuilder to create a Delta table if it does not exist (the same as SQL CREATE TABLE IF NOT EXISTS). Refer to DeltaTableBuilder for more details.

Note: This uses the active SparkSession in the current thread to read the table data. Hence, this throws an error if the active SparkSession has not been set, that is, SparkSession.getActiveSession() is empty.
public static DeltaTableBuilder createIfNotExists(org.apache.spark.sql.SparkSession spark)

Return an instance of DeltaTableBuilder to create a Delta table if it does not exist (the same as SQL CREATE TABLE IF NOT EXISTS). Refer to DeltaTableBuilder for more details.

Parameters:
- spark - sparkSession passed by the user
public static DeltaTableBuilder replace()

Return an instance of DeltaTableBuilder to replace a Delta table, erroring if the table doesn't exist (the same as SQL REPLACE TABLE). Refer to DeltaTableBuilder for more details.

Note: This uses the active SparkSession in the current thread to read the table data. Hence, this throws an error if the active SparkSession has not been set, that is, SparkSession.getActiveSession() is empty.
public static DeltaTableBuilder replace(org.apache.spark.sql.SparkSession spark)

Return an instance of DeltaTableBuilder to replace a Delta table, erroring if the table doesn't exist (the same as SQL REPLACE TABLE). Refer to DeltaTableBuilder for more details.

Parameters:
- spark - sparkSession passed by the user
public static DeltaTableBuilder createOrReplace()

Return an instance of DeltaTableBuilder to replace a Delta table, or create it if it does not exist (the same as SQL CREATE OR REPLACE TABLE). Refer to DeltaTableBuilder for more details.

Note: This uses the active SparkSession in the current thread to read the table data. Hence, this throws an error if the active SparkSession has not been set, that is, SparkSession.getActiveSession() is empty.
public static DeltaTableBuilder createOrReplace(org.apache.spark.sql.SparkSession spark)

Return an instance of DeltaTableBuilder to replace a Delta table, or create it if it does not exist (the same as SQL CREATE OR REPLACE TABLE). Refer to DeltaTableBuilder for more details.

Parameters:
- spark - sparkSession passed by the user
public static DeltaColumnBuilder columnBuilder(String colName)

Return an instance of DeltaColumnBuilder to specify a column. Refer to DeltaTableBuilder for examples and DeltaColumnBuilder for detailed APIs.

Note: This uses the active SparkSession in the current thread to read the table data. Hence, this throws an error if the active SparkSession has not been set, that is, SparkSession.getActiveSession() is empty.

Parameters:
- colName - string, the column name
public static DeltaColumnBuilder columnBuilder(org.apache.spark.sql.SparkSession spark, String colName)

Return an instance of DeltaColumnBuilder to specify a column. Refer to DeltaTableBuilder for examples and DeltaColumnBuilder for detailed APIs.

Parameters:
- spark - sparkSession passed by the user
- colName - string, the column name
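A minimal sketch combining columnBuilder with the table builder; the column and table names are hypothetical:

    val idCol = io.delta.tables.DeltaTable.columnBuilder(spark, "id")
      .dataType("LONG")
      .nullable(false)
      .build()

    io.delta.tables.DeltaTable.createIfNotExists(spark)
      .tableName("events")
      .addColumn(idCol)
      .execute()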
public DeltaTable as(String alias)

Apply an alias to the DeltaTable. This is similar to Dataset.as(alias) or SQL tableName AS alias.

Parameters:
- alias - (undocumented)
public DeltaTable alias(String alias)

Apply an alias to the DeltaTable. This is similar to Dataset.as(alias) or SQL tableName AS alias.

Parameters:
- alias - (undocumented)
public org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> toDF()

Get a DataFrame (that is, Dataset[Row]) representation of this Delta table.
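For example:

    // Read the current contents of the Delta table as a DataFrame.
    val df = deltaTable.toDF()
    df.show()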
public org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> vacuum(double retentionHours)

Recursively delete files and directories in the table that are not needed by the table for maintaining older versions up to the given retention threshold.

Parameters:
- retentionHours - The retention threshold in hours. Files required by the table for reading versions earlier than this will be preserved and the rest of them will be deleted.
public org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> vacuum()

Recursively delete files and directories in the table that are not needed by the table for maintaining older versions up to the retention threshold.

Note: This will use the default retention period of 7 days.
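A short sketch of both overloads:

    deltaTable.vacuum()    // use the default retention period of 7 days
    deltaTable.vacuum(168) // retention threshold in hours (168 hours = 7 days)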
public org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> history(int limit)

Get the information of the latest limit commits on this table as a Spark DataFrame. The information is in reverse chronological order.

Parameters:
- limit - The number of previous commands to get history for
public org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> history()

Get the information of available commits on this table as a Spark DataFrame. The information is in reverse chronological order.
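For example:

    // Show the five most recent commits, newest first.
    deltaTable.history(5).show()

    // Show the full available history.
    deltaTable.history().show()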
public void generate(String mode)

Generate a manifest for the given Delta table.

Parameters:
- mode - Specifies the mode for the generation of the manifest. The valid modes are as follows (not case sensitive):
  - "symlink_format_manifest": This will generate manifests in symlink format for Presto and Athena read support.

  See the online documentation for more information.
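For example:

    // Generate symlink-format manifests for Presto and Athena read support.
    deltaTable.generate("symlink_format_manifest")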
public void delete(String condition)

Delete data from the table that matches the given condition.

Parameters:
- condition - Boolean SQL expression
public void delete(org.apache.spark.sql.Column condition)

Delete data from the table that matches the given condition.

Parameters:
- condition - Boolean SQL expression
public void delete()

Delete data from the table.
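For example, using a hypothetical date column:

    // Delete using a SQL-formatted predicate string.
    deltaTable.delete("date < '2017-01-01'")

    // Delete using a Column predicate.
    import org.apache.spark.sql.functions._
    deltaTable.delete(col("date") < "2017-01-01")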
public void update(scala.collection.immutable.Map<String,org.apache.spark.sql.Column> set)

Update rows in the table based on the rules defined by set.

Scala example to increment the column data:

    import org.apache.spark.sql.functions._
    deltaTable.update(Map("data" -> (col("data") + 1)))

Parameters:
- set - rules to update a row as a Scala map between target column names and corresponding update expressions as Column objects.
public void update(java.util.Map<String,org.apache.spark.sql.Column> set)

Update rows in the table based on the rules defined by set.

Java example to increment the column data:

    import java.util.HashMap;
    import org.apache.spark.sql.Column;
    import org.apache.spark.sql.functions;

    deltaTable.update(
      new HashMap<String, Column>() {{
        put("data", functions.col("data").plus(1));
      }}
    );

Parameters:
- set - rules to update a row as a Java map between target column names and corresponding update expressions as Column objects.
public void update(org.apache.spark.sql.Column condition, scala.collection.immutable.Map<String,org.apache.spark.sql.Column> set)

Update data in the table on the rows that match the given condition based on the rules defined by set.

Scala example to increment the column data:

    import org.apache.spark.sql.functions._
    deltaTable.update(
      col("date") > "2018-01-01",
      Map("data" -> (col("data") + 1)))

Parameters:
- condition - boolean expression as Column object specifying which rows to update.
- set - rules to update a row as a Scala map between target column names and corresponding update expressions as Column objects.
public void update(org.apache.spark.sql.Column condition, java.util.Map<String,org.apache.spark.sql.Column> set)

Update data in the table on the rows that match the given condition based on the rules defined by set.

Java example to increment the column data:

    import java.util.HashMap;
    import org.apache.spark.sql.Column;
    import org.apache.spark.sql.functions;

    deltaTable.update(
      functions.col("date").gt("2018-01-01"),
      new HashMap<String, Column>() {{
        put("data", functions.col("data").plus(1));
      }}
    );

Parameters:
- condition - boolean expression as Column object specifying which rows to update.
- set - rules to update a row as a Java map between target column names and corresponding update expressions as Column objects.
public void updateExpr(scala.collection.immutable.Map<String,String> set)

Update rows in the table based on the rules defined by set.

Scala example to increment the column data:

    deltaTable.updateExpr(Map("data" -> "data + 1"))

Parameters:
- set - rules to update a row as a Scala map between target column names and corresponding update expressions as SQL formatted strings.
public void updateExpr(java.util.Map<String,String> set)

Update rows in the table based on the rules defined by set.

Java example to increment the column data:

    deltaTable.updateExpr(
      new HashMap<String, String>() {{
        put("data", "data + 1");
      }}
    );

Parameters:
- set - rules to update a row as a Java map between target column names and corresponding update expressions as SQL formatted strings.
public void updateExpr(String condition, scala.collection.immutable.Map<String,String> set)

Update data in the table on the rows that match the given condition, applying the rules defined by set.

Scala example to increment the column data:

    deltaTable.updateExpr(
      "date > '2018-01-01'",
      Map("data" -> "data + 1"))

Parameters:
- condition - boolean expression as SQL formatted string specifying which rows to update.
- set - rules to update a row as a Scala map between target column names and corresponding update expressions as SQL formatted strings.
public void updateExpr(String condition, java.util.Map<String,String> set)

Update data in the table on the rows that match the given condition, applying the rules defined by set.

Java example to increment the column data:

    deltaTable.updateExpr(
      "date > '2018-01-01'",
      new HashMap<String, String>() {{
        put("data", "data + 1");
      }}
    );

Parameters:
- condition - boolean expression as SQL formatted string specifying which rows to update.
- set - rules to update a row as a Java map between target column names and corresponding update expressions as SQL formatted strings.
public DeltaMergeBuilder merge(org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> source, String condition)

Merge data from the source DataFrame based on the given merge condition. This returns a DeltaMergeBuilder object that can be used to specify the update, delete, or insert actions to be performed on rows based on whether the rows matched the condition or not.

See the DeltaMergeBuilder for a full description of this operation and what combinations of update, delete and insert operations are allowed.

Scala example to update a key-value Delta table with new key-values from a source DataFrame:

    deltaTable
      .as("target")
      .merge(
        source.as("source"),
        "target.key = source.key")
      .whenMatched
      .updateExpr(Map(
        "value" -> "source.value"))
      .whenNotMatched
      .insertExpr(Map(
        "key" -> "source.key",
        "value" -> "source.value"))
      .execute()

Java example to update a key-value Delta table with new key-values from a source DataFrame:

    deltaTable
      .as("target")
      .merge(
        source.as("source"),
        "target.key = source.key")
      .whenMatched()
      .updateExpr(
        new HashMap<String, String>() {{
          put("value", "source.value");
        }})
      .whenNotMatched()
      .insertExpr(
        new HashMap<String, String>() {{
          put("key", "source.key");
          put("value", "source.value");
        }})
      .execute();

Parameters:
- source - source Dataframe to be merged.
- condition - boolean expression as SQL formatted string
public DeltaMergeBuilder merge(org.apache.spark.sql.Dataset<org.apache.spark.sql.Row> source, org.apache.spark.sql.Column condition)

Merge data from the source DataFrame based on the given merge condition. This returns a DeltaMergeBuilder object that can be used to specify the update, delete, or insert actions to be performed on rows based on whether the rows matched the condition or not.

See the DeltaMergeBuilder for a full description of this operation and what combinations of update, delete and insert operations are allowed.

Scala example to update a key-value Delta table with new key-values from a source DataFrame:

    deltaTable
      .as("target")
      .merge(
        source.as("source"),
        "target.key = source.key")
      .whenMatched
      .updateExpr(Map(
        "value" -> "source.value"))
      .whenNotMatched
      .insertExpr(Map(
        "key" -> "source.key",
        "value" -> "source.value"))
      .execute()

Java example to update a key-value Delta table with new key-values from a source DataFrame:

    deltaTable
      .as("target")
      .merge(
        source.as("source"),
        "target.key = source.key")
      .whenMatched()
      .updateExpr(
        new HashMap<String, String>() {{
          put("value", "source.value");
        }})
      .whenNotMatched()
      .insertExpr(
        new HashMap<String, String>() {{
          put("key", "source.key");
          put("value", "source.value");
        }})
      .execute();

Parameters:
- source - source Dataframe to be merged.
- condition - boolean expression as a Column object
public void upgradeTableProtocol(int readerVersion, int writerVersion)

Updates the protocol version of the table to leverage new features.

See online documentation and Delta's protocol specification at PROTOCOL.md for more details.

Parameters:
- readerVersion - (undocumented)
- writerVersion - (undocumented)
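For example; the version numbers below are illustrative, so pick the versions required by the features you need:

    // Upgrade the table to reader version 1 and writer version 3.
    deltaTable.upgradeTableProtocol(1, 3)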