Migrate workloads to Delta Lake

When you migrate workloads to Delta Lake, you should be aware of the following simplifications and differences compared with the data sources provided by Apache Spark and Apache Hive.

Delta Lake handles the following operations automatically, which you should never perform manually:

Load a single partition

As an optimization, you may sometimes directly load the partition of data you are interested in. For example, spark.read.parquet("/data/date=2017-01-01"). This is unnecessary with Delta Lake, since it can quickly read the list of files from the transaction log to find the relevant ones. If you are interested in a single partition, specify it using a WHERE clause. For example, spark.read.format("delta").load("/data").where("date = '2017-01-01'"). For large tables with many files in the partition, this can be much faster than loading a single partition from a Parquet table (whether with a direct partition path or with a WHERE clause), because listing the files in the directory is often slower than reading the list of files from the transaction log.
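The difference is easiest to see side by side. The following is a minimal PySpark sketch, assuming a Delta table at /data partitioned by date (the path and the date value are placeholders):

```python
from pyspark.sql import SparkSession

# Assumes a session with the Delta Lake extensions already configured.
spark = SparkSession.builder.getOrCreate()

# Parquet pattern: encode the partition value into the path, which
# forces Spark to list the files in that directory.
parquet_df = spark.read.parquet("/data/date=2017-01-01")

# Delta Lake pattern: load the table root and filter with WHERE; the
# file list comes from the transaction log, and the partition filter
# prunes files from other dates without any directory listing.
delta_df = (
    spark.read.format("delta")
    .load("/data")
    .where("date = '2017-01-01'")
)
```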

When you port an existing application to Delta Lake, you should avoid the following operations, which bypass the transaction log:

Manually modify data

Delta Lake uses the transaction log to atomically commit changes to the table. Because the log is the source of truth, files that are written out but not added to the transaction log are not read by Spark. Similarly, even if you manually delete a file, a pointer to the file is still present in the transaction log, so subsequent reads of the table fail when they try to access the missing file.

Instead of manually modifying files stored in a Delta table, always use the commands that are described in this guide.
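As a sketch of what that means in practice, the following removes rows through Delta Lake rather than deleting Parquet files by hand, so the change is committed atomically to the transaction log. The table path and predicate are illustrative:

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Never delete the underlying Parquet files directly: the transaction
# log would still reference them and subsequent reads would fail.

# Instead, issue a DELETE, which writes new files and commits the
# change to the transaction log in a single atomic operation.
spark.sql("DELETE FROM delta.`/data` WHERE date < '2017-01-01'")

# The equivalent through the DeltaTable Python API:
DeltaTable.forPath(spark, "/data").delete("date < '2017-01-01'")
```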

External readers

The data stored in Delta Lake is encoded as Parquet files. However, accessing these files using an external reader is not safe: you may see duplicate or uncommitted data, and reads can fail when someone runs VACUUM.

Note

Because the files are encoded in an open format, you always have the option to move the files outside Delta Lake. At that point, you can run VACUUM RETAIN 0 HOURS and delete the transaction log. This leaves the table's files in a consistent state that can be read by the external reader of your choice.
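A sketch of that decommissioning sequence, again with /data as a placeholder path. A zero-hour retention normally trips a safety check in Delta Lake, so the check has to be disabled explicitly first:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Disable the retention-duration safety check; do this only when you
# are deliberately decommissioning the table.
spark.conf.set(
    "spark.databricks.delta.retentionDurationCheck.enabled", "false"
)

# Remove every file not referenced by the latest table version.
spark.sql("VACUUM delta.`/data` RETAIN 0 HOURS")

# Finally, delete the _delta_log directory under /data. The remaining
# Parquet files form a consistent snapshot that any Parquet reader can use.
```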