Redshift Spectrum to Delta Lake integration

Experimental

This is an experimental integration. Use with caution.

A Delta table can be read by Redshift Spectrum using a manifest file, which is a text file containing the list of data files to read for querying the table. This article describes how to set up a Redshift Spectrum to Delta Lake integration using manifest files and query Delta tables.

Set up a Redshift Spectrum to Delta Lake integration and query Delta tables

You set up a Redshift Spectrum to Delta Lake integration using the following steps.

Step 1: Generate manifests of a Delta table using Apache Spark

Using Spark configured with Delta Lake, run any of the following commands on a Delta table at location pathToDeltaTable:

SQL

GENERATE symlink_format_manifest FOR TABLE delta.`pathToDeltaTable`

See Enable SQL commands within Apache Spark for the steps to enable support for SQL commands in Apache Spark.

Scala

import io.delta.tables._

val deltaTable = DeltaTable.forPath(pathToDeltaTable)
deltaTable.generate("symlink_format_manifest")

Java

import io.delta.tables.DeltaTable;

DeltaTable deltaTable = DeltaTable.forPath(pathToDeltaTable);
deltaTable.generate("symlink_format_manifest");

Python

from delta.tables import DeltaTable

deltaTable = DeltaTable.forPath(spark, pathToDeltaTable)
deltaTable.generate("symlink_format_manifest")

The generate command writes manifest files to pathToDeltaTable/_symlink_format_manifest/. The files in this directory contain the names of the data files (that is, Parquet files) that should be read to query a consistent snapshot of the Delta table.
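
For example, for an unpartitioned Delta table stored at the hypothetical location s3://my-bucket/my-delta-table, this directory would contain a single file named manifest, and each of its lines would be the absolute path of one Parquet data file, along the lines of:

s3://my-bucket/my-delta-table/part-00000-xxxxxxxx.snappy.parquet
s3://my-bucket/my-delta-table/part-00001-xxxxxxxx.snappy.parquet

The bucket, table path, and file names shown here are placeholders; the actual contents depend on your table.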

Note

We recommend that you define the Delta table in a location that Redshift Spectrum can read directly.

Step 2: Configure Redshift Spectrum to read the generated manifests

Run the following commands in your Redshift Spectrum environment.
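
These commands assume that an external schema named spectrum already exists. If it does not, you can create one backed by the AWS Glue Data Catalog; the following is a minimal sketch in which the database name and IAM role ARN are placeholders that you must replace with your own values:

CREATE EXTERNAL SCHEMA spectrum
FROM DATA CATALOG
DATABASE 'my_spectrum_db'
IAM_ROLE '<iam-role-arn>'
CREATE EXTERNAL DATABASE IF NOT EXISTS;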

Define an external table on the manifest files

You define a table that reads the file paths listed in the manifest files as follows:

CREATE EXTERNAL TABLE spectrum.my_delta_manifest_table(filepath VARCHAR)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '<pathToDeltaTable>/_symlink_format_manifest/';

Replace <pathToDeltaTable> with the full path to the Delta table. The location points to the manifest subdirectory _symlink_format_manifest.
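
To check that the manifest table is set up correctly, you can query it directly; each returned row should be the full path of one Parquet data file (this assumes the schema and table names used above):

SELECT filepath
FROM spectrum.my_delta_manifest_table
LIMIT 10;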

Define an external table on Parquet files

You define a table that reads all the Parquet files in the Delta table directory as follows:

CREATE EXTERNAL TABLE spectrum.my_parquet_data_table(
    id bigint,
    part bigint,
    ...)
STORED AS PARQUET
LOCATION '<pathToDeltaTable>';

Querying the Delta table through this Parquet table alone produces incorrect results, because the query reads all the Parquet files in the directory rather than only those that define a consistent snapshot of the table. You can use the manifest table to restrict queries to a consistent snapshot of the data.

Query the correct contents of the Delta table using the manifest table

To read only the rows that belong to the consistent snapshot defined in the generated manifests, filter the Parquet table to the rows that came from the files listed in the manifest table:

SELECT id, part, "$path" AS parquet_filename
FROM spectrum.my_parquet_data_table
WHERE "$path" IN (SELECT filepath FROM spectrum.my_delta_manifest_table);

The query provides you with a consistent view of the Delta table.
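
The same filter can be applied in any query against the Parquet table. For example, a hypothetical aggregation over the consistent snapshot (assuming the column names used above) looks like this:

SELECT part, COUNT(*) AS row_count
FROM spectrum.my_parquet_data_table
WHERE "$path" IN (SELECT filepath FROM spectrum.my_delta_manifest_table)
GROUP BY part;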

Step 3: Update manifests

When the data in a Delta table is updated, you must regenerate the manifests; otherwise, Redshift Spectrum continues to read the older snapshot captured in the existing manifests.
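
The simplest approach is to re-run the command from Step 1 after every write that should become visible to Redshift Spectrum. Depending on your Delta Lake version, you may also be able to configure the table to regenerate manifests automatically on every write through a table property; a sketch in Spark SQL:

-- Regenerate manifests manually after a write:
GENERATE symlink_format_manifest FOR TABLE delta.`pathToDeltaTable`

-- Or, if supported by your Delta Lake version, regenerate automatically on every write:
ALTER TABLE delta.`pathToDeltaTable`
SET TBLPROPERTIES (delta.compatibility.symlinkFormatManifest.enabled = true)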

Limitations

The Redshift Spectrum integration has known limitations in its behavior.

Data consistency

Whenever Delta Lake generates updated manifests, it atomically overwrites existing manifest files. Therefore, Redshift Spectrum always sees a consistent view of the data files: it sees either all of the old version files or all of the new version files. However, the granularity of this consistency guarantee depends on whether the table is partitioned.

  • Unpartitioned tables: All the file names are written in one manifest file, which is updated atomically. In this case, Redshift Spectrum sees full table snapshot consistency.
  • Partitioned tables: The manifest files are partitioned in the same Hive-partitioning-style directory structure as the original Delta table (see the example after this list). This means that each partition is updated atomically, and Redshift Spectrum sees a consistent view of each partition but not a consistent view across partitions. Furthermore, since the manifests of all partitions cannot be updated together, concurrent attempts to generate manifests can lead to different partitions having manifests of different versions.
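
For example, for a Delta table partitioned by the column part used earlier, the manifest directory mirrors the partition layout, with one manifest file per partition (the partition values here are placeholders):

<pathToDeltaTable>/_symlink_format_manifest/part=0/manifest
<pathToDeltaTable>/_symlink_format_manifest/part=1/manifest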

Depending on what storage system you are using for Delta tables, it is possible to get incorrect results when Redshift Spectrum concurrently queries the manifest while the manifest files are being rewritten. In file system implementations that lack atomic file overwrites, a manifest file may be momentarily unavailable. Hence, use manifests with caution if their updates are likely to coincide with queries from Redshift Spectrum.

Performance

This is an experimental integration and its performance and scalability characteristics have not yet been tested.

Schema evolution

Delta Lake supports schema evolution, and queries on a Delta table automatically use the latest schema regardless of the schema defined in the table in the Hive metastore. However, Redshift Spectrum uses the schema defined in its table definition and will not query with the updated schema until the table definition is updated to the new schema.
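
One way to pick up such a schema change is to recreate the external Parquet table definition with the updated columns. The following sketch assumes a hypothetical new column new_col of type bigint was added to the Delta table; the elided columns stay the same as in the original definition:

DROP TABLE spectrum.my_parquet_data_table;

CREATE EXTERNAL TABLE spectrum.my_parquet_data_table(
    id bigint,
    part bigint,
    new_col bigint,
    ...)
STORED AS PARQUET
LOCATION '<pathToDeltaTable>';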