Redshift Spectrum to Delta Lake integration

Experimental

This is an experimental integration. Use with caution.

A Delta table can be read by Redshift Spectrum using a manifest file, which is a text file containing the list of data files to read for querying a Delta table. This article describes how to set up a Redshift Spectrum to Delta Lake integration using manifest files and query Delta tables.

Set up a Redshift Spectrum to Delta Lake integration and query Delta tables

You set up a Redshift Spectrum to Delta Lake integration using the following steps.

Step 1: Generate manifests of a Delta table using Apache Spark

Run the generate operation on a Delta table at location pathToDeltaTable:

SQL

GENERATE symlink_format_manifest FOR TABLE delta.`pathToDeltaTable`

Scala

import io.delta.tables._

val deltaTable = DeltaTable.forPath(pathToDeltaTable)
deltaTable.generate("symlink_format_manifest")

Java

import io.delta.tables.*;

DeltaTable deltaTable = DeltaTable.forPath(pathToDeltaTable);
deltaTable.generate("symlink_format_manifest");

Python

from delta.tables import DeltaTable

deltaTable = DeltaTable.forPath(spark, pathToDeltaTable)
deltaTable.generate("symlink_format_manifest")

See Generate for details.

The generate operation writes manifest files to pathToDeltaTable/_symlink_format_manifest/. The files in this directory contain the names of the data files (that is, Parquet files) that should be read to query a snapshot of the Delta table.
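A manifest is simply a text file with one absolute data-file path per line. As a rough illustration, the following Python sketch parses the contents of such a file (the S3 paths are hypothetical, not from a real table):

```python
# Sketch: parse a symlink-format manifest. The manifest is plain text with
# one absolute data-file URI per line; these example paths are hypothetical.
manifest_text = """\
s3://my-bucket/delta-table/part-00000-abc.snappy.parquet
s3://my-bucket/delta-table/part-00001-def.snappy.parquet
"""

def parse_manifest(text):
    """Return the list of Parquet file paths named in a manifest."""
    return [line.strip() for line in text.splitlines() if line.strip()]

data_files = parse_manifest(manifest_text)
print(data_files)
```

This line-per-path layout is what lets Redshift Spectrum read the manifest directory as a plain text table in the next step.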

Note

We recommend that you define the Delta table in a location that Redshift Spectrum can read directly.

Step 2: Configure Redshift Spectrum to read the generated manifests

Run the following commands in your Redshift Spectrum environment.

Define an external table on the manifest files

You define a table that reads the file names specified in the manifest files as follows:

CREATE EXTERNAL TABLE spectrum.my_delta_manifest_table(filepath VARCHAR)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '<pathToDeltaTable>/_symlink_format_manifest/';

Replace <pathToDeltaTable> with the full path to the Delta table. The location points to the manifest subdirectory _symlink_format_manifest.

Define an external table on Parquet files

You define a table that reads all the Parquet files in the Delta table.

CREATE EXTERNAL TABLE spectrum.my_parquet_data_table(
    id bigint,
    part bigint,
    ...)
STORED AS PARQUET
LOCATION '<pathToDeltaTable>';

Querying this Parquet table directly produces incorrect results, because the query reads all the Parquet files under the table's path rather than only those that define a consistent snapshot of the table. You can use the manifest table to restrict the query to a consistent snapshot of the data.

Define view to get correct contents of the Delta table using the manifest table

To read only the rows belonging to the consistent snapshot defined in the generated manifests, filter the Parquet table to keep only the rows that came from the files listed in the manifest table.

SELECT id, part, "$path" as parquet_filename
FROM spectrum.my_parquet_data_table
WHERE parquet_filename IN (SELECT filepath FROM spectrum.my_delta_manifest_table);

This query provides a consistent view of the Delta table. To avoid repeating the filter, you can wrap the query in a Redshift view; note that views referencing external tables must be created WITH NO SCHEMA BINDING.
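To see why the filter restores consistency, here is a minimal Python sketch of the same logic, with in-memory stand-ins for the Spectrum tables (all file names and rows are hypothetical):

```python
# Sketch of the snapshot filter the query performs, using in-memory data
# instead of Spectrum tables. File names and rows are hypothetical.

# Every Parquet file physically present under the table path, including a
# stale file left over from an earlier version of the Delta table.
all_parquet_files = {
    "s3://bucket/table/part-00000-v2.parquet": [(1, "a")],
    "s3://bucket/table/part-00001-v2.parquet": [(2, "b")],
    "s3://bucket/table/part-00000-v1.parquet": [(1, "old")],  # stale version
}

# File paths listed in the manifest, i.e. the consistent snapshot.
manifest_files = {
    "s3://bucket/table/part-00000-v2.parquet",
    "s3://bucket/table/part-00001-v2.parquet",
}

# Equivalent of: WHERE "$path" IN (SELECT filepath FROM manifest_table)
snapshot_rows = [
    row
    for path, rows in all_parquet_files.items()
    if path in manifest_files
    for row in rows
]
print(sorted(snapshot_rows))
```

Reading the Parquet table without the manifest filter would also return the row from the stale v1 file.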

Step 3: Update manifests

When data in a Delta table is updated, you must regenerate the manifests using either of the following approaches:

  • Update explicitly: After all data updates complete, run the generate operation to update the manifests.

  • Update automatically: You can configure a Delta table so that all write operations on the table automatically update the manifests. To enable this automatic mode, set the corresponding table property using the following SQL command.

    ALTER TABLE delta.`pathToDeltaTable` SET TBLPROPERTIES(delta.compatibility.symlinkFormatManifest.enabled=true)
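The difference between the two modes can be sketched with a toy simulation, where plain Python lists stand in for the table's data files and the manifest (no Delta APIs involved):

```python
# Toy simulation of the two manifest-update strategies. Plain Python lists
# stand in for the Delta table's data files and the manifest.

table_files = ["f1.parquet"]  # data files currently in the Delta table
manifest = list(table_files)  # manifest generated once, up front

def regenerate_manifest():
    # Equivalent of running GENERATE symlink_format_manifest again.
    manifest[:] = table_files

def write(new_files, auto_update):
    # A write adds data files; in automatic mode it also refreshes the
    # manifest (delta.compatibility.symlinkFormatManifest.enabled=true).
    table_files.extend(new_files)
    if auto_update:
        regenerate_manifest()

# Explicit mode: the manifest is stale until GENERATE is run again.
write(["f2.parquet"], auto_update=False)
manifest_was_stale = manifest != table_files
regenerate_manifest()

# Automatic mode: every write refreshes the manifest.
write(["f3.parquet"], auto_update=True)
print(manifest_was_stale, manifest == table_files)
```

In explicit mode, Redshift Spectrum queries issued between the write and the next generate operation see the previous snapshot, which is why automatic mode is the safer default for frequently updated tables.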
    

Limitations

The Redshift Spectrum integration has known limitations in its behavior.

Data consistency

Whenever Delta Lake generates updated manifests, it atomically overwrites existing manifest files. Therefore, Redshift Spectrum always sees a consistent view of the data files: either all of the old version's files or all of the new version's files. However, the granularity of this consistency guarantee depends on whether the table is partitioned.

  • Unpartitioned tables: All the file names are written in one manifest file, which is updated atomically. In this case Redshift Spectrum sees full-table snapshot consistency.
  • Partitioned tables: The manifest files are partitioned in the same Hive-partitioning-style directory structure as the original Delta table. This means that each partition's manifest is updated atomically, so Redshift Spectrum sees a consistent view of each partition but not necessarily a consistent view across partitions. Furthermore, because the manifests of different partitions cannot all be updated together atomically, concurrent attempts to generate manifests can leave different partitions with manifests from different table versions.
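The partition-level guarantee can be illustrated with a small Python sketch (the manifest paths and version numbers below are hypothetical):

```python
# Sketch: why partitioned tables get only per-partition consistency. Each
# partition has its own manifest, overwritten atomically on its own; the
# paths and table versions here are hypothetical.

# A concurrent manifest generation has already rewritten the manifest for
# partition part=0 (table version 5) but not yet the one for part=1.
manifests = {
    "_symlink_format_manifest/part=0/manifest": {"table_version": 5},
    "_symlink_format_manifest/part=1/manifest": {"table_version": 4},
}

# Each partition is internally consistent, but a query spanning both
# partitions observes a mix of table versions.
versions_seen = {m["table_version"] for m in manifests.values()}
print(versions_seen)
```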

Depending on what storage system you are using for Delta tables, it is possible to get incorrect results when Redshift Spectrum concurrently queries the manifest while the manifest files are being rewritten. In file system implementations that lack atomic file overwrites, a manifest file may be momentarily unavailable. Hence, use manifests with caution if their updates are likely to coincide with queries from Redshift Spectrum.

Performance

This is an experimental integration and its performance and scalability characteristics have not yet been tested.

Schema evolution

Delta Lake supports schema evolution, and queries on a Delta table automatically use the latest schema, regardless of the schema defined for the table in the Hive metastore. However, Redshift Spectrum uses the schema in its own table definition and will not see an updated schema until that table definition is updated.