Snowflake to Delta Lake integration
Experimental
This is an experimental integration. Use with caution.
A Delta table can be read by Snowflake using a manifest file, a text file that lists the data files to read when querying the table. This article describes how to set up a Snowflake to Delta Lake integration using manifest files and how to query Delta tables.
Set up a Snowflake to Delta Lake integration and query Delta tables
You set up a Snowflake to Delta Lake integration using the following steps.
Step 1: Generate manifests of a Delta table using Apache Spark
Using Spark configured with Delta Lake, run any of the following commands on a Delta table at location pathToDeltaTable:
SQL
GENERATE symlink_format_manifest FOR TABLE delta.`pathToDeltaTable`
See Enable SQL commands within Apache Spark for the steps to enable support for SQL commands in Apache Spark.
Scala
import io.delta.tables._
val deltaTable = DeltaTable.forPath(pathToDeltaTable)
deltaTable.generate("symlink_format_manifest")
Java
import io.delta.tables.DeltaTable;
DeltaTable deltaTable = DeltaTable.forPath(pathToDeltaTable);
deltaTable.generate("symlink_format_manifest");
Python
from delta.tables import DeltaTable
# assumes an existing SparkSession named `spark`
deltaTable = DeltaTable.forPath(spark, pathToDeltaTable)
deltaTable.generate("symlink_format_manifest")
The generate command generates manifest files at pathToDeltaTable/_symlink_format_manifest/. In other words, the files in this directory contain the names of the data files (that is, Parquet files) that should be read to query a snapshot of the Delta table.
Note
We recommend that you define the Delta table in a location that Snowflake can read directly.
Step 2: Configure Snowflake to read the generated manifests
Run the following commands in your Snowflake environment.
Define an external table on the manifest files
To define an external table in Snowflake, you must first define an external stage that points to the Delta table. In Snowflake, run the following.
CREATE OR REPLACE STAGE my_staged_table URL = '<pathToDeltaTable>';
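For example, if the Delta table lives in an S3 bucket that Snowflake can reach through an existing storage integration, the stage definition might look like the following sketch (the bucket path and the integration name my_storage_int are placeholders):
-- hypothetical bucket path and storage integration name
CREATE OR REPLACE STAGE my_staged_table
  URL = 's3://my-bucket/delta/my-delta-table'
  STORAGE_INTEGRATION = my_storage_int;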
Replace <pathToDeltaTable> with the full path to the Delta table. Using this stage, you can define a table that reads the file names specified in the manifest files as follows:
CREATE OR REPLACE EXTERNAL TABLE my_delta_manifest_table(
filename VARCHAR AS split_part(VALUE:c1, '/', -1)
)
WITH LOCATION = @my_staged_table/_symlink_format_manifest/
FILE_FORMAT = (TYPE = CSV)
PATTERN = '.*[/]manifest'
AUTO_REFRESH = true;
Note
In this query:
- The location is the manifest subdirectory.
- The filename column contains the names of the files (not the full paths) defined in the manifest.
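Depending on how the stage and its cloud event notifications are configured, AUTO_REFRESH may not pick up newly written manifest files automatically. In that case, a manual refresh followed by a quick check of the table contents can help; a minimal sketch:
ALTER EXTERNAL TABLE my_delta_manifest_table REFRESH;
SELECT filename FROM my_delta_manifest_table;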
Define an external table on Parquet files
You can define a table that reads all the Parquet files in the Delta table.
CREATE OR REPLACE EXTERNAL TABLE my_parquet_data_table(
id INT AS (VALUE:id::INT),
part INT AS (VALUE:part::INT),
...
parquet_filename VARCHAR AS split_part(metadata$filename, '/', -1)
)
WITH LOCATION = @my_staged_table/
FILE_FORMAT = (TYPE = PARQUET)
PATTERN = '.*[/]part-[^/]*[.]parquet'
AUTO_REFRESH = true;
Note
In this query:
- The location is the Delta table path.
- The parquet_filename column contains the name of the file that contains each row of the table.
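As a quick illustration of the parquet_filename column, you can count the rows contributed by each Parquet file (a sketch; note that this table on its own does not represent a consistent snapshot, as explained below):
SELECT parquet_filename, COUNT(*) AS num_rows
FROM my_parquet_data_table
GROUP BY parquet_filename;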
If your Delta table is partitioned, then you will have to explicitly extract the partition values in the table definition. For example, if the table was partitioned by a single integer column named part, you can extract the values as follows:
CREATE OR REPLACE EXTERNAL TABLE my_parquet_data_partitioned_table(
id INT AS (VALUE:id::INT),
part INT AS (
nullif(
regexp_replace(metadata$filename, '.*part\\=(.*)\\/.*', '\\1'),
'__HIVE_DEFAULT_PARTITION__'
)::INT
),
...
parquet_filename VARCHAR AS split_part(metadata$filename, '/', -1)
)
WITH LOCATION = @my_staged_partitioned_table/
FILE_FORMAT = (TYPE = PARQUET)
PATTERN = '.*[/]part-[^/]*[.]parquet'
AUTO_REFRESH = true;
The regular expression is used to extract the partition value for the column part.
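To check the expression, you can evaluate it against a sample path; the path below is hypothetical and only illustrates the Hive-style part=<value> directory layout:
-- hypothetical file path, used only to illustrate the extraction
SELECT regexp_replace(
  'mytable/part=5/part-00000-c000.snappy.parquet',
  '.*part\\=(.*)\\/.*',
  '\\1'
)::INT AS extracted_part;  -- returns 5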
Querying the Delta table as this Parquet table will produce incorrect results, because this query will read all the Parquet files in the table rather than only those that define a consistent snapshot of the table. You can use the manifest table to get a consistent snapshot of the data.
Define a view to get the correct contents of the Delta table using the manifest table
To read only the rows belonging to the consistent snapshot defined in the generated manifests, you can apply a filter to keep only the rows in the Parquet table that came from the files defined in the manifest table.
CREATE OR REPLACE VIEW my_delta_table AS
SELECT id, part, ...
FROM my_parquet_data_table
WHERE parquet_filename IN (SELECT filename FROM my_delta_manifest_table);
Querying this view will provide you with a consistent view of the Delta table.
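For example, a query such as the following reads only the data files listed in the manifests (a sketch that assumes the id and part columns defined above):
SELECT part, COUNT(*) AS num_rows
FROM my_delta_table
GROUP BY part
ORDER BY part;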
Limitations
The Snowflake integration has known limitations in its behavior.
Data consistency
Whenever Delta Lake generates updated manifests, it atomically overwrites existing manifest files. Therefore, Snowflake will always see a consistent view of the data files; it will see all of the old version files or all of the new version files. However, the granularity of the consistency guarantees depends on whether the table is partitioned or not.
- Unpartitioned tables: All the file names are written in one manifest file, which is updated atomically. In this case, Snowflake will see full table snapshot consistency.
- Partitioned tables: The manifest files are partitioned in the same Hive-partitioning-style directory structure as the original Delta table. This means that each partition's manifest is updated atomically, so Snowflake will see a consistent view of each partition but not a consistent view across partitions. Furthermore, because the manifests of all partitions cannot be updated together, concurrent attempts to generate manifests can lead to different partitions having manifests of different versions.
Depending on what storage system you are using for Delta tables, it is possible to get incorrect results when Snowflake concurrently queries the manifest while the manifest files are being rewritten. In file system implementations that lack atomic file overwrites, a manifest file may be momentarily unavailable. Hence, use manifests with caution if their updates are likely to coincide with queries from Snowflake.
Performance
This is an experimental integration and its performance and scalability characteristics have not yet been tested.
Schema evolution
Delta Lake supports schema evolution, and queries on a Delta table automatically use the latest schema, regardless of the schema defined in the table in the Hive metastore. However, Snowflake uses the schema defined in its external table definition and will not query with the updated schema until the table definition is updated to the new schema.
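For example, if a column were added to the Delta table (the column name city below is purely hypothetical), you would re-create the external table definition, and any dependent view, so that Snowflake can see it. A sketch:
CREATE OR REPLACE EXTERNAL TABLE my_parquet_data_table(
  id INT AS (VALUE:id::INT),
  part INT AS (VALUE:part::INT),
  city VARCHAR AS (VALUE:city::VARCHAR),  -- hypothetical new column
  parquet_filename VARCHAR AS split_part(metadata$filename, '/', -1)
)
WITH LOCATION = @my_staged_table/
FILE_FORMAT = (TYPE = PARQUET)
PATTERN = '.*[/]part-[^/]*[.]parquet'
AUTO_REFRESH = true;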