Welcome to the Delta Lake documentation
This is the documentation site for Delta Lake. It covers the following topics:
- Introduction
- Quickstart (a minimal example follows this list)
- Table batch reads and writes
- Table streaming reads and writes
- Table deletes, updates, and merges
- Change data feed
- Table utility commands
  - Remove files no longer referenced by a Delta table
  - Retrieve Delta table history
  - Retrieve Delta table details
  - Generate a manifest file
  - Convert a Parquet table to a Delta table
  - Convert an Iceberg table to a Delta table
  - Convert a Delta table to a Parquet table
  - Restore a Delta table to an earlier state
  - Shallow clone a Delta table
  - Clone a Parquet or Iceberg table to Delta
- Constraints
- Table protocol versioning
- Delta Lake APIs
- Storage configuration
- Concurrency control
- Access Delta tables from external data processing engines
- Migration guide
- Best practices
- Frequently asked questions (FAQ)
  - What is Delta Lake?
  - How is Delta Lake related to Apache Spark?
  - What format does Delta Lake use to store data?
  - How can I read and write data with Delta Lake?
  - Where does Delta Lake store the data?
  - Can I copy my Delta Lake table to another location?
  - Can I stream data directly into and from Delta tables?
  - Does Delta Lake support writes or reads using the Spark Streaming DStream API?
  - When I use Delta Lake, will I be able to port my code to other Spark platforms easily?
  - Does Delta Lake support multi-table transactions?
  - How can I change the type of a column?
- Releases
- Delta Lake resources
- Optimizations
- Delta table properties reference
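
For orientation, here is a minimal sketch of the Quickstart workflow: creating a SparkSession with Delta Lake enabled, writing a small table, and reading it back. It assumes PySpark with the delta-spark pip package installed; the path /tmp/delta-table is purely illustrative.

```python
import pyspark
from delta import configure_spark_with_delta_pip

# Configure a SparkSession with the Delta Lake SQL extension and catalog.
builder = (
    pyspark.sql.SparkSession.builder.appName("quickstart")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Write a small DataFrame as a Delta table, then read it back.
spark.range(0, 5).write.format("delta").save("/tmp/delta-table")  # illustrative path
spark.read.format("delta").load("/tmp/delta-table").show()
```

The Quickstart and Table batch reads and writes sections linked above cover these steps, along with other build-tool and engine setups, in full.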