Making Apache Spark™ Better with Delta Lake
Databricks

Published on Sep 15, 2020

Join Michael Armbrust, head of the Delta Lake engineering team, to learn how his team built upon Apache Spark to bring ACID transactions and other data reliability technologies from the data warehouse world to cloud data lakes.

Apache Spark is the dominant processing framework for big data. Delta Lake adds reliability to Spark so your analytics and machine learning initiatives have ready access to quality, reliable data. This webinar covers the use of Delta Lake to enhance data reliability for Spark environments.

Topic areas include:
- The role of Apache Spark in big data processing
- Use of data lakes as an important part of the data architecture
- Data lake reliability challenges
- How Delta Lake helps provide reliable data for Spark processing
- Specific improvements that Delta Lake adds
- The ease of adopting Delta Lake for powering your data lake

See full Getting Started with Delta Lake tutorial series here:
https://databricks.com/getting-starte...

Get a preview of the O’Reilly ebook Delta Lake: Up & Running to learn the basics of Delta Lake, the open storage format at the heart of the lakehouse architecture. Download the ebook: https://dbricks.co/3IIcVCg
