Frank Munz solves large-scale data and AI challenges at Databricks. He authored three computer science books, built up technical evangelism for Amazon Web Services in Germany, Austria, and Switzerland, and once upon a time worked as a data scientist with a group that won a Nobel prize.
Frank has presented at top-notch conferences on every continent (except Antarctica, due to its inhospitable climate). His speaking engagements include Devoxx, KubeCon, and JavaOne.
He is renowned for his world-class demos, which often showcase innovative and interactive applications of technology.
He holds a Ph.D. (summa cum laude) in Computer Science from TU Munich, where he worked on supercomputing and brain research.
Databricks is the original creator of OSS projects such as Apache Spark, MLflow, Delta Lake (delta.io), and Delta Sharing, and has recently open-sourced Unity Catalog for data governance. Building on this foundation of open-source innovation, we invite you to join our comprehensive data engineering workshop on the Databricks Lakehouse.
This workshop caters to data engineers seeking hands-on experience and data architects looking to deepen their knowledge.
The workshop is structured to provide a solid understanding of the following fundamental data engineering and streaming concepts:
- Introduction to the Lakehouse Platform
- What is Data Intelligence?
- Getting started with Delta Live Tables (DLT) for data pipelines in SQL and Python (see the sketch after this list)
- Creating data pipelines using DLT with Streaming Tables and Materialized Views
- Mastering Databricks Workflows with advanced control flow and triggers
- Generative AI tooling for Data Engineers: Databricks Assistant and Genie
- Understanding data governance and lineage with Unity Catalog
- Serverless Compute for Data Engineers
- Using AI models as a data engineer
- Overview of the AI Playground and agents
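To give a flavor of the DLT Python syntax covered in the lab, here is a minimal sketch of a two-step pipeline. The table names, storage path, and column (raw_events, cleaned_events, event_id) are illustrative placeholders, not part of the workshop material.

```python
# Minimal Delta Live Tables (DLT) pipeline sketch in Python.
# Table names, the storage path, and the event_id column are placeholders.
# The `spark` session is provided automatically by the Databricks runtime.
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw events ingested incrementally with Auto Loader")
def raw_events():
    # Incrementally load JSON files from cloud storage (path is illustrative).
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/default/raw_events/")
    )

@dlt.table(comment="Cleaned streaming table derived from raw_events")
def cleaned_events():
    # Read the upstream table as a stream and drop rows without an event_id.
    return dlt.read_stream("raw_events").where(col("event_id").isNotNull())
```

The workshop also covers the SQL flavor of the same idea, for example declaring pipelines with CREATE OR REFRESH STREAMING TABLE and CREATE OR REFRESH MATERIALIZED VIEW statements.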
We believe you only become an expert if you work on real problems and gain hands-on experience.
Therefore, this workshop equips you with your own Databricks lab environment and guides you through practical exercises: using GitHub as a data engineer, ingesting data from various sources, building batch and streaming data pipelines, orchestration, AI tooling, and much more.