Title: Getting Control of Your Data Pipelines with Kafka

Allen Underwood, Microsoft

One of the problems companies face as they ingest and store ever more data is figuring out where the data lives, where it needs to be, and how to move it from A to B quickly and efficiently. Apache Kafka helps solve those problems in a number of ways: by serving as the centralized data pipeline, and by leveraging Kafka Connectors and data streaming for data enrichment. My goal is to show the current state of ETL and what it could look like with Kafka, and to walk through some examples of how it works.
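To give a flavor of the connector approach the abstract mentions, the sketch below shows what a Kafka Connect source connector configuration can look like. This is an illustrative fragment, not material from the talk: the connector class is Confluent's JDBC source connector, and the connection URL, table name, column name, and topic prefix are placeholder assumptions.

```json
{
  "name": "orders-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db-host:5432/sales",
    "table.whitelist": "orders",
    "mode": "incrementing",
    "incrementing.column.name": "order_id",
    "topic.prefix": "sales-"
  }
}
```

Submitted to a Kafka Connect worker's REST API, a configuration along these lines would continuously copy new rows from the source table into a Kafka topic (here `sales-orders`), replacing the kind of hand-written extract job that traditional ETL pipelines rely on.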
Skill Level
All Skill Levels
Tags: kafka, big data, pipelines, etl, real-time data processing, streaming