SkillsCast

Staging reactive data pipelines using Kafka as the backbone

7th November 2016 in London at CodeNode

There are 35 other SkillsCasts available from µCon 2016: The Microservices Conference


The Cake Solutions team builds highly distributed and scalable systems using Kafka as their core data pipeline.

Kafka has become the de facto platform for reliable and scalable distribution of high volumes of data. As a developer, however, it can be challenging to figure out the best architecture and consumption patterns for interacting with Kafka while still delivering quality-of-service requirements such as high availability and delivery guarantees. It can also be difficult to understand the various streaming patterns and messaging topologies that Kafka supports.

In this talk, Jaakko will share the patterns the team has successfully employed in production, and provide tools and guidelines to help other developers choose the most appropriate fit for a given data-processing problem. The key points of the presentation are:

- patterns for building reactive data pipelines
- high availability and message delivery guarantees
- clustering of application consumers
- topic partition topology
- offset commit patterns (see the sketch below)
- performance benchmarks
- a custom reactive, asynchronous, non-blocking Kafka driver
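To make the offset commit point concrete, here is a minimal sketch of the at-least-once pattern, using the standard Kafka consumer API rather than the custom driver the talk covers: disable auto-commit, process each batch, then commit offsets manually. The broker address, group id, and topic name are illustrative placeholders, not details from the talk.

    import java.time.Duration
    import java.util.Properties
    import scala.jdk.CollectionConverters._
    import org.apache.kafka.clients.consumer.KafkaConsumer
    import org.apache.kafka.common.serialization.StringDeserializer

    object AtLeastOnceConsumer extends App {
      val props = new Properties()
      props.put("bootstrap.servers", "localhost:9092") // placeholder broker address
      // Consumers sharing a group.id form a cluster: Kafka balances the
      // topic's partitions across them -- the "clustering of application
      // consumers" idea from the key points above.
      props.put("group.id", "pipeline-example")        // placeholder group id
      props.put("enable.auto.commit", "false")         // commit offsets manually
      props.put("key.deserializer", classOf[StringDeserializer].getName)
      props.put("value.deserializer", classOf[StringDeserializer].getName)

      val consumer = new KafkaConsumer[String, String](props)
      consumer.subscribe(List("events").asJava)        // placeholder topic name

      try {
        while (true) {
          val records = consumer.poll(Duration.ofMillis(500))
          records.asScala.foreach { record =>
            // Process the message *before* committing its offset. If the
            // process crashes here, uncommitted records are redelivered on
            // restart: at-least-once delivery, with possible duplicates.
            println(s"${record.topic}/${record.partition}@${record.offset}: ${record.value}")
          }
          consumer.commitSync() // commit only after processing succeeds
        }
      } finally consumer.close()
    }

Committing after processing trades duplicate deliveries for no losses; committing before processing would invert that trade, giving at-most-once delivery instead.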


About the Speaker


Jaakko Pallari

I work as a software engineer at Cake Solutions Ltd. I have a passion for robust systems, functional programming, and free and open source software. I started my career working as a web developer, and these days I'm responsible for developing backend solutions for global IoT platforms using Scala and the SMACK (Spark, Mesos, Akka, Cassandra, and Kafka) stack.
