Over the past couple of years, Scala has become a go-to language for building data processing applications, as evidenced by the emerging ecosystem of frameworks and tools including LinkedIn's Kafka, Twitter's Scalding and our own Snowplow project (https://github.com/snowplow/snowplow).
In this talk, Alex will draw on his experiences at Snowplow to explore how to build rock-solid data pipelines in Scala, highlighting a range of techniques including:
- Translating the Unix stdin/out/err pattern to stream processing
- ""Railway oriented"" programming using the Scalaz Validation
- Validating data structures with JSON Schema
- Visualizing event stream processing errors in Elasticsearch
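The first technique above borrows the Unix convention of keeping good output (stdout) strictly separate from error output (stderr). A minimal, hypothetical sketch of that idea applied to event processing (the names `enrich` and `process` are illustrative, not from Snowplow's codebase):

```scala
// Hypothetical sketch: the Unix stdin/stdout/stderr pattern applied to
// stream processing. Good records flow to one channel ("stdout"), bad
// records to another ("stderr"), and the two are never mixed.
object UnixStylePipeline {
  // A processed event either fails with a message ("stderr") or
  // succeeds with an enriched line ("stdout").
  def enrich(raw: String): Either[String, String] =
    if (raw.trim.isEmpty) Left(s"empty event: '$raw'")
    else Right(raw.trim.toUpperCase)

  // Split an incoming stream into (errors, good output), mirroring
  // the separation of stderr from stdout.
  def process(in: Iterator[String]): (List[String], List[String]) = {
    val (bad, good) = in.map(enrich).toList.partition(_.isLeft)
    (bad.collect { case Left(e) => e }, good.collect { case Right(v) => v })
  }

  def main(args: Array[String]): Unit = {
    val (errs, out) = process(Iterator("page_view", "  ", "click"))
    out.foreach(println)              // the happy path
    errs.foreach(Console.err.println) // bad events, kept for reprocessing
  }
}
```

Keeping failed events on their own channel, rather than dropping them, is what later makes it possible to inspect and replay them (for example, from Elasticsearch).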
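"Railway-oriented" programming treats a pipeline as two parallel tracks: each step either stays on the success track or switches onto the failure track. The talk uses Scalaz's Validation for this; the dependency-free sketch below shows the same shape with the standard library's right-biased `Either` (note that Scalaz's `ValidationNel` additionally accumulates every error, whereas this `Either` version short-circuits at the first failure; all names here are illustrative):

```scala
// Railway-oriented programming sketch using the standard library's Either.
// Each validation step either continues on the success track (Right) or
// diverts onto the failure track (Left).
object Railway {
  final case class Event(userId: String, timestamp: Long)

  def nonEmpty(s: String): Either[String, String] =
    if (s.nonEmpty) Right(s) else Left("userId must not be empty")

  def parseTimestamp(s: String): Either[String, Long] =
    s.toLongOption.toRight(s"not a valid timestamp: $s")

  // The for-comprehension is the "railway junction": the first Left
  // short-circuits the rest of the pipeline.
  def validate(userId: String, ts: String): Either[String, Event] =
    for {
      u <- nonEmpty(userId)
      t <- parseTimestamp(ts)
    } yield Event(u, t)
}
```

For example, `Railway.validate("u1", "oops")` switches to the failure track at the timestamp step, without running anything downstream.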
Building robust data pipelines in Scala
I'm the co-founder and tech lead at Snowplow Analytics, the open source web and event analytics platform (https://github.com/snowplow/snowplow). Snowplow is almost exclusively written in Scala, using a range of technologies including Scalaz, Scalding and Spray. I spend a lot of time working with distributed systems (historically Hadoop, increasingly Kinesis, Kafka et al.) to deliver highly scalable event stream processing. I'm also the author of Unified Log Processing from Manning Publications (http://manning.com/dean/).