Over the past couple of years, Scala has become a go-to language for building data processing applications, as evidenced by the emerging ecosystem of frameworks and tools including LinkedIn's Kafka, Twitter's Scalding and our own Snowplow project (https://github.com/snowplow/snowplow).
In this talk, Alex will draw on his experiences at Snowplow to explore how to build rock-solid data pipelines in Scala, highlighting a range of techniques including:
- Translating the Unix stdin/out/err pattern to stream processing
- ""Railway oriented"" programming using the Scalaz Validation
- Validating data structures with JSON Schema
- Visualizing event stream processing errors in Elasticsearch
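To illustrate the railway-oriented style named above, here is a dependency-free sketch of the shape Scalaz's ValidationNel provides: each check either stays on the success track or switches to the failure track, and failures from independent checks are accumulated rather than short-circuited. The Event fields, validators, and tab-separated input format are hypothetical, and this minimal Validated type stands in for the real Scalaz one (requires Scala 2.13+ for toLongOption):

```scala
// Minimal stand-in for Scalaz's ValidationNel: a two-track ("railway") result
// that accumulates errors instead of stopping at the first one.
sealed trait Validated[+E, +A]
case class Success[A](a: A) extends Validated[Nothing, A]
case class Failure[E](errors: List[E]) extends Validated[E, Nothing]

object Validated {
  // Applicative-style combination: run both checks, merge any errors.
  def map2[E, A, B, C](va: Validated[E, A], vb: Validated[E, B])(f: (A, B) => C): Validated[E, C] =
    (va, vb) match {
      case (Success(a), Success(b))   => Success(f(a, b))
      case (Failure(e1), Failure(e2)) => Failure(e1 ++ e2)
      case (Failure(e), _)            => Failure(e)
      case (_, Failure(e))            => Failure(e)
    }
}

// Hypothetical raw event with two fields to validate.
case class Event(userId: String, timestamp: Long)

def validateUser(raw: String): Validated[String, String] =
  if (raw.nonEmpty) Success(raw) else Failure(List("user id is empty"))

def validateTimestamp(raw: String): Validated[String, Long] =
  raw.toLongOption match {
    case Some(ts) => Success(ts)
    case None     => Failure(List(s"not a timestamp: '$raw'"))
  }

// Both checks run; a bad user id AND a bad timestamp yield two errors.
def validateEvent(user: String, ts: String): Validated[String, Event] =
  Validated.map2(validateUser(user), validateTimestamp(ts))(Event.apply)

// Tying in the Unix stdin/out/err pattern: good events to stdout,
// failures to stderr (assumes well-formed two-column tab-separated lines).
def process(line: String): Unit = {
  val Array(user, ts) = line.split("\t", -1)
  validateEvent(user, ts) match {
    case Success(event)  => println(event)
    case Failure(errors) => Console.err.println(errors.mkString("; "))
  }
}
```

The design choice this models is why Validation is preferred over a monadic Either in pipelines: downstream operators and dashboards want every reason a record failed, not just the first.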
Building robust data pipelines in Scala
I'm the co-founder and tech lead at Snowplow Analytics, the open source web and event analytics platform (https://github.com/snowplow/snowplow). Snowplow is almost exclusively written in Scala, using a range of technologies including Scalaz, Scalding and Spray. I spend a lot of time working with distributed systems (historically Hadoop, increasingly Kinesis, Kafka and others) to deliver highly scalable event stream processing. I'm also the author of Unified Log Processing from Manning Publications (http://manning.com/dean/).