Hadoop and its whole ecosystem have settled down for good in our hearts and/or minds. The platform is mature and has proven reliable for certain kinds of tasks. Yet one problem remains: writing MapReduce jobs in plain Java is a real pain.
The API is clunky and does its best to hide the actual algorithm beneath tons of boilerplate. Over the years many tools and approaches have appeared to ease this, such as Hadoop's own Streaming API or the great Cascading library.
We'll dive into code examples as well as look into how Scalding actually works, so you can try it out on your cluster when you come back to work on Monday (and smile a bit more when asked to write a job next time!).
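To give a taste of how much boilerplate disappears, here is the classic word-count job in Scalding's fields-based API, a sketch along the lines of the example in Scalding's own tutorial (the `--input`/`--output` argument names are just the usual convention, not anything this talk prescribes):

```scala
import com.twitter.scalding._

// A complete Hadoop job: read lines of text, split them into words,
// count occurrences of each word, and write (word, count) pairs out as TSV.
class WordCountJob(args: Args) extends Job(args) {
  TextLine(args("input"))                                      // each record gets a 'line field
    .flatMap('line -> 'word) { line: String =>                 // emit one record per word
      line.toLowerCase.split("""\s+""").filter(_.nonEmpty)
    }
    .groupBy('word) { _.size }                                 // reduce phase: count per word
    .write(Tsv(args("output")))
}
```

Compare this handful of lines with the mapper class, reducer class, driver setup and type ceremony the equivalent plain-Java MapReduce job requires; the algorithm is front and centre instead of buried in boilerplate. The job would typically be launched on a cluster via `com.twitter.scalding.Tool` with `--hdfs --input ... --output ...` style arguments.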
Scalding, a.k.a. writing Hadoop jobs, but without the pain
Konrad is an Akka hakker at Typesafe, where he also participated in the Reactive Streams initiative, and implemented its Technology Compatibility Kit.