Hadoop and its ecosystem have settled for good into our hearts and/or minds. It is quite old by now and has proven reliable for certain kinds of tasks. Yet one problem remains: writing MapReduce jobs in plain Java is a real pain.
The API is clunky and does its best to hide the actual algorithm beneath tons of boilerplate. Over the years, many tools and approaches have appeared to ease this, from Hadoop's own Streaming API to the great Cascading library, on top of which Scalding is built.
We'll dive into code examples and look at how Scalding actually works under the hood, so you can try it out on your own cluster when you're back at work on Monday (and smile a bit more the next time you're asked to write a job!).
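To give a flavour of what the talk covers: Scalding's typed API reads like ordinary Scala collection code. The sketch below is a plain-Scala analogue of the canonical word count, illustrating the map/shuffle/reduce shape that Scalding expresses with the same operators on a Hadoop cluster; the object and method names are illustrative, not part of Scalding's API.

```scala
// Plain-Scala sketch of the word count that Scalding makes trivial.
// Scalding's TypedPipe offers the same flatMap/groupBy style, but
// executes distributed over Hadoop instead of in local memory.
object WordCount {
  def count(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.toLowerCase.split("""\s+""").filter(_.nonEmpty)) // "map" phase
      .groupBy(identity)                                          // shuffle by key
      .map { case (word, occurrences) => word -> occurrences.size } // "reduce" phase
}
```

Compare this handful of lines with the Mapper/Reducer classes, Writables, and driver setup the plain Java MapReduce API demands for the same task.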
Scalding, a.k.a. Writing Hadoop jobs, but without the pain
Konrad Malawski
Konrad is an Akka hakker at Typesafe, where he also participated in the Reactive Streams initiative, and implemented its Technology Compatibility Kit.