Users of Big (and not so Big) Data roughly divide into three groups: developers like us, traditional data analysts, and a hybrid called data scientists. Analysts prefer SQL, SAS, and similar traditional tools. Scientists (mostly statisticians, really) prefer Python and R, with Julia emerging. Developers started with Java, but they are being seduced by Scala, because it offers ideal tools for data-centric applications.
This talk explains why data-centric applications are driving Scala adoption. Scala already provides these essential features:
- Expressive DSLs.
- The JVM.
- Actors for distributed scaling.
- Optimizations for primitive types behind uniform source-level abstractions.
- Functional combinators.
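The functional combinators in the list above can be sketched with ordinary Scala collections. This is a minimal word-count example, not taken from the talk itself; the same map/groupBy pipeline style carries over, nearly verbatim, to tools like Scalding and Spark:

```scala
// Word count using functional combinators on plain Scala collections.
val text = "the quick brown fox jumps over the lazy dog the fox"

val counts: Map[String, Int] =
  text.split("\\s+")                          // tokenize on whitespace
      .groupBy(identity)                      // group identical words
      .map { case (w, ws) => (w, ws.length) } // count each group

// counts("the") == 3, counts("fox") == 2
```

The same three-step shape (tokenize, group, reduce) is what makes Scala's collection API such a natural fit for data pipelines.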
These features combine to yield powerful, scalable tools with concise APIs:
- Scalding and Summingbird - for Hadoop and Storm
- Spark and H2O - the Next Generation...
- Spire and Algebird - Mathematics
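Libraries like Algebird are built around algebraic abstractions such as the monoid. The following is a simplified, dependency-free sketch of that idea in plain Scala; the trait here is modeled on, but is not, Algebird's actual `Monoid` type class:

```scala
// A minimal monoid: an identity element plus an associative combine.
trait Monoid[T] {
  def zero: T
  def plus(a: T, b: T): T
}

// An instance for integer addition.
val intAdd = new Monoid[Int] {
  def zero = 0
  def plus(a: Int, b: Int) = a + b
}

// Generic aggregation: folds any sequence with any monoid.
// Because plus is associative, this work can be split across machines.
def sumAll[T](xs: Seq[T])(m: Monoid[T]): T =
  xs.foldLeft(m.zero)(m.plus)

val total = sumAll(Seq(1, 2, 3, 4))(intAdd) // 10
```

Associativity is what makes monoids valuable for distributed computing: partial sums computed on different nodes can be combined in any grouping and still yield the same result.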
Finally, we'll discuss what's missing and what's ahead.
Why Scala is Taking Over the Big Data World
Dean Wampler, Ph.D., is the Architect for Big Data Products and Services in the Office of the CTO at Lightbend, where he focuses on the evolving "Fast Data" ecosystem for streaming applications based on the SMACK stack: Spark, Mesos, Akka (and the rest of the Lightbend Reactive Platform), Cassandra, Kafka, and other tools.