While machine learning has been used for decades, access to these methods is undergoing a radical shift with the rise of simple interfaces and implementations on distributed systems. In practice, this means that more players can afford to take advantage of machine learning, and at larger scales. In this talk you will discover some introductory machine learning concepts and principles, illustrated with use cases involving large amounts of data.
Through simple examples set in a business context, using the Spark Notebook and Scala, you will learn how to apply different machine learning methods with Apache Spark as the distributed processing engine.
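As a flavour of the kind of workflow the talk covers, here is a minimal sketch of training a model with Spark's MLlib from Scala. The dataset path, column names, and label are hypothetical placeholders, not material from the talk; running it requires a Spark installation (e.g. via `spark-shell` or a Spark Notebook cell).

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.classification.LogisticRegression

// Start (or reuse) a Spark session; in the Spark Notebook one is provided for you.
val spark = SparkSession.builder().appName("ml-sketch").getOrCreate()

// Load a hypothetical CSV of customer records (columns assumed: age, spend, churned).
val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("customers.csv")

// Assemble raw numeric columns into the single feature vector MLlib expects.
val assembler = new VectorAssembler()
  .setInputCols(Array("age", "spend"))
  .setOutputCol("features")

// Fit a logistic regression classifier on the distributed DataFrame.
val lr = new LogisticRegression()
  .setLabelCol("churned")
  .setFeaturesCol("features")

val model = lr.fit(assembler.transform(df))
```

The point of the example is that the same high-level code runs unchanged whether the DataFrame holds a thousand rows on a laptop or billions of rows on a cluster; Spark handles the distribution.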
YOU MAY ALSO LIKE:
- Workshop: Mind blown: Crafting a Distributed Data Science Pipeline using Spark, Cassandra, Akka and the Spark Notebook (SkillsCast recorded in December 2015)
- Leonardo De Marchi's Deep Learning Fundamentals (in London on 22nd - 23rd October 2019)
- Lightbend Akka for Scala - Professional (in London on 11th - 12th November 2019)
- Scala eXchange London 2019 (in London on 12th - 13th December 2019)
- Scalax2gether Community Day 2019 (in London on 14th December 2019)
- Code Kata: Yilin Wei - Optics with Monocle (in London on 22nd October 2019)
- A Guide to the Market Promise of Automagic AI-Enabled Detection and Response (in London on 29th October 2019)
- Abstract Data Types In The Region Of Abysmal Pain, And How To Navigate Them (SkillsCast recorded in September 2019)
- Using Kubeflow Pipelines for building machine learning pipelines (SkillsCast recorded in September 2019)
Distributed Data Science with Scala in a Browser
Xavier began his career as a researcher in experimental physics, with a focus on data processing. He later took part in projects in finance, genomics, and software development for academic research, working on time series and on the prediction of biological molecular structures and interactions, and applying machine learning methodologies. He developed solutions to manage and process data distributed across data centres. He founded, and now works at, Data Fellas, a company dedicated to distributed computing and advanced analytics, leveraging Scala, Spark, and other distributed technologies.