Morgan Stanley has developed a technology that allows the widespread automatic parallelisation of execution. James and Gjeta are part of a team that, using the Scala language and ecosystem, has built a runtime that automatically parallelises users’ Scala code. During this talk, you will discover how code can be written with no concern for parallelism and automatically executed on multiple cores. Similar in aims to Twitter’s Stitch framework, this approach attempts to achieve the same goals without the use of for-comprehensions. You will learn how calls to storage and other expensive operations can be automatically grouped by the technology. You will explore how it was built using Scala, see live demonstrations, and hear a discussion of the theory behind the practice. The system is battle-tested in a project comprising hundreds of developers and millions of lines of Scala code. The technology has been in development for six years, and this will be the first public discussion of it. The talk is purely a description of the technology that has been built.
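To make the batching idea concrete, here is a minimal sketch in plain Scala of how individual storage lookups, written independently at different call sites, can be transparently grouped into one bulk query. All names (`BatchedSource`, `get`, `flush`, `bulkFetch`) are hypothetical; this is an illustration of the general technique used by frameworks like Stitch, not the Morgan Stanley runtime itself.

```scala
import java.util.concurrent.ConcurrentLinkedQueue
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object BatchDemo {
  // A source whose individual lookups are transparently grouped into
  // a single bulk query when the runtime flushes them.
  final class BatchedSource(bulkFetch: Seq[Int] => Map[Int, String]) {
    private val pending = new ConcurrentLinkedQueue[(Int, Promise[String])]()

    // Call sites ask for one key each, unaware of any batching.
    def get(key: Int): Future[String] = {
      val p = Promise[String]()
      pending.add((key, p))
      p.future
    }

    // The runtime later flushes every pending lookup as one bulk call.
    def flush(): Unit = {
      var buf = List.empty[(Int, Promise[String])]
      var next = pending.poll()
      while (next != null) { buf ::= next; next = pending.poll() }
      val results = bulkFetch(buf.map(_._1))
      buf.foreach { case (k, p) => p.success(results(k)) }
    }
  }

  // Returns the looked-up values and how many bulk queries were issued.
  def demo(): (Seq[String], Int) = {
    var bulkCalls = 0
    val source = new BatchedSource(keys => {
      bulkCalls += 1
      keys.map(k => k -> s"value-$k").toMap
    })

    // Three independent lookups written with no concern for grouping.
    val fa = source.get(1)
    val fb = source.get(2)
    val fc = source.get(3)
    source.flush() // all three satisfied by a single bulk query

    (Await.result(Future.sequence(Seq(fa, fb, fc)), 5.seconds), bulkCalls)
  }

  def main(args: Array[String]): Unit = println(demo())
}
```

The key property the sketch demonstrates: the three `get` calls look like ordinary independent requests, yet only one bulk query reaches the (simulated) storage layer.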
Automatic Parallelisation and Batching of Scala Code
Gjeta is a project engineer who has spent the last year working on the project’s core technology. She has been profiling applications to identify the causes of cache misses and concurrency bottlenecks, and has been tracking and analysing efficiencies across the codebase, including the caching of bitemporal database queries for optimal reuse.
James has been working on this Scala-based project for six years and is a Scala Center member, having been part of the sponsorship process for the initial work on Slick and for ongoing work on the performance of the core Scala compiler. He has had numerous other engagements with Scala.