Every business has some kind of data pipeline, even if it's powered by humans working off a shared drive somewhere. Many places are better than this: they have workflow systems, ETL pipelines, analytics teams and data scientists. But can they say, months later, which version of which code ran on which data to generate a given insight? Can the results be reproduced? And when the algorithms change, do you go back and re-run everything?
Science itself has a reproducibility problem, but it’s worse in most companies, and mistakes can be expensive.
There is a useful subset of data pipelines, let's call them "pure", that depend only on the data flowing through them. For pure pipelines, we can use techniques from distributed build systems to know what code was used for each step, to keep previous results as we improve our algorithms, and to avoid repeating work that has already been done.
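As a rough illustration of the idea (a sketch of the general technique, not the talk's own examples), the snippet below keys a cache on a hash of both a step's source code and its input data. Changing either produces a new key, so old results are never overwritten, and repeating the exact same work is skipped. The cache directory and the `normalise` step are hypothetical.

```python
# Content-addressed memoisation for a "pure" pipeline step: the cache key is
# derived from the step's code and its input, so code changes and data changes
# each get their own entry, and identical work is never repeated.
import hashlib
import inspect
import json
import os
import pickle

CACHE_DIR = "pipeline-cache"  # hypothetical local store; a real system might use S3, GCS, etc.

def _digest(*parts: bytes) -> str:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.hexdigest()

def pure_step(func):
    """Run `func` only if this exact (code, input) pair has not been seen before."""
    def wrapper(data):
        key = _digest(
            inspect.getsource(func).encode(),           # the code used for this step
            json.dumps(data, sort_keys=True).encode(),  # the data flowing through it
        )
        path = os.path.join(CACHE_DIR, f"{func.__name__}-{key}.pkl")
        if os.path.exists(path):                        # work already done: reuse the result
            with open(path, "rb") as f:
                return pickle.load(f)
        result = func(data)
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(path, "wb") as f:                     # keep every version's output
            pickle.dump(result, f)
        return result
    return wrapper

@pure_step
def normalise(records):
    # Hypothetical pipeline step: rescale values from percentages to fractions.
    return [{**r, "value": r["value"] / 100} for r in records]
```

Running `normalise([{"value": 42}])` twice hits the cache the second time; editing the function's body changes its source hash, so the new version recomputes without discarding the old results.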
This talk contains interesting theory but is resolutely practical, with concrete examples in several languages and distributed computation frameworks.
Data Pipelines À La Mode
Tommy Hall
Theatre fan, occasional mountaineer, part-time runner, thoroughly nice chap, available in fine bookstores everywhere.