Meetup

Data & Model Validation

FREE: Register Now

Tuesday, 7th February, Online Meetup

This meetup is organised by Open Source MLOps. Starts at 5:00 PM UTC.

Overview

This 4th edition of Open Source MLOps will focus on the theme of data and model validation. The event will bring together experts in the field of machine learning operations (MLOps) to discuss best practices for validating data and models using open source tools.

Attendees can expect to learn about the latest tools and techniques for validating data quality and model performance, as well as strategies for managing and monitoring models in production. The event will feature presentations and discussions on topics such as:

  • Strategies for automating data validation workflows using open source tools
  • Techniques for measuring and improving model performance using metrics like precision, recall, and accuracy (see the short sketch after this list)
  • Best practices for monitoring and maintaining models in production
  • Discussion of the current open source MLOps landscape, and where the field is headed
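
As a taster for the metrics discussion, here's a minimal sketch of how accuracy, precision, and recall are computed for a binary classifier. It uses scikit-learn with made-up labels and predictions:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Ground-truth labels and model predictions for a toy binary task (made up)
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

# Accuracy: fraction of all predictions that are correct
print("accuracy: ", accuracy_score(y_true, y_pred))   # 0.7
# Precision: of the predicted positives, how many were truly positive
print("precision:", precision_score(y_true, y_pred))  # 0.75
# Recall: of the actual positives, how many the model found
print("recall:   ", recall_score(y_true, y_pred))     # 0.6
```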

Additionally, there will be an interactive Q&A session for some lively debate!

We're delighted to welcome two top-class speakers to this event.

Shir Chorev - CTO @ Deepchecks

Luke Dyer - Machine Learning Engineer @ Peak.ai

Compered as usual by Matt Squire - CTO @ Fuzzy Labs

The event is intended for ML practitioners, data engineers, and anyone interested in using open source tools to improve the development and deployment of machine learning models. Whether you're a seasoned MLOps professional or just getting started, this event will provide valuable insights and practical tips for working with data and models in an open source context.

How to Efficiently Test ML Models & Data

Automatic testing for ML pipelines is hard. Part of the executed code is a model that was dynamically trained on a fresh batch of data, and silent failures are common. Therefore, it’s problematic to use known methodologies such as automating tests for predefined edge cases and tracking code coverage.

We’ll demonstrate common pitfalls with ML models, and cover best practices for automatically validating them: What should be tested in these pipelines? How can we verify that they’ll behave as we expect once in production?

We’ll discuss how to automate tests for these scenarios, introduce the deepchecks open source package for testing ML models and data, and demonstrate how it can aid the process.
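
To give a flavour of what that might look like, here's a minimal sketch of running deepchecks' built-in model evaluation suite against a scikit-learn model. The churn dataset, column name, and file path are hypothetical placeholders; the deepchecks tabular API (Dataset, model_evaluation) is used as documented:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import model_evaluation

# Hypothetical data: any pandas DataFrame with a label column would do
df = pd.read_csv("customers.csv")  # placeholder path
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)

# Wrap the frames so deepchecks knows which column is the label
train_ds = Dataset(train_df, label="churned")
test_ds = Dataset(test_df, label="churned")

model = RandomForestClassifier(random_state=42).fit(
    train_df.drop(columns=["churned"]), train_df["churned"]
)

# Run the suite: each check passes, fails, or warns against its condition,
# and the full report can be saved for review
result = model_evaluation().run(train_dataset=train_ds, test_dataset=test_ds, model=model)
result.save_as_html("model_checks.html")
```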



Shir Chorev

Born and raised in Nahariya, Israel, Chorev is a graduate of the Israel Defense Forces' "Talpiot" program for technological leadership and a former member of the renowned intelligence unit 8200. These days she monitors AI rather than humans: Deepchecks' software continuously validates the performance and stability of machine learning systems. The company has raised $4.3 million so far from investors including Grove Ventures and Hetz Ventures.


An evolution of data validation – why we chose Great Expectations

Good-quality data, delivered on an ongoing basis, is essential to any ML system. I'll take a tour of the different data validation approaches we've implemented across projects at Peak: what worked well, what didn't work so well, and why I think Great Expectations is a good fit for us.
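
For anyone who hasn't used Great Expectations, here's a minimal sketch of its classic pandas-flavoured API (newer releases favour a Data Context workflow instead). The orders.csv file and its columns are hypothetical:

```python
import great_expectations as ge

# Load a pandas DataFrame wrapped with expectation methods (classic API)
df = ge.read_csv("orders.csv")  # placeholder file

# Declare expectations about the data itself, rather than the code
df.expect_column_values_to_not_be_null("order_id")
df.expect_column_values_to_be_between("amount", min_value=0, max_value=10_000)
df.expect_column_values_to_be_in_set("status", ["pending", "shipped", "delivered"])

# Validate this batch against every expectation declared above
results = df.validate()
print(results["success"])
```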


