With increasing regularity we see stories in the news about machine learning algorithms causing real-world harm. People's lives and livelihoods are affected by decisions made by machines. Learn how bias can take root in machine learning algorithms, and explore ways to overcome it.
From the power of open source to tools built to detect and remove bias in machine learning models, a vibrant ecosystem of contributors is working to build a digital future that is inclusive and fair. Now you can become part of the solution.
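To make the idea of "detecting bias" concrete, here is a minimal sketch of one widely used fairness check: the disparate impact ratio, i.e. the favorable-outcome rate of the unprivileged group divided by that of the privileged group. The toy hiring data, the group labels, and the 0.8 threshold (the "four-fifths rule") are illustrative assumptions, not material from the talk; production tools compute this and many related metrics for you.

```python
# Illustrative sketch of the disparate impact ratio: the favorable-outcome
# rate of the unprivileged group divided by that of the privileged group.
# All names and data here are hypothetical examples.

def disparate_impact(outcomes, groups, privileged="A"):
    """outcomes: 1 = favorable decision, 0 = unfavorable;
    groups: the group label for each person."""
    def rate(label):
        selected = [o for o, g in zip(outcomes, groups) if g == label]
        return sum(selected) / len(selected)

    unprivileged = next(g for g in groups if g != privileged)
    return rate(unprivileged) / rate(privileged)

# Toy hiring data: group A is hired 3 times out of 4, group B only once.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups)
print(f"disparate impact = {ratio:.2f}")  # prints 0.33; below 0.8 flags bias
```

A ratio near 1.0 means both groups receive favorable outcomes at similar rates; a value below roughly 0.8 is a common rule-of-thumb signal that a model or process deserves closer scrutiny.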
Maureen's slides and links to the resources provided throughout the talk are available here.
Digital Discrimination: Cognitive Bias in Machine Learning
Maureen McElaney is a Developer Advocate at IBM Center of Open Source Data and Ai Technologies. She is on the LF AI Trusted AI Committee underneath the Linux Foundation. She is an organizer for Women in Machine Learning and Data Science and on the board of the Vermont Technology Alliance. She is an experienced community builder and is passionate about building diversity (of all kinds) in tech through education, mentorship, and advocacy.