With increasing regularity, we see stories in the news about machine learning algorithms causing real-world harm. People's lives and livelihoods are affected by the decisions made by machines. Learn how bias can take root in machine learning algorithms and ways to overcome it.
From the power of open source to tools built to detect and remove bias in machine learning models, there is a vibrant ecosystem of contributors working to build a digital future that is inclusive and fair. Now you can become part of the solution.
Maureen's slides and links to the resources provided throughout the talk are available here.
Digital Discrimination: Cognitive Bias in Machine Learning
Maureen McElaney is a Developer Advocate at the IBM Center for Open Source Data and AI Technologies. She serves on the LF AI Trusted AI Committee under the Linux Foundation, is an organizer for Women in Machine Learning and Data Science, and sits on the board of the Vermont Technology Alliance. She is an experienced community builder and is passionate about building diversity (of all kinds) in tech through education, mentorship, and advocacy.