Users of most automated decision systems lack trust in the decisions those (AI) systems produce.
This stems from many factors, but chiefly from the way a decision is explained to the user. One purpose of explanations is to build trust between the user and the AI application.
This can be achieved by giving the user an understanding of the scope of automated decision-making and the reasons that led to a particular decision. The GDPR does not appear to require opening the “black box” to explain the internal logic of a decision-making system to data subjects. With this in mind, this talk will try to answer the question “How can we build trustworthy automated decisions?”
By using the “black art” of explanation techniques, automated decisions can be made fairer, more transparent, and more understandable to the user. The talk will also present a framework with two recommended approaches for developing explainability, while highlighting some of the likely challenges.
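The abstract does not name a specific explanation technique, but a common model-agnostic approach in this space is permutation feature importance: shuffle one feature at a time and measure how much the model's accuracy drops. As a hedged illustration only (the toy `model`, synthetic data, and helper names here are invented for this sketch, not taken from the talk), a minimal version looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the label depends only on feature 0; feature 1 is pure noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def model(X):
    # Stand-in for any opaque classifier we want to explain.
    return (X[:, 0] > 0).astype(int)

def permutation_importance(model, X, y, n_repeats=10, rng=rng):
    """Importance of each feature = mean drop in accuracy when
    that feature's column is randomly shuffled."""
    baseline = np.mean(model(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(baseline - np.mean(model(Xp) == y))
        importances.append(float(np.mean(drops)))
    return np.array(importances)

imp = permutation_importance(model, X, y)
# imp[0] is large (shuffling feature 0 destroys accuracy);
# imp[1] is near zero (feature 1 never mattered).
```

Explanations like this give the user a concrete, checkable reason ("the decision was driven by feature 0"), which is exactly the kind of transparency the talk argues builds trust.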
Community Session: Women Leading in AI - Explaining the Automated Decisions is a Black Art
Dr Samara Banno holds a PhD in statistical machine learning and AI from the UK; her speciality is automated decision-making systems.