Whether users trust an automated decision depends on many factors, but in essence on the way the decision is explained to them. One potential purpose of explanations is therefore to build trust between the user and the AI application.
This can be achieved by giving the user an understanding of the scope of automated decision-making and of the reasons that led to a particular decision. The GDPR does not appear to require opening the “black box” to explain the internal logic of a decision-making system to data subjects. With this in mind, this talk will try to answer the question: “How can we build trustworthy automated decisions?”
By using the “black art” of explanation techniques, automated decisions can be made fairer, more transparent, and more understandable to the user. The talk will also present a framework suggesting two recommended approaches for developing explainability, and will highlight some of the possible challenges.
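To make the idea of an explanation technique concrete, the sketch below uses permutation feature importance, a simple model-agnostic method that reveals which inputs most influence a classifier's decisions. This is an illustrative assumption, not the specific framework or approaches presented in the talk; the dataset and model are arbitrary stand-ins.

```python
# Minimal sketch of one model-agnostic explanation technique
# (permutation feature importance). Illustrative only; not the
# framework described in the talk.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Arbitrary stand-in data and model for demonstration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# features whose permutation hurts the score most matter most.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features as a human-readable explanation.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

An explanation of this kind does not open the model's internals; it describes the decision's drivers from the outside, which is one way to offer users transparency without exposing the “black box”.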
Community Session: Women Leading in AI - Explaining the Automated Decisions is a Black Art
Dr Samara Banno
Dr Samara Banno holds a PhD in statistical machine learning and AI from the UK; she specialises in automated decision-making systems.