Moderating discriminatory user content is a significant challenge for many online communities, especially in today's climate of polarized ideologies and increased xenophobia. Timothy Quinn, technology lead at Hatebase, will discuss some of the ways Hatebase has devised to identify and analyze hate speech, particularly across multilingual and regionally diverse ecosystems.
This talk focuses on real-world approaches to implementing moderation workflows, as well as on balancing free speech against the legal and financial liabilities imposed by governments. Although some technical material will be presented, anyone with an operational interest in content moderation will benefit from attending.
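The kind of moderation workflow described above, in which posts are checked against a multilingual lexicon and flagged matches are routed to human review rather than auto-removed, can be sketched roughly as follows. This is a minimal illustration only: the `Lexicon` class, the `moderate` function, and the placeholder terms are invented for this sketch and are not Hatebase's actual API or data.

```python
# Illustrative sketch of lexicon-based flagging; all names and terms are
# placeholders, not Hatebase's real API or dataset.
import re
from dataclasses import dataclass, field

@dataclass
class Lexicon:
    """Maps a language code to a set of flagged terms (stored lowercase)."""
    terms: dict = field(default_factory=dict)

    def add(self, lang: str, term: str) -> None:
        self.terms.setdefault(lang, set()).add(term.lower())

def moderate(post: str, lang: str, lexicon: Lexicon) -> list:
    """Return flagged terms found in a post, for routing to human review.

    Plain keyword matching like this both over-flags (homonyms, reclaimed
    usage) and under-flags (misspellings, coded language), which is why
    systems like the one described in the talk layer NLP and contextual
    analysis on top of a regionally aware, multilingual lexicon.
    """
    words = re.findall(r"\w+", post.lower(), flags=re.UNICODE)
    flagged = lexicon.terms.get(lang, set())
    return sorted(set(words) & flagged)

lex = Lexicon()
lex.add("en", "badword")       # placeholder term
lex.add("de", "schimpfwort")   # placeholder term

hits = moderate("This post contains a BadWord, apparently.", "en", lex)
# Posts with hits would enter a human review queue, not be deleted outright.
```

Keeping the lexicon keyed by language matters for the multilingual, regionally diverse ecosystems the talk addresses: the same string can be benign in one language or region and a slur in another.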
Hate Speech as a Service
Timothy Quinn helps run Hatebase, a Toronto-based organization that uses natural language processing and predictive analysis to help companies rid their ecosystems of hate speech, while also helping government agencies, law enforcement and NGOs perform threat assessments in conflict areas.