Moderating discriminatory user content is a significant challenge for many online communities, especially in today's climate of polarized ideologies and increased xenophobia. Timothy Quinn, technology lead at Hatebase, will discuss techniques Hatebase has developed to identify and analyze hate speech, particularly across multilingual and regionally diverse ecosystems.
This talk will focus on real-world approaches to implementing moderation workflows, as well as on balancing free speech against the legal and financial liabilities imposed by governments. Although some technological material will be presented, anyone with an operational interest in content moderation will benefit from attending.
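As a concrete illustration of the kind of moderation workflow the talk describes, a minimal lexicon-based filtering step might look like the sketch below. This is not Hatebase's actual implementation; the `LexiconEntry` structure, the `LEXICON` sample terms, and the three-way allow/review/block verdict are all placeholder assumptions. Real systems track far more metadata per term (language, region, offensiveness, ambiguity) and route ambiguous matches to human reviewers rather than auto-removing them.

```python
import re
import unicodedata
from dataclasses import dataclass


@dataclass(frozen=True)
class LexiconEntry:
    """One hypothetical lexicon row: a term, its language, and whether it
    is ambiguous (i.e. sometimes benign, so it needs human review)."""
    term: str
    language: str
    ambiguous: bool


# Placeholder entries; a production lexicon would be large and multilingual.
LEXICON = [
    LexiconEntry("badword", "eng", ambiguous=False),
    LexiconEntry("slur", "eng", ambiguous=True),
]


def normalize(text: str) -> str:
    """Lowercase and strip combining accents so matching survives
    regional spelling and diacritic variation."""
    decomposed = unicodedata.normalize("NFKD", text.lower())
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))


def moderate(text: str) -> str:
    """Return 'allow', 'review', or 'block' for a piece of user content.

    Unambiguous matches block immediately; ambiguous matches are
    escalated to human review instead of being auto-removed.
    """
    tokens = set(re.findall(r"\w+", normalize(text)))
    verdict = "allow"
    for entry in LEXICON:
        if entry.term in tokens:
            if entry.ambiguous:
                verdict = "review"
            else:
                return "block"
    return verdict
```

In practice, the interesting design decision is the middle "review" verdict: term lists alone cannot distinguish a slur from a reclaimed or quoted use, which is why a workflow, not just a filter, is needed.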
Hate Speech as a Service
Timothy Quinn helps run Hatebase, a Toronto-based organization that uses natural language processing and predictive analysis to help companies rid their ecosystems of hate speech, while also helping government agencies, law enforcement and NGOs perform threat assessments in conflict areas.