Moderating discriminatory user content is a significant challenge for many online communities, especially in today's climate of polarized ideologies and increased xenophobia. Timothy Quinn, technology lead at Hatebase, will discuss some of the approaches the organization has developed for identifying and analyzing hate speech, particularly across multilingual and regionally diverse ecosystems.
This talk will focus on practical approaches to implementing moderation workflows, as well as on balancing free speech against the legal and financial liabilities increasingly imposed by governments. Although some technical material will be presented, anyone with an operational interest in content moderation will benefit from attending.
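To make the idea of a moderation workflow concrete, here is a minimal, illustrative sketch of a lexicon-based flagging step. This is not Hatebase's actual pipeline; the lexicon entries, function names, and metadata fields are invented for illustration, and a real system would carry far richer linguistic and regional context per term.

```python
import re
import unicodedata

# Hypothetical lexicon: term -> metadata. Entries are placeholders,
# invented purely for illustration.
LEXICON = {
    "slurword": {"lang": "en", "ambiguous": False},
    "borderline": {"lang": "en", "ambiguous": True},
}

def normalize(text):
    """Lowercase and strip accents so lexicon lookups are uniform
    across differently-encoded user input."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return text.lower()

def flag_terms(text, lexicon=LEXICON):
    """Return lexicon hits found in text as (term, metadata) pairs.

    Ambiguous terms would typically be routed to human review rather
    than auto-actioned, since many hate terms also have benign senses.
    """
    tokens = re.findall(r"\w+", normalize(text))
    return [(t, lexicon[t]) for t in tokens if t in lexicon]
```

In a production workflow, unambiguous hits might trigger automated action while ambiguous ones are queued for human moderators, with matches weighted by the regional and linguistic context the talk describes.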
Hate Speech as a Service
Timothy Quinn helps run Hatebase, a Toronto-based organization that uses natural language processing and predictive analysis to help companies rid their ecosystems of hate speech, while also helping government agencies, law enforcement, and NGOs perform threat assessments in conflict areas.