Contextual word embeddings are taking the NLP community by storm. These representations are mainly derived from pre-trained bidirectional language models, and have recently been shown to deliver significant improvements to the state of the art across a wide range of NLP tasks. In this talk I will give a brief overview of this fast-moving research field, with examples of how to use pre-trained models available on the web (e.g., ELMo, BERT, etc.).
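To make the core idea concrete, here is a toy sketch (not ELMo or BERT themselves, and with made-up two-dimensional vectors) of the difference between a static embedding lookup and a context-dependent one: a static table assigns the same vector to a word type everywhere, while a contextual function mixes in the neighbouring tokens, so the same word gets different vectors in different sentences.

```python
# Toy illustration of static vs. contextual embeddings.
# The vocabulary and vector values are invented for the example.
import numpy as np

# Static embedding table: one fixed vector per word type.
STATIC = {
    "river": np.array([1.0, 0.0]),
    "bank": np.array([0.0, 1.0]),
    "account": np.array([0.5, 0.5]),
}

def static_embed(tokens):
    """A static model assigns the same vector to a word in any context."""
    return [STATIC[t] for t in tokens]

def contextual_embed(tokens):
    """A crude stand-in for a contextual model: each token's vector is
    averaged with its neighbours, so identical words end up with
    different vectors in different sentences."""
    vecs = [STATIC[t] for t in tokens]
    out = []
    for i in range(len(vecs)):
        window = vecs[max(0, i - 1):i + 2]  # token plus its neighbours
        out.append(np.mean(window, axis=0))
    return out

s1 = ["river", "bank"]
s2 = ["bank", "account"]

# Static: "bank" is identical in both sentences.
print(np.allclose(static_embed(s1)[1], static_embed(s2)[0]))       # True

# Contextual: "bank" now differs between the two sentences.
print(np.allclose(contextual_embed(s1)[1], contextual_embed(s2)[0]))  # False
```

Real contextual models replace the neighbour-averaging above with deep bidirectional networks (LSTMs in ELMo, Transformers in BERT), but the property illustrated is the same: the vector for a word is a function of its whole sentence.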
A deep dive into contextual word embeddings and understanding what NLP models learn
I'm an AI and deep learning enthusiast. My broad areas of interest include natural language processing and machine learning in general. Currently, I am studying natural language processing models at the Facebook Artificial Intelligence Research (FAIR) lab in London. I hold a Ph.D. in Engineering in Computer Science from Sapienza University of Rome.