Contextual word embeddings have taken the NLP community by storm. These representations are mainly derived from pre-trained (bidirectional) language models and have recently been shown to deliver significant improvements to the state of the art across a wide range of NLP tasks. In this talk I will give a brief overview of this hot research field and show some examples of how to use pre-trained models available on the web (e.g., ELMo and BERT).
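As a concrete illustration of what "contextual" means, here is a minimal sketch of extracting per-token embeddings from a pre-trained BERT model. This is not material from the talk; it assumes the Hugging Face transformers and torch packages and the bert-base-uncased checkpoint. The point is that the same surface word ("bank") receives a different vector depending on its sentence context.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load a pre-trained BERT checkpoint and its tokenizer (assumed: bert-base-uncased).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = [
    "The bank raised interest rates.",      # financial sense
    "They sat on the bank of the river.",   # riverside sense
]

with torch.no_grad():
    for sent in sentences:
        inputs = tokenizer(sent, return_tensors="pt")
        outputs = model(**inputs)
        # last_hidden_state has shape (1, seq_len, hidden_size):
        # one contextual vector per token in this sentence.
        hidden = outputs.last_hidden_state
        tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
        idx = tokens.index("bank")
        # The two printed vectors differ, unlike static (word2vec-style) embeddings.
        print(sent, hidden[0, idx, :5])
```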
A deep dive into contextual word embeddings and understanding what NLP models learn
Fabio Petroni
I'm an AI & Deep Learning Enthusiast. My broad areas of interest include natural language processing and machine learning in general. Currently, I am studying natural language processing models in the Facebook Artificial Intelligence Research (FAIR) lab in London. I hold a Ph.D. in Engineering in Computer Science from Sapienza University of Rome.