Contextual word embeddings have taken the NLP community by storm. These representations are mainly derived from pre-trained (bidirectional) language models and have recently been shown to deliver significant improvements to the state of the art across a wide range of NLP tasks. In this talk I will give a brief overview of this fast-moving research field, showing examples of how to use pre-trained models available on the web (e.g., ELMo and BERT).
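The key idea behind contextual embeddings can be sketched with a toy example. This is purely illustrative (the function names and the window-averaging scheme are assumptions made for this sketch): real models such as ELMo and BERT derive context-sensitive vectors from deep bidirectional language models, but the contrast with static embeddings is the same, since a static lookup assigns one vector per word while a contextual model assigns a different vector to each occurrence.

```python
# Toy sketch: static vs. contextual word embeddings.
# Not a real model; ELMo/BERT use deep neural networks, not window averaging.
import hashlib

DIM = 4

def static_embed(word):
    """Deterministic fixed vector per word, independent of context."""
    h = hashlib.sha256(word.encode()).digest()
    return [b / 255.0 for b in h[:DIM]]

def contextual_embed(tokens, i, window=1):
    """Toy 'contextual' vector: the word's static vector averaged with its
    neighbors', so the same word gets different vectors in different sentences."""
    lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
    neighbors = [static_embed(t) for t in tokens[lo:hi]]
    return [sum(v[d] for v in neighbors) / len(neighbors) for d in range(DIM)]

s1 = "deposit money in the bank".split()
s2 = "sit on the river bank".split()

# Static: "bank" gets the exact same vector in both sentences.
assert static_embed("bank") == static_embed("bank")

# Contextual: the two occurrences of "bank" get different vectors.
v1 = contextual_embed(s1, s1.index("bank"))
v2 = contextual_embed(s2, s2.index("bank"))
print(v1 != v2)  # True
```

In practice, a pre-trained model replaces the toy averaging step with many layers of learned attention or recurrence, which is what lets "bank" in a financial sentence end up near "loan" and far from "river".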
A deep dive into contextual word embeddings and understanding what NLP models learn
I'm an AI & deep learning enthusiast. My broad areas of interest include natural language processing and machine learning in general. Currently, I am studying natural language processing models at the Facebook Artificial Intelligence Research (FAIR) lab in London. I hold a Ph.D. in Engineering in Computer Science from Sapienza University of Rome.