A deep dive into contextual word embeddings and understanding what NLP models learn
Contextual word embeddings have taken the NLP community by storm. These representations are derived mainly from pre-trained (bidirectional) language models, and have recently been shown to deliver significant improvements over the state of the art on a wide range of NLP tasks. In this talk I will give a brief overview of this hot research field, showing some examples of how to use pre-trained models available on the web (e.g., ELMo, BERT).
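To give a flavour of the idea (this is an illustrative toy sketch, not code from the talk or from ELMo/BERT): a static embedding table assigns the word "bank" the same vector in every sentence, whereas a contextual encoder mixes in the surrounding words, so "bank" gets a different vector in "river bank" than in a financial context. A single self-attention-style mixing step is enough to show the difference:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate(
    "i sat by the river bank deposited cash at".split())}
E = rng.normal(size=(len(vocab), 8))  # static embedding table (toy)

def static_embed(tokens):
    # Static lookup: each word always maps to the same row of E.
    return E[[vocab[t] for t in tokens]]

def contextual_embed(tokens):
    # One attention-style step: each token's vector becomes a
    # similarity-weighted mix of all token vectors in the sentence.
    X = static_embed(tokens)
    scores = X @ X.T
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ X

s1 = "i sat by the river bank".split()
s2 = "i deposited cash at the bank".split()

# Static embeddings of "bank" are identical across contexts...
print(np.allclose(static_embed(s1)[-1], static_embed(s2)[-1]))      # True
# ...while contextual embeddings of "bank" differ between sentences.
print(np.allclose(contextual_embed(s1)[-1], contextual_embed(s2)[-1]))
```

Real models such as ELMo and BERT stack many such context-mixing layers (with learned parameters) and are pre-trained on large corpora, which is where their transferable representations come from.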
I'm an AI and deep learning enthusiast. My broad areas of interest include natural language processing and machine learning in general. Currently, I am studying natural language processing models at the Facebook Artificial Intelligence Research (FAIR) lab in London. I hold a Ph.D. in Engineering in Computer Science from Sapienza University of Rome.