Classical language modelling consists of assigning probabilities to sentences by factorizing the joint likelihood of a sentence into the conditional likelihood of each word given that word's context. Neural language models additionally "embed" each word into a low-dimensional vector-space representation that is learned as the language model is trained. When trained on very large corpora, these models can achieve state-of-the-art performance in applications such as speech recognition and sentence completion.
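As a minimal illustration of the factorization the abstract describes, the sketch below scores a sentence under a classical bigram model: the joint likelihood is decomposed by the chain rule into per-word conditional probabilities, here estimated from counts with add-alpha smoothing. The toy corpus and function names are hypothetical, not from the talk.

```python
import math
from collections import Counter

# Hypothetical toy corpus; real language models train on very large corpora.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat saw the dog".split(),
]

# Count unigrams and bigrams, padding each sentence with <s> and </s>.
unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    tokens = ["<s>"] + sent + ["</s>"]
    unigrams.update(tokens[:-1])
    bigrams.update(zip(tokens[:-1], tokens[1:]))

vocab = set(unigrams) | {"</s>"}

def bigram_prob(prev, word, alpha=1.0):
    # Add-alpha smoothing gives unseen bigrams non-zero probability.
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * len(vocab))

def sentence_log_prob(sent):
    # Chain rule: log P(w1..wn) = sum_i log P(w_i | w_{i-1}).
    tokens = ["<s>"] + sent + ["</s>"]
    return sum(math.log(bigram_prob(p, w))
               for p, w in zip(tokens[:-1], tokens[1:]))

print(sentence_log_prob("the cat sat on the mat".split()))
```

A neural language model replaces the count-based conditional with a learned function of word embeddings, so that similar words share statistical strength; the factorization itself stays the same.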
Learning word embeddings from text
Piotr Mirowski is a data scientist who works as a software engineer at Microsoft Bing in London, where he focuses on NLP and deep learning techniques for search query formulation.