Visualizing and Understanding Recurrent Networks

Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM) units, are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. I will summarize my own experience training these models for automated image captioning and for generating text character by character, with a particular focus on understanding the source of their impressive performance and on their limitations.
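For readers unfamiliar with the LSTM update at the heart of these character-level models, the following is a minimal numpy sketch of a single LSTM step; the sizes, weight names, and initialization are illustrative assumptions, not code from the talk.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the talk): a small character
# vocabulary and hidden state, as in a char-level language model.
vocab_size, hidden_size = 65, 128

# Input and recurrent weights, stacked for the four LSTM gates
# (input i, forget f, output o, and candidate g).
Wx = np.random.randn(4 * hidden_size, vocab_size) * 0.01
Wh = np.random.randn(4 * hidden_size, hidden_size) * 0.01
b = np.zeros(4 * hidden_size)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev):
    """Advance the LSTM by one time step; returns (h, c)."""
    gates = Wx @ x + Wh @ h_prev + b
    i, f, o, g = np.split(gates, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # gates squash to (0, 1)
    g = np.tanh(g)                                # candidate cell contents
    c = f * c_prev + i * g                        # forget old, admit new
    h = o * np.tanh(c)                            # gated view of the cell
    return h, c

# Example: feed one one-hot encoded character through the cell.
x = np.zeros(vocab_size)
x[0] = 1.0
h, c = lstm_step(x, np.zeros(hidden_size), np.zeros(hidden_size))
```

Stacking the four gate projections into a single matrix multiply is a common implementation convenience; the additive cell update `c = f * c_prev + i * g` is what allows gradients to flow across long time spans.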
Andrej is a 5th year PhD student at Stanford University, working with Fei-Fei Li. His focus is on Deep Learning, with applications in Computer Vision, Natural Language Processing, and their intersection. He is visiting London during the summer as an intern at DeepMind.