Visualizing and Understanding Recurrent Networks

Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. I will summarize my own experience with training these models for automated image captioning and for generating text character by character, with a particular focus on understanding the source of their impressive performance and their limitations.
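To make the character-by-character setup concrete, here is a minimal sketch of a character-level LSTM language model, assuming PyTorch; the model, toy corpus, and hyperparameters are illustrative choices and not the implementation discussed in the talk.

```python
# A minimal character-level LSTM language model (PyTorch assumed;
# an illustrative sketch, not the implementation from the talk).
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, hidden_size=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        # x: (batch, seq_len) of character indices
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state  # logits over the next character

# Toy corpus: train the model to predict each next character.
text = "hello world, hello recurrent networks"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text]).unsqueeze(0)  # (1, len(text))

model = CharLSTM(vocab_size=len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits, _ = model(data[:, :-1])  # inputs: all characters but the last
    loss = loss_fn(logits.reshape(-1, len(chars)), data[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Generate text one character at a time by sampling from the model's
# own predictions and feeding each sample back in as the next input.
itos = {i: c for c, i in stoi.items()}
idx, state, out = data[:, :1], None, []
for _ in range(40):
    logits, state = model(idx, state)
    probs = torch.softmax(logits[:, -1], dim=-1)
    idx = torch.multinomial(probs, 1)
    out.append(itos[idx.item()])
print("".join(out))
```

Carrying the LSTM state across sampling steps is what lets the network condition each generated character on everything produced so far.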
Andrej is a fifth-year PhD student at Stanford University, working with Fei-Fei Li. His focus is on Deep Learning, with applications in Computer Vision, Natural Language Processing, and their intersection. He is visiting London during the summer as an intern at DeepMind.