Visualizing and Understanding Recurrent Networks

Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. I will summarize my own experience with training these models for automated image captioning and for generating text character by character, with a particular focus on understanding the source of their impressive performance and their limitations.
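To make the character-by-character setup concrete, here is a minimal sketch of sampling text from a recurrent network. This is not the talk's code: it uses a plain tanh RNN rather than an LSTM, untrained random weights, and a toy vocabulary, all of which are illustrative assumptions. The point is the generation loop itself, where each sampled character is fed back in as the next input.

```python
# Minimal sketch of character-by-character generation with a vanilla RNN.
# Not the speaker's code: LSTM gating is omitted and weights are untrained.
import numpy as np

vocab = list("helo ")                  # toy character vocabulary (assumption)
char_to_ix = {ch: i for i, ch in enumerate(vocab)}
V, H = len(vocab), 16                  # vocabulary size, hidden size

rng = np.random.default_rng(0)
Wxh = rng.standard_normal((H, V)) * 0.01   # input-to-hidden weights
Whh = rng.standard_normal((H, H)) * 0.01   # hidden-to-hidden (recurrent) weights
Why = rng.standard_normal((V, H)) * 0.01   # hidden-to-output weights
bh, by = np.zeros(H), np.zeros(V)

def sample(seed_char, n):
    """Generate n characters, feeding each sample back in as the next input."""
    h = np.zeros(H)                                   # initial hidden state
    x = np.zeros(V); x[char_to_ix[seed_char]] = 1.0   # one-hot encode the seed
    out = []
    for _ in range(n):
        h = np.tanh(Wxh @ x + Whh @ h + bh)           # recurrent state update
        p = np.exp(Why @ h + by); p /= p.sum()        # softmax over next char
        ix = rng.choice(V, p=p)                       # sample the next character
        x = np.zeros(V); x[ix] = 1.0                  # feed the sample back in
        out.append(vocab[ix])
    return "".join(out)

print(sample("h", 20))   # weights are untrained, so output is near-random
```

With trained weights, the same loop is what produces the model's strikingly plausible text; an LSTM replaces the single tanh update with gated cell-state updates but leaves the sampling loop unchanged.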
Andrej is a fifth-year PhD student at Stanford University, working with Fei-Fei Li. His focus is on Deep Learning, with applications in Computer Vision, Natural Language Processing, and their intersection. He is visiting London this summer as an intern at DeepMind.