SkillsCast

Visualizing and Understanding Recurrent Networks

10th September 2015 in London at CodeNode

There are 2 other SkillsCasts available from the event "GPUs, Recurrent Neural Networks & Developing Better Drugs with Deep Learning".

Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. I will summarize my own experience with training these models for automated image captioning and for generating text character by character, with a particular focus on understanding the source of their impressive performance and their limitations.
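As rough context for the character-level setting mentioned in the abstract, here is a minimal sketch of a single LSTM time step in NumPy. The variable names, shapes, and the single stacked weight matrix are illustrative choices, not taken from the talk:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step on a single input vector (illustrative sketch).

    x      : input vector of size D (e.g. a one-hot character)
    h_prev : previous hidden state, size H
    c_prev : previous cell state, size H
    W, b   : stacked gate parameters, shapes (4*H, D+H) and (4*H,)
    """
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b   # pre-activations for all four gates
    i = 1.0 / (1.0 + np.exp(-z[0:H]))         # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))       # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))     # output gate
    g = np.tanh(z[3*H:4*H])                   # candidate cell update
    c = f * c_prev + i * g                    # new cell state
    h = o * np.tanh(c)                        # new hidden state
    return h, c
```

In character-by-character text generation, x would typically be a one-hot encoding of the current character and h would be projected to a distribution over the next character, which is then sampled and fed back in at the next step.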

Andrej Karpathy

Andrej is a 5th-year PhD student at Stanford University, working with Fei-Fei Li. His focus is on Deep Learning, with applications in Computer Vision, Natural Language Processing, and their intersection. He is visiting London during the summer as an intern at DeepMind.