Gradient descent continues to be our main workhorse for training neural networks. One recurring problem, though, is the large amount of data required. Meta learning frames the problem not as learning from a single large dataset, but as learning how to learn from multiple related, smaller datasets. In this talk we'll first discuss some key concepts around gradient descent: fine-tuning, transfer learning, joint training and catastrophic forgetting, and then compare them with simple meta learning techniques that can make optimisation feasible for much smaller datasets.
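As a taste of the kind of technique the talk covers, below is a minimal sketch of first-order meta learning in the style of Reptile (a simple meta learning method, named here as an illustration rather than taken from the talk): an outer loop learns a shared initialisation across many small, related regression tasks so that a handful of inner gradient-descent steps is enough to fit each one. The task generator, network sizes and learning rates are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def make_task(n=10):
    # One small, related dataset: a few points from y = a*sin(x + b).
    a, b = rng.uniform(0.5, 2.0), rng.uniform(0.0, np.pi)
    x = rng.uniform(-3.0, 3.0, size=(n, 1))
    return x, a * np.sin(x + b)

def inner_sgd(w, x, y, lr=0.02, steps=5):
    # Inner loop: a few ordinary gradient-descent steps on one tiny task.
    w = {k: v.copy() for k, v in w.items()}
    for _ in range(steps):
        h = np.tanh(x @ w["w1"] + w["b1"])        # hidden layer
        pred = h @ w["w2"] + w["b2"]              # network output
        err = (pred - y) / len(x)                 # dL/dpred for (0.5/N)*sum of squares
        g_w2, g_b2 = h.T @ err, err.sum(0)
        dh = (err @ w["w2"].T) * (1.0 - h ** 2)   # backprop through tanh
        g_w1, g_b1 = x.T @ dh, dh.sum(0)
        w["w1"] -= lr * g_w1; w["b1"] -= lr * g_b1
        w["w2"] -= lr * g_w2; w["b2"] -= lr * g_b2
    return w

# Meta-parameters: the initialisation we are "learning to learn" from.
meta = {"w1": rng.normal(0, 0.5, (1, 32)), "b1": np.zeros(32),
        "w2": rng.normal(0, 0.5, (32, 1)), "b2": np.zeros(1)}

meta_lr = 0.1
for step in range(2000):
    x, y = make_task()
    adapted = inner_sgd(meta, x, y)
    # Outer (Reptile) update: move the shared initialisation a little
    # towards the weights that worked for this particular task.
    for k in meta:
        meta[k] += meta_lr * (adapted[k] - meta[k])

After meta training, a new related task can be fitted with just the same few inner steps starting from "meta", rather than training from scratch on a large dataset.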
Practical Learning To Learn
Mat Kelcey
Machine Learning Principal, ThoughtWorks