A common assumption is that we need significant amounts of data in order to do deep learning. Many companies wanting to adopt AI find themselves stuck in the "data gathering" phase and, as a result, delay using AI to gain a competitive advantage in their business. But how much data is enough? Can we get by with less?
In this talk we will explore the impact on our results when we use different amounts of data to train a classification model. It is actually possible to get by with much less data than we might expect. We will discuss why this is so, the areas in which it applies, and how we can use these ideas to improve how we train and deploy our models and how we engage end users with them.
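To make the kind of experiment described above concrete, here is a minimal sketch (in Python with scikit-learn, not the speaker's code) of training the same small classifier on increasing fractions of a dataset and comparing held-out accuracy. The dataset, model, and fractions are illustrative assumptions only.

```python
# Sketch: how does test accuracy change as we train on more (or less) data?
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Small, bundled image-classification dataset (8x8 handwritten digits).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

for fraction in (0.01, 0.05, 0.1, 0.25, 0.5, 1.0):
    n = max(10, int(len(X_train) * fraction))  # keep at least a handful of examples
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_train[:n], y_train[:n])          # train on only the first n examples
    acc = clf.score(X_test, y_test)            # evaluate on the same held-out set
    print(f"{n:5d} training examples -> test accuracy {acc:.3f}")
```

Plotting accuracy against training-set size in this way typically shows sharply diminishing returns, which is one way to ask "how much data do we really need?" for a given problem.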
How Much Data do you _really_ need for Deep Learning?
Noon van der Silk
Senior Software Engineer, Fix Planet Club
I'm a long-time programmer who has recently become extremely passionate about and interested in the climate emergency. I've been working as a Haskell programmer for the last few years, after a somewhat diverse programming career across fields ranging from creative AI to teaching and quantum computing. I spend most of my time reading books and posting short reviews at https://betweenbooks.com.au/, and otherwise I'm really interested in community building, connecting people, kindness, and understanding how to build a sustainable business.