Google’s Deep Dream images hit the headlines last year as their surreal, psychedelic quality captured the public’s imagination. In this talk you will learn the surprisingly simple algorithm used to produce these images. You will discover how the convolutional neural networks behind them work, and find out about the “model zoo”, where ready-trained neural networks can be found. You will explore the underlying structure of these networks and how it affects the images generated. You will also learn how these models can be used to classify images – a perhaps more practical application of this technology.
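The core idea behind Deep Dream is gradient ascent on the input image: the image is repeatedly nudged in whatever direction amplifies a chosen layer's activation, so the network "hallucinates" more of what it already detects. The talk itself uses F# and pre-trained models; the sketch below is a hypothetical, simplified illustration in Python, with a toy quadratic "activation" standing in for a real convolutional layer, and all names are illustrative.

```python
# Toy sketch of the Deep Dream loop: gradient ascent on the input image.
# A real implementation would compute the gradient of a pre-trained
# network's layer activation; here a toy function (sum of squared
# pixel values) stands in for that layer.

def activation(image):
    """Stand-in for a network layer's response to the image."""
    return sum(p * p for p in image)

def gradient(image):
    """Gradient of the toy activation with respect to each pixel."""
    return [2 * p for p in image]

def deep_dream(image, step=0.1, iterations=10):
    """Repeatedly nudge the image so the activation grows."""
    for _ in range(iterations):
        grad = gradient(image)
        image = [p + step * g for p, g in zip(image, grad)]
    return image
```

Each iteration moves every pixel a small step along the gradient, so the activation strictly increases; with a real network the same loop amplifies the textures and shapes a given layer responds to, producing the characteristic dream-like imagery.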
Art and Neural Networks with F# - Audience Level: Beginner
Robert Pickering is a software engineer with an interest in using functional programming, particularly F#, to solve real-world problems.