The Fourier transform is a cornerstone of modern data analysis.
I will show a simple Bayesian statistical model for which the posterior probability distribution is equivalent to the output of the Fourier transform.
I will show how this model can be implemented and run in probabilistic programming languages.
Although this probabilistic approach is much slower than the Fast Fourier Transform, I will discuss some use cases in which it may be advantageous.
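To make the idea concrete, here is a minimal sketch (my own illustration, not the speaker's implementation) of how a "slow Fourier transform" can arise from Bayesian linear regression: with a Gaussian likelihood and a flat prior over the coefficients of a full cosine/sine basis, the posterior mean equals the ordinary least-squares fit, which coincides, up to the DFT's scaling convention, with the real and imaginary parts of the Fourier transform.

```python
import numpy as np

def slow_fourier(y):
    """Fit y as a sum of cosines and sines by least squares.

    Under a flat prior and Gaussian noise, the least-squares solution
    is exactly the posterior mean of the regression coefficients.
    """
    n = len(y)
    t = np.arange(n)
    ks = np.arange(n // 2 + 1)
    # Design matrix: one cosine and one sine column per frequency.
    phases = 2 * np.pi * np.outer(t, ks) / n
    X = np.concatenate([np.cos(phases), np.sin(phases)], axis=1)
    # Minimum-norm least squares (the sine columns at k = 0 and at
    # the Nyquist frequency are identically zero, so X is rank-deficient).
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[:len(ks)], beta[len(ks):]

rng = np.random.default_rng(0)
y = rng.standard_normal(64)
cos_coef, sin_coef = slow_fourier(y)
F = np.fft.rfft(y)
# For interior frequencies k, the fitted coefficients match the FFT
# up to scaling: cos_coef[k] = (2/n) Re F[k], sin_coef[k] = -(2/n) Im F[k].
```

A probabilistic-programming version of the same model would put explicit priors on the coefficients and sample the posterior, which is far slower than the FFT but lets you handle irregular sampling, missing data, or non-Gaussian noise by changing the model rather than the algorithm.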
Tom is a data science team leader building predictive analytics-based products, specialising in preference learning, visual analytics and marketing, using Bayesian and deep learning methods alongside probabilistic and functional programming. He has a background as an experimental and computational neuroscientist: Tom obtained his PhD in Neuroscience from University College London by pouring slimy stuff on brain cells, which, combined with a reaction-diffusion model, allowed him to measure biophysical properties of synapses. As a Research Associate at Harvard Medical School, and then the Universities of Leicester and Nottingham, he built microscopes and Domain Specific Languages in Haskell to control them. Working at the intersection of experimental, theoretical and methodological neuroscience has given him a uniquely creative perspective on data science. As the Chief Data Science Officer at a creative social agency, he led a team building deep learning models for predicting and enhancing the impact an image will have in specific marketing contexts, attribution models based on social media data, and a platform for delivering visual consumer advertising on social media. He is now working on an open source data science stack in Haskell.