SkillsCast

Introducing StackNet Meta-Modelling Framework - Intermediate

6th July 2017 in London at CodeNode

There are 42 other SkillsCasts available from Infiniteconf 2017, the conference on Big Data and Fast Data.


In 1992, Wolpert introduced the concept of a meta-model trained on the outputs of various generalisers, with the aim of minimizing the generalization error for a target variable. This methodology, named stacked generalization, was used successfully to improve performance in various tasks, including translating text to phonemes and improving the accuracy of a single surface-fitter [5]. Stacked generalization (or stacking) can be regarded as an ensemble modelling methodology, and different versions of it are extensively used today in many applications, research projects and data challenges. A prominent example is the winning solution to the Netflix Prize competition, which combined more than 500 machine learning models through a stacking schema to minimize the error in predicting the ratings users would assign to movies [2].
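
To make the idea concrete, here is a minimal two-level stacking sketch in Python with scikit-learn. This is illustrative only: StackNet itself is implemented in Java, and the dataset, models and parameters below are arbitrary stand-ins, not anything from the talk. Level-0 models are trained on the raw features, and a level-1 meta-model is trained on their out-of-fold predictions, following Wolpert's scheme.

```python
# A minimal two-level stacking sketch (illustrative; not StackNet itself).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict, train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Level-0 generalisers: the choices here are arbitrary stand-ins.
level0 = [RandomForestClassifier(n_estimators=100, random_state=0),
          LogisticRegression(max_iter=1000)]

# Out-of-fold predictions keep the meta-model from seeing level-0
# predictions made on the very rows those models were trained on.
meta_train = np.column_stack([
    cross_val_predict(m, X_train, y_train, cv=5, method="predict_proba")[:, 1]
    for m in level0])

# Refit each level-0 model on the full training set to score the test set.
meta_test = np.column_stack([
    m.fit(X_train, y_train).predict_proba(X_test)[:, 1] for m in level0])

# Level-1 meta-model trained on the level-0 outputs (Wolpert's scheme).
meta = LogisticRegression().fit(meta_train, y_train)
print("stacked accuracy:", accuracy_score(y_test, meta.predict(meta_test)))
```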

Neural networks were first created in an attempt to mimic the biological neural networks of the human brain [3]. These models are now used extensively in many prediction problems thanks to their multi-layer structures and their ability to generalize over a large number of parameters, which compensates for many of their early weaknesses (notably a tendency to overfit or underfit). Advances in computing power, and specifically the use of GPUs, have allowed such models to be trained at much greater speed [4], shaping today's deep learning.

Join Marios and learn about the StackNet model: a computational, scalable and analytical framework, implemented in Java, that resembles a feedforward neural network and uses Wolpert's stacked generalization across multiple levels to improve accuracy in classification problems. In contrast to a feedforward neural network, the network is not trained through back-propagation; instead it is built iteratively, one layer at a time, with each layer applying Wolpert's stacked generalization. StackNet's ability to improve accuracy is demonstrated by creating instances of StackNet models with different numbers of levels and architectures, which are then used to rank the likelihood that a given song was created before or after 2002, using 90 numerical attributes for each of 515,345 songs drawn from a subset of the Million Song Dataset [1].
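
The layer-by-layer construction generalizes the two-level example above: each level's out-of-fold probabilities become the input features of the next level. Below is a hedged sketch of that loop, again in Python/scikit-learn rather than StackNet's actual Java API; the function names, the `restack` flag and the model lists are illustrative assumptions.

```python
# Hedged sketch of a StackNet-style build: the network is grown one
# layer at a time, each layer stacked on the previous layer's outputs.
# Not StackNet's real API; names and options here are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_predict

def fit_stack(levels, X, y, cv=5, restack=False):
    """`levels` is a list of lists of scikit-learn classifiers, one
    inner list per level (hypothetical helper, not part of StackNet)."""
    current = X
    for models in levels:
        # Out-of-fold predictions so the next level never sees
        # predictions made on a model's own training rows.
        oof = np.column_stack([
            cross_val_predict(m, current, y, cv=cv,
                              method="predict_proba")[:, 1]
            for m in models])
        for m in models:            # refit on all rows for inference
            m.fit(current, y)
        # Optionally carry the current features forward alongside the
        # new meta-features (StackNet documents a similar "restacking"
        # option; this flag only approximates that idea).
        current = np.hstack([current, oof]) if restack else oof
    return levels

def predict_stack(levels, X, restack=False):
    """Push new rows through the fitted levels."""
    current = X
    for models in levels:
        probs = np.column_stack(
            [m.predict_proba(current)[:, 1] for m in models])
        current = np.hstack([current, probs]) if restack else probs
    return probs.mean(axis=1)   # average the final level's probabilities
```

With `restack=True` the original features stay visible to every level rather than only to the first, which can help when the earlier levels discard useful signal; this trade-off is one of the architectural choices the talk explores.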

[1] Bertin-Mahieux, T., Ellis, D. P., Whitman, B., & Lamere, P. (2011, October). The Million Song Dataset. In ISMIR (Vol. 2, No. 9, p. 10).

[2] Koren, Y. (2009). The BellKor solution to the Netflix Grand Prize. Netflix Prize documentation, 81, 1-10.

[3] Rosenblatt, F. (1958). The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386.

[4] Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85-117.

[5] Wolpert, D. H. (1992). Stacked generalization. Neural Networks, 5(2), 241-259.
