
Towards Neural Theorem Proving at Scale

Tuesday, 21st May at CodeNode, London

This meetup was organised by the London Data Science Journal Club in May 2019.

Theorem provers and logic programming languages are among the highlights of "classical" AI. They represent knowledge as a collection of facts in predicate form, and a query is answered by connecting it to those facts through the application of rules. The main weakness of these rule-based systems shows up when you try to learn the rules from data: "classical" rule-learning methods, such as Inductive Logic Programming (ILP), tend to be slow and do not generalise well.
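
For concreteness, the classical setup might look like the tiny Python sketch below (the facts, the rule, and all names are illustrative, not taken from the paper):

    # Knowledge base: ground facts in predicate form.
    FACTS = {
        ("parent", "abe", "homer"),
        ("parent", "homer", "bart"),
    }

    # Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    def grandparent(x, z):
        # Backward chaining for this single rule: the query grandparent(x, z)
        # succeeds if some Y connects it to the stored facts.
        candidates = {y for (_, _, y) in FACTS}
        return any(("parent", x, y) in FACTS and ("parent", y, z) in FACTS
                   for y in candidates)

    print(grandparent("abe", "bart"))   # True  (abe -> homer -> bart)
    print(grandparent("bart", "abe"))   # False (cannot be derived)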

In this paper the authors (Minervini, Bošnjak, Rocktäschel, and Riedel) build on their earlier work to create a faster rule-based and rule-learning system. The core of their approach is to represent each symbol (predicates and constants) as an embedding: applying a rule adds a layer to a neural network, and the resulting network is trained end to end with gradient descent.
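
A rough sketch of that idea, assuming the RBF-style symbol similarity from the authors' earlier Neural Theorem Prover work (the kernel, the embedding size, and the toy vocabulary below are illustrative assumptions, not the authors' code):

    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 8  # embedding dimension (an arbitrary choice for this sketch)

    # Every symbol, predicates and constants alike, gets a trainable vector.
    emb = {s: rng.normal(size=DIM)
           for s in ["parent", "grandparentOf", "abe", "homer", "bart"]}

    def soft_unify(a, b):
        # Replace the hard symbol-equality test of classical unification with
        # a similarity in (0, 1]; identical embeddings score exactly 1.0.
        # Because the score is differentiable, proof scores can be trained by
        # gradient descent.
        return float(np.exp(-np.linalg.norm(emb[a] - emb[b]) ** 2))

    print(soft_unify("parent", "parent"))         # 1.0: exact match
    print(soft_unify("grandparentOf", "parent"))  # < 1.0, learnable

Proving a query compares it softly against the stored facts, and each rule application stacks another round of comparisons, i.e. another network layer; the "at scale" part of the paper is, roughly, about pruning these all-pairs comparisons (for example with nearest-neighbour search) so that proofs stay tractable on large knowledge bases.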

Prior work, useful for understanding this paper:

End-to-End Differentiable Proving (Rocktäschel and Riedel, 2017): https://arxiv.org/abs/1705.11040

https://www.youtube.com/watch?v=2ovZnvVPiQ8&feature=youtu.be

A set of slides about explainable AI (the relevant part is written by Pasquale Minervini): https://github.com/xaitutorial2019/xaitutorial2019.github.io/raw/master/slides/aaai2019xai_tutorial.pdf

A note about the Journal Club format:

  1. The sessions usually start with a 5-10 minute introduction to the paper by the topic volunteer, after which we split into smaller groups to discuss the paper and other materials. We finish by coming back together for about 15 minutes to share what we have learned as a group and take questions from around the room.

  2. There is no speaker at Journal Club. A member of the community volunteers their time to suggest the topic and open the session, but most of the discussion comes from within the groups.

  3. You will get more benefit from the session if you read the paper or other materials in advance. We try to provide (where we can find them) accompanying blog posts, relevant videos, and slides.
