

Conscious processing, inductive biases and generalization in deep learning

Thursday, February 17, 2022, 10:30 to 11:30
Zoom (online)

DIC-ISC-CRIA Seminar at UQAM

Speaker: Yoshua Bengio, Université de Montréal

Abstract:

Humans are very good at “out-of-distribution” generalization compared to current AI systems. It would be useful to determine the inductive biases they exploit and translate them into machine-learning architectures, training frameworks and experiments. I will discuss several of these hypothesized inductive biases. Many exploit notions in causality and connect abstractions in representation learning (perception and interpretation) with reinforcement learning (abstract actions). Systematic generalization may arise from an efficient factorization of knowledge into recomposable pieces. This is partly related to symbolic AI (as seen in the errors and limitations of reasoning in humans, as well as in our ability to learn to do this at scale, with distributed representations and efficient search). Sparsity of the causal graph and locality of interventions, both observable in the structure of sentences, may reduce the computational complexity of both inference (including planning) and learning. This may be why evolution incorporated this form of processing as “consciousness.” I will also suggest some open research questions to stimulate further research and collaborations.
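
As a rough, illustrative aside (not part of the talk material), the minimal Python sketch below shows the kind of counting argument behind the sparsity claim in the abstract: an unstructured joint distribution over n binary variables has on the order of 2^n free parameters, whereas one that factors along a sparse causal graph in which each variable has at most k parents needs only about n·2^k. The function names and the example values of n and k are hypothetical.

# Illustrative sketch (hypothetical, not from the talk): parameter counts for a
# distribution over n binary variables, unstructured vs. factored along a
# sparse causal graph where each variable has at most k parents.

def dense_param_count(n: int) -> int:
    # Free parameters of an unstructured joint distribution over n binary variables.
    return 2 ** n - 1

def sparse_param_count(n: int, k: int) -> int:
    # Upper bound when each variable's conditional table depends on at most k parents.
    return n * 2 ** k

if __name__ == "__main__":
    n, k = 20, 3  # hypothetical sizes, chosen only for illustration
    print(dense_param_count(n))      # 1048575
    print(sparse_param_count(n, k))  # 160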

Bio:

Recognized worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is best known for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award, “the Nobel Prize of Computing,” shared with Geoffrey Hinton and Yann LeCun.

He is a Full Professor at Université de Montréal, and the Founder and Scientific Director of Mila – Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as a Senior Fellow and acts as Scientific Director of IVADO. He is also an alumnus of the Centre for Intelligent Machines.
