Measuring and Improving the Internal Conceptual Representations of Deep Learning


Department of Electrical and Computer Engineering

Location: Burchard 430

Speaker: Ramakrishna Vedantam | Researcher, Technologist and Innovator

Abstract

Endowing machines with abstract, flexible conceptual representations, and the ability to combine known concepts to make novel “conceptual leaps,” is a long-standing goal of artificial intelligence (AI). In pursuit of this goal, I will discuss my work on the foundations of concept learning for deep learning models. In particular, I will focus on: multimodal learning (to ground concept representations more precisely in the world), quantifying robustness (to assess whether atomic concepts are learned correctly), and machine reasoning (to combine known atomic concepts into novel, emergent ones). Finally, I will speculate on important research directions for realizing the promise of general, robust, and human-interpretable AI systems.

Biography


Ramakrishna Vedantam is a researcher, technologist, and innovator who has made fundamental contributions to several subfields of AI, including multimodal learning, representation learning, and core machine learning. He is best known for devising the CIDEr metric for multimodal evaluation and the GradCAM method for interpretability. His research has been cited 28,000 times, and he has received honors including the Google PhD Fellowship in 2018.