
Lecture 7

Learning Symbols for Trustworthy AI


Abstract:

Recent advances in deep learning have led to novel AI-based solutions to challenging computational problems. Yet state-of-the-art models do not provide reliable explanations of how they make decisions, and they can make occasional mistakes even on simple problems. The resulting lack of assurance and trust is an obstacle to their adoption in safety-critical applications. Neurosymbolic learning architectures aim to address this challenge by bridging the complementary worlds of deep learning and logical reasoning via explicit symbolic representations. In this talk, I will describe representative neurosymbolic systems and show how they enable more accurate, interpretable, and domain-aware solutions to problems in healthcare and robotics.


Biodata of the Speaker:

An alumnus of IIT Kanpur and Stanford, Prof. Rajeev Alur is one of the world's leaders in the field of "Design and Analysis of Safe and Trustworthy Systems". He has won several accolades, including the prestigious Knuth Prize, the inaugural Alonzo Church Award, the IIT Kanpur Distinguished Alumnus Award, and the inaugural CAV (Computer Aided Verification) Award. He is currently the Zisman Family Professor of Computer and Information Science and the Founding Director of the ASSET Center for Trustworthy AI at the University of Pennsylvania.


Eyes-on-Research | Inflections In Computing