
6548 Forest Park Pkwy, St. Louis, MO 63112, USA

Dr. Siddharth Srivastava
Assistant Professor of Computing & Augmented Intelligence, Arizona State University

Can we balance efficiency and reliability while designing assistive AI systems? What would such AI systems need to provide? In this talk I will present some of our recent work addressing these questions. In particular, I will show that a few fundamental principles of abstraction are surprisingly effective in designing efficient and reliable AI systems that can plan and act over multiple timesteps. Our results show that abstraction mechanisms are invaluable not only for improving the efficiency of sequential decision making, but also for developing AI systems that can explain their own behavior to non-experts and for computing user-interpretable assessments of the limits and capabilities of black-box AI systems. I will also present some of our work on learning the requisite abstractions in a bottom-up fashion. Throughout the talk I will highlight the theoretical guarantees that our methods provide, along with results from empirical evaluations featuring decision-support/digital AI systems and physical robots.



Register in advance for this meeting.

After registering, you will receive a confirmation email containing information about joining the meeting.
