Anqi (Angie) Liu
Postdoctoral scholar research associate, Computing and Mathematical Sciences, California Institute of Technology

To create trustworthy AI systems, we must safeguard machine learning methods from catastrophic failures. For example, before deploying safety-critical systems such as autonomous driving and health care in the real world, we must account for uncertainty and guarantee performance. A key challenge in such real-world applications is that the test cases are not well represented by the pre-collected training data. To properly leverage learning in these domains, we must go beyond the conventional learning paradigm of maximizing average prediction accuracy, whose generalization guarantees rely on strong distributional relationships between training and test examples.

In this talk, I will describe a distributionally robust learning framework that offers accurate uncertainty quantification and rigorous guarantees under data distribution shift. The framework yields appropriately conservative yet still accurate predictions to guide real-world decision-making, and it integrates easily with modern deep learning. I will showcase its practicality in applications to agile robotic control and computer vision, and I will survey other real-world applications that could benefit from this framework as future work.
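To give a flavor of the idea, a common ingredient in learning under distribution shift is the density ratio between the test and training input distributions. The toy sketch below (an illustration only, not the speaker's actual method; the base model, the Gaussian densities, and the shrinkage rule are all assumptions) shrinks a base classifier's probabilities toward the uniform distribution wherever the density ratio signals sparse training coverage, yielding the kind of appropriately conservative predictions the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D covariate shift: training inputs centered at 0, test inputs at 2.
x_train = rng.normal(0.0, 1.0, 500)
x_test = rng.normal(2.0, 1.0, 500)

def gauss_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Density ratio w(x) = p_test(x) / p_train(x); known exactly in this toy setup
# (in practice it would have to be estimated from samples).
w = gauss_pdf(x_test, 2.0, 1.0) / gauss_pdf(x_test, 0.0, 1.0)

# A hypothetical base binary classifier: a simple logistic model of x.
base_prob = 1.0 / (1.0 + np.exp(-x_test))

# Conservative adjustment: trust the base model less (alpha -> 0) where the
# density ratio is large, i.e., where training data poorly covers the input,
# and interpolate its prediction toward the uninformative value 0.5 there.
alpha = 1.0 / (1.0 + w)
robust_prob = alpha * base_prob + (1.0 - alpha) * 0.5

# On the more heavily shifted inputs, predictions sit closer to 0.5,
# i.e., the model admits more uncertainty exactly where data is scarce.
shifted = w > np.median(w)
print(np.abs(robust_prob[shifted] - 0.5).mean(),
      np.abs(robust_prob[~shifted] - 0.5).mean())
```

The design point this illustrates is that conservatism is targeted: predictions stay sharp where training data supports them and back off toward uniform only on poorly covered inputs, rather than being uniformly less confident everywhere.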


Register in advance for this meeting.

After registering, you will receive a confirmation email containing information about joining the meeting.
