Linyi Li 

PhD Candidate

Department of Computer Science

University of Illinois Urbana-Champaign

With the wide deployment of deep learning (DL) systems, their lack of trustworthiness (robustness, fairness, numerical reliability, etc.) is raising serious social concerns, especially in safety-critical scenarios such as autonomous driving, aircraft navigation, and facial recognition. Hence, a rigorous and accurate evaluation of the trustworthiness of DL systems is critical before their large-scale deployment.

In this talk, I will introduce my research on certifying critical trustworthiness properties of large-scale DL systems. Drawing on techniques from optimization, cybersecurity, and software engineering, my work computes rigorous worst-case bounds that characterize the degree of trustworthiness of a given DL system, and further improves those bounds via strategic training. Specifically, I will present two representative frameworks. (1) DSRS is the first framework with theoretically optimal certification tightness; together with our training method DRT and the accompanying open-source tools VeriGauge and alpha-beta-CROWN, it is the state-of-the-art and award-winning solution for achieving DL robustness against constrained perturbations. (2) TSS is the first framework for building and certifying large DL systems with high accuracy against semantic transformations; it has opened a line of subsequent research on guaranteeing semantic robustness for a variety of downstream DL and AI applications. I will conclude the talk with a roadmap that outlines several core research questions and future directions in trustworthy machine learning.
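As background on how such worst-case robustness bounds are typically computed, the sketch below illustrates a standard randomized-smoothing certificate in the style of Cohen et al. (2019), the kind of certificate that smoothing-based frameworks such as DSRS start from and tighten. It is a minimal illustration under stated assumptions, not the speaker's DSRS procedure: the classify callable, the noise level sigma, and the sample counts are hypothetical placeholders.

```python
# Minimal sketch of a Cohen-et-al.-style randomized smoothing certificate.
# `classify` is a hypothetical base classifier mapping an input array to a class index.
import numpy as np
from scipy.stats import beta, norm

def certify(classify, x, sigma=0.5, n=1000, alpha=0.001, num_classes=10):
    """Estimate the smoothed prediction g(x) = argmax_c P[classify(x + N(0, sigma^2 I)) = c]
    and a certified L2 radius within which that prediction provably cannot change."""
    # Monte Carlo: count how often each class wins under Gaussian input noise.
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        noisy = x + sigma * np.random.randn(*x.shape)
        counts[classify(noisy)] += 1

    top = int(counts.argmax())
    k = int(counts[top])

    # One-sided Clopper-Pearson lower confidence bound on the top-class probability pA.
    # (Simplified: Cohen et al. use separate sample sets for class selection and estimation.)
    pA_lower = beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0
    if pA_lower <= 0.5:
        return top, 0.0  # abstain: no nontrivial certificate

    # Certified radius: any perturbation with ||delta||_2 < radius keeps the smoothed prediction.
    radius = sigma * norm.ppf(pA_lower)
    return top, radius
```

The certificate is probabilistic (it holds with confidence 1 - alpha) and its tightness depends on how well the lower bound on the top-class probability can be estimated, which is precisely the kind of gap that tighter certification frameworks aim to close.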
