Ravi Mangal 
Postdoctoral Researcher
CyLab
Carnegie Mellon University
 

As deep neural networks (DNNs) demonstrate growing capabilities to solve complex tasks, there is a push to incorporate them as components in software and cyber-physical systems. To reap the benefits of these learning-enabled systems without propagating harms, there is an urgent need for tools and methodologies to evaluate their safety. Formal methods are a powerful set of tools for analyzing the behavior of software systems. However, formal analysis of learning-enabled systems is challenging: DNNs are notoriously difficult to interpret and lack logical specifications; the environments in which these systems operate can be hard to model mathematically; and existing formal methods do not scale to such complex systems.
 
In this talk, I will present a bottom-up and a top-down perspective on the analysis of such systems. The bottom-up perspective focuses on analyzing DNNs in isolation. To address the challenges in interpreting and specifying DNN behavior, I will present a logical specification language designed to facilitate writing specifications about vision-based DNNs in terms of high-level, human-understandable concepts. I will then demonstrate how we can leverage vision-language models such as CLIP to encode and check these specifications. The top-down perspective focuses on analyzing learning-enabled systems as a whole. To address the challenges in modeling the environment and scaling formal analysis, I will present new probabilistic abstractions for DNN-based perception components in learning-enabled cyber-physical systems that make it feasible to formally analyze such systems.
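To make the bottom-up idea concrete, the following is a minimal sketch (not the talk's actual formalism) of how a CLIP zero-shot score could be used to check an implication-style specification phrased over high-level concepts. It uses the OpenAI clip package; the concept prompts, the threshold, the spec itself, and the file name are illustrative assumptions.

    # Sketch: checking a concept-level specification with CLIP.
    # Illustrative spec: "if the scene shows a red traffic light,
    # the controller's DNN must output the action 'brake'".
    import torch
    import clip  # OpenAI CLIP: https://github.com/openai/CLIP
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    # High-level concepts, phrased as natural-language prompts.
    prompts = clip.tokenize([
        "a photo of a red traffic light",
        "a photo of a green traffic light",
    ]).to(device)

    def concept_holds(image_path: str, concept_idx: int,
                      threshold: float = 0.8) -> bool:
        """Decide via CLIP's zero-shot scores whether a concept holds."""
        image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
        with torch.no_grad():
            logits_per_image, _ = model(image, prompts)
            probs = logits_per_image.softmax(dim=-1).squeeze(0)
        return probs[concept_idx].item() >= threshold

    def spec_satisfied(image_path: str, dnn_action: str) -> bool:
        """Implication-style spec: red light => DNN action is 'brake'."""
        if concept_holds(image_path, concept_idx=0):
            return dnn_action == "brake"
        return True  # precondition false, so the spec holds vacuously

The point of the sketch is that the specification mentions only the human-understandable concept ("red traffic light"), while CLIP supplies the mapping from pixels to that concept.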
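For the top-down perspective, here is one common form a probabilistic abstraction of a perception component can take: an empirical distribution of estimation errors conditioned on the true state, learned from test data and used in place of the DNN during closed-loop analysis. Everything below (the binning scheme, the synthetic data, the error model) is an illustrative assumption, not the construction presented in the talk.

    # Sketch: a probabilistic abstraction of a perception DNN,
    # assumed form P(estimate | true state), fit from labeled data.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical test data: true distances and the DNN's estimates.
    true_dist = rng.uniform(0.0, 50.0, size=10_000)
    est_dist = true_dist + rng.normal(0.0, 1.5, size=true_dist.shape)

    # Abstraction: bin the state space and record, per bin, the
    # empirical distribution of estimation errors.
    bins = np.linspace(0.0, 50.0, 11)
    bin_idx = np.digitize(true_dist, bins) - 1
    error_dist = {b: est_dist[bin_idx == b] - true_dist[bin_idx == b]
                  for b in range(10)}

    def abstract_perception(state: float) -> float:
        """Sample an estimate from the abstraction, not the DNN."""
        b = min(max(int(np.digitize(state, bins)) - 1, 0), 9)
        return state + rng.choice(error_dist[b])

    # Closed-loop analysis (e.g., Monte Carlo rollouts of a braking
    # controller) can now call abstract_perception(x) instead of
    # running the full vision pipeline, which is what makes the
    # system-level analysis tractable.

The design idea is that the abstraction is small and stochastic, so it composes with a model of the system dynamics, whereas the original DNN does not.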
 

Talk Location: Jubel 121