About this Event
Title: Resilient Safe Control of Autonomous Systems
Abstract: Asimov's Three Laws of Robotics famously outlined fundamental safety principles governing robot-human interaction. This foundational notion of safety remains paramount for today's autonomous systems, such as robots, which are inherently cyber-physical. As autonomous systems are deployed ever more widely in real-world environments, the need for formal safety verification has only grown. Yet end-to-end verification of such complex, integrated systems remains an open and formidable challenge due to their high dimensionality, nonlinearity, and use of learning-based components. This thesis approaches the challenge by pursuing verifiably safe autonomy from two complementary directions: (i) safe control of learning-enabled systems with formal guarantees and (ii) resilient safe control that maintains those guarantees under extreme scenarios such as sensor faults and cyber-physical attacks.
The first half of this dissertation presents the formal verification of autonomous systems that integrate learning-enabled components. It begins with the safety verification of neural control barrier functions (NCBFs) with Rectified Linear Unit (ReLU) activations. By leveraging a generalization of Nagumo's theorem, we derive exact safety conditions for deterministic systems. To manage computational complexity, we improve the efficiency of verification and synthesis using a search algorithm built on neural network verification (VNN) tools and a neural breadth-first search algorithm. We further extend the synthesis and verification of safe control to stochastic systems.
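For readers unfamiliar with control barrier functions, the safety condition underlying this line of work can be sketched in standard notation (this is the textbook CBF inequality, not a formula taken from the thesis itself):

```latex
% Control-affine system: \dot{x} = f(x) + g(x)u, with safe set
% \mathcal{C} = \{ x : b(x) \ge 0 \} defined by a barrier function b.
% b is a control barrier function if, for some extended class-K function \alpha,
\[
\sup_{u \in \mathcal{U}} \Big[ L_f b(x) + L_g b(x)\, u + \alpha\big(b(x)\big) \Big] \;\ge\; 0
\quad \text{for all } x \in \mathcal{C},
\]
% where L_f b and L_g b are Lie derivatives of b along f and g.
% Nagumo's theorem gives the boundary intuition: on \partial\mathcal{C}
% (where b(x) = 0), a control keeping \dot{b}(x) \ge 0 prevents the
% trajectory from leaving the safe set, so \mathcal{C} is forward invariant.
```

An NCBF represents b as a neural network, and verifying this inequality over the ReLU network's piecewise-linear regions is what makes exact verification computationally demanding.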
The second half of this dissertation broadens the scope of end-to-end verification by explicitly accounting for imperfections and perturbations. We first propose fault-tolerant stochastic CBFs and NCBFs that provide safety guarantees for autonomous systems under state estimation errors caused by low-dimensional sensor faults and attacks. We then investigate the unique challenges posed by attacks on Light Detection and Ranging (LiDAR) perception, proposing a fault detection, identification, and isolation mechanism for 2D and 3D LiDAR together with safe control under attack.