Recent advances in machine learning and AI, computer vision, and control theory offer a tremendous opportunity to deploy autonomous robot systems in uncharted environments to accomplish complex missions. Such tasks are particularly challenging because they require robots to operate in environments with unknown structure, degraded environmental conditions, severe communication and sensing constraints, and expansive areas of operation.
With the goal of ultimately designing safe, robust, and autonomous robotic systems that can operate under these conditions, in the first part of this talk I will present a novel autonomy control loop that combines learning-based perception, area mapping, and a novel planning method, allowing teams of robots to safely accomplish complex missions in unknown but continuously learned environments. In particular, the proposed method generates reactive control actions that adapt to an environmental map that is continuously updated by learning-based perception systems. Theoretical and experimental results supporting the proposed method will be presented.
Although deep learning for perception has become one of the main sensing modalities in autonomous robots, adversarial attacks on neural networks have exposed the brittleness of learning-based perception components. Motivated by this, in the second part of the talk I will briefly present a novel defense method that can detect whether input images to perception systems are clean or have been adversarially manipulated (e.g., by placing adversarial stickers on traffic signs). Experiments on real-world data will be presented, showing that the proposed method outperforms state-of-the-art methods in terms of scalability and detection performance.