This talk presents a new design paradigm, called "learning-based control", that is fundamentally different from both traditional model-based control and model-free machine learning. Learning-based control aims to learn real-time optimal controllers directly from input-output data, ensuring the stability and robustness of dynamical systems in uncertain environments. Novel tools and methods for data-driven control are proposed by intertwining techniques from reinforcement learning and control theory. The effectiveness of learning-based control design is demonstrated through applications to network systems, such as connected and autonomous vehicles, and to neuroscience problems, such as the computational principles of human movement.