About this Event
Artificial Intelligence (AI), particularly Reinforcement Learning (RL), has achieved great success in domains such as gameplay. However, RL suffers from scalability and reliability issues that make it challenging to apply in safety-critical, large-scale systems such as power grids, transportation networks, and smart cities. In this talk, we show that integrating RL with model structure and model-based control can address both issues. In the first part of the talk, we consider a networked multi-agent setting and propose a Scalable Actor Critic framework that provably addresses the scalability issue of multi-agent RL. The key is to exploit a form of local interaction structure widely present in networked systems. In the second part, we consider a nonlinear control setting where the dynamics admit an approximate linear model, as is the case for many systems such as the power grid. We show that exploiting the approximate linear model together with model-based control can greatly improve the reliability of an important class of RL algorithms.