Safe Reinforcement Learning for Continuous Spaces through Lyapunov-Constrained Behavior
Journal article, Peer reviewed
Permanent link: http://hdl.handle.net/11250/2430386
Publication date: 2011
Collections:
- Publications from CRIStin - SINTEF AS
- SINTEF Digital
Original version: Frontiers in Artificial Intelligence and Applications. 2011, 70-79.

Abstract
This paper presents a safe learning strategy for continuous state and action spaces that exploits the Lyapunov stability properties of the studied systems. The reinforcement learning algorithm Continuous Actor Critic Learning Automaton (CACLA) is combined with the notion of control Lyapunov functions (CLF) to restrict learning and exploration to the stability region of the system, ensuring safe operation at all times. The paper extends previous results for discrete action sets to take advantage of more general continuous action sets, and shows that the continuous method finds a solution comparable to the best discrete action choices while avoiding the need for good heuristic choices in the design process.
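The combination the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a hypothetical 1-D linear system, a quadratic candidate CLF V(x) = x², and a known stabilizing fallback controller; the CLF filter accepts an explored action only if it does not increase V, so exploration stays inside the stability region.

```python
import numpy as np

# Hypothetical 1-D linear system x' = a*x + b*u, used purely for illustration;
# the paper's actual benchmark systems are not specified here.
A, B = 1.1, 1.0
DT = 0.05

def clf(x):
    """Quadratic candidate control Lyapunov function V(x) = x^2."""
    return x ** 2

def is_safe(x, u):
    """CLF filter: accept u only if V does not increase along the dynamics."""
    x_next = x + DT * (A * x + B * u)
    return clf(x_next) <= clf(x)

def cacla_step(x, actor_w, critic_w, sigma=0.3, alpha=0.1, gamma=0.95):
    """One CACLA update with a CLF safety filter on the explored action.

    CACLA explores with Gaussian noise around the actor's action and moves
    the actor toward the taken action only when the TD error is positive.
    """
    u_actor = actor_w * x
    u = u_actor + np.random.randn() * sigma
    if not is_safe(x, u):
        u = -2.0 * x  # assumed known stabilizing controller as safe fallback
    x_next = x + DT * (A * x + B * u)
    r = -x_next ** 2  # quadratic cost expressed as reward
    # Linear-in-features critic V_w(x) = critic_w * x^2
    td = r + gamma * critic_w * x_next ** 2 - critic_w * x ** 2
    critic_w += alpha * td * x ** 2
    if td > 0:  # CACLA rule: update actor toward the action actually taken
        actor_w += alpha * (u - u_actor) * x
    return x_next, actor_w, critic_w
```

Because every applied action passes the CLF test (or is replaced by the stabilizing fallback), V is non-increasing along the learned trajectory, which is the safety guarantee the abstract refers to.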