Safe Reinforcement Learning for Continuous Spaces through Lyapunov-Constrained Behavior
Journal article, Peer reviewed
Original version: Frontiers in Artificial Intelligence and Applications. 2011, 70-79.
This paper presents a safe learning strategy for continuous state and action spaces that exploits Lyapunov stability properties of the studied systems. The reinforcement learning algorithm Continuous Actor-Critic Learning Automaton (CACLA) is combined with the notion of control Lyapunov functions (CLFs) to restrict learning and exploration to the stability region of the system, ensuring safe operation at all times. The paper extends previous results for discrete action sets to the more general case of continuous action sets, and shows that the continuous method finds solutions comparable to the best discrete action choices while avoiding the need for good heuristic choices in the design process.
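To illustrate the idea behind combining CACLA with a CLF-based safety filter, the sketch below pairs a standard CACLA update (actor moved toward the exploratory action only when the temporal-difference error is positive) with an exploration filter that accepts an action only if it decreases a Lyapunov function candidate. The 1-D dynamics, the quadratic CLF, the linear actor/critic, and all gains are hypothetical choices for illustration and are not taken from the paper:

```python
import numpy as np

# Toy 1-D system x_{t+1} = x_t + 0.1 * a (hypothetical, not from the paper)
def step(x, a):
    return x + 0.1 * a

def V(x):
    # Control Lyapunov function candidate V(x) = x^2
    return x * x

def safe(x, a):
    # CLF decrease condition: accept the action only if it strictly
    # decreases V, keeping exploration inside the stability region.
    return V(step(x, a)) < V(x)

rng = np.random.default_rng(0)
w_actor, w_critic = 0.0, 0.0               # linear actor a = w_actor * x; critic V_hat = w_critic * x^2
alpha, beta, gamma, sigma = 0.05, 0.1, 0.95, 0.5

traj = []                                   # Lyapunov values along the last episode
for episode in range(200):
    x = rng.uniform(-1.0, 1.0)
    traj = [V(x)]
    for t in range(50):
        # Gaussian exploration around the actor's action, resampled until
        # the CLF filter accepts it; fall back to a known stabilizing action.
        a = w_actor * x + sigma * rng.normal()
        for _ in range(20):
            if safe(x, a):
                break
            a = w_actor * x + sigma * rng.normal()
        else:
            a = -x                          # fallback: x_{t+1} = 0.9 x, always safe
        x_next = step(x, a)
        r = -x_next * x_next                # reward: drive the state to the origin
        # Temporal-difference error with the linear critic on the feature x^2
        delta = r + gamma * w_critic * x_next**2 - w_critic * x**2
        w_critic += beta * delta * x**2
        # CACLA rule: update the actor toward the explored action only if delta > 0
        if delta > 0:
            w_actor += alpha * (a - w_actor * x) * x
        x = x_next
        traj.append(V(x))

print(round(abs(x), 3))
```

Because every executed action satisfies the decrease condition (or the stabilizing fallback is used), the Lyapunov value is non-increasing along each trajectory regardless of how immature the learned policy is, which is the safety guarantee the filter provides during learning.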