
dc.contributor.author: Fjerdingen, Sigurd Aksnes
dc.contributor.author: Kyrkjebø, Erik
dc.date.accessioned: 2017-02-13T09:43:08Z
dc.date.available: 2017-02-13T09:43:08Z
dc.date.created: 2012-02-15T14:51:56Z
dc.date.issued: 2011
dc.identifier.citation: Frontiers in Artificial Intelligence and Applications. 2011, 70-79. [nb_NO]
dc.identifier.issn: 0922-6389
dc.identifier.uri: http://hdl.handle.net/11250/2430386
dc.description.abstract: This paper presents a safe learning strategy for continuous state and action spaces that utilizes Lyapunov stability properties of the studied systems. The reinforcement learning algorithm Continuous Actor-Critic Learning Automaton (CACLA) is combined with the notion of control Lyapunov functions (CLF) to constrain learning and exploration to the stability region of the system, ensuring safe operation at all times. The paper extends previous results for discrete action sets to take advantage of the more general continuous action sets, and shows that the continuous method finds a solution comparable to the best discrete action choices while avoiding the need for good heuristic choices in the design process. [nb_NO]
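
The record contains no code, but the safety mechanism the abstract outlines (gating continuous exploration through a control Lyapunov function so that only actions which keep V decreasing are executed, with a stabilizing fallback otherwise) can be sketched in a few lines. Everything below, including the scalar dynamics in `step`, the quadratic `V`, the `safe_fallback` law, the actor, and the noise level, is an illustrative assumption standing in for the paper's actual system and CACLA implementation.

```python
import random

def step(x, u, dt=0.1):
    """Euler step of the assumed scalar system x-dot = u."""
    return x + dt * u

def V(x):
    """Candidate control Lyapunov function V(x) = x^2 (assumed)."""
    return x * x

def safe_fallback(x):
    """A known stabilizing controller u = -2x (assumed)."""
    return -2.0 * x

def clf_filtered_action(x, actor, sigma=0.5):
    """CACLA-style Gaussian exploration, kept only when V decreases.

    The explored action u is accepted when V(step(x, u)) < V(x); otherwise
    the stabilizing fallback overrides it, keeping the system inside the
    stability region throughout learning.
    """
    u = actor(x) + random.gauss(0.0, sigma)  # continuous exploration
    if V(step(x, u)) < V(x):                 # CLF decrease condition
        return u, True                       # safe: keep explored action
    return safe_fallback(x), False           # unsafe: override

# Usage: a few gated exploration steps around a linear actor, from x = 1.
actor = lambda x: -x
x = 1.0
for _ in range(5):
    u, accepted = clf_filtered_action(x, actor)
    x = step(x, u)
    print(f"x={x:+.3f}  u={u:+.3f}  explored action kept: {accepted}")
```

The design point mirrors the abstract: exploration noise stays continuous (no discretized action set), and safety is enforced by the CLF gate rather than by restricting the learner itself.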
dc.language.iso: eng [nb_NO]
dc.title: Safe Reinforcement Learning for Continuous Spaces through Lyapunov-Constrained Behavior [nb_NO]
dc.type: Journal article [nb_NO]
dc.type: Peer reviewed [nb_NO]
dc.source.pagenumber: 70-79 [nb_NO]
dc.source.journal: Frontiers in Artificial Intelligence and Applications [nb_NO]
dc.identifier.cristin: 909648
cristin.unitcode: 7401,90,23,0
cristin.unitname: Anvendt kybernetikk
cristin.ispublished: true
cristin.fulltext: postprint
cristin.qualitycode: 1

