Research Report, 2013

Off-policy Learning with Eligibility Traces: A Survey

Matthieu Geist
Bruno Scherrer

Abstract

In the framework of Markov Decision Processes, we consider off-policy learning, that is, the problem of learning a linear approximation of the value function of some fixed policy from a single trajectory possibly generated by some other policy. We briefly review the on-policy learning algorithms of the literature (gradient-based and least-squares-based), adopting a unified algorithmic view. We then highlight a systematic approach for adapting them to off-policy learning with eligibility traces. This leads to some known algorithms - off-policy LSTD(λ), LSPE(λ), TD(λ), TDC/GQ(λ) - and suggests new extensions - off-policy FPKF(λ), BRM(λ), gBRM(λ), GTD2(λ). We describe a comprehensive algorithmic derivation of all algorithms in a recursive and memory-efficient form, discuss their known convergence properties, and illustrate their relative empirical behavior on Garnet problems. Our experiments suggest that the most standard algorithms - on- and off-policy LSTD(λ)/LSPE(λ), and TD(λ) if the feature-space dimension is too large for a least-squares approach - perform the best.
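As an illustration of the gradient-based family mentioned above, here is a minimal sketch of off-policy TD(λ) with linear function approximation and per-decision importance-sampling correction of the eligibility trace. This is one common formulation, not the paper's exact derivation; the names phi, target_prob and behaviour_prob are illustrative assumptions.

import numpy as np

def off_policy_td_lambda(trajectory, phi, target_prob, behaviour_prob,
                         n_features, gamma=0.95, lam=0.9, alpha=0.01):
    """Estimate theta such that V_pi(s) is approximately theta . phi(s).

    trajectory     : iterable of (s, a, r, s_next) generated by the behaviour policy
    phi            : feature map, s -> np.ndarray of shape (n_features,)
    target_prob    : pi(a | s), action probability under the evaluated (target) policy
    behaviour_prob : mu(a | s), action probability under the behaviour policy
    """
    theta = np.zeros(n_features)   # linear value-function weights
    e = np.zeros(n_features)       # eligibility trace

    for (s, a, r, s_next) in trajectory:
        rho = target_prob(a, s) / behaviour_prob(a, s)            # importance ratio
        delta = r + gamma * theta @ phi(s_next) - theta @ phi(s)  # TD error
        e = rho * (gamma * lam * e + phi(s))                      # corrected trace
        theta = theta + alpha * delta * e                         # stochastic update
    return theta

The least-squares members of the surveyed family (LSTD(λ), LSPE(λ)) instead maintain sufficient statistics (a matrix and a vector) updated recursively, trading more computation per step for better sample efficiency.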
Main file: jmlr.pdf (1022.28 KB)
Origin: files produced by the author(s)

Dates and versions

hal-00644516 , version 1 (24-11-2011)
hal-00644516 , version 2 (12-04-2013)

Identifiers

HAL Id: hal-00644516

Cite

Matthieu Geist, Bruno Scherrer. Off-policy Learning with Eligibility Traces: A Survey. [Research Report] 2013, pp.43. ⟨hal-00644516v2⟩

