Journal article. Journal of Machine Learning Research, 2014

Off-policy Learning with Eligibility Traces: A Survey

Matthieu Geist, Bruno Scherrer

Abstract

In the framework of Markov Decision Processes, we consider linear \emph{off-policy} learning, that is, the problem of learning a linear approximation of the value function of some fixed policy from one trajectory possibly generated by some other policy. We briefly review \emph{on-policy} learning algorithms of the literature (gradient-based and least-squares-based), adopting a unified algorithmic view. Then, we highlight a systematic approach for adapting them to \emph{off-policy} learning \emph{with eligibility traces}. This leads to some known algorithms -- off-policy LSTD($\lambda$), LSPE($\lambda$), TD($\lambda$), TDC/GQ($\lambda$) -- and suggests new extensions -- off-policy FPKF($\lambda$), BRM($\lambda$), gBRM($\lambda$), GTD2($\lambda$). We describe a comprehensive algorithmic derivation of all algorithms in a recursive and memory-efficient form, discuss their known convergence properties and illustrate their relative empirical behavior on Garnet problems. Our experiments suggest that the most standard algorithms -- on- and off-policy LSTD($\lambda$)/LSPE($\lambda$), and TD($\lambda$) if the feature-space dimension is too large for a least-squares approach -- perform best.
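To make the setting concrete, the following is a minimal sketch (not the paper's exact derivation) of off-policy TD($\lambda$) with linear function approximation and importance-sampling-corrected eligibility traces, one of the algorithms surveyed. The function and parameter names (`off_policy_td_lambda`, `phi`, `rho`, the trajectory format) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def off_policy_td_lambda(trajectory, phi, n_features, rho,
                         gamma=0.99, lam=0.9, alpha=0.01):
    """Sketch of off-policy TD(lambda) with linear value-function approximation.

    trajectory: iterable of (state, action, reward, next_state) transitions
                generated by the behavior policy mu
    phi:        feature map, state -> np.ndarray of shape (n_features,)
    rho:        function (state, action) -> importance weight pi(a|s) / mu(a|s)
                for the target policy pi and behavior policy mu
    """
    theta = np.zeros(n_features)   # parameters of the linear value estimate
    z = np.zeros(n_features)       # eligibility trace

    for s, a, r, s_next in trajectory:
        rho_t = rho(s, a)                          # importance-sampling ratio
        z = rho_t * (gamma * lam * z + phi(s))     # corrected trace update
        delta = r + gamma * theta @ phi(s_next) - theta @ phi(s)  # TD error
        theta += alpha * delta * z                 # stochastic update step
    return theta
```

In this sketch the trace is rescaled by the importance-sampling ratio at every step, which is what allows learning the target policy's value function from a single trajectory of the behavior policy; the least-squares variants discussed in the paper (LSTD($\lambda$), LSPE($\lambda$)) accumulate similar corrected statistics in matrix form instead of taking stochastic-gradient-like steps.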
Main file: jmlr.pdf (549.52 KB). Origin: Files produced by the author(s)

Dates and versions

hal-00921275, version 1 (20-12-2013)

Identifiers

  • HAL Id: hal-00921275, version 1

Cite

Matthieu Geist, Bruno Scherrer. Off-policy Learning with Eligibility Traces: A Survey. Journal of Machine Learning Research, 2014, 15 (1), pp. 289-333. ⟨hal-00921275⟩