Please use this identifier to cite or link to this item: http://hdl.handle.net/10071/5353
Title: Hierarchical reinforcement learning using path clustering
Authors: Gil, Paulo
Nunes, Luís
Keywords: reinforcement learning
Q-Learning
subgoals
options
Issue Date: 30-Jul-2013
Abstract: In this paper we study the possibility of improving the performance of the Q-Learning algorithm by automatically finding subgoals and making better use of the acquired knowledge. This research explores a method that allows an agent to gather information about sequences of states that lead to a goal, detect classes of common sequences, and introduce the states at the end of these sequences as subgoals. We use the taxi problem (a standard in the Hierarchical Reinforcement Learning literature) and conclude that, even though this problem's scale is relatively small, in most cases subgoals do improve the learning speed, achieving relatively good results faster than standard Q-Learning. We propose a specific iteration interval as the most appropriate to insert subgoals in the learning process. We also found that early adoption of subgoals may lead to suboptimal learning. The extension to more challenging problems is an interesting subject for future work.
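The approach the abstract describes (collect successful state sequences, find common endings, promote those states to subgoals) can be illustrated with a minimal sketch. This is not the authors' algorithm or environment: it assumes a toy 1-D corridor instead of the taxi problem, reduces "path clustering" to a frequency count over pre-goal states, and models a subgoal as a small intrinsic reward bonus; all names and parameters are illustrative.

```python
import random
from collections import Counter, defaultdict

def q_learning_with_subgoals(n_states=6, goal=5, episodes=200,
                             alpha=0.5, gamma=0.95, eps=0.3,
                             subgoal_interval=50, seed=0):
    """Tabular Q-Learning on a 1-D corridor (illustrative stand-in for
    the taxi problem).  Every `subgoal_interval` episodes, the state
    most often observed just before the goal on successful paths is
    promoted to a subgoal carrying a small intrinsic bonus."""
    rng = random.Random(seed)
    Q = defaultdict(float)        # Q[(state, action)], actions: 0=left, 1=right
    paths = []                    # state sequences of successful episodes
    subgoals = set()

    def step(s, a):
        s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s2 == goal else 0.0
        if s2 in subgoals:
            r += 0.1              # intrinsic bonus for reaching a subgoal
        return s2, r, s2 == goal

    def greedy(s):                # break Q-value ties at random
        q0, q1 = Q[(s, 0)], Q[(s, 1)]
        if q0 == q1:
            return rng.randrange(2)
        return 0 if q0 > q1 else 1

    for ep in range(1, episodes + 1):
        s, path = 0, [0]
        for _ in range(100):      # episode step cap
            a = rng.randrange(2) if rng.random() < eps else greedy(s)
            s2, r, done = step(s, a)
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
                                  - Q[(s, a)])
            s = s2
            path.append(s2)
            if done:
                paths.append(path)
                break
        # periodically promote the most common pre-goal state to a subgoal
        if ep % subgoal_interval == 0 and paths:
            ends = Counter(p[-2] for p in paths if len(p) >= 2)
            subgoals.add(ends.most_common(1)[0][0])
    return Q, subgoals
```

In this corridor the only state adjacent to the goal is state 4, so the frequency count trivially selects it; on the taxi problem the paper instead clusters whole sequences, and its finding that early subgoal adoption can hurt corresponds here to choosing `subgoal_interval` too small.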
Peer reviewed: Yes
URI: http://hdl.handle.net/10071/5353
Appears in Collections: CTI-CRI - Comunicações a conferências internacionais

Files in This Item:
File: Hierarchical reinforcement learning using path clustering.pdf
Size: 300.85 kB
Format: Adobe PDF
Access: Request a copy



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.