Please use this identifier to cite or link to this item: http://hdl.handle.net/10071/5353
Authors: Gil, Paulo
Nunes, Luís
Date: 30-Jul-2013
Title: Hierarchical reinforcement learning using path clustering
Pagination: Vol. I, pp. 659-664
Event title: Conferência Ibérica de Sistemas e Tecnologias de Informação, CISTI 2013
Keywords: reinforcement learning
Q-Learning
subgoals
options
Abstract: In this paper we study the possibility of improving the performance of the Q-Learning algorithm by automatically finding subgoals and making better use of the acquired knowledge. This research explores a method that allows an agent to gather information about sequences of states that lead to a goal, detect classes of common sequences, and introduce the states at the end of these sequences as subgoals. We use the taxi problem (a standard in the Hierarchical Reinforcement Learning literature) and conclude that, even though this problem's scale is relatively small, in most cases subgoals do improve the learning speed, achieving relatively good results faster than standard Q-Learning. We propose a specific iteration interval as the most appropriate for inserting subgoals into the learning process. We also found that early adoption of subgoals may lead to suboptimal learning. The extension to more challenging problems is an interesting subject for future work.
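The two-phase idea described in the abstract (learn, mine subgoals from successful state sequences, then learn again with the subgoals) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's method: a toy corridor environment replaces the taxi problem, a simple frequency count over the ends of successful paths stands in for path clustering, and the shaping bonus of 0.1 and all other parameters are arbitrary choices for the sketch.

```python
import random
from collections import defaultdict, Counter

random.seed(0)

# Illustrative corridor environment (NOT the paper's taxi problem):
# states 0..N-1, start at 0, goal at N-1, actions move left/right.
N = 10
GOAL = N - 1
ACTIONS = (-1, 1)

def step(s, a):
    s2 = min(max(s + a, 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def greedy(Q, s):
    # Break ties randomly so the untrained agent explores both directions.
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def q_learning(episodes, subgoals=frozenset(), alpha=0.5, gamma=0.95, eps=0.2):
    """Tabular Q-learning; entering a state in `subgoals` adds a small bonus."""
    Q = defaultdict(float)
    successful_paths = []  # state sequences that reached the goal
    for _ in range(episodes):
        s, path = 0, [0]
        for _ in range(200):
            a = random.choice(ACTIONS) if random.random() < eps else greedy(Q, s)
            s2, r, done = step(s, a)
            if s2 in subgoals:
                r += 0.1  # reward shaping toward a mined subgoal
            target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
            path.append(s)
            if done:
                successful_paths.append(path)
                break
    return Q, successful_paths

def mine_subgoals(paths, k=1):
    """Stand-in for the paper's path clustering: the states that most often
    end the successful sequences (just before the goal) become subgoals."""
    counts = Counter(p[-2] for p in paths if len(p) >= 2)
    return frozenset(s for s, _ in counts.most_common(k))

# Phase 1: plain Q-learning, used only to collect successful state sequences.
_, paths = q_learning(episodes=100)
subgoals = mine_subgoals(paths)
# Phase 2: a fresh learner is shaped by the mined subgoals from the start.
Q, _ = q_learning(episodes=100, subgoals=subgoals)
```

In this corridor the mined subgoal is simply the state adjacent to the goal; in richer domains such as the taxi problem, clustering the paths would surface intermediate bottleneck states (e.g. pick-up locations) instead.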
Peer-reviewed: Yes
Access: Restricted Access
Appears in collections: CTI-CRI - Comunicações a conferências internacionais

Files in this item:
File: Hierarchical reinforcement learning using path clustering.pdf
Access: Restricted Access
Size: 300.85 kB
Format: Adobe PDF



All items in the repository are protected by copyright, with all rights reserved.