Use this identifier to cite or link to this record: http://hdl.handle.net/10071/5328
Full record
DC Field: Value (Language)
dc.contributor.author: Oliveira, D. de.
dc.contributor.author: Bazzan, A. L. C.
dc.contributor.author: Silva, B. C. da.
dc.contributor.author: Basso, E. W.
dc.contributor.author: Nunes, L.
dc.contributor.author: Rossetti, R.
dc.contributor.author: Oliveira, E. de.
dc.contributor.author: Silva, R. da.
dc.contributor.author: Lamb, L.
dc.contributor.editor: Dunin-Kęplicz, B., Omicini, A., and Padget, J.
dc.date.accessioned: 2013-07-18T08:34:18Z
dc.date.available: 2013-07-18T08:34:18Z
dc.date.issued: 2006
dc.identifier.citation: Oliveira, D. de., Bazzan, A. L. C., Silva, B. C. da., Basso, E. W., Nunes, L., Rossetti, R., Oliveira, E. de., Silva, R. da., & Lamb, L. (2006). Reinforcement learning-based control of traffic lights in non-stationary environments: A case study in a microscopic simulator. CEUR Workshop Proceedings. European Workshop on Multi-Agent Systems 2006, 223. http://hdl.handle.net/10071/5328
dc.identifier.issn: 1613-0073
dc.identifier.uri: http://hdl.handle.net/10071/5328
dc.description.abstract: Coping with dynamic changes in traffic volume has been the object of recent publications. Recently, a method was proposed which is capable of learning in non-stationary scenarios via an approach that detects context changes. In particular scenarios such as traffic control, that method outperforms a greedy strategy as well as other reinforcement learning approaches, such as Q-learning and Prioritized Sweeping. The goal of the present paper is to assess the feasibility of applying the above-mentioned approach in a more realistic scenario, implemented by means of a microscopic traffic simulator. We intend to show that the use of context detection is suitable for noisy scenarios where non-stationarity arises not only from the changing volume of vehicles, but also from the random behavior of drivers in the operational task of driving (e.g. deceleration probability). The results confirm the tendencies already detected in the previous paper, although here the increase in noise makes the learning task much more difficult and the correct separation of contexts harder. (eng)
dc.language.iso: eng
dc.publisher: CEUR-WS
dc.relation.ispartof: CEUR Workshop Proceedings. European Workshop on Multi-Agent Systems 2006
dc.rights: restrictedAccess (por)
dc.title: Reinforcement learning-based control of traffic lights in non-stationary environments: A case study in a microscopic simulator (eng)
dc.type: conferenceObject
dc.event.title: 4th European Workshop on Multi-Agent Systems (EUMAS'06)
dc.event.type: Workshop (pt)
dc.event.location: Lisboa (eng)
dc.event.date: 2006
dc.pagination: 31-42 (por)
dc.publicationstatus: Published (por)
dc.peerreviewed: yes
dc.volume: 223
dc.date.updated: 2023-03-07T12:01:06Z
dc.subject.fos: Scientific Domain/Area::Natural Sciences::Computer and Information Sciences (por)
iscte.identifier.ciencia: https://ciencia.iscte-iul.pt/id/ci-pub-42671
iscte.alternateIdentifiers.scopus: 2-s2.0-78650472600
Appears in collections: CTI-CRI - International conference communications

Files in this record:
File: RLinITSUMOcr.pdf (Restricted Access)
Size: 255.26 kB
Format: Adobe PDF



All records in the repository are protected by copyright law, with all rights reserved.