
A Finite Horizon DEC-POMDP Approach to Multi-robot Task Learning

Barış Eker, Ergin Özkucur, Çetin Meriçli, Tekin Meriçli, and H. Levent Akın. A Finite Horizon DEC-POMDP Approach to Multi-robot Task Learning. In The 5th International Conference on Application of Information and Communication Technologies, AICT2011, Baku, 2011.

Download

[PDF] 

Abstract

Decision making under uncertainty is one of the key problems in robotics, and it is even harder in multi-agent domains. The Decentralized Partially Observable Markov Decision Process (DEC-POMDP) is a framework for modeling multi-agent decision-making problems under uncertainty. There is no efficient exact algorithm for solving these problems, since the worst-case complexity of the general case has been shown to be NEXP-complete. This paper demonstrates the application of our proposed approximate solution algorithm, which uses evolution strategies, to various DEC-POMDP problems. We show that high-level policies can be learned in simplified simulated environments and readily transferred to real robots, despite the training and application domains having different observation and transition models.
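The core idea of the evolution-strategies approach can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual algorithm or benchmark: a toy stand-in for a DEC-POMDP in which two agents each receive a noisy binary observation of a hidden state and are rewarded for independently choosing the action that matches it, with a simple (1+λ) evolution strategy searching over deterministic reactive policies.

```python
import random

# Toy stand-in for a DEC-POMDP (illustrative only, not from the paper):
# a hidden binary state, two agents with noisy observations, and a reward
# for each agent whose action matches the hidden state.

def evaluate(policy, episodes=200, noise=0.1):
    """Average reward of a joint policy, where policy[agent][obs] -> action.

    A fixed RNG seed makes fitness deterministic, so the ES compares
    candidates on the same batch of simulated episodes.
    """
    rng = random.Random(0)
    total = 0.0
    for _ in range(episodes):
        state = rng.randint(0, 1)
        for agent in (0, 1):
            # Each agent sees the true state, flipped with probability `noise`.
            obs = state if rng.random() > noise else 1 - state
            if policy[agent][obs] == state:
                total += 0.5
    return total / episodes

def mutate(policy, rng):
    """ES mutation operator: flip one randomly chosen action entry."""
    child = [row[:] for row in policy]
    agent, obs = rng.randint(0, 1), rng.randint(0, 1)
    child[agent][obs] = 1 - child[agent][obs]
    return child

def evolve(generations=50, offspring=8, seed=1):
    """(1+lambda) evolution strategy over deterministic reactive policies."""
    rng = random.Random(seed)
    parent = [[rng.randint(0, 1) for _ in range(2)] for _ in range(2)]
    best_fit = evaluate(parent)
    for _ in range(generations):
        for _ in range(offspring):
            child = mutate(parent, rng)
            fit = evaluate(child)
            if fit >= best_fit:  # keep the parent only if no child matches it
                parent, best_fit = child, fit
    return parent, best_fit
```

Because the policy is evaluated only through simulated episodes, the same search loop works unchanged if the simulator's observation and transition models differ from the deployment environment, which is the transfer property the abstract highlights.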

BibTeX

@inproceedings{eker2011a,
  author    = {Barış Eker and Ergin Özkucur and Çetin Meriçli and Tekin Meriçli and H. Levent Akın},
  title     = {A Finite Horizon DEC-POMDP Approach to Multi-robot Task Learning},
  booktitle = {The 5th International Conference on Application of Information and Communication Technologies, AICT2011, Baku},
  year      = {2011},
  abstract  = {Decision making under uncertainty is one of the key problems in robotics, and it is even harder in multi-agent domains. The Decentralized Partially Observable Markov Decision Process (DEC-POMDP) is a framework for modeling multi-agent decision-making problems under uncertainty. There is no efficient exact algorithm for solving these problems, since the worst-case complexity of the general case has been shown to be NEXP-complete. This paper demonstrates the application of our proposed approximate solution algorithm, which uses evolution strategies, to various DEC-POMDP problems. We show that high-level policies can be learned in simplified simulated environments and readily transferred to real robots, despite the training and application domains having different observation and transition models.},
  bib2html_pubtype = {Refereed Conference},
  bib2html_rescat = {Multi-robot Planning},
  bib2html_dl_pdf = {../files/ekerAICT2011DECPOMDP.pdf},
}

Generated by bib2html.pl (written by Patrick Riley) on Thu May 01, 2014 16:27:43