DOI: 10.1145/1160633.1160730
Article

Agent interaction in distributed POMDPs and its implications on complexity

Published: 08 May 2006

Abstract

The ability to coordinate effectively is critical for agents to accomplish their goals in a multi-agent system. A number of researchers have modeled the coordination problem for multi-agent systems using decision theory. The most general models have proven to be extremely complex to solve optimally (NEXP-complete), while some of the more restricted models are far more tractable, though still difficult (NP-complete). What has been missing is an understanding of why some models are so much easier than others. This work fills that gap by providing a condition that distinguishes problems in NP from those strictly harder than NP. The condition relates to the amount of information each agent has about the others and whether that information can be represented succinctly. We show that the class of problems satisfying this condition is NP-complete, and we illustrate the idea with two interaction protocols that satisfy it. For problems that do not satisfy the condition, we demonstrate how our theoretical results can be used to generate an NP approximation of the original problem.
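To make the complexity gap in the abstract concrete, the following is a rough policy-counting sketch drawn from the general Dec-POMDP literature, not from this paper's own proofs; the symbols $A_i$ (agent $i$'s actions), $\Omega_i$ (its observations, with $|\Omega_i| \ge 2$), $T$ (the horizon), and $n$ (the problem size) are generic notation introduced here only for illustration.

A deterministic local policy maps every observation history of length less than $T$ to an action, so it is a table with
\[
\sum_{t=0}^{T-1} |\Omega_i|^t \;=\; \frac{|\Omega_i|^T - 1}{|\Omega_i| - 1}
\]
entries, which is exponential in $T$; the number of distinct local policies is $|A_i|$ raised to that quantity, i.e. doubly exponential in $T$. Because merely writing down one candidate joint policy already takes exponential space, nondeterministic guess-and-check yields only a NEXP upper bound for the general problem. If, in contrast, everything an agent needs to know about the others can be captured by a polynomially sized summary, a joint policy admits an encoding of size $\mathrm{poly}(n)$ that can be guessed and evaluated efficiently, which is the kind of structure that places a restricted class in NP.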




Published In

AAMAS '06: Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems
May 2006
1631 pages
ISBN:1595933034
DOI:10.1145/1160633

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. complexity
  2. distributed POMDP
  3. interaction

Qualifiers

  • Article

Conference

AAMAS06

Acceptance Rates

Overall Acceptance Rate 1,155 of 5,036 submissions, 23%


