DOI: 10.1145/1160633.1160777
Article

Can good learners always compensate for poor learners?

Published: 08 May 2006

Abstract

Can a good learner compensate for a poor learner when the two are paired in a coordination game? Previous work presented an example in which a special learning algorithm (FMQ) does exactly that when paired with a specific, less capable algorithm, even in games that stump the poorer algorithm when paired with itself. We argue that this result is not general. We give a straightforward extension of the coordination game in which FMQ cannot compensate for the lesser algorithm. We also present other problematic pairings, and argue that another high-quality algorithm cannot compensate either.
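
To make the setting concrete, below is a minimal sketch (in Python) of the kind of pairing the abstract discusses: an FMQ-style learner playing a single-stage, fully cooperative coordination game alongside a plain Q-learner. The climbing-game payoff matrix, the Boltzmann action selection, the FMQ weighting constant c, and all parameter values are assumptions drawn from common descriptions in the multiagent-learning literature, not details taken from this paper.

# Hypothetical sketch, not taken from this paper: pairing an FMQ-style
# learner with a plain Q-learner in a single-stage, fully cooperative
# coordination game. The payoff matrix (the "climbing game"), the
# Boltzmann selection, and the FMQ weight c are assumed parameters.
import math
import random

# Climbing-game payoffs: rows = agent 1's action, columns = agent 2's.
PAYOFF = [
    [ 11, -30,  0],
    [-30,   7,  6],
    [  0,   0,  5],
]

class QLearner:
    """Plain single-state Q-learner with Boltzmann action selection."""
    def __init__(self, n_actions=3, alpha=0.1, temp=50.0, decay=0.995):
        self.q = [0.0] * n_actions
        self.alpha, self.temp, self.decay = alpha, temp, decay

    def values(self):
        # Action values used for selection; the FMQ variant overrides this.
        return list(self.q)

    def act(self):
        v = self.values()
        t = max(self.temp, 1e-6)
        vmax = max(v)  # subtract the max for numerical stability
        prefs = [math.exp((x - vmax) / t) for x in v]
        r, acc = random.random() * sum(prefs), 0.0
        for a, p in enumerate(prefs):
            acc += p
            if r <= acc:
                return a
        return len(prefs) - 1

    def update(self, action, reward):
        self.q[action] += self.alpha * (reward - self.q[action])
        self.temp *= self.decay  # cool the exploration temperature

class FMQLearner(QLearner):
    """FMQ-style learner: biases selection toward actions whose best
    observed reward has occurred frequently (weight c is assumed)."""
    def __init__(self, n_actions=3, c=10.0, **kw):
        super().__init__(n_actions, **kw)
        self.c = c
        self.best = [float("-inf")] * n_actions  # best reward seen per action
        self.best_count = [0] * n_actions        # how often that best was seen
        self.tries = [0] * n_actions             # how often action was tried

    def values(self):
        ev = []
        for a, q in enumerate(self.q):
            if self.tries[a] == 0:
                ev.append(q)
            else:
                freq = self.best_count[a] / self.tries[a]
                ev.append(q + self.c * freq * self.best[a])
        return ev

    def update(self, action, reward):
        super().update(action, reward)
        self.tries[action] += 1
        if reward > self.best[action]:
            self.best[action], self.best_count[action] = reward, 1
        elif reward == self.best[action]:
            self.best_count[action] += 1

def run(agent1, agent2, episodes=2000):
    """Play the repeated single-stage game; both agents get the joint reward."""
    for _ in range(episodes):
        a1, a2 = agent1.act(), agent2.act()
        reward = PAYOFF[a1][a2]
        agent1.update(a1, reward)
        agent2.update(a2, reward)
    v1, v2 = agent1.values(), agent2.values()
    return v1.index(max(v1)), v2.index(max(v2))

if __name__ == "__main__":
    random.seed(0)
    print("Greedy joint action after training:", run(FMQLearner(), QLearner()))

Replacing FMQLearner() with a second QLearner() in run() gives a homogeneous pairing for comparison; the payoffs, parameters, and number of episodes here are illustrative choices, not the settings studied in the paper.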


    Published In

    AAMAS '06: Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems
    May 2006
    1631 pages
ISBN: 1595933034
DOI: 10.1145/1160633
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. cooperative games
    2. multiagent systems
    3. reinforcement learning

    Qualifiers

    • Article

    Conference

    AAMAS06

    Acceptance Rates

    Overall Acceptance Rate 1,155 of 5,036 submissions, 23%

