DOI: 10.1145/1143844.1143932

An analytic solution to discrete Bayesian reinforcement learning

Published: 25 June 2006

ABSTRACT

Reinforcement learning (RL) was originally proposed as a framework that allows agents to learn online as they interact with their environment. Existing RL algorithms fall short of achieving this goal because the amount of exploration required is often too costly and/or too time-consuming for online learning. As a result, RL is mostly used for offline learning in simulated environments. We propose a new algorithm, called BEETLE, for effective online learning that is computationally efficient while minimizing the amount of exploration. We take a Bayesian model-based approach, framing RL as a partially observable Markov decision process. Our two main contributions are the analytical derivation that the optimal value function is the upper envelope of a set of multivariate polynomials, and an efficient point-based value iteration algorithm that exploits this simple parameterization.
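To make the Bayesian model-based setting concrete, the sketch below shows the standard bookkeeping it rests on: a Dirichlet posterior over the unknown transition probabilities of a small discrete MDP, updated conjugately from observed transitions, with planning done on the posterior-mean model as a simple certainty-equivalent baseline. This is not the BEETLE algorithm itself (BEETLE plans in the belief-state POMDP and represents the value function with multivariate polynomials); all identifiers and the toy environment are illustrative assumptions.

# Minimal illustrative sketch (not the authors' BEETLE implementation):
# Dirichlet posterior over unknown transition dynamics of a small MDP,
# with certainty-equivalent planning on the posterior-mean model.
import numpy as np

n_states, n_actions = 3, 2
gamma = 0.9
rng = np.random.default_rng(0)

# True (hidden) dynamics and rewards, used only to simulate the environment.
true_P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

# Dirichlet counts: one Dirichlet over next states per (s, a) pair.
counts = np.ones((n_states, n_actions, n_states))  # uniform prior

def posterior_mean_model(counts):
    """Expected transition probabilities under the Dirichlet posterior."""
    return counts / counts.sum(axis=2, keepdims=True)

def q_values(P, R, gamma, iters=200):
    """Standard value iteration on a known (here: posterior-mean) model."""
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = R + gamma * np.einsum("ijk,k->ij", P, V)
        V = Q.max(axis=1)
    return Q

s = 0
for step in range(500):
    # Certainty-equivalent planning on the current posterior mean
    # (a simple baseline; it does not account for the value of exploration).
    Q = q_values(posterior_mean_model(counts), R, gamma)
    a = int(Q[s].argmax())
    s_next = rng.choice(n_states, p=true_P[s, a])
    counts[s, a, s_next] += 1.0   # conjugate Bayesian update
    s = s_next

print("posterior-mean model:\n", posterior_mean_model(counts).round(2))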

Published in

ICML '06: Proceedings of the 23rd international conference on Machine learning
June 2006
1154 pages
ISBN: 1595933832
DOI: 10.1145/1143844
Copyright © 2006 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

ICML '06 paper acceptance rate: 140 of 548 submissions, 26%. Overall acceptance rate: 140 of 548 submissions, 26%.
