ABSTRACT
A Situated Conversational Agent (SCA) is an agent that engages in dialog about the context within which it is embedded. An SCA is distinguished from non-situated conversational agents by an intimate connection of the agent's dialog to its embedding context, and by intricate dependencies between its linguistic and physical actions. Constructing an SCA that can interact naturally with users while engaged in collaborative physical tasks requires the agent to interleave decision making under uncertainty, action execution, and observation while maximizing expected utility over a sequence of interactions. These requirements can be fulfilled by modeling an SCA as a partially observable Markov decision process (POMDP). We show how POMDPs can be used to formalize and implement psycholinguistic proposals on how situated dialog participants collaborate in order to make and ground dialog contributions.
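To illustrate the POMDP formalism the abstract invokes, the sketch below shows a discrete Bayesian belief update of the kind such an agent would run after each dialog move: tracking whether its last contribution has been grounded by the addressee. The two-state model, action, observations, and all probabilities are illustrative assumptions for exposition, not the paper's actual model.

```python
# Minimal discrete POMDP belief update for a situated conversational agent.
# The agent maintains a belief over whether its last utterance was grounded,
# and revises it after acting and observing the addressee's response.
# All states, actions, observations, and probabilities here are toy values.

STATES = ["grounded", "not_grounded"]

# T[a][s][s2]: probability of moving from state s to s2 after action a.
T = {
    "clarify": {
        "grounded":     {"grounded": 1.0, "not_grounded": 0.0},
        "not_grounded": {"grounded": 0.7, "not_grounded": 0.3},
    },
}

# O[a][s2][o]: probability of observing o in resulting state s2 after action a.
O = {
    "clarify": {
        "grounded":     {"ack": 0.9, "silence": 0.1},
        "not_grounded": {"ack": 0.2, "silence": 0.8},
    },
}

def belief_update(belief, action, obs):
    """Standard POMDP filter: b'(s2) ~ O(o | s2, a) * sum_s T(s2 | s, a) * b(s)."""
    new_belief = {}
    for s2 in STATES:
        predicted = sum(T[action][s][s2] * belief[s] for s in STATES)
        new_belief[s2] = O[action][s2][obs] * predicted
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

# Starting from maximal uncertainty, a clarification followed by an
# acknowledgment pushes belief in "grounded" above 0.9.
b0 = {"grounded": 0.5, "not_grounded": 0.5}
b1 = belief_update(b0, "clarify", "ack")
```

An action-selection policy would then map such beliefs to the agent's next linguistic or physical action so as to maximize expected utility over the remaining interaction.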