ABSTRACT
We study the problem of agents locating other agents that are both capable of and willing to help complete assigned tasks. An agent incurs a fixed cost for each help request it sends out; this cost is the performance metric used in our work, and to minimize it an agent should learn from past interactions to identify agents likely to help on a given task. We compare three trust mechanisms: success-based, learning-based, and random. We also consider three agent social attitudes: selfish, reciprocative, and helpful, and evaluate their performance in both homogeneous and mixed societies. Our results show that learning-based trust decisions consistently outperformed the other schemes, and that reciprocative agents achieved significantly higher success rates than selfish agents.
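The learning-based mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an agent keeps per-(helper, task type) counts of past help requests and outcomes, estimates each helper's probability of helping with a smoothed success rate, and sends the next request to the most promising helper (with occasional exploration). The class and parameter names are hypothetical.

```python
import random
from collections import defaultdict

class LearningTrustModel:
    """Sketch of a learning-based trust decision: track how often each
    helper has honored past requests for each task type, and direct new
    requests to the helper with the best estimated chance of helping."""

    def __init__(self, helpers, epsilon=0.1):
        self.helpers = list(helpers)
        self.epsilon = epsilon              # exploration rate (assumed)
        self.successes = defaultdict(int)   # (helper, task_type) -> helped
        self.requests = defaultdict(int)    # (helper, task_type) -> asked

    def estimate(self, helper, task_type):
        # Laplace-smoothed success rate, so unseen helpers start at 0.5
        key = (helper, task_type)
        return (self.successes[key] + 1) / (self.requests[key] + 2)

    def choose_helper(self, task_type):
        # Occasionally explore so estimates for all helpers keep improving
        if random.random() < self.epsilon:
            return random.choice(self.helpers)
        return max(self.helpers, key=lambda h: self.estimate(h, task_type))

    def record(self, helper, task_type, helped):
        # Update counts after observing whether the request was honored
        key = (helper, task_type)
        self.requests[key] += 1
        if helped:
            self.successes[key] += 1
```

Since each request has a fixed cost, concentrating requests on helpers with high estimated willingness is exactly what minimizes the cost metric; the exploration rate trades a few extra requests now for better estimates later.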
Index Terms
- Learning task-specific trust decisions