DOI: 10.1145/1772690.1772779
research-article

Monitoring algorithms for negative feedback systems

Published: 26 April 2010

ABSTRACT

There are many online systems where millions of users post original content such as videos and reviews of products, services, and businesses. While there are general rules for good behavior, or even formal Terms of Service, there are still users who post content that is not suitable. Increasingly, online systems rely on the other users who view the posted content to provide feedback.

We study online systems where users report negative feedback, i.e., report abuse; these systems are quite distinct from the much-studied traditional reputation systems that focus on eliciting the popularity of content through various voting methods. The central problem we study here is how to monitor the quality of negative feedback, that is, how to detect negative feedback that is incorrect, or perhaps even malicious. Systems address this problem by testing flags manually, which is an expensive operation. As a result, there is a tradeoff between the number of manual tests and the number of errors, defined as the number of incorrect flags the monitoring system misses.

Our contributions are as follows:

  • We initiate a systematic study of negative feedback systems. Our framework is general enough to be applicable to a variety of systems. In this framework, the number of errors the system admits is bounded over the worst case of adversarial users, while simultaneously the system performs only a small amount of manual testing for the multitude of standard users, who might still err while reporting.

  • Our main contribution is a randomized monitoring algorithm that we call Adaptive Probabilistic Testing (APT), which is simple to implement and has guarantees on the expected number of errors. Even for adversarial users, the total expected error is bounded by εN over N flags for a given ε > 0. Simultaneously, the number of tests performed by the algorithm is within a constant factor of that of the best possible algorithm for standard users.

  • Finally, we present an empirical study of our algorithm that shows its performance on both synthetic data and real data accumulated from a variety of negative feedback systems at Google. Our study indicates that the algorithm performs better in practice than the analysis above guarantees.
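The abstract does not spell out APT's update rule, so the following is only a minimal, hypothetical sketch of an adaptive probabilistic testing scheme consistent with the description above: each user's flags are manually tested with a per-user probability that shrinks while tests confirm the user's flags and resets to full testing when a flag is found incorrect. The class name, the parameters `p_min` and `p_max`, and the halving rule are illustrative assumptions, not the paper's actual algorithm.

```python
import random


class AdaptiveProbabilisticTester:
    """Hypothetical sketch of adaptive probabilistic flag testing.

    Each user starts fully tested; every correct flag halves that user's
    test probability (down to a floor), and any incorrect flag resets it.
    """

    def __init__(self, p_min=0.05, p_max=1.0):
        self.p_min = p_min   # floor on the test probability
        self.p_max = p_max   # new (untrusted) users are always tested
        self.prob = {}       # user id -> current test probability

    def should_test(self, user):
        """Decide randomly whether to manually test this user's next flag."""
        p = self.prob.get(user, self.p_max)
        return random.random() < p

    def record_result(self, user, flag_was_correct):
        """Update the user's test probability after a manual test."""
        p = self.prob.get(user, self.p_max)
        if flag_was_correct:
            # Trusted behavior: halve the test rate, but keep a floor
            # so every user is still spot-checked occasionally.
            self.prob[user] = max(self.p_min, p / 2)
        else:
            # An incorrect flag sends the user back to full testing.
            self.prob[user] = self.p_max
```

The geometric back-off keeps the expected number of tests for consistently correct users small, while the reset on an incorrect flag limits how many bad flags an adversarial user can slip through between tests; the real APT analysis in the paper is what makes these tradeoffs precise.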


Published in

WWW '10: Proceedings of the 19th International Conference on World Wide Web
April 2010, 1407 pages
ISBN: 9781605587998
DOI: 10.1145/1772690
Copyright © 2010 International World Wide Web Conference Committee (IW3C2)

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 1,899 of 8,196 submissions, 23%
