DOI: 10.1145/2047196.2047202

CrowdForge: crowdsourcing complex work

Published: 16 October 2011

ABSTRACT

Micro-task markets such as Amazon's Mechanical Turk represent a new paradigm for accomplishing work, in which employers can tap into a large population of workers around the globe to accomplish tasks at a fraction of the time and cost of more traditional methods. However, such markets have been used primarily for simple, independent tasks, such as labeling an image or judging the relevance of a search result. Here we present a general-purpose framework for accomplishing complex and interdependent tasks using micro-task markets. We describe our framework, a web-based prototype, and case studies on article writing, decision making, and science journalism that demonstrate the benefits and limitations of the approach.
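To make the idea of decomposing interdependent work into micro-tasks concrete, the sketch below walks an article-writing job through three kinds of simple subtasks: one worker outlines the sections, several workers each draft one section independently, and a final worker stitches the drafts together. This is an illustration of the general pattern rather than the CrowdForge prototype itself; the function names, the prompts, and the `AskCrowd` placeholder are assumptions introduced here for clarity, not part of any real market API.

```python
# Illustrative sketch: decomposing a complex task (writing a short article)
# into simple, independent micro-tasks and recombining the results.
# The `ask` callable is a hypothetical placeholder for posting a prompt to a
# micro-task market (e.g. Mechanical Turk) and collecting one worker's answer.

from typing import Callable, List

AskCrowd = Callable[[str], str]  # prompt -> one worker's free-text response


def partition(ask: AskCrowd, topic: str) -> List[str]:
    # One worker proposes an outline; each heading becomes an independent subtask.
    outline = ask(f"List the section headings for a short article about {topic}, one per line.")
    return [line.strip() for line in outline.splitlines() if line.strip()]


def map_step(ask: AskCrowd, topic: str, headings: List[str]) -> List[str]:
    # Separate workers each write one section, independently of one another.
    return [
        ask(f"Write one paragraph on '{heading}' for an article about {topic}.")
        for heading in headings
    ]


def reduce_step(ask: AskCrowd, topic: str, paragraphs: List[str]) -> str:
    # A final worker merges the independent pieces into a coherent whole.
    draft = "\n\n".join(paragraphs)
    return ask(f"Edit these paragraphs into a single coherent article about {topic}:\n\n{draft}")


def write_article(ask: AskCrowd, topic: str) -> str:
    headings = partition(ask, topic)
    sections = map_step(ask, topic, headings)
    return reduce_step(ask, topic, sections)
```

In practice `ask` would be wired to whatever market API is available, and quality-control steps (for example, having other workers vote between alternative outputs) could be added as further simple tasks in the same style.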

Published in

UIST '11: Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology
October 2011, 654 pages
ISBN: 9781450307161
DOI: 10.1145/2047196
        Copyright © 2011 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


        Acceptance Rates

UIST '11 Paper Acceptance Rate: 67 of 262 submissions, 26%. Overall Acceptance Rate: 842 of 3,967 submissions, 21%.
