DOI: 10.1145/2736277.2741097

Groupsourcing: Team Competition Designs for Crowdsourcing

Published: 18 May 2015

Abstract

Many data processing tasks, such as the semantic annotation of images, the translation of texts in foreign languages, and the labeling of training data for machine learning models, require human input and, on a large scale, can only be solved accurately using crowd-based online work. Recent work shows that frameworks in which crowd workers compete against each other can drastically reduce crowdsourcing costs and outperform conventional reward schemes where the payment of online workers is proportional to the number of accomplished tasks ("pay-per-task"). In this paper, we investigate how team mechanisms can be leveraged to further improve the cost efficiency of crowdsourcing competitions. To this end, we introduce strategies for team-based crowdsourcing, ranging from team formation processes where workers are randomly assigned to competing teams, through strategies involving self-organization where workers actively participate in team building, to combinations of team and individual competitions. Our large-scale experimental evaluation, with more than 1,100 participants and a total of 5,400 hours of work spent by crowd workers, demonstrates that our team-based crowdsourcing mechanisms are well accepted by online workers and lead to substantial performance boosts.
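
The abstract describes its mechanisms only in prose, and the page contains no code. As a minimal illustrative sketch, assuming nothing beyond what the abstract states, the following mocks up the two team formation extremes it mentions (random assignment versus worker self-organization) and one conceivable winner-take-all team payout. All identifiers (Worker, form_random_teams, winner_take_all_payout) are hypothetical and not taken from the paper.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch only: the paper does not publish code, and none of the
# names or payout rules below are taken from it.

@dataclass
class Worker:
    name: str
    completed_tasks: int = 0
    preferred_team: str = ""  # consulted only by the self-organized strategy

def form_random_teams(workers, n_teams):
    """Random assignment: shuffle the workers and deal them out round-robin."""
    pool = list(workers)
    random.shuffle(pool)
    teams = {f"team-{i}": [] for i in range(n_teams)}
    for i, worker in enumerate(pool):
        teams[f"team-{i % n_teams}"].append(worker)
    return teams

def form_self_organized_teams(workers):
    """Self-organization: workers who named a team join it; the rest are pooled."""
    teams = {}
    for worker in workers:
        teams.setdefault(worker.preferred_team or "unassigned", []).append(worker)
    return teams

def winner_take_all_payout(teams, prize):
    """One conceivable team reward scheme: the team with the most completed
    tasks receives the whole prize, split equally among its members."""
    best = max(teams.values(),
               key=lambda members: sum(w.completed_tasks for w in members))
    return {w.name: prize / len(best) for w in best}

if __name__ == "__main__":
    crowd = [Worker(f"w{i}", completed_tasks=random.randint(0, 50)) for i in range(12)]
    print(winner_take_all_payout(form_random_teams(crowd, n_teams=3), prize=100.0))
```

The designs evaluated in the paper combine such formation processes with team and individual competition rewards; the sketch is only meant to make that design space concrete.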




Information

Published In

WWW '15: Proceedings of the 24th International Conference on World Wide Web
May 2015
1460 pages
ISBN:9781450334693

Sponsors

  • IW3C2: International World Wide Web Conference Committee


Publisher

International World Wide Web Conferences Steering Committee

Republic and Canton of Geneva, Switzerland



Author Tags

  1. competitions
  2. crowdsourcing
  3. groupsourcing
  4. reward schemes
  5. teams

Qualifiers

  • Research-article

Funding Sources

  • European Research Council
  • European Commission

Conference

WWW '15
Sponsor:
  • IW3C2

Acceptance Rates

WWW '15 Paper Acceptance Rate: 131 of 929 submissions (14%)
Overall Acceptance Rate: 1,899 of 8,196 submissions (23%)

Bibliometrics

Article Metrics

  • Downloads (Last 12 months): 39
  • Downloads (Last 6 weeks): 1
Reflects downloads up to 19 Feb 2025

Cited By
  • (2024) Optimizing Collaborative Crowdsensing: A Graph Theoretical Approach to Team Recruitment and Fair Incentive Distribution. Sensors 24(10):2983, 8 May 2024. DOI: 10.3390/s24102983
  • (2024) Predicting the individual effects of team competition on college students' academic performance in mobile edge computing. Journal of Cloud Computing 13(1), 9 Feb 2024. DOI: 10.1186/s13677-024-00591-2
  • (2024) "Are we all in the same boat?" Customizable and Evolving Avatars to Improve Worker Engagement and Foster a Sense of Community in Online Crowd Work. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pp. 1-26, 11 May 2024. DOI: 10.1145/3613904.3642429
  • (2024) Adopting AI teammates in knowledge-intensive crowdsourcing contests: the roles of transparency and explainability. Kybernetes, 3 Jun 2024. DOI: 10.1108/K-02-2024-0478
  • (2023) Crowd Wisdom vs in-House Expertise: A Comprehensive Analysis of Quality Assurance Approaches. International Journal of Information Technology and Computer Engineering, pp. 36-47, 29 Jan 2023. DOI: 10.55529/ijitc.31.36.47
  • (2023) How to select crowdsourcing teams with limited information? A heterogeneous information network embedding approach. Electronic Commerce Research, 7 Aug 2023. DOI: 10.1007/s10660-023-09744-y
  • (2022) Collaborative Crowdsourced Software Testing. Electronics 11(20):3340, 17 Oct 2022. DOI: 10.3390/electronics11203340
  • (2022) Crowdsourcing Team Formation With Worker-Centered Modeling. Frontiers in Artificial Intelligence 5, 27 May 2022. DOI: 10.3389/frai.2022.818562
  • (2022) Self-organization in online collaborative work settings. Collective Intelligence 1(1), 9 Sep 2022. DOI: 10.1177/26339137221078005
  • (2022) Harnessing Collective Differences in Crowdsourcing Behaviour for Mass Photogrammetry of 3D Cultural Heritage. Journal on Computing and Cultural Heritage 16(1):1-23, 24 Dec 2022. DOI: 10.1145/3569090
