ABSTRACT
This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. We focus on writing and editing, complex endeavors that span many levels of conceptual and pragmatic activity. Authoring tools offer help with pragmatics, but for higher-level help, writers commonly turn to other people. We thus present Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand. To improve worker quality, we introduce the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages. Evaluation studies demonstrate the feasibility of crowdsourced editing and investigate questions of reliability, cost, wait time, and work time for edits.
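The Find-Fix-Verify pattern described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: `ask_workers` is a hypothetical stand-in for posting tasks to a crowd platform and collecting responses, and the agreement thresholds are illustrative assumptions.

```python
from collections import Counter

def find_fix_verify(paragraph, ask_workers, n_find=10, n_fix=5, n_verify=5):
    """Sketch of the Find-Fix-Verify crowd programming pattern.

    ask_workers(stage, payload, n) is a hypothetical helper that posts n
    crowd tasks for a given stage and returns the workers' responses.
    """
    # Find: workers independently flag spans of the paragraph that need
    # editing; keep only spans that enough workers independently agree on.
    flagged = ask_workers("find", paragraph, n_find)
    counts = Counter(flagged)
    agreed = [span for span, c in counts.items() if c >= 0.2 * n_find]

    patches = []
    for span in agreed:
        # Fix: a separate set of workers proposes candidate rewrites.
        candidates = ask_workers("fix", span, n_fix)
        # Verify: another set of workers votes on each candidate;
        # here ask_workers returns one vote count per candidate.
        votes = ask_workers("verify", candidates, n_verify)
        kept = [c for c, v in zip(candidates, votes) if v >= n_verify / 2]
        patches.append((span, kept))
    return patches
```

Splitting generation (Find, Fix) from review (Verify) across independent worker pools is the point of the pattern: it keeps any single lazy or overeager worker from both introducing and approving a bad edit.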