DOI: 10.1145/2232817.2232840
Research article

Categorization of computing education resources with utilization of crowdsourcing

Published: 10 June 2012

ABSTRACT

The Ensemble Portal harvests resources from multiple heterogeneous federated collections. Managing these dynamically growing collections requires an automatic mechanism to categorize records into their corresponding topics. We propose an approach that uses existing ACM DL metadata to build classifiers for resources harvested by the Ensemble project. We also present our experience with using the Amazon Mechanical Turk platform to build ground-truth training data sets from Ensemble collections.
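The abstract describes the general approach (training text classifiers on existing ACM DL metadata to assign harvested records to topics) but no implementation details are included in this record. As an illustration only, here is a minimal multinomial Naive Bayes sketch in pure Python; the classifier choice, category labels, and example records are hypothetical and are not taken from the paper or the Ensemble collections.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    # Lowercase and split on non-alphanumeric characters.
    return [t for t in "".join(c if c.isalnum() else " " for c in text.lower()).split() if t]

class NaiveBayesClassifier:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def __init__(self):
        self.class_counts = Counter()            # documents seen per class
        self.word_counts = defaultdict(Counter)  # word frequencies per class
        self.vocab = set()

    def train(self, labeled_docs):
        for text, label in labeled_docs:
            self.class_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def classify(self, text):
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            # Log prior plus summed log likelihoods, Laplace-smoothed.
            score = math.log(self.class_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in tokenize(text):
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical metadata records (title/description text) with topic labels.
training = [
    ("Introduction to sorting algorithms and complexity", "algorithms"),
    ("Graph traversal: BFS and DFS explained", "algorithms"),
    ("Relational database design and SQL queries", "databases"),
    ("Normalization and transaction processing in databases", "databases"),
]
clf = NaiveBayesClassifier()
clf.train(training)
print(clf.classify("A tutorial on SQL query optimization"))  # → databases
```

In practice one would train on the full ACM DL metadata fields (title, abstract, keywords) and evaluate against the Mechanical Turk ground-truth labels the authors collected.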


Published in

              JCDL '12: Proceedings of the 12th ACM/IEEE-CS joint conference on Digital Libraries
              June 2012
              458 pages
ISBN: 978-1-4503-1154-0
DOI: 10.1145/2232817

              Copyright © 2012 ACM


              Publisher

              Association for Computing Machinery

              New York, NY, United States



              Acceptance Rates

Overall Acceptance Rate: 415 of 1,482 submissions, 28%
