
Issue-Focused Documentaries versus Other Films: Rating and Type Prediction based on User-Authored Reviews

Published: 10 July 2016

Abstract

User-authored reviews offer a window into micro-level engagement with issue-focused documentary films, a critical yet insufficiently understood topic in media impact assessment. Based on our data, features, and supervised learning method, we find that ratings of non-documentary (feature film) reviews can be predicted with higher accuracy (73.67% F1 score) than ratings of documentary reviews (68.05%). We also constructed a classifier that separates reviews of documentaries from reviews of feature films with an accuracy of 71.32%. However, since our goal with this paper is not to improve the accuracy of predicting the rating or genre of film reviews, but to advance our understanding of how documentaries are perceived in comparison to feature films, we also identified commonalities and differences between these two types of films, as well as between low and high ratings. We find that, in contrast to reviews of feature films, comments on documentaries are shorter but composed of longer sentences, are less emotional, contain fewer positive and more negative terms, are lexically more concise, and focus more on verbs than on nouns and adjectives. Compared to low-rated reviews, comments with a high rating are shorter, are more emotional, contain more positive than negative sentiment, and include fewer question marks and more exclamation points. Overall, this work contributes to advancing our understanding of the impact of different types of information products on individual information consumers.
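The abstract compares reviews along surface features such as review length, sentence length, sentiment-bearing terms, lexical diversity, and punctuation. As a minimal sketch (not the authors' actual pipeline), the snippet below extracts these kinds of features from a single review; the tiny positive/negative word lists are hypothetical stand-ins for a real sentiment lexicon, and the feature names are illustrative.

```python
# Sketch only: simple stylometric/sentiment features of the kind discussed
# in the paper, computed for one review. Lexicons below are toy examples.
import re

POSITIVE = {"great", "moving", "powerful", "excellent", "love"}
NEGATIVE = {"boring", "bad", "dull", "awful", "hate"}

def review_features(text):
    """Return a dict of surface features for one review text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[a-z']+", text.lower())
    n_tokens = len(tokens)
    return {
        "n_tokens": n_tokens,                                   # review length
        "avg_sentence_len": n_tokens / max(len(sentences), 1),  # words per sentence
        "exclamations": text.count("!"),
        "questions": text.count("?"),
        "positive": sum(t in POSITIVE for t in tokens),          # lexicon hits
        "negative": sum(t in NEGATIVE for t in tokens),
        "lexical_diversity": len(set(tokens)) / max(n_tokens, 1),
    }

feats = review_features("A powerful, moving film! Why was it so short?")
```

Feature vectors like this could then be fed to any supervised classifier to predict a review's rating or the film's type, as the paper does with its own (richer) feature set and learning method.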


Cited By

  • Complexities of leveraging user-generated book reviews for scholarly research: transiency, power dynamics, and cultural dependency. International Journal on Digital Libraries 25, 2 (2023), 317-340. DOI: 10.1007/s00799-023-00376-z. Published: 31 July 2023.
  • Research with User-Generated Book Review Data: Legal and Ethical Pitfalls and Contextualized Mitigations. Information for a Better World: Normality, Virtuality, Physicality, Inclusivity (2023), 163-186. DOI: 10.1007/978-3-031-28035-1_13. Published: 10 March 2023.
  • Complexities associated with user-generated book reviews in digital libraries. In Proceedings of the 22nd ACM/IEEE Joint Conference on Digital Libraries (2022), 1-12. DOI: 10.1145/3529372.3530930. Published: 20 June 2022.

Published In

HT '16: Proceedings of the 27th ACM Conference on Hypertext and Social Media
July 2016
354 pages
ISBN:9781450342476
DOI:10.1145/2914586

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. documentary films
  2. genre prediction
  3. rating prediction
  4. social impact

Qualifiers

  • Short-paper

Funding Sources

  • Ford Foundation

Conference

HT '16: 27th ACM Conference on Hypertext and Social Media
July 10-13, 2016
Halifax, Nova Scotia, Canada

Acceptance Rates

HT '16 Paper Acceptance Rate: 16 of 54 submissions, 30%
Overall Acceptance Rate: 378 of 1,158 submissions, 33%
