
Commonsense Knowledge in Machine Intelligence

Published: 22 February 2018

Abstract

There is growing conviction that the future of computing depends on our ability to exploit big data on the Web to enhance intelligent systems. This includes encyclopedic knowledge for factual details, common sense for human-like reasoning, and natural language generation for smarter communication. With recent chatbots conceivably on the verge of passing the Turing Test, there are calls for more commonsense-oriented alternatives, e.g., the Winograd Schema Challenge. The Aristo QA system demonstrates the lack of common sense in current systems when answering fourth-grade science exam questions. On the language generation front, despite the progress in deep learning, current models are easily confused by subtle distinctions that may require linguistic common sense, e.g., quick food vs. fast food. These issues bear on tasks such as machine translation and should be addressed using common sense acquired from text. Mining common sense from massive amounts of data and applying it in intelligent systems appears, in several respects, to be the next frontier in computing. This brief overview of the state of Commonsense Knowledge (CSK) in Machine Intelligence provides insights into CSK acquisition, CSK in natural language, applications of CSK, and a discussion of open issues. The paper reports on a tutorial at a recent conference, accompanied by a brief survey of the topics covered.
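As an illustration of the collocation point above (quick food vs. fast food), linguistic common sense of this kind is often approximated with corpus statistics. The following minimal Python sketch is our own illustration rather than a method from the paper: it prefers whichever adjective-noun pair occurs more often in a tiny, hypothetical corpus, whereas a real system would rely on web-scale n-gram counts or a learned model.

    # Illustrative sketch only; the corpus string below is a hypothetical placeholder.
    from collections import Counter
    import re

    def bigram_counts(text):
        """Count adjacent word pairs in a lowercased corpus."""
        tokens = re.findall(r"[a-z]+", text.lower())
        return Counter(zip(tokens, tokens[1:]))

    def preferred_collocation(counts, candidates):
        """Pick the candidate word pair seen most often in the corpus."""
        return max(candidates, key=lambda pair: counts.get(pair, 0))

    corpus = ("They stopped for fast food on the way home. "
              "Fast food chains are everywhere, though the service was quick.")
    counts = bigram_counts(corpus)
    print(preferred_collocation(counts, [("fast", "food"), ("quick", "food")]))
    # -> ('fast', 'food'), matching the conventional collocation the abstract describes

The same frequency-based preference underlies several of the collocation-correction systems surveyed in the tutorial, though they operate at far larger scale.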
