DOI: 10.1145/3301275.3302289
Short Paper · Open Access · Best Short Paper

The effects of example-based explanations in a machine learning interface

Published: 17 March 2019

ABSTRACT

The black-box nature of machine learning algorithms can make their predictions difficult to understand and explain to end-users. In this paper, we propose and evaluate two kinds of example-based explanations in the visual domain, normative explanations and comparative explanations (Figure 1), which automatically surface examples from the training set of a deep neural net sketch-recognition algorithm. To investigate their effects, we deployed these explanations to 1150 users on QuickDraw, an online platform where users draw images and see whether a recognizer has correctly guessed the intended drawing. When the algorithm failed to recognize the drawing, users who received normative explanations reported a better understanding of the system and perceived it as more capable. However, comparative explanations did not always improve perceptions of the algorithm, possibly because they sometimes exposed the algorithm's limitations and may have caused surprise. These findings suggest that examples can serve as a vehicle for explaining algorithmic behavior, but they also point to the relative advantages and disadvantages of different kinds of examples, depending on the goal.
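The abstract does not specify how the examples are selected, so the following is a minimal sketch of one plausible retrieval mechanism, assuming the recognizer exposes an embedding (e.g., its penultimate layer), that normative explanations surface training sketches from the user's intended class, and that comparative explanations surface the training sketches nearest the user's drawing regardless of class. Every name here (nearest_examples, train_embeddings, embed, and so on) is hypothetical, not the authors' implementation.

```python
import numpy as np

def nearest_examples(query_vec, train_embeddings, train_labels,
                     target_class=None, k=9):
    """Return indices of the k training sketches closest to query_vec.

    Hypothetical helper: query_vec is the embedding of the user's drawing,
    train_embeddings is an (n, d) array of training-set embeddings, and
    train_labels is an (n,) integer array of class labels.

    Normative explanation: pass the intended class as target_class to
    surface examples of what the model takes that class to look like.
    Comparative explanation: leave target_class=None to surface the
    training sketches most similar to the drawing, whatever their labels.
    """
    candidates = np.arange(len(train_labels))
    if target_class is not None:
        candidates = candidates[train_labels == target_class]
    cand_vecs = train_embeddings[candidates]
    # Cosine similarity between the query and each candidate embedding;
    # the small epsilon guards against division by zero.
    sims = (cand_vecs @ query_vec) / (
        np.linalg.norm(cand_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-12)
    return candidates[np.argsort(-sims)[:k]]
```

Under these assumptions, a normative explanation for a drawing intended as a cat would be rendered from nearest_examples(embed(drawing), train_embeddings, train_labels, target_class=CAT), while the comparative explanation would use the same call with target_class=None.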

Published in

IUI '19: Proceedings of the 24th International Conference on Intelligent User Interfaces
March 2019, 713 pages
ISBN: 9781450362726
DOI: 10.1145/3301275

      Copyright © 2019 Owner/Author

      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher: Association for Computing Machinery, New York, NY, United States

Publication history: published 17 March 2019

Qualifiers: short paper

Acceptance rates

IUI '19 paper acceptance rate: 71 of 282 submissions (25%). Overall acceptance rate: 746 of 2,811 submissions (27%).
