DOI: 10.1145/2442576
BELIV '12: Proceedings of the 2012 BELIV Workshop: Beyond Time and Errors - Novel Evaluation Methods for Visualization
ACM2012 Proceeding
Publisher:
Association for Computing Machinery, New York, NY, United States
Conference:
BELIV '12: Beyond Time and Errors - Novel Evaluation Methods for Visualization Seattle Washington USA October 14 - 15, 2012
ISBN:
978-1-4503-1791-7
Published:
14 October 2012
Sponsors:
Google Inc., Microsoft Research
Abstract

Visualization has recently gained much relevance for its ability to cope with complex data analysis and communication tasks. While the overall use of visualizations is accelerating, the growth of specialized techniques for the evaluation of visualization systems has been slow. To understand the complex behaviors involved in analyzing data with visualization, evaluation efforts should be targeted at the component level, the system level, and the work environment level. Commonly used evaluation metrics such as task completion time and number of errors often appear insufficient to quantify the quality of a visualization system; thus the name of the BELIV workshop: "beyond time and errors …"

The BELIV workshop series is a biennial event focusing on the challenges of evaluation in visualization. While it focused on information visualization in the past, BELIV 2012 aimed at gathering researchers from all fields of visualization to continue the exploration of novel evaluation methods, and to structure the knowledge on evaluation in visualization around a schema in which researchers can easily identify unsolved problems and research gaps.

research-article
Experiences in involving analysts in visualisation design

Involving analysts in visualisation design has obvious benefits, but the knowledge-gap between domain experts ('analysts') and visualisation designers ('designers') often makes the degree of their involvement fall short of that aspired. By promoting a ...

research-article
An integrated approach for evaluating the visualization of intensional and extensional levels of ontologies

Visualization of ontologies is based on effective graphical representations and interaction techniques that support users' tasks related to different entities and aspects. Ontologies can be very large and complex due to the levels of class hierarchy as ...

research-article
Which visualizations work, for what purpose, for whom?: evaluating visualizations of terrestrial and aquatic systems

A need for better ecology visualization tools is well documented, and development of these is underway, including our own NSF funded Visualization of Terrestrial and Aquatic Systems (VISTAS) project, now beginning its second of four years. VISTAS' goal ...

research-article
Toward mixed method evaluations of scientific visualizations and design process as an evaluation tool

In this position paper we discuss successes and limitations of current evaluation strategies for scientific visualizations and argue for embracing a mixed methods strategy of evaluation. The most novel contribution of the approach that we advocate is a ...

research-article
Evaluating visualization using cognitive measures

In this position paper, we discuss the problems and advantages of using physiological measurements to estimate cognitive load in order to evaluate scientific visualization methods. We will present various techniques and technologies designed to ...

research-article
Towards a 3-dimensional model of individual cognitive differences: position paper

The effects of individual differences on user interaction are a topic that has been explored for the last 25 years in HCI. Recently, the importance of this subject has been carried into the field of information visualization and consequently, there has ...

research-article
Interaction junk: user interaction-based evaluation of visual analytic systems

With the growing need for visualization to aid users in understanding large, complex datasets, the ability for users to interact and explore these datasets is critical. As visual analytic systems have advanced to leverage powerful computational models ...

research-article
Spatial autocorrelation-based information visualization evaluation

A data set can be represented in any number of ways. For example, hierarchical data can be presented as a radial node-link diagram, dendrogram, force-directed layout, or tree map. Alternatively, point-observations can be shown with scatter-plots, ...

research-article
The importance of tracing data through the visualization pipeline

Visualization research focuses either on the transformation steps necessary to create a visualization from data, or on the perception of structures after they have been shown on the screen. We argue that an end-to-end approach is necessary that tracks ...

research-article
Why ask why?: considering motivation in visualization evaluation

My position is that improving evaluation for visualization requires more than developing more sophisticated evaluation methods. It also requires improving the efficacy of evaluations, which involves issues such as how evaluations are applied, reported, ...

research-article
The four-level nested model revisited: blocks and guidelines

We propose an extension to the four-level nested model of design and validation of visualization systems that defines the term "guidelines" in terms of blocks at each level. Blocks are the outcomes of the design process at a specific level, and ...

research-article
Patterns for visualization evaluation

We propose a patterns-based approach to evaluating data visualization: a set of general and reusable solutions to commonly occurring problems in evaluating tools, techniques, and systems for visual sensemaking. Patterns have had significant impact in a ...

research-article
A reflection on seven years of the VAST challenge

We describe the evolution of the IEEE Visual Analytics Science and Technology (VAST) Challenge from its origin in 2006 to present (2012). The VAST Challenge has provided an opportunity for visual analytics researchers to test their innovative thoughts ...

research-article
Evaluating analytic performance

In this position paper we propose a performance science approach to evaluation of visual analytics systems.

research-article
How to filter out random clickers in a crowdsourcing-based study?

Crowdsourcing-based user studies have become increasingly popular in information visualization (InfoVis) and visual analytics (VA). However, it is still unclear how to deal with some undesired crowdsourcing workers, especially those who submit random ...

research-article
Questionnaires for evaluation in information visualization

The position taken in this paper is that the availability of standardized questionnaires specifically developed for measuring users' perception of usability in evaluation studies in information visualization would provide the community with an excellent ...

research-article
Methodologies for the analysis of usage patterns in information visualization

In this position paper, we describe two methods for the analysis of sequences of interaction with information visualization tools -- log file analysis and thinking aloud. Such an analysis is valuable because it can help designers to understand cognition ...

Acceptance Rates

Overall Acceptance Rate: 45 of 64 submissions, 70%

Year       Submitted  Accepted  Rate
BELIV '14  30         23        77%
BELIV '10  18         12        67%
BELIV '08  16         10        63%
Overall    64         45        70%