
Seven Guiding Scenarios for Information Visualization Evaluation

By Heidi Lam, Enrico Bertini, Petra Isenberg, Catherine Plaisant and Sheelagh Carpendale

Abstract

Abstract—We take a new, scenario-based look at evaluation in information visualization. Our seven scenarios (evaluating visual data analysis and reasoning; evaluating user performance; evaluating user experience; evaluating environments and work practices; evaluating communication through visualization; automated evaluation of visualizations; and evaluating collaborative data analysis) were derived through an extensive literature review of over 800 visualization publications. Each scenario is described through its goals and the types of questions it embodies, and is illustrated with example studies. Through this broad survey and the distillation of these scenarios we make two contributions. First, we encapsulate current practices in the information visualization research community; second, we provide a different approach to deciding what might be the most effective evaluation of a given information visualization. For example, if the research goals or evaluative questions are known, they can be mapped to specific scenarios, where practical existing examples can be considered for effective evaluation approaches.

Index Terms—Information visualization, evaluation

Year: 2011
OAI identifier: oai:CiteSeerX.psu:10.1.1.188.3308
Provided by: CiteSeerX
The full text is not available from this repository, but it may be found at the following location(s):
  • http://citeseerx.ist.psu.edu/v... (external link)
  • http://innovis.cpsc.ucalgary.c... (external link)
