63 research outputs found

    Are We Closing the Loop Yet? Gaps in the Generalizability of VIS4ML Research

    Visualization for machine learning (VIS4ML) research aims to help experts apply their prior knowledge to develop, understand, and improve the performance of machine learning models. In conceiving VIS4ML systems, researchers characterize the nature of human knowledge to support human-in-the-loop tasks, design interactive visualizations to make ML components interpretable and elicit knowledge, and evaluate the effectiveness of human-model interchange. We survey recent VIS4ML papers to assess the generalizability of research contributions and claims in enabling human-in-the-loop ML. Our results show potential gaps between the current scope of VIS4ML research and aspirations for its use in practice. We find that while papers motivate VIS4ML systems as applicable beyond the specific conditions studied, conclusions are often overfitted to non-representative scenarios, are based on interactions with a small set of ML experts and well-understood datasets, fail to acknowledge crucial dependencies, and hinge on decisions that lack justification. We discuss approaches to close the gap between aspirations and research claims and suggest documentation practices to report generality constraints that better acknowledge the exploratory nature of VIS4ML research.

    Leveraging Citation Networks to Visualize Scholarly Influence Over Time

    Assessing the influence of a scholar's work is an important task for funding organizations, academic departments, and researchers. Common methods, such as measures of citation counts, can ignore much of the nuance and multidimensionality of scholarly influence. We present an approach for generating dynamic visualizations of scholars' careers. This approach uses an animated node-link diagram showing the citation network accumulated around the researcher over the course of their career, in concert with key indicators, highlighting influence both within and across fields. We developed our design in collaboration with one funding organization, the Pew Biomedical Scholars program, but the methods generalize to other visualizations of scholarly influence. We applied the design method to the Microsoft Academic Graph, which includes more than 120 million publications. We validate our abstractions through ongoing collaboration with the Pew Biomedical Scholars program officers and through summative evaluations with their scholars.
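A minimal sketch of the node-link idea described in the abstract above: accumulate the citation network around a scholar up to a cutoff year and draw it, sizing nodes by citations received. The toy edge list, the helper name network_up_to, and the library choices (networkx, matplotlib) are illustrative assumptions, not the authors' implementation or the Microsoft Academic Graph schema.

```python
# Hypothetical sketch: a scholar-centered citation network accumulated over time.
# The edge list below is made-up data standing in for real bibliographic records.
import networkx as nx
import matplotlib.pyplot as plt

# (citing_paper, cited_paper, year the citation appears) -- toy data
citations = [
    ("p2", "p1", 2010),
    ("p3", "p1", 2012),
    ("p3", "p2", 2012),
    ("p4", "p1", 2015),
    ("p4", "p3", 2015),
]

def network_up_to(year):
    """Return the citation network accumulated through the given year."""
    g = nx.DiGraph()
    g.add_edges_from((src, dst) for src, dst, y in citations if y <= year)
    return g

g = network_up_to(2013)
# Size nodes by in-degree (citations received) to emphasize influential papers.
sizes = [200 + 400 * g.in_degree(n) for n in g.nodes]
pos = nx.spring_layout(g, seed=0)
nx.draw_networkx(g, pos, node_size=sizes, arrows=True)
plt.title("Citation network accumulated through 2013")
plt.show()
```

Rendering one frame per year with a shared layout would approximate the animated, career-long view the abstract describes; key indicators such as cross-field citation counts could be overlaid per frame.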

    Artificial Intelligence and Aesthetic Judgment

    Generative AIs produce creative outputs in the style of human expression. We argue that encounters with the outputs of modern generative AI models are mediated by the same kinds of aesthetic judgments that organize our interactions with artwork. The interpretation procedure we use on art we find in museums is not an innate human faculty, but one developed over history by disciplines such as art history and art criticism to fulfill certain social functions. This gives us pause when considering our reactions to generative AI, how we should approach this new medium, and why generative AI seems to incite so much fear about the future. We naturally inherit a conundrum of causal inference from the history of art: a work can be read as a symptom of the cultural conditions that influenced its creation while simultaneously being framed as a timeless, seemingly acausal distillation of an eternal human condition. In this essay, we focus on an unresolved tension when we bring this dilemma to bear in the context of generative AI: are we looking for proof that generated media reflects something about the conditions that created it, or some eternal human essence? Are current modes of interpretation sufficient for this task? Historically, new forms of art have changed how art is interpreted, with such influence used as evidence that a work of art has touched some essential human truth. As generative AI influences contemporary aesthetic judgment, we outline some of the pitfalls and traps in attempting to scrutinize what AI-generated media means.

    Understanding and Supporting Trade-offs in the Design of Visualizations for Communication

    A shift in the availability of usable tools and public data has prompted mass manufacturing of information visualizations to communicate data insights to broad audiences. Despite available software, professional and novice creators of visualizations intended to communicate data insights to broad audiences may struggle to balance conflicting considerations in design. Studying professional practice suggests that expert visualization designers and analysts negotiate difficult design trade-offs in creating customized visualizations, many of which involve deciding how, and how much, data to present given a priori design goals. This dissertation presents three studies that demonstrate how studying expert visual design and data modeling practice can advance visualization design tools. Insights from these formative studies inform the development of specific frameworks and algorithms. The first study addresses the often ignored, persuasive dimension of narrative visualizations. The framework I propose characterizes the persuasive dimension of visualization design by providing empirical evidence of several classes of rhetorical design strategies that trade off comprehensive, unbiased data presentation goals against intentions to persuade users toward intended interpretations. The rhetorical visualization framework highlights a second trade-off: the act of dividing and sequencing information from a multivariate data set into separate visualizations for ordered presentation. I contribute initial evidence of ordering principles that designers apply to ease comprehension and support storytelling goals with a visualization presentation. The principles are used in developing a novel algorithmic approach to supporting designers of visualizations in making decisions related to visualization presentation order and structuring, highlighting the importance of optimizing local or “single visualization” design in tandem with global “sequence” design. The final design trade-off concerns how to convey uncertainty to end-users in order to support accurate conclusions despite diverse educational backgrounds. I demonstrate how non-statistician end-users can produce more cautious and at times more accurate estimates of the reliability of data patterns through the use of a comparative sample plots method motivated by statistical resampling approaches to modeling uncertainty. Taken together, my results deepen understanding of the act of designing visualizations for potentially diverse online audiences, and provide tools to support more effective design.
    PhD, Information, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/107170/1/jhullman_1.pd
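The comparative sample plots method mentioned above is motivated by resampling. A minimal sketch of that underlying idea, under assumed toy data and an assumed bar-chart encoding (not the dissertation's actual stimuli or implementation), is to show several bootstrap resamples side by side so a reader can judge how stable an apparent pattern is:

```python
# Hypothetical sketch: small multiples of bootstrap resamples as a way to
# convey the reliability of a pattern (here, differences in group means).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
groups = ["A", "B", "C"]
# Made-up observations per group; real data would replace this.
data = {g: rng.normal(loc=mu, scale=1.0, size=30)
        for g, mu in zip(groups, [1.0, 1.3, 1.1])}

n_panels = 8  # each panel shows one resample of the same data
fig, axes = plt.subplots(2, n_panels // 2, sharey=True, figsize=(10, 4))
for ax in axes.ravel():
    # Resample each group with replacement and plot the resulting means.
    means = [rng.choice(data[g], size=len(data[g]), replace=True).mean()
             for g in groups]
    ax.bar(groups, means)
fig.suptitle("Bootstrap resamples: how stable is the A/B/C difference?")
plt.tight_layout()
plt.show()
```

The intuition is that if the ordering of bars flips across panels, the apparent difference is fragile; if it persists, the pattern is more likely reliable.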