Toward automatic comparison of visualization techniques: Application to graph visualization
Many end-user evaluations of data visualization techniques have been conducted
over the past decades. Their results are cornerstones for building efficient
visualization systems. However, designing such an evaluation is always complex
and time-consuming and may result in a lack of statistical evidence and
reproducibility. We believe that modern and efficient computer vision
techniques, such as deep convolutional neural networks (CNNs), may help
visualization researchers build and/or adjust their evaluation hypotheses.
The basis of our idea is to train machine learning models on several
visualization techniques to solve a specific task. Our assumption is that it is
possible to compare the efficiency of visualization techniques based on the
performance of their corresponding model. As current machine learning models
are not able to strictly reflect human capabilities, including their
imperfections, such results should be interpreted with caution. However, we
think that using machine learning-based pre-evaluation, as a pre-process to
standard user evaluations, should help researchers perform a more exhaustive
study of their design space. It should thus improve their final user
evaluation by providing it with better test cases. In this paper, we present the
results of two experiments we have conducted to assess how correlated the
performance of users and computer vision techniques can be. That study compares
two mainstream graph visualization techniques: node-link (NL) and
adjacency-matrix (MD) diagrams. Using two well-known deep convolutional neural
networks, we partially reproduced the user evaluations of Ghoniem et al.
and of Okoe et al. These experiments showed that some user
evaluation results can be reproduced automatically.

Comment: 35 pages, 6 figures, 4 tables