Measuring Categorical Perception in Color-Coded Scatterplots
Scatterplots commonly use color to encode categorical data. However, as
datasets increase in size and complexity, the efficacy of these channels may
vary. Designers lack insight into how robust different design choices are to
variations in category numbers. This paper presents a crowdsourced experiment
measuring how the number of categories and choice of color encodings used in
multiclass scatterplots influences the viewers' abilities to analyze data
across classes. Participants estimated relative means in a series of
scatterplots with 2 to 10 categories encoded using ten color palettes drawn
from popular design tools. Our results show that the number of categories and
color discriminability within a color palette notably impact people's
perception of categorical data in scatterplots and that the judgments become
harder as the number of categories grows. We examine existing palette design
heuristics in light of our results to help designers make robust color choices
informed by the parameters of their data. Comment: The paper has been accepted at ACM CHI 2023. 14 pages, 7 figures.
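As an illustrative sketch only (not code from the paper), a multiclass scatterplot that encodes category membership with a qualitative color palette might be generated as follows; the class count, cluster positions, and the `tab10` palette are arbitrary choices for the example:

```python
# Illustrative sketch: categorical color encoding in a multiclass scatterplot.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_categories = 6                 # the study varied this from 2 to 10
cmap = plt.cm.tab10              # one qualitative palette from popular design tools

fig, ax = plt.subplots()
for k in range(n_categories):
    # each class gets its own point cluster and its own palette color
    x = rng.normal(loc=k, scale=0.8, size=50)
    y = rng.normal(loc=rng.uniform(0, 5), scale=0.8, size=50)
    ax.scatter(x, y, color=cmap(k), label=f"class {k}", s=12)
ax.legend()
```

Discriminability between the palette's colors, rather than the palette alone, is what the abstract identifies as the driver of perceptual difficulty as the class count grows.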
Recommended from our members
Assessing the Graphical Perception of Time and Speed on 2D+Time Trajectories
We empirically evaluate the extent to which people perceive non-constant time and speed encoded on 2D paths. In our graphical perception study, we evaluate nine encodings from the literature for both straight and curved paths. Visualizing time and speed information is a challenge when the x and y axes already encode other data dimensions, for example when plotting a trip on a map. This is particularly true in disciplines such as time-geography and movement analytics that often require visualizing spatio-temporal trajectories. A common approach is to use 2D+time trajectories, which are 2D paths for which time is an additional dimension. However, there are currently no guidelines regarding how to represent time and speed on such paths. Our study results provide InfoVis designers with clear guidance regarding which encodings to use and which ones to avoid; in particular, we suggest using color value to encode speed and segment length to encode time whenever possible.
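The recommended encoding (color value for speed along a 2D path) can be sketched in matplotlib with a per-segment colored line; this is our own construction following the abstract's guideline, and the circular path and grayscale mapping are arbitrary example choices:

```python
# Sketch: encode per-segment speed as color value along a 2D trajectory.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

t = np.linspace(0, 2 * np.pi, 100)        # sample times (constant step here)
x, y = np.cos(t), np.sin(t) * 0.5         # a curved 2D path (an ellipse)
speed = np.hypot(np.diff(x), np.diff(y))  # distance per step = speed estimate

# build one line segment per consecutive point pair: shape (n-1, 2, 2)
points = np.column_stack([x, y]).reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)

# grayscale colormap so the varying channel is lightness ("color value")
lc = LineCollection(segments, cmap="gray")
lc.set_array(speed)                       # map each segment's speed to a color

fig, ax = plt.subplots()
ax.add_collection(lc)
ax.autoscale()
```

Encoding time via segment length would instead resample the path so that equal time steps produce segments whose lengths reflect the motion.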
Representing Data Visualization Goals and Tasks through Meta-Modeling to Tailor Information Dashboards
Information dashboards are everywhere. They support knowledge discovery in a huge
variety of contexts and domains. Although powerful, these tools can be complex, not only for
end-users but also for developers and designers. Information dashboards encode complex datasets
into different visual marks to ease knowledge discovery. Choosing a wrong design can
compromise an entire dashboard's effectiveness, so selecting the appropriate encoding or
configuration for each potential context, user, or data domain is a crucial task. For these reasons,
there is a need to automate the recommendation of visualizations and dashboard
configurations in order to deliver tools adapted to their context. Recommendations can be based on different
aspects, such as user characteristics, the data domain, or the goals and tasks to be achieved or
carried out through the visualizations. This work presents a dashboard meta-model that abstracts
all these factors, together with the integration of a visualization task taxonomy to account for the different
actions that can be performed with information dashboards. This meta-model has been used to
design a domain-specific language for specifying dashboard requirements in a structured way. The
ultimate goal is to obtain a dashboard generation pipeline that delivers dashboards adapted to any
context, such as the educational context, in which large amounts of data are generated and several
actors are involved (students, teachers, managers, etc.) who want to gain different insights
into their learning performance or learning methodologies.
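To make the idea of a meta-model driving dashboard generation concrete, here is a minimal toy illustration of our own (not the paper's actual meta-model or DSL): user, domain, and taxonomy-style tasks are abstracted into a spec, and a hypothetical rule maps tasks to visualizations.

```python
# Toy sketch of a dashboard meta-model: all names and rules are illustrative.
from dataclasses import dataclass, field

@dataclass
class User:
    role: str          # e.g. "student", "teacher", "manager"

@dataclass
class Task:
    action: str        # an action from a visualization task taxonomy, e.g. "compare"
    target: str        # the data dimension the task acts on

@dataclass
class DashboardSpec:
    user: User
    domain: str
    tasks: list = field(default_factory=list)

    def recommend(self) -> list:
        # hypothetical recommendation rule: map each task action to a chart type
        chart_for = {"compare": "bar chart", "trend": "line chart"}
        return [chart_for.get(t.action, "table") for t in self.tasks]

spec = DashboardSpec(User("teacher"), "education",
                     [Task("compare", "grades"), Task("trend", "attendance")])
charts = spec.recommend()   # one recommended visualization per task
```

A real pipeline would, as the abstract describes, derive such specs from a structured domain-specific language rather than hard-coded rules.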
Bi-Modality Medical Image Synthesis Using Semi-Supervised Sequential Generative Adversarial Networks
In this paper, we propose a bi-modality medical image synthesis approach
based on a sequential generative adversarial network (GAN) and semi-supervised
learning. Our approach consists of two generative modules that synthesize
images of the two modalities in sequential order. A method for measuring
synthesis complexity is proposed to automatically determine the synthesis order
in our sequential GAN. Images of the modality with lower complexity are
synthesized first, and the counterparts with higher complexity are generated
later. Our sequential GAN is trained end-to-end in a semi-supervised manner. In
supervised training, the joint distribution of bi-modality images is learned
from real paired images of the two modalities by explicitly minimizing the
reconstruction losses between the real and synthetic images. To avoid
overfitting to the limited training images, in unsupervised training, the marginal
distribution of each modality is learned from unpaired images by minimizing
the Wasserstein distance between the distributions of real and fake images. We
comprehensively evaluate the proposed model on two synthesis tasks using
three types of evaluation metrics and user studies. Visual and quantitative
results demonstrate the superiority of our method over state-of-the-art
methods, and show reasonable visual quality and clinical significance. Code is
publicly available at
https://github.com/hustlinyi/Multimodal-Medical-Image-Synthesis.
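The two training signals described above (a paired reconstruction loss and an unpaired Wasserstein-style objective) can be sketched as follows; this is our simplified illustration with NumPy, not the paper's implementation, and the L1 reconstruction term and critic formulation are standard choices we assume here:

```python
# Sketch of the two loss terms in the semi-supervised scheme described above.
import numpy as np

def reconstruction_loss(real, fake):
    # supervised term: L1 distance between paired real and synthetic images
    return np.mean(np.abs(real - fake))

def wasserstein_critic_loss(critic_real, critic_fake):
    # unsupervised term (WGAN-style): the critic maximizes
    # E[D(real)] - E[D(fake)], an estimate of the Wasserstein-1 distance
    # between the real and fake marginal distributions
    return -(np.mean(critic_real) - np.mean(critic_fake))

rng = np.random.default_rng(0)
real = rng.normal(size=(4, 32, 32))          # a batch of paired real images
fake = real + 0.1 * rng.normal(size=(4, 32, 32))  # imperfect synthetic images

rec = reconstruction_loss(real, fake)
# critic outputs on real vs. fake samples (stand-in values for the example)
adv = wasserstein_critic_loss(rng.normal(1.0, 0.1, 4), rng.normal(0.0, 0.1, 4))
```

In the actual model, the reconstruction term is applied only where paired data exist, while the Wasserstein term regularizes each modality's marginal using unpaired images.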