The State of the Art in Cartograms
Cartograms combine statistical and geographical information in thematic maps,
where areas of geographical regions (e.g., countries, states) are scaled in
proportion to some statistic (e.g., population, income). Cartograms make it
possible to gain insight into patterns and trends in the world around us and
have been very popular visualizations for geo-referenced data for over a
century. This work surveys cartogram research in visualization, cartography and
geometry, covering a broad spectrum of different cartogram types: from the
traditional rectangular and table cartograms, to Dorling and diffusion
cartograms. A particular focus is the study of the major cartogram dimensions:
statistical accuracy, geographical accuracy, and topological accuracy. We
review the history of cartograms, describe the algorithms for generating them,
and consider task taxonomies. We also review quantitative and qualitative
evaluations, and we use these to arrive at design guidelines and research
challenges.
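The core scaling rule the survey describes can be sketched in a few lines: each region's target area is made proportional to its statistic. This is a minimal illustration, not an algorithm from the survey; the region names and statistics below are made up.

```python
# Hypothetical sketch of the basic cartogram scaling rule: each region's
# target area is proportional to its statistic (e.g., population).
# Region names and values are illustrative only.

def target_areas(values, total_map_area):
    """Scale each region's area in proportion to its statistic."""
    total = sum(values.values())
    return {region: total_map_area * v / total for region, v in values.items()}

populations = {"A": 50, "B": 30, "C": 20}  # made-up statistics
areas = target_areas(populations, total_map_area=1000.0)
# Region A holds half the statistic, so it receives half the map area: 500.0
```

Actual cartogram algorithms (diffusion, rectangular, Dorling, etc.) differ in how they deform geography toward these target areas while trading off the statistical, geographical, and topological accuracy dimensions discussed above.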
Evaluating Cartogram Effectiveness
Cartograms are maps in which areas of geographic regions (countries, states)
appear in proportion to some variable of interest (population, income).
Cartograms are popular visualizations for geo-referenced data that have been
used for over a century and that make it possible to gain insight into patterns
and trends in the world around us. Despite the popularity of cartograms and the
large number of cartogram types, there are few studies evaluating the
effectiveness of cartograms in conveying information. Based on a recent task
taxonomy for cartograms, we evaluate four major types of cartograms:
contiguous, non-contiguous, rectangular, and Dorling cartograms. Specifically,
we evaluate the effectiveness of these cartograms by quantitative performance
analysis, as well as by subjective preferences. We analyze the results of our
study in the context of some prevailing assumptions in the literature of
cartography and cognitive science. Finally, we make recommendations for the use
of different types of cartograms for different tasks and settings.
Interpreting Adversarially Trained Convolutional Neural Networks
We attempt to interpret how adversarially trained convolutional neural
networks (AT-CNNs) recognize objects. We design systematic approaches to
interpret AT-CNNs in both qualitative and quantitative ways and compare them
with normally trained models. Surprisingly, we find that adversarial training
alleviates the texture bias of standard CNNs when trained on object recognition
tasks, and helps CNNs learn a more shape-biased representation. We validate our
hypothesis from two aspects. First, we compare the salience maps of AT-CNNs and
standard CNNs on clean images and images under different transformations. This
comparison visually shows that the predictions of the two types of CNNs are
sensitive to dramatically different types of features. Second, to achieve
quantitative verification, we construct additional test datasets that destroy
either textures or shapes, such as style-transferred versions of clean data,
saturated images and patch-shuffled ones, and then evaluate the classification
accuracy of AT-CNNs and normal CNNs on these datasets. Our findings shed some
light on why AT-CNNs are more robust than normally trained models and
contribute to a better understanding of adversarial training over CNNs from an
interpretation perspective.
Comment: To appear in ICML1
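One of the shape-destroying transforms the abstract mentions, patch shuffling, is easy to sketch: cutting an image into a grid of patches and permuting them preserves local texture statistics while destroying global shape. The grid size and image below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Hedged sketch of patch shuffling, one of the texture/shape-destroying
# transforms mentioned in the abstract. Parameters (k, image size) are
# illustrative, not taken from the paper.

def patch_shuffle(img, k, rng):
    """Shuffle a (H, W, C) image as a k x k grid of patches."""
    h, w = img.shape[0] // k, img.shape[1] // k
    patches = [img[i*h:(i+1)*h, j*w:(j+1)*w] for i in range(k) for j in range(k)]
    order = rng.permutation(len(patches))
    # Reassemble the permuted patches row by row.
    rows = [np.concatenate([patches[order[r*k + c]] for c in range(k)], axis=1)
            for r in range(k)]
    return np.concatenate(rows, axis=0)

rng = np.random.default_rng(0)
img = (np.arange(64 * 64 * 3) % 251).astype(np.uint8).reshape(64, 64, 3)
shuffled = patch_shuffle(img, k=4, rng=rng)
assert shuffled.shape == img.shape  # same size, scrambled global structure
```

A shape-biased model should lose accuracy on such images, while a texture-biased model should be comparatively unaffected, which is what makes the transform diagnostic.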
Scan and paint: theory and practice of a sound field visualization method
Sound visualization techniques have played a key role in the development of acoustics throughout history. The development of measurement apparatus and techniques for displaying sound and vibration phenomena has provided excellent tools for building understanding about specific problems. Traditional methods, such as step-by-step measurements or simultaneous multichannel systems, have a strong tradeoff between time requirements, flexibility, and cost. However, if the sound field can be assumed time-stationary, scanning methods allow us to assess variations across space with a single transducer, as long as the position of the sensor is known. The proposed technique, Scan and Paint, is based on the acquisition of sound pressure and particle velocity by manually moving a P-U probe (pressure-particle velocity sensors) across a sound field whilst filming the event with a camera. The sensor position is extracted by applying automatic color tracking to each frame of the recorded video. It is then possible to visualize sound variations across the space in terms of sound pressure, particle velocity, or acoustic intensity. In this paper, we explore not only the theoretical foundations of the method but also its practical applications, such as scanning transfer path analysis, source radiation characterization, operational deflection shapes, virtual phased arrays, material characterization, and acoustic intensity vector field mapping.
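The color-tracking step described above can be illustrated with a minimal sketch: locate a colored marker in each video frame by thresholding the RGB values and taking the centroid of the matching pixels. The threshold bounds and the synthetic frame below are assumptions for demonstration, not the paper's implementation.

```python
import numpy as np

# Illustrative sketch of per-frame color tracking: threshold on an RGB
# range and take the centroid of matching pixels. The color bounds and
# frame contents are made up for demonstration.

def marker_position(frame, lo, hi):
    """Return the (row, col) centroid of pixels whose RGB lies in [lo, hi]."""
    mask = np.all((frame >= lo) & (frame <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # marker not visible in this frame
    return ys.mean(), xs.mean()

frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:44, 60:64] = [255, 0, 0]  # synthetic red marker on the probe
pos = marker_position(frame,
                      lo=np.array([200, 0, 0]),
                      hi=np.array([255, 50, 50]))
# Centroid falls at the middle of the 4x4 red patch: (41.5, 61.5)
```

Running this over every frame of the recorded video yields the probe trajectory, which is then paired with the synchronized pressure and particle-velocity signals to map the field across space.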