The State of the Art in Cartograms
Cartograms combine statistical and geographical information in thematic maps,
where areas of geographical regions (e.g., countries, states) are scaled in
proportion to some statistic (e.g., population, income). Cartograms make it
possible to gain insight into patterns and trends in the world around us and
have been very popular visualizations for geo-referenced data for over a
century. This work surveys cartogram research in visualization, cartography and
geometry, covering a broad spectrum of different cartogram types: from the
traditional rectangular and table cartograms, to Dorling and diffusion
cartograms. A particular focus is the study of the major cartogram dimensions:
statistical accuracy, geographical accuracy, and topological accuracy. We
review the history of cartograms, describe the algorithms for generating them,
and consider task taxonomies. We also review quantitative and qualitative
evaluations, and we use these to arrive at design guidelines and research
challenges.
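To make the statistical-accuracy dimension concrete, the following minimal Python sketch computes one commonly used cartographic error measure: the average gap between each region's share of map area and its share of the mapped statistic. The function and input names are hypothetical; the survey itself discusses several variants of such metrics.

```python
def cartographic_error(region_areas, region_values):
    """Average statistical error of a cartogram: for each region, compare its
    share of total map area with its share of the total statistic (e.g. population).
    `region_areas` and `region_values` are dicts keyed by region name
    (a hypothetical input format, not tied to any particular cartogram tool)."""
    total_area = sum(region_areas.values())
    total_value = sum(region_values.values())
    errors = []
    for region, area in region_areas.items():
        area_share = area / total_area
        value_share = region_values[region] / total_value
        errors.append(abs(area_share - value_share))
    return sum(errors) / len(errors)

# Example: a two-region cartogram whose areas roughly match population shares.
print(cartographic_error({"A": 60.0, "B": 40.0}, {"A": 6.5e6, "B": 3.5e6}))
```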
Superpixels: An Evaluation of the State-of-the-Art
Superpixels group perceptually similar pixels to create visually meaningful
entities while heavily reducing the number of primitives for subsequent
processing steps. Because of these properties, superpixel algorithms have received
much attention since the term was coined in 2003. Today, publicly available
superpixel algorithms have turned into standard tools in low-level vision. As
such, and due to their quick adoption in a wide range of applications,
appropriate benchmarks are crucial for algorithm selection and comparison.
Until now, the rapidly growing number of algorithms as well as varying
experimental setups hindered the development of a unifying benchmark. We
present a comprehensive evaluation of 28 state-of-the-art superpixel algorithms
utilizing a benchmark focusing on fair comparison and designed to provide new
insights relevant for applications. To this end, we explicitly discuss
parameter optimization and the importance of strictly enforcing connectivity.
Furthermore, by extending well-known metrics, we are able to summarize
algorithm performance independent of the number of generated superpixels,
thereby overcoming a major limitation of available benchmarks. In addition, we
discuss runtime, robustness against noise, blur and affine transformations,
implementation details as well as aspects of visual quality. Finally, we
present an overall ranking of superpixel algorithms which redefines the
state-of-the-art and enables researchers to easily select appropriate
algorithms and the corresponding implementations which themselves are made
publicly available as part of our benchmark at
davidstutz.de/projects/superpixel-benchmark/
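As an illustration of the kind of boundary-based metric such a benchmark relies on, here is a minimal NumPy sketch of boundary recall: the fraction of ground-truth boundary pixels that lie within a small tolerance of a superpixel boundary. The function names and tolerance value are our own choices, a generic formulation rather than the benchmark's exact implementation.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def boundaries(labels):
    """Mark pixels whose label differs from a right or bottom neighbour."""
    b = np.zeros(labels.shape, dtype=bool)
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return b

def boundary_recall(superpixel_labels, ground_truth_labels, tolerance=2):
    """Fraction of ground-truth boundary pixels within `tolerance` pixels of a
    superpixel boundary (a common formulation; exact benchmarks may differ)."""
    gt_b = boundaries(ground_truth_labels)
    sp_b = binary_dilation(boundaries(superpixel_labels), iterations=tolerance)
    return (gt_b & sp_b).sum() / max(gt_b.sum(), 1)
```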
Multitarget Tracking in Nonoverlapping Cameras Using a Reference Set
Tracking multiple targets in nonoverlapping cameras is challenging since the observations of the same targets are often separated by time and space. There might be significant appearance change of a target across camera views caused by variations in illumination conditions, poses, and camera imaging characteristics. Consequently, the same target may appear very different in two cameras. Therefore, associating tracks in different camera views directly based on their appearance similarity is difficult and prone to error. In most previous methods, the appearance similarity is computed either using color histograms or based on a pretrained brightness transfer function that maps color between cameras. In this paper, a novel reference-set-based appearance model is proposed to improve multitarget tracking in a network of nonoverlapping cameras. In contrast to previous work, a reference set is constructed for a pair of cameras, containing subjects appearing in both camera views. For track association, instead of directly comparing the appearance of two targets in different camera views, they are compared indirectly via the reference set. Besides global color histograms, texture and shape features are extracted at different locations of a target, and AdaBoost is used to learn the discriminative power of each feature. The effectiveness of the proposed method over the state of the art on two challenging real-world multicamera video data sets is demonstrated by thorough experiments.
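To make the indirect-comparison idea concrete, the sketch below compares two targets from different cameras via their similarity to a shared reference set rather than to each other directly. The feature representation and cosine similarity used here are simple stand-ins, not the paper's learned AdaBoost-weighted features.

```python
import numpy as np

def reference_similarity_vector(target_feature, reference_features):
    """Similarity of one target to every subject in the reference set."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return np.array([cosine(target_feature, r) for r in reference_features])

def indirect_similarity(feat_cam_a, feat_cam_b, refs_cam_a, refs_cam_b):
    """Compare two targets observed in cameras A and B indirectly: each target is
    described by how similar it is to the same reference subjects as seen in its
    own camera, and the two description vectors are then compared."""
    va = reference_similarity_vector(feat_cam_a, refs_cam_a)
    vb = reference_similarity_vector(feat_cam_b, refs_cam_b)
    return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))
```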
A Qualitative and Quantitative Evaluation of 8 Clear Sky Models
We provide a qualitative and quantitative evaluation of 8 clear sky models
used in Computer Graphics. We compare the models with each other as well as
with measurements and with a reference model from the physics community. After
a short summary of the physics of the problem, we present the measurements and
the reference model, and how we "invert" it to get the model parameters. We
then give an overview of each CG model, and detail its scope, its algorithmic
complexity, and its results using the same parameters as in the reference
model. We also compare the models with a perceptual study. Our quantitative
results confirm that the fewer simplifications and approximations are used to
solve the physical equations, the more accurate the results are. We conclude
with a discussion of the advantages and drawbacks of each model, and how to
further improve their accuracy.
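As a rough illustration of what "inverting" a reference model to recover its parameters can look like, the sketch below fits a single free parameter by least squares against measured sky radiances. The callable model and the turbidity-like parameter are placeholders, not the actual reference model or fitting procedure used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_model_parameter(model, measured_directions, measured_radiances, initial_guess=3.0):
    """Recover a scalar model parameter (e.g. a turbidity-like value) by
    minimising the residual between model predictions and measurements.
    `model(directions, param)` is a hypothetical callable standing in for
    the reference sky model."""
    def residual(param):
        return model(measured_directions, param[0]) - measured_radiances
    result = least_squares(residual, x0=[initial_guess])
    return result.x[0]
```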
Evaluation of CNN-based Single-Image Depth Estimation Methods
While interest in deep models for single-image depth estimation is growing,
established schemes for their evaluation are still limited. We propose a set of
novel quality criteria, allowing for a more
detailed analysis by focusing on specific characteristics of depth maps. In
particular, we address the preservation of edges and planar regions, depth
consistency, and absolute distance accuracy. In order to employ these metrics
to evaluate and compare state-of-the-art single-image depth estimation
approaches, we provide a new high-quality RGB-D dataset. We used a DSLR camera
together with a laser scanner to acquire high-resolution images and highly
accurate depth maps. Experimental results show the validity of our proposed
evaluation protocol.
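To illustrate the kind of distance-accuracy criterion such an evaluation builds on, here is a minimal NumPy sketch of two standard depth-error measures, absolute relative error and RMSE; these are generic metrics, not necessarily the exact formulations proposed in the paper.

```python
import numpy as np

def depth_errors(predicted, ground_truth):
    """Absolute relative error and RMSE over valid (non-zero) ground-truth pixels."""
    valid = ground_truth > 0
    pred, gt = predicted[valid], ground_truth[valid]
    abs_rel = float(np.mean(np.abs(pred - gt) / gt))
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))
    return {"abs_rel": abs_rel, "rmse": rmse}
```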
Leveraging Citation Networks to Visualize Scholarly Influence Over Time
Assessing the influence of a scholar's work is an important task for funding
organizations, academic departments, and researchers. Common methods, such as
measures of citation counts, can ignore much of the nuance and
multidimensionality of scholarly influence. We present an approach for
generating dynamic visualizations of scholars' careers. This approach uses an
animated node-link diagram showing the citation network accumulated around the
researcher over the course of the career in concert with key indicators,
highlighting influence both within and across fields. We developed our design
in collaboration with one funding organization---the Pew Biomedical Scholars
program---but the methods are generalizable to visualizations of scholarly
influence. We applied the design method to the Microsoft Academic Graph, which
includes more than 120 million publications. We validate our abstractions
throughout the process via collaboration with the Pew Biomedical Scholars
program officers and through summative evaluations with their scholars.
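A minimal sketch of the underlying data step, accumulating a scholar's citation network year by year so that each animation frame shows the network up to that point in the career; the tuple format and field names here are simplifying assumptions, not the Microsoft Academic Graph schema.

```python
import networkx as nx

def yearly_citation_networks(citations):
    """Build one cumulative citation graph per year.
    `citations` is an iterable of (citing_paper, cited_paper, year) tuples,
    a hypothetical simplification of the raw citation records."""
    graph = nx.DiGraph()
    frames = {}
    for citing, cited, year in sorted(citations, key=lambda c: c[2]):
        graph.add_edge(citing, cited, year=year)
        frames[year] = graph.copy()
    return frames
```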