Selection Bias Tracking and Detailed Subset Comparison for High-Dimensional Data
The collection of large, complex datasets has become common across a wide
variety of domains. Visual analytics tools increasingly play a key role in
exploring and answering complex questions about these large datasets. However,
many visualizations are not designed to concurrently visualize the large number
of dimensions present in complex datasets (e.g. tens of thousands of distinct
codes in an electronic health record system). This fact, combined with the
ability of many visual analytics systems to enable rapid, ad-hoc specification
of groups, or cohorts, of individuals based on a small subset of visualized
dimensions, leads to the possibility of introducing selection bias--when the
user creates a cohort based on a specified set of dimensions, differences
across many other unseen dimensions may also be introduced. These unintended
side effects may result in the cohort no longer being representative of the
larger population intended to be studied, which can negatively affect the
validity of subsequent analyses. We present techniques for selection bias
tracking and visualization that can be incorporated into high-dimensional
exploratory visual analytics systems, with a focus on medical data with
existing data hierarchies. These techniques include: (1) tree-based cohort
provenance and visualization, with a user-specified baseline cohort that all
other cohorts are compared against, and visual encoding of the drift for each
cohort, which indicates where selection bias may have occurred, and (2) a set
of visualizations, including a novel icicle-plot based visualization, to
compare in detail the per-dimension differences between the baseline and a
user-specified focus cohort. These techniques are integrated into a medical
temporal event sequence visual analytics tool. We present example use cases and
report findings from domain expert user interviews. Comment: IEEE Transactions on Visualization and Computer Graphics (TVCG), Volume 26, Issue 1, 2020. Also part of proceedings for IEEE VAST 201
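The drift idea in (1) can be illustrated with a toy per-dimension prevalence comparison between a baseline cohort and a focus cohort. This is a minimal sketch under simplifying assumptions (binary presence/absence dimensions, absolute prevalence difference as the drift score); the paper's actual drift encoding and tree-based provenance are richer:

```python
from collections import Counter

def dimension_drift(baseline, focus):
    """Per-dimension prevalence difference between a baseline cohort and a
    focus cohort. Each cohort is a list of record dicts mapping a dimension
    (e.g. a medical code) to 1 if present. The absolute-difference score is
    an illustrative stand-in for the paper's drift encoding."""
    def prevalence(cohort):
        counts = Counter()
        for record in cohort:
            counts.update(k for k, v in record.items() if v)
        return {k: c / len(cohort) for k, c in counts.items()}

    p_base, p_focus = prevalence(baseline), prevalence(focus)
    dims = set(p_base) | set(p_focus)
    # Positive drift: the dimension is over-represented in the focus cohort.
    return {d: p_focus.get(d, 0.0) - p_base.get(d, 0.0) for d in dims}

# Hypothetical cohorts: selecting on one dimension shifts an unseen one too.
baseline = [{"diabetes": 1}, {"diabetes": 0, "asthma": 1}, {"asthma": 1}]
focus = [{"diabetes": 1}, {"diabetes": 1, "asthma": 1}]
drift = dimension_drift(baseline, focus)
```

Dimensions with large absolute drift are the ones a visual encoding would flag as possible selection bias.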
Parallelized and Vectorized Tracking Using Kalman Filters with CMS Detector Geometry and Events
The High-Luminosity Large Hadron Collider at CERN will be characterized by
greater pileup of events and higher occupancy, making the track reconstruction
even more computationally demanding. Existing algorithms at the LHC are based
on Kalman filter techniques with proven excellent physics performance under a
variety of conditions. Starting in 2014, we have been developing
Kalman-filter-based methods for track finding and fitting adapted for many-core
SIMD processors that are becoming dominant in high-performance systems.
This paper summarizes the latest extensions to our software that allow it to
run on the realistic CMS-2017 tracker geometry using CMSSW-generated events,
including pileup. The reconstructed tracks can be validated against either the
CMSSW simulation that generated the hits, or the CMSSW reconstruction of the
tracks. In general, the code's computational performance has continued to
improve while the above capabilities were being added. We demonstrate that the
present Kalman filter implementation is able to reconstruct events with
comparable physics performance to CMSSW, while providing generally better
computational performance. Further plans for advancing the software are
discussed
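As background, the predict/update cycle at the core of any Kalman-filter tracker can be sketched for a single scalar track parameter. This is a toy illustration only, with made-up noise parameters; the production software is vectorized over many candidate tracks and uses the full CMS detector geometry:

```python
def kalman_1d(measurements, r=1.0, q=0.01):
    """Minimal scalar Kalman filter with a constant-state model, showing the
    predict/update cycle used in Kalman track fitting. r is the measurement
    noise variance and q the process noise; both values are illustrative."""
    x, p = measurements[0], 1.0           # initial state and covariance
    estimates = [x]
    for z in measurements[1:]:
        p += q                            # predict: covariance grows
        k = p / (p + r)                   # Kalman gain
        x += k * (z - x)                  # update state with the residual
        p *= (1.0 - k)                    # update covariance
        estimates.append(x)
    return estimates

# Noisy measurements of a quantity near 1.0; the estimate smooths them.
est = kalman_1d([1.0, 1.2, 0.9, 1.1, 1.0])
```

In a real track fit the state is a multi-dimensional helix parameterization and the update step involves matrix algebra, which is exactly the part the paper vectorizes for SIMD processors.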
Gaze Distribution Analysis and Saliency Prediction Across Age Groups
Knowledge of the human visual system helps to develop better computational
models of visual attention. State-of-the-art models have been developed to mimic the visual attention system of young adults; however, they largely ignore the variations that occur with age. In this paper, we investigate how visual scene processing changes with age and propose an age-adapted framework that
helps to develop a computational model that can predict saliency across
different age groups. Our analysis uncovers how the explorativeness of an
observer varies with age, how well saliency maps of an age group agree with
fixation points of observers from the same or different age groups, and how age
influences the center bias. We analyzed the eye movement behavior of 82
observers belonging to four age groups while they explored visual scenes.
Explorativeness was quantified in terms of the entropy of a saliency map, and
the area under the curve (AUC) metric was used to quantify the agreement analysis and the center bias. These results were used to develop age-adapted saliency
models. Our results suggest that the proposed age-adapted saliency model
outperforms existing saliency models in predicting the regions of interest
across age groups
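The explorativeness measure described above can be sketched as the Shannon entropy of a saliency map treated as a probability distribution over pixels. A minimal illustration (the AUC agreement analysis is omitted; the maps below are schematic, not the study's data):

```python
import math

def saliency_entropy(saliency):
    """Shannon entropy (in bits) of a saliency map normalized to a
    probability distribution over pixels. Higher entropy means attention
    is spread more widely, i.e. a more explorative viewing pattern."""
    total = sum(sum(row) for row in saliency)
    probs = [v / total for row in saliency for v in row if v > 0]
    return -sum(p * math.log2(p) for p in probs)

uniform = [[1.0, 1.0], [1.0, 1.0]]   # maximally spread attention
peaked = [[1.0, 0.0], [0.0, 0.0]]    # a single fixation hotspot
```

For the 2x2 example, the uniform map gives the maximum of 2 bits and the single-hotspot map gives 0 bits, so entropy orders maps from focused to explorative as the abstract describes.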
Selection Bias in News Coverage: Learning it, Fighting it
News entities must select and filter the coverage they broadcast through
their respective channels since the set of world events is too large to be
treated exhaustively. The subjective nature of this filtering induces biases
due to, among other things, resource constraints, editorial guidelines,
ideological affinities, or even the fragmented nature of the information at a
journalist's disposal. The magnitude and direction of these biases are,
however, widely unknown. The absence of ground truth, the sheer size of the
event space, or the lack of an exhaustive set of absolute features to measure
make it difficult to observe the bias directly, to characterize the leaning's
nature and to factor it out to ensure a neutral coverage of the news. In this
work, we introduce a methodology to capture the latent structure of media's
decision process on a large scale. Our contribution is multi-fold. First, we
show media coverage to be predictable using personalization techniques, and
evaluate our approach on a large set of events collected from the GDELT
database. We then show that a personalized and parametrized approach not only
exhibits higher accuracy in coverage prediction, but also provides an
interpretable representation of the selection bias. Last, we propose a method
able to select a set of sources by leveraging the latent representation. These
selected sources provide a more diverse and egalitarian coverage, all while
retaining the most actively covered events
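The coverage-prediction step can be sketched as a per-source binary classifier over event features. This is a hypothetical, minimal stand-in using logistic regression trained by stochastic gradient descent; the feature names and data below are invented, and the paper's personalization model is more elaborate:

```python
import math

def train_coverage_model(events, covered, lr=0.5, epochs=200):
    """Fit a logistic model of one source's coverage decisions.
    events: list of event feature vectors; covered: 0/1 labels saying
    whether this source covered each event. Illustrative only."""
    w = [0.0] * len(events[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(events, covered):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # predicted coverage probability
            g = p - y                            # gradient of the log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Toy source that covers events with a large first feature
# (e.g. a hypothetical "local relevance" score).
events = [[1.0, 0.0], [0.9, 0.2], [0.1, 0.8], [0.0, 1.0]]
covered = [1, 1, 0, 0]
w, b = train_coverage_model(events, covered)
```

The fitted weights are the interpretable part: a strongly positive or negative weight on a feature exposes which event properties drive that source's selection bias.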
General Defocusing Particle Tracking: fundamentals and uncertainty assessment
General Defocusing Particle Tracking (GDPT) is a single-camera,
three-dimensional particle tracking method that determines the particle depth
positions from the defocusing patterns of the corresponding particle images.
GDPT relies on a reference set of experimental particle images which is used to
predict the depth position of measured particle images of similar shape. While
several implementations of the method are possible, its accuracy is ultimately
limited by some intrinsic properties of the acquired data, such as the
signal-to-noise ratio, the particle concentration, as well as the
characteristics of the defocusing patterns. GDPT has been applied in different fields by different research groups; however, a deeper description and analysis of the method's fundamentals has hitherto not been available. In this work, we first identify the fundamental elements that characterize a GDPT measurement.
Afterwards, we present a standardized framework based on synthetic images to
assess the performance of GDPT implementations in terms of measurement
uncertainty and relative number of measured particles. Finally, we provide
guidelines to assess the uncertainty of experimental GDPT measurements, where
true values are not accessible and additional image aberrations can lead to
bias errors. The data were processed using DefocusTracker, an open-source GDPT software package. The datasets were created using the synthetic image generator MicroSIG and have been shared in a freely accessible repository.
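The look-up principle behind GDPT, matching a measured particle image against a reference stack of known depths, can be sketched as follows. This is a toy illustration with a crude similarity score and schematic 3x3 "images"; it is not the DefocusTracker implementation:

```python
def estimate_depth(measured, reference_stack):
    """Return the depth whose reference particle image best matches the
    measured image, using negative mean squared difference of
    intensity-normalized images as a simple similarity score."""
    def normalize(img):
        flat = [v for row in img for v in row]
        mean = sum(flat) / len(flat)
        scale = max(max(flat) - min(flat), 1e-12)
        return [[(v - mean) / scale for v in row] for row in img]

    m = normalize(measured)
    best_depth, best_score = None, float("-inf")
    for depth, ref in reference_stack:
        r = normalize(ref)
        score = -sum((a - b) ** 2
                     for ra, rb in zip(m, r) for a, b in zip(ra, rb))
        if score > best_score:
            best_depth, best_score = depth, score
    return best_depth

# Schematic reference stack: the defocus pattern spreads with depth (in µm).
stack = [
    (0.0, [[0, 1, 0], [1, 4, 1], [0, 1, 0]]),   # in focus: sharp peak
    (10.0, [[1, 2, 1], [2, 2, 2], [1, 2, 1]]),  # defocused: spread ring
]
depth = estimate_depth([[1, 2, 1], [2, 2, 2], [1, 2, 1]], stack)
```

Real implementations interpolate between reference planes and use more robust similarity measures (e.g. normalized cross-correlation), which is where the intrinsic uncertainty the paper analyzes enters.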
Multiparticle azimuthal correlations for extracting event-by-event elliptic and triangular flow in Au + Au collisions at √s_NN = 200 GeV
We present measurements of elliptic and triangular azimuthal anisotropy of charged particles detected at forward rapidity 1 < |η| < 3 in Au + Au collisions at √s_NN = 200 GeV, as a function of centrality. The multiparticle cumulant technique is used to obtain the elliptic flow coefficients v2{2}, v2{4}, v2{6}, and v2{8}, and triangular flow coefficients v3{2} and v3{4}. Using the small-variance limit, we estimate the mean and variance of the event-by-event v2 distribution from v2{2} and v2{4}. In a complementary analysis, we also use a folding procedure to study the distributions of v2 and v3 directly, extracting both the mean and variance. Implications for initial geometrical fluctuations and their translation into the final-state momentum distributions are discussed.
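The small-variance limit mentioned above follows from the standard cumulant relations v2{2}² ≈ ⟨v2⟩² + σ² and v2{4}² ≈ ⟨v2⟩² − σ², valid when σ ≪ ⟨v2⟩. A short sketch with illustrative input values (not the paper's measured coefficients):

```python
def flow_mean_and_sigma(v2_2, v2_4):
    """Small-variance estimate of the event-by-event v2 distribution from
    the two- and four-particle cumulant coefficients:
        v2{2}^2 ~ <v2>^2 + sigma^2,  v2{4}^2 ~ <v2>^2 - sigma^2
    so  <v2>^2 = (v2{2}^2 + v2{4}^2) / 2  and
        sigma^2 = (v2{2}^2 - v2{4}^2) / 2.
    Valid only when sigma << <v2>."""
    mean = ((v2_2**2 + v2_4**2) / 2.0) ** 0.5
    sigma = ((v2_2**2 - v2_4**2) / 2.0) ** 0.5
    return mean, sigma

# Illustrative coefficients of a typical magnitude, not measured values.
mean, sigma = flow_mean_and_sigma(v2_2=0.072, v2_4=0.060)
```

Because fluctuations raise v2{2} and lower v2{4}, the estimated mean always lies between the two coefficients, and their splitting directly measures the variance.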
Tracking moving optima using Kalman-based predictions
The dynamic optimization problem concerns finding an optimum in a changing environment. In the field of evolutionary algorithms, this implies dealing with a time-changing fitness landscape. In this paper we compare different techniques for integrating motion information into an evolutionary algorithm when it has to follow a time-changing optimum, under the assumption that the changes follow a non-random law. Such a law can be estimated in order to improve the optimum-tracking capabilities of the algorithm. In particular, we focus on first-order dynamical laws to track moving objects. A vision-based robotic tracking application is used as a testbed for experimental comparison.
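A first-order prediction of the kind described above can be sketched with a constant-velocity extrapolation of the optimum's estimated position. This is a minimal stand-in for the Kalman-based predictor the paper evaluates; the data are invented:

```python
def predict_optimum(history):
    """Constant-velocity (first-order) prediction of a moving optimum's
    position one time step ahead, from its last two estimated positions.
    history: list of (time, position) pairs. An evolutionary algorithm
    can re-seed part of its population around this prediction so that it
    keeps pace with the optimum instead of lagging behind it."""
    (t0, x0), (t1, x1) = history[-2], history[-1]
    velocity = (x1 - x0) / (t1 - t0)
    return x1 + velocity * 1.0   # extrapolate one time step ahead

# Optimum estimated at 2.0 then 3.0 on consecutive steps: predict 4.0 next.
nxt = predict_optimum([(0, 2.0), (1, 3.0)])
```

A full Kalman predictor additionally tracks the uncertainty of the estimate, which tells the algorithm how widely to spread the re-seeded individuals.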
Visual object tracking performance measures revisited
The evaluation of visual tracking sports a large variety of performance measures and largely suffers from a lack of consensus about which measures should be used in experiments. This makes cross-paper tracker comparison difficult. Furthermore, as some measures may be less effective than
others, the tracking results may be skewed or biased towards particular
tracking aspects. In this paper we revisit the popular performance measures and
tracker performance visualizations and analyze them theoretically and
experimentally. We show that several measures are equivalent in terms of the information they provide for tracker comparison and, crucially, that some are more brittle than others. Based on our analysis we narrow down the set of
potential measures to only two complementary ones, describing accuracy and
robustness, thus pushing towards homogenization of the tracker evaluation
methodology. These two measures can be intuitively interpreted and visualized
and have been employed by the recent Visual Object Tracking (VOT) challenges as
the foundation for the evaluation methodology
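The two complementary measures can be sketched as mean overlap (accuracy) and a failure count (robustness). This is a simplified illustration with toy boxes; the VOT protocol additionally resets the tracker after each failure and normalizes robustness over sequence length:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / (aw * ah + bw * bh - inter)

def accuracy_robustness(predictions, ground_truth, fail_thresh=0.0):
    """Accuracy: mean overlap on frames where the tracker has not failed.
    Robustness (here a raw failure count): frames whose overlap falls to
    the failure threshold. Simplified from the VOT reset-based protocol."""
    overlaps = [iou(p, g) for p, g in zip(predictions, ground_truth)]
    failures = sum(1 for o in overlaps if o <= fail_thresh)
    valid = [o for o in overlaps if o > fail_thresh]
    acc = sum(valid) / len(valid) if valid else 0.0
    return acc, failures

# Toy sequence: perfect overlap on frame 1, target lost on frame 2.
gt = [(0, 0, 10, 10), (5, 5, 10, 10)]
pred = [(0, 0, 10, 10), (40, 40, 10, 10)]
acc, failures = accuracy_robustness(pred, gt)
```

Keeping the two numbers separate, rather than blending them into one score, is what makes them intuitively interpretable: one says how precisely the tracker follows the target, the other how often it loses it.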