Adaptive transfer functions: improved multiresolution visualization of medical models
The final publication is available at Springer via http://dx.doi.org/10.1007/s00371-016-1253-9

Medical datasets are continuously increasing in size. Although larger models may be available for certain research purposes, in common clinical practice models typically reach up to 512x512x2000 voxels. These resolutions exceed the capabilities of conventional GPUs, the ones usually found in medical doctors' desktop PCs. Commercial solutions typically reduce the data by downsampling the dataset iteratively until it fits the available target specifications. The resulting data loss reduces visualization quality, and its effects are rarely compensated for by other means. In this paper, we propose adaptive transfer functions, an algorithm that improves the transfer function in downsampled multiresolution models so that the quality of renderings is greatly improved. The technique is simple and lightweight, and it is suitable not only for visualizing huge models that would not fit in a GPU, but also for rendering not-so-large models on mobile GPUs, which are less capable than their desktop counterparts. Moreover, it can also be used to accelerate rendering frame rates by using lower levels of the multiresolution hierarchy while still maintaining high-quality results in a focus-and-context approach. We also show an evaluation of these results based on perceptual metrics.

Peer Reviewed. Postprint (author's final draft).
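The baseline this paper improves upon, iteratively downsampling the volume until it fits the target GPU, can be sketched in a few lines. This is a minimal numpy illustration under stated assumptions: the function name `downsample_to_budget` and the 2x2x2 averaging scheme are hypothetical, not the paper's code.

```python
import numpy as np

def downsample_to_budget(volume, max_voxels):
    """Halve each axis (2x2x2 block averaging) until the volume fits
    the voxel budget, mimicking the iterative downsampling that
    commercial viewers apply before rendering (an assumed scheme)."""
    v = volume.astype(np.float32)
    while v.size > max_voxels:
        # trim odd dimensions so every axis is divisible by 2
        v = v[: v.shape[0] // 2 * 2,
              : v.shape[1] // 2 * 2,
              : v.shape[2] // 2 * 2]
        # average every 2x2x2 block into a single voxel
        v = 0.125 * (v[0::2, 0::2, 0::2] + v[1::2, 0::2, 0::2]
                     + v[0::2, 1::2, 0::2] + v[0::2, 0::2, 1::2]
                     + v[1::2, 1::2, 0::2] + v[1::2, 0::2, 1::2]
                     + v[0::2, 1::2, 1::2] + v[1::2, 1::2, 1::2])
    return v

volume = np.random.rand(64, 64, 64)
small = downsample_to_budget(volume, 10_000)  # -> shape (16, 16, 16)
```

The adaptive transfer function the paper proposes would then be applied on top of such a reduced model, compensating for the information lost in the averaging above.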
Scale Stain: Multi-Resolution Feature Enhancement in Pathology Visualization
Digital whole-slide images of pathological tissue samples have recently
become feasible for use within routine diagnostic practice. These gigapixel
sized images enable pathologists to perform reviews using computer workstations
instead of microscopes. Existing workstations visualize scanned images by
providing a zoomable image space that reproduces the capabilities of the
microscope. This paper presents a novel visualization approach that enables
filtering of the scale-space according to color preference. The visualization
method reveals diagnostically important patterns that are otherwise not
visible. The paper demonstrates how this approach has been implemented into a
fully functional prototype that lets the user navigate the visualization
parameter space in real time. The prototype was evaluated for two common
clinical tasks with eight pathologists in a within-subjects study. The data
reveal that task efficiency increased by 15% using the prototype, with
maintained accuracy. By analyzing behavioral strategies, it was possible to
conclude that efficiency gain was caused by a reduction of the panning needed
to perform systematic search of the images. The prototype system was well
received by the pathologists who did not detect any risks that would hinder use
in clinical routine
Exploring the spectroscopic diversity of type Ia supernovae with DRACULA: a machine learning approach
The existence of multiple subclasses of type Ia supernovae (SNeIa) has been
the subject of great debate in the last decade. One major challenge inevitably
met when trying to infer the existence of one or more subclasses is the time
consuming, and subjective, process of subclass definition. In this work, we
show how machine learning tools facilitate identification of subtypes of SNeIa
through the establishment of a hierarchical group structure in the continuous
space of spectral diversity formed by these objects. Using Deep Learning, we
were capable of performing such identification in a 4-dimensional feature space
(+1 for time evolution), while standard Principal Component Analysis barely
achieves similar results using 15 principal components. This is evidence that
the progenitor system and the explosion mechanism can be described by a small
number of initial physical parameters. As a proof of concept, we show that our
results are in close agreement with a previously suggested classification
scheme and that our proposed method can grasp the main spectral features behind
the definition of such subtypes. This allows confirmation of line velocities
as a first-order effect in the determination of SNIa subtypes,
followed by 91bg-like events. Given the expected data deluge in the forthcoming
years, our proposed approach is essential to allow a quick and statistically
coherent identification of SNeIa subtypes (and outliers). All tools used in
this work were made publicly available in the Python package Dimensionality
Reduction And Clustering for Unsupervised Learning in Astronomy (DRACULA) and
can be found within COINtoolbox (https://github.com/COINtoolbox/DRACULA).

Comment: 16 pages, 12 figures, accepted for publication in MNRAS
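The PCA baseline the paper compares its deep-learning embedding against can be sketched with plain numpy. Everything below is illustrative: the toy spectra, `pca_embed`, and the crude two-way split on the first component (which stands in for the hierarchical clustering DRACULA actually performs) are assumptions, not the package's code.

```python
import numpy as np

def pca_embed(spectra, n_components=4):
    """Project mean-centered spectra onto their leading principal
    components via SVD (the baseline method discussed in the paper)."""
    X = spectra - spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T

rng = np.random.default_rng(0)
# toy "spectra": two subtypes differing only in a band of wavelength bins
group_a = rng.normal(0.0, 0.05, (20, 100))
group_b = rng.normal(0.0, 0.05, (20, 100))
group_b[:, 40:60] += 1.0  # hypothetical feature separating the subtypes

spectra = np.vstack([group_a, group_b])
emb = pca_embed(spectra)  # low-dimensional feature space, shape (40, 4)

# a crude two-way split on the first component stands in for the
# hierarchical clustering step of the real pipeline
labels = (emb[:, 0] > 0).astype(int)
```

On this toy data the first component captures the subtype difference, so the split recovers the two groups exactly; real spectral diversity is continuous, which is why the paper builds a hierarchical group structure instead of a hard threshold.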
DRLViz: Understanding Decisions and Memory in Deep Reinforcement Learning
We present DRLViz, a visual analytics interface to interpret the internal
memory of an agent (e.g. a robot) trained using deep reinforcement learning.
This memory is composed of large temporal vectors updated when the agent moves
in an environment and is not trivial to understand due to the number of
dimensions, dependencies to past vectors, spatial/temporal correlations, and
co-correlation between dimensions. It is often referred to as a black box as
only inputs (images) and outputs (actions) are intelligible to humans. Using
DRLViz, experts are assisted to interpret decisions using memory reduction
interactions, and to investigate the role of parts of the memory when errors
have been made (e.g. wrong direction). We report on DRLViz applied in the
context of video games simulators (ViZDoom) for a navigation scenario with item
gathering tasks. We also report on experts evaluation using DRLViz, and
applicability of DRLViz to other scenarios and navigation problems beyond
simulation games, as well as its contribution to the interpretability and
explainability of black-box models in the field of visual analytics.
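The kind of data such an interface operates on, a long sequence of high-dimensional memory vectors, can be mimicked and reduced to a 2-D trajectory with a short numpy sketch. The random recurrence below is a hypothetical stand-in for a trained agent's recurrent network, and the PCA projection is one simple choice of memory reduction, not the tool's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in for an agent's recurrent memory: a 128-D hidden state
# updated at every step as the agent moves through the environment
T, H = 200, 128
W = rng.normal(0, 0.1, (H, H))
states = np.zeros((T, H))
h = rng.normal(0, 1, H)
for t in range(T):
    h = np.tanh(W @ h + rng.normal(0, 0.1, H))  # toy recurrent update
    states[t] = h

# project the memory trajectory to 2-D for a timeline-style view,
# the kind of reduction a visual analytics interface relies on
X = states - states.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
coords = X @ vt[:2].T  # one 2-D point per time step
```

Plotting `coords` against time gives the simplest possible memory timeline; linking each point back to the agent's observation and action at that step is what turns such a reduction into an interpretability tool.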