Scale Stain: Multi-Resolution Feature Enhancement in Pathology Visualization
Digital whole-slide images of pathological tissue samples have recently
become feasible for use within routine diagnostic practice. These
gigapixel-sized images enable pathologists to perform reviews using computer workstations
instead of microscopes. Existing workstations visualize scanned images by
providing a zoomable image space that reproduces the capabilities of the
microscope. This paper presents a novel visualization approach that enables
filtering of the scale-space according to color preference. The visualization
method reveals diagnostically important patterns that are otherwise not
visible. The paper demonstrates how this approach has been implemented into a
fully functional prototype that lets the user navigate the visualization
parameter space in real time. The prototype was evaluated for two common
clinical tasks with eight pathologists in a within-subjects study. The data
reveal that task efficiency increased by 15% using the prototype, with
maintained accuracy. By analyzing behavioral strategies, it was possible to
conclude that the efficiency gain was caused by a reduction in the panning
needed to perform a systematic search of the images. The prototype system was
well received by the pathologists, who did not identify any risks that would
hinder use in clinical routine.
Two Decades of Colorization and Decolorization for Images and Videos
Colorization is a computer-aided process that aims to give color to a gray
image or video. It can be used to enhance black-and-white images, including
black-and-white photos, old films, and scientific imaging results.
Conversely, decolorization converts a color image or video into a
grayscale one. A grayscale image or video refers to an image or video with only
brightness information without color information. It is the basis of some
downstream image processing applications such as pattern recognition, image
segmentation, and image enhancement. Different from image decolorization, video
decolorization should not only preserve the contrast within each video frame
but also respect the temporal and spatial consistency between video frames.
Researchers have devoted considerable effort to developing decolorization
methods that balance spatial-temporal consistency and algorithm efficiency.
With the prevalence of digital cameras and mobile phones, image and video
colorization and decolorization have received increasing attention from
researchers. This paper gives an overview of the progress of image and video
colorization and decolorization methods over the last two decades.
Comment: 12 pages, 19 figures
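As a concrete point of reference, the simplest decolorization baseline, which the contrast-preserving methods surveyed above improve upon, is a fixed weighted sum of the RGB channels using the ITU-R BT.601 luma weights. A minimal sketch:

```python
# Baseline luminance decolorization with the fixed ITU-R BT.601 weights.
# Contrast-preserving decolorization methods refine this mapping so that
# distinct colors of equal luminance do not collapse to the same gray.

def decolorize(rgb_pixels):
    """Map a list of (R, G, B) tuples in 0-255 to grayscale values."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in rgb_pixels]

pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (128, 128, 128)]
print(decolorize(pixels))  # → [76, 150, 29, 128]
```

Note that pure green maps to a much lighter gray than pure blue, reflecting the eye's uneven sensitivity; this fixed behavior is exactly what adaptive, contrast-preserving methods revisit.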
Approximated and User Steerable tSNE for Progressive Visual Analytics
Progressive Visual Analytics aims at improving the interactivity in existing
analytics techniques by means of visualization as well as interaction with
intermediate results. One key method for data analysis is dimensionality
reduction, for example, to produce 2D embeddings that can be visualized and
analyzed efficiently. t-Distributed Stochastic Neighbor Embedding (tSNE) is a
well-suited technique for the visualization of high-dimensional data.
tSNE can create meaningful intermediate results but suffers from a slow
initialization that constrains its application in Progressive Visual Analytics.
We introduce a controllable tSNE approximation (A-tSNE), which trades off speed
and accuracy, to enable interactive data exploration. We offer real-time
visualization techniques, including a density-based solution and a Magic Lens
to inspect the degree of approximation. With this feedback, the user can decide
on local refinements and steer the approximation level during the analysis. We
demonstrate our technique on several datasets, in a real-world research
scenario, and for the real-time analysis of high-dimensional streams,
illustrating its effectiveness for interactive data analysis.
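For readers unfamiliar with the cost being approximated: standard tSNE's expensive initialization computes, for every point, Gaussian conditional affinities whose bandwidth is calibrated to a target perplexity by binary search. The sketch below is a toy illustration of that calibration for a single point in plain Python; it is not the A-tSNE approximation itself, and it omits the usual p(i|i) = 0 convention.

```python
import math

# For one point, find the Gaussian bandwidth sigma by binary search so that
# the conditional distribution p(j|i) reaches a chosen perplexity. Doing
# this for all n points over all pairwise distances is the O(n^2)
# initialization that approximate/progressive tSNE variants try to avoid.

def conditional_probs(sq_dists, sigma):
    """p(j|i) for one point, given squared distances to the other points."""
    weights = [math.exp(-d / (2 * sigma ** 2)) for d in sq_dists]
    total = sum(weights)
    return [w / total for w in weights]

def perplexity(probs):
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return 2 ** entropy

def calibrate_sigma(sq_dists, target, tol=1e-4):
    lo, hi = 1e-10, 1e10  # bisection bracket for sigma
    for _ in range(200):
        sigma = (lo + hi) / 2
        perp = perplexity(conditional_probs(sq_dists, sigma))
        if abs(perp - target) < tol:
            break
        if perp > target:   # distribution too flat: shrink the bandwidth
            hi = sigma
        else:               # too peaked: widen it
            lo = sigma
    return sigma

# Squared distances from one point to its three neighbours (toy data).
d0 = [1.0, 4.0, 9.0]
sigma = calibrate_sigma(d0, target=2.0)
print(round(perplexity(conditional_probs(d0, sigma)), 3))  # → 2.0
```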
Reducing Ambiguities in Line-based Density Plots by Image-space Colorization
Line-based density plots are used to reduce visual clutter in line charts
with a multitude of individual lines. However, these traditional density plots
are often perceived ambiguously, which obstructs the user's identification of
underlying trends in complex datasets. Thus, we propose a novel image space
coloring method for line-based density plots that enhances their
interpretability. Our method employs color not only to visually communicate
data density but also to highlight similar regions in the plot, allowing users
to identify and distinguish trends easily. We achieve this by performing
hierarchical clustering based on the lines passing through each region and
mapping the identified clusters to the hue circle using circular MDS.
Additionally, we propose a heuristic approach to assign each line to the most
probable cluster, enabling users to analyze density and individual lines. We
motivate our method by conducting a small-scale user study, demonstrating the
effectiveness of our method using synthetic and real-world datasets, and
providing an interactive online tool for generating colored line-based density
plots.
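The final coloring step described above, mapping cluster positions on a circle to hues, can be sketched in isolation as follows; the hierarchical clustering and circular MDS that produce the angles are omitted, and the angle values below are hypothetical.

```python
import colorsys
import math

# Map cluster positions on a circle (as produced by circular MDS; the
# angles here are hypothetical) to hues, so nearby clusters receive
# similar colors and distant clusters receive contrasting ones.

def angle_to_rgb(angle_rad, saturation=0.8, value=0.9):
    """Map an angle on the hue circle (radians) to an RGB triple in [0, 1]."""
    hue = (angle_rad / (2 * math.pi)) % 1.0
    return colorsys.hsv_to_rgb(hue, saturation, value)

cluster_angles = [0.0, 2.1, 4.2]  # hypothetical circular-MDS output
for angle in cluster_angles:
    print(tuple(round(c, 2) for c in angle_to_rgb(angle)))
```

Because hue is itself circular, the mapping preserves the circular cluster-similarity structure: angles that wrap around (0 and 2π) receive identical colors.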
Artifact-Based Rendering: Harnessing Natural and Traditional Visual Media for More Expressive and Engaging 3D Visualizations
We introduce Artifact-Based Rendering (ABR), a framework of tools,
algorithms, and processes that makes it possible to produce real, data-driven
3D scientific visualizations with a visual language derived entirely from
colors, lines, textures, and forms created using traditional physical media or
found in nature. A theory and process for ABR is presented to address three
current needs: (i) designing better visualizations by making it possible for
non-programmers to rapidly design and critique many alternative data-to-visual
mappings; (ii) expanding the visual vocabulary used in scientific
visualizations to depict increasingly complex multivariate data; (iii) bringing
a more engaging, natural, and human-relatable handcrafted aesthetic to data
visualization. New tools and algorithms to support ABR include front-end
applets for constructing artifact-based colormaps, optimizing 3D scanned meshes
for use in data visualization, and synthesizing textures from artifacts. These
are complemented by an interactive rendering engine with custom algorithms and
interfaces that demonstrate multiple new visual styles for depicting point,
line, surface, and volume data. A within-the-research-team design study
provides early evidence of the shift in visualization design processes that ABR
is believed to enable when compared to traditional scientific visualization
systems. Qualitative user feedback on applications to climate science and brain
imaging support the utility of ABR for scientific discovery and public
communication.
Comment: Published in IEEE VIS 2019, 9 pages of content with 2 pages of
references, 12 figures
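One ingredient of such a pipeline, turning a handful of colors sampled from a physical artifact into a continuous colormap, can be sketched as piecewise-linear interpolation between the samples; the swatch values below are hypothetical, not taken from the paper.

```python
# Build a continuous colormap from a few colors sampled off a physical
# artifact, via piecewise-linear interpolation between the samples.

def make_colormap(anchors):
    """anchors: RGB triples (components in [0, 1]) sampled along the artifact."""
    def cmap(t):
        t = min(max(t, 0.0), 1.0)            # clamp data value to [0, 1]
        pos = t * (len(anchors) - 1)
        i = min(int(pos), len(anchors) - 2)  # segment index
        f = pos - i                          # fraction within the segment
        return tuple((1 - f) * a + f * b
                     for a, b in zip(anchors[i], anchors[i + 1]))
    return cmap

# Hypothetical swatches scanned from a watercolor wash, light to dark.
swatches = [(0.95, 0.92, 0.80), (0.80, 0.55, 0.30), (0.35, 0.15, 0.10)]
cmap = make_colormap(swatches)
print(cmap(0.5))  # the middle swatch falls exactly at t = 0.5
```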
Semantic portrait color transfer with internet images
We present a novel color transfer method for portraits that exploits their high-level semantic information. First, a database is set up consisting of a collection of portrait images downloaded from the Internet, each of which is manually segmented using image matting as a preprocessing step. Second, we search the database using Face++ to find images with poses similar to a given source portrait, and choose one satisfactory image from the results as the target. Third, we extract the portrait foregrounds from both the source and target images. Then, the system extracts semantic regions, such as the face, eyes, eyebrows, lips, and teeth, from the extracted source foreground using image matting algorithms. After that, we perform color transfer between corresponding parts that share the same semantic information. We obtain the final result by seamlessly compositing the different parts together using alpha blending. Experimental results show that our semantics-driven approach generates better color transfer results for portraits than previous methods and provides users with a new means to retouch their portraits.
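A common building block for this kind of per-region color transfer is matching each channel's mean and standard deviation between regions, in the style of Reinhard et al.; the paper's pipeline adds face parsing, image matting, and alpha blending around such a step. A hedged sketch with hypothetical sample values:

```python
import statistics

# Shift and scale one channel of the source region so its mean and standard
# deviation match those of the corresponding target region (statistics
# matching in the style of Reinhard et al.). The sample values below are
# hypothetical.

def transfer_channel(source, target):
    """Match source's per-channel mean/stdev to target's."""
    s_mu, t_mu = statistics.mean(source), statistics.mean(target)
    s_sd, t_sd = statistics.pstdev(source), statistics.pstdev(target)
    scale = t_sd / s_sd if s_sd else 1.0
    return [(v - s_mu) * scale + t_mu for v in source]

src = [10.0, 20.0, 30.0]            # one channel of a source region
tgt = [100.0, 110.0, 140.0, 150.0]  # same channel of the target region
out = transfer_channel(src, tgt)
print(round(statistics.mean(out), 2), round(statistics.pstdev(out), 2))
# → 125.0 20.62  (the target region's statistics)
```

In practice this is applied per semantic part and per channel, often in a decorrelated color space rather than RGB.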
Nonrigid reconstruction of 3D breast surfaces with a low-cost RGBD camera for surgical planning and aesthetic evaluation
Accounting for 26% of all new cancer cases worldwide, breast cancer remains
the most common form of cancer in women. Although early breast cancer has a
favourable long-term prognosis, roughly a third of patients suffer from a
suboptimal aesthetic outcome despite breast conserving cancer treatment.
Clinical-quality 3D modelling of the breast surface therefore assumes an
increasingly important role in advancing treatment planning, prediction and
evaluation of breast cosmesis. Yet, existing 3D torso scanners are expensive
and either infrastructure-heavy or subject to motion artefacts. In this paper
we employ a single consumer-grade RGBD camera with an ICP-based registration
approach to jointly align all points from a sequence of depth images
non-rigidly. Subtle body deformation due to postural sway and respiration is
successfully mitigated leading to a higher geometric accuracy through
regularised locally affine transformations. We present results from 6 clinical
cases where our method compares well with the gold standard and outperforms a
previous approach. We show that our method produces better reconstructions
qualitatively by visual assessment and quantitatively by consistently obtaining
lower landmark error scores and yielding more accurate breast volume estimates.
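For orientation, classic rigid ICP, the starting point that nonrigid variants such as the one above generalize with regularised local transforms, alternates nearest-neighbour correspondence with a closed-form least-squares alignment. A minimal 2D sketch on toy points (the paper's method is nonrigid and operates on 3D depth data):

```python
import math

# Minimal 2D rigid ICP: alternate nearest-neighbour correspondence with a
# closed-form least-squares rotation + translation. Nonrigid variants
# replace the single rigid transform with regularised local transforms.

def best_rigid(src, dst):
    """Closed-form 2D rotation + translation aligning paired points."""
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in (0, 1)]  # source centroid
    cd = [sum(p[i] for p in dst) / n for i in (0, 1)]  # target centroid
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - cs[0], sy - cs[1]
        bx, by = dx - cd[0], dy - cd[1]
        num += ax * by - ay * bx
        den += ax * bx + ay * by
    theta = math.atan2(num, den)  # optimal rotation angle (2D Kabsch)
    c, s = math.cos(theta), math.sin(theta)
    tx = cd[0] - (c * cs[0] - s * cs[1])
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return lambda p: (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def icp(src, dst, iters=20):
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        pairs = [min(dst, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
                 for p in src]
        f = best_rigid(src, pairs)
        src = [f(p) for p in src]
    return src

# Target: unit-square corners; source: the same square rotated and shifted.
dst = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
th = 0.3
src = [(math.cos(th) * x - math.sin(th) * y + 0.2,
        math.sin(th) * x + math.cos(th) * y - 0.1) for x, y in dst]
aligned = icp(src, dst)  # converges back onto the target corners
```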
Design and Interpretability of Contour Lines for Visualizing Multivariate Data
Multivariate geospatial data are commonly visualized using contour plots, where the plots for various attributes are often examined side by side, or using color blending. As the number of attributes grows, however, these approaches become less efficient. This limitation motivated the use of glyphs, where different attributes are mapped to different pre-attentive features of the glyphs. Since both contour plot overlays and glyphs clutter the underlying map, in this paper we examine whether contour lines, which are already present in map space, can be leveraged to visualize multivariate geospatial data.
We present five different designs for stylizing contour lines and investigate their interpretability through three crowdsourced studies. We evaluated the designs on a set of common geospatial data analysis tasks over a four-dimensional dataset. Our first two studies examined how contour line width and the number of contour intervals affect interpretability, using synthetic datasets where we controlled the underlying data distribution. Study 1 revealed that increasing line width improves task performance, especially completion time, for most of the designs, except in scenarios where visibility of the background is critical. In Study 2, we found that fewer contour intervals lead to less visual clutter and hence improved performance. We then compared the designs in a third study that used both synthetic and real-life meteorological data. The study revealed that the results obtained with synthetic data generalize to the real-life data, as hypothesized. Moreover, we formulated a design recommendation table that gives users task- and category-specific design suggestions under various environmental constraints. Finally, we discuss how the lab and online versions of Study 1 compare with respect to display size (the lab study was conducted on a large screen, unlike the online version).
Our studies show the effectiveness of stylizing contour lines to represent multivariate data, reveal trade-offs among the design parameters, and provide designers with important insights into the factors that influence multivariate interpretability. We also show real-life scenarios where our visualization approach may improve decision making.