178 research outputs found
Up in the Air Over Taxing Frequent Flyer Benefits: The American, Canadian, and Australian Experiences
Vessel segmentation is an important prerequisite for many medical applications. While automatic vessel segmentation is an active field of research, interaction and visualization techniques for semi-automatic solutions have received far less attention. Nevertheless, since automatic techniques do not generally achieve perfect results, interaction is necessary. Especially for tasks that require an in-detail inspection or analysis of the shape of vascular structures, precise segmentations are essential. However, in many cases these can only be generated by incorporating expert knowledge. In this paper, we propose a visual vessel segmentation system that allows the user to interactively generate vessel segmentations. To this end, we employ multiple linked views that allow the user to assess different aspects of the segmentation and that depict its quality metrics. Based on these metrics, the user is guided, can assess the segmentation quality in detail, and can modify the segmentation accordingly. One common modification is the editing of branches, for which we propose a semi-automatic sketch-based interaction metaphor. Additionally, the user can influence the shape of the vessel wall or the centerline through sketching. To assess the value of our system, we discuss feedback from medical experts and report a thorough evaluation.
Total Denoising: Unsupervised Learning of 3D Point Cloud Cleaning
We show that denoising of 3D point clouds can be learned unsupervised,
directly from noisy 3D point cloud data only. This is achieved by extending
recent ideas from unsupervised image denoising to unstructured 3D
point clouds. Unsupervised image denoisers operate under the assumption that a
noisy pixel observation is a random realization of a distribution around a
clean pixel value, which allows appropriate learning on this distribution to
eventually converge to the correct value. Regrettably, this assumption is not
valid for unstructured points: 3D point clouds are subject to total noise, i.
e., deviations in all coordinates, with no reliable pixel grid. Thus, an
observation can be the realization of an entire manifold of clean 3D points,
which makes a naïve extension of unsupervised image denoisers to 3D point
clouds impractical. To overcome this, we introduce a spatial prior term that
steers convergence to the unique closest of the many possible modes on the
manifold. Our results demonstrate unsupervised denoising performance similar to
that of supervised learning with clean data when given enough training
examples, without requiring any pairs of noisy and clean training data. Comment: Proceedings of ICCV 201
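The spatial prior described above can be illustrated with a minimal numpy sketch. This is not the paper's actual training objective (which involves a learned network and kernel-weighted neighborhoods); the function name, the neighbor-matching data term, and the weight `lam` are illustrative assumptions:

```python
import numpy as np

def total_denoising_loss(pred, noisy_neighbors, query, lam=0.1):
    """Toy unsupervised denoising objective for one query point.

    pred            : (3,) predicted clean position for the query point
    noisy_neighbors : (k, 3) nearby noisy observations of the clean manifold
    query           : (3,) the noisy input point itself
    """
    # Data term: the prediction should match nearby noisy observations,
    # each a random realization of a point on the clean manifold.
    data_term = np.mean(np.sum((noisy_neighbors - pred) ** 2, axis=1))
    # Spatial prior: penalize distance to the input point, steering
    # convergence to the closest of the many possible modes.
    prior_term = np.sum((pred - query) ** 2)
    return data_term + lam * prior_term
```

Without the prior term, any point on the clean manifold minimizes the data term equally well; the prior breaks that tie in favor of the mode nearest the observation.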
Single-image Tomography: 3D Volumes from 2D Cranial X-Rays
As many different 3D volumes could produce the same 2D x-ray image, inverting
this process is challenging. We show that recent deep learning-based
convolutional neural networks can solve this task. As the main challenge in
learning is the sheer amount of data created when extending the 2D image into a
3D volume, we suggest first learning a coarse, fixed-resolution volume, which
is then fused in a second step with the input x-ray into a high-resolution
volume. To train and validate our approach we introduce a new dataset that
comprises close to half a million computer-simulated 2D x-ray images of 3D
volumes scanned from 175 mammalian species. Applications of our approach
include stereoscopic rendering of legacy x-ray images and re-rendering of
x-rays with changed illumination, view pose, or geometry. Our evaluation
includes comparison to previous tomography work, previous learning methods
using our data, a user study, and application to a set of real x-rays.
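The two-step idea of the abstract (learn a coarse volume, then fuse it with the input x-ray at high resolution) can be sketched in numpy. This is a hypothetical data-flow illustration, not the paper's learned architecture: `fuse_coarse_volume`, nearest-neighbour upsampling, and channel stacking are all assumptions standing in for the learned fusion network:

```python
import numpy as np

def fuse_coarse_volume(coarse, xray):
    """Upsample a coarse (d, h, w) volume to the x-ray's (H, W) resolution
    and stack the x-ray as an extra slice, mimicking the fusion input."""
    d, h, w = coarse.shape
    H, W = xray.shape
    # Nearest-neighbour upsampling of each depth slice to (H, W).
    up = coarse[:, np.repeat(np.arange(h), H // h)]
    up = up[:, :, np.repeat(np.arange(w), W // w)]
    # Fuse: append the high-resolution x-ray as an additional channel,
    # giving the second stage access to fine image detail.
    return np.concatenate([up, np.broadcast_to(xray, (1, H, W))], axis=0)
```

In the actual method, a network consumes such a fused stack and predicts the high-resolution volume; here only the tensor plumbing is shown.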
exploRNN: Understanding Recurrent Neural Networks through Visual Exploration
Due to the success of deep learning and its growing job market, students and
researchers from many areas are getting interested in learning about deep
learning technologies. Visualization has proven to be of great help during this
learning process, yet most current educational visualizations are targeted
towards one specific architecture or use case. Unfortunately, recurrent neural
networks (RNNs), which are capable of processing sequential data, are not
covered yet, despite the fact that tasks on sequential data, such as text and
function analysis, are at the forefront of deep learning research. Therefore,
we propose exploRNN, the first interactively explorable, educational
visualization for RNNs. exploRNN allows for interactive experimentation with
RNNs, and provides in-depth information on their functionality and behavior
during training. By defining educational objectives targeted towards
understanding RNNs, and using these as guidelines throughout the visual design
process, we have designed exploRNN to communicate the most important concepts
of RNNs directly within a web browser. By means of exploRNN, we provide an
overview of the training process of RNNs at a coarse level, while also allowing
detailed inspection of the data-flow within LSTM cells. Within this paper, we
motivate our design of exploRNN, detail its realization, and discuss the
results of a user study investigating the benefits of exploRNN.
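The LSTM-cell data-flow that exploRNN lets users inspect follows the standard gate equations, which can be written out in a few lines of numpy. This is the textbook LSTM step, not exploRNN's implementation; the stacked parameter layout in `W`, `U`, `b` is an assumption for compactness:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4n, m), U: (4n, n), b: (4n,) hold the stacked
    input/forget/candidate/output parameters for hidden size n, input size m."""
    n = h_prev.size
    z = W @ x + U @ h_prev + b        # all gate pre-activations at once
    i = sigmoid(z[0 * n:1 * n])       # input gate: how much new content enters
    f = sigmoid(z[1 * n:2 * n])       # forget gate: how much old state survives
    g = np.tanh(z[2 * n:3 * n])       # candidate cell state
    o = sigmoid(z[3 * n:4 * n])       # output gate: how much state is exposed
    c = f * c_prev + i * g            # new cell state
    h = o * np.tanh(c)                # new hidden state
    return h, c
```

Tracing `i`, `f`, `g`, and `o` for a single step is essentially the detailed cell-level view such a visualization exposes.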
Leveraging Self-Supervised Vision Transformers for Neural Transfer Function Design
In volume rendering, transfer functions are used to classify structures of
interest, and to assign optical properties such as color and opacity. They are
commonly defined as 1D or 2D functions that map simple features to these
optical properties. As the process of designing a transfer function is
typically tedious and unintuitive, several approaches have been proposed for
their interactive specification. In this paper, we present a novel method to
define transfer functions for volume rendering by leveraging the feature
extraction capabilities of self-supervised pre-trained vision transformers. To
design a transfer function, users simply select the structures of interest in a
slice viewer, and our method automatically selects similar structures based on
the high-level features extracted by the neural network. Contrary to previous
learning-based transfer function approaches, our method does not require
training of models and allows for quick inference, enabling an interactive
exploration of the volume data. Our approach reduces the amount of necessary
annotations by interactively informing the user about the current
classification, so they can focus on annotating the structures of interest that
still require annotation. In practice, this allows users to design transfer
functions within seconds, instead of minutes. We compare our method to existing
learning-based approaches in terms of annotation and compute time, as well as
with respect to segmentation accuracy. Our accompanying video showcases the
interactivity and effectiveness of our method.
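The core selection step described above, matching structures by similarity in a learned feature space, can be sketched with a cosine-similarity query. The function name, the flat `(num_voxels, feat_dim)` layout, and the threshold value are illustrative assumptions; the paper's pipeline extracts these features with a self-supervised vision transformer:

```python
import numpy as np

def select_similar(features, seed_idx, threshold=0.8):
    """Return indices of voxels whose feature vectors are cosine-similar
    to the user-selected seed voxel (e.g. clicked in a slice viewer)."""
    # Normalize rows so the dot product equals cosine similarity.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f[seed_idx]
    return np.nonzero(sim >= threshold)[0]
```

Because this is a single matrix-vector product per query, re-running it after each user annotation is cheap enough for interactive feedback.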
Spatially Guiding Unsupervised Semantic Segmentation Through Depth-Informed Feature Distillation and Sampling
Traditionally, training neural networks to perform semantic segmentation
required expensive human-made annotations. But more recently, advances in the
field of unsupervised learning have made significant progress on this issue and
towards closing the gap to supervised algorithms. To achieve this, semantic
knowledge is distilled by learning to correlate randomly sampled features from
images across an entire dataset. In this work, we build upon these advances by
incorporating information about the structure of the scene into the training
process through the use of depth information. We achieve this by (1) learning
depth-feature correlation by spatially correlating the feature maps with the
depth maps to induce knowledge about the structure of the scene and (2)
implementing farthest-point sampling to more effectively select relevant
features by utilizing 3D sampling techniques on depth information of the scene.
Finally, we demonstrate the effectiveness of our technical contributions
through extensive experimentation and present significant improvements in
performance across multiple benchmark datasets.