Local Equivalence and Intrinsic Metrics between Reeb Graphs
As graphical summaries for topological spaces and maps, Reeb graphs are
common objects in the computer graphics or topological data analysis
literature. Defining good metrics between these objects has become an important
question for applications, where one needs to quantify the extent to which two
given Reeb graphs differ. Recent contributions emphasize this aspect, proposing
novel distances such as {\em functional distortion} or {\em interleaving} that
are provably more discriminative than the so-called {\em bottleneck distance},
being true metrics whereas the latter is only a pseudo-metric. Their main
drawback compared to the bottleneck distance is that they are comparatively hard
(if possible at all) to evaluate. Here we take the opposite view on the problem and
show that the bottleneck distance is in fact good enough {\em locally}, in the
sense that it is able to discriminate a Reeb graph from any other Reeb graph in
a small enough neighborhood, as efficiently as the other metrics do. This
suggests considering the {\em intrinsic metrics} induced by these distances,
which turn out to be all {\em globally} equivalent. This novel viewpoint on the
study of Reeb graphs has a potential impact on applications, where one may not
only be interested in discriminating between data but also in interpolating
between them.
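To make the metrics under discussion concrete, here is a minimal, brute-force sketch of the bottleneck distance between two small persistence diagrams, where each point may be matched either to a point of the other diagram or to its own diagonal projection. The function names are illustrative, and the permutation search is exponential in the diagram size; it is meant only to illustrate the definition, not the algorithms used in practice.

```python
from itertools import permutations

def linf(p, q):
    # L-infinity distance between two (birth, death) diagram points
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def diag(p):
    # orthogonal projection of a (birth, death) point onto the diagonal
    m = (p[0] + p[1]) / 2.0
    return (m, m)

def bottleneck(D1, D2):
    # augment each diagram with the diagonal projections of the other,
    # so every point may be matched either across or to the diagonal
    A = list(D1) + [diag(q) for q in D2]
    B = list(D2) + [diag(p) for p in D1]
    best = float("inf")
    for perm in permutations(range(len(B))):
        cost = 0.0
        for i, j in enumerate(perm):
            if i >= len(D1) and j >= len(D2):
                continue  # diagonal matched to diagonal: free
            cost = max(cost, linf(A[i], B[j]))
        best = min(best, cost)
    return best
```

For instance, two diagrams with single points (0, 2) and (0, 2.5) are at bottleneck distance 0.5, since matching across is cheaper than sending both points to the diagonal.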
The Topology ToolKit
This system paper presents the Topology ToolKit (TTK), a software platform
designed for topological data analysis in scientific visualization. TTK
provides a unified, generic, efficient, and robust implementation of key
algorithms for the topological analysis of scalar data, including: critical
points, integral lines, persistence diagrams, persistence curves, merge trees,
contour trees, Morse-Smale complexes, fiber surfaces, continuous scatterplots,
Jacobi sets, Reeb spaces, and more. TTK is easily accessible to end users due
to a tight integration with ParaView. It is also easily accessible to
developers through a variety of bindings (Python, VTK/C++) for fast prototyping
or through direct, dependency-free C++, to ease integration into pre-existing
complex systems. While developing TTK, we faced several algorithmic and
software engineering challenges, which we document in this paper. In
particular, we present an algorithm for the construction of a discrete gradient
that complies with the critical points extracted in the piecewise-linear setting.
This algorithm guarantees a combinatorial consistency across the topological
abstractions supported by TTK, and importantly, a unified implementation of
topological data simplification for multi-scale exploration and analysis. We
also present a cached triangulation data structure that supports time-efficient
and generic traversals, self-adjusts its memory usage on demand for input
simplicial meshes, and implicitly emulates a triangulation for regular grids
with no memory overhead. Finally, we describe an original software architecture
that guarantees memory-efficient and direct access to TTK features, while still
offering researchers powerful and easy bindings and extensions. TTK is open
source (BSD license); its code, online documentation, and video tutorials are
available on TTK's website.
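TTK's discrete-gradient construction is beyond an abstract, but the piecewise-linear critical points it is made consistent with can be sketched: a vertex of a triangulated surface is classified by counting connected components of its lower and upper links (the classical PL criterion, with ties broken by vertex index). The mesh encoding and names below are assumptions for illustration, not TTK's API.

```python
from collections import defaultdict

def classify_vertices(values, triangles):
    # link edges: for each vertex, the opposite edge in every incident triangle
    link_edges = defaultdict(list)
    for a, b, c in triangles:
        link_edges[a].append((b, c))
        link_edges[b].append((a, c))
        link_edges[c].append((a, b))

    def n_components(v, keep):
        # connected components of the sub-link induced by vertices where keep() holds
        adj, nodes = defaultdict(set), set()
        for u, w in link_edges[v]:
            for x in (u, w):
                if keep(x):
                    nodes.add(x)
            if keep(u) and keep(w):
                adj[u].add(w)
                adj[w].add(u)
        seen, comps = set(), 0
        for s in nodes:
            if s in seen:
                continue
            comps += 1
            stack = [s]
            while stack:
                x = stack.pop()
                if x not in seen:
                    seen.add(x)
                    stack.extend(adj[x] - seen)
        return comps

    labels = {}
    for v in link_edges:
        below = lambda u, v=v: (values[u], u) < (values[v], v)  # ties by index
        above = lambda u, v=v: (values[u], u) > (values[v], v)
        lo, hi = n_components(v, below), n_components(v, above)
        if lo == 0:
            labels[v] = "minimum"
        elif hi == 0:
            labels[v] = "maximum"
        elif lo == 1 and hi == 1:
            labels[v] = "regular"
        else:
            labels[v] = "saddle"
    return labels
```

On the boundary surface of a tetrahedron with values 0, 1, 2, 3, this yields one minimum, one maximum, and two regular vertices, as the Euler characteristic of the sphere requires.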
A Comparative Study of the Perceptual Sensitivity of Topological Visualizations to Feature Variations
Color maps are a commonly used visualization technique in which data are
mapped to optical properties, e.g., color or opacity. Color maps, however, do
not explicitly convey structures (e.g., positions and scale of features) within
data. Topology-based visualizations reveal and explicitly communicate
structures underlying data. Although we have a good understanding of what types
of features are captured by topological visualizations, our understanding of
how people perceive those features is limited. This paper evaluates the
sensitivity of topology-based isocontour, Reeb graph, and persistence diagram
visualizations compared to a reference color map visualization for
synthetically generated scalar fields on 2-manifold triangular meshes embedded
in 3D. In particular, we built and ran a human-subject study that evaluated the
perception of data features characterized by Gaussian signals and measured how
effectively each visualization technique portrays variations of data features
arising from the position and amplitude variation of a mixture of Gaussians.
For positional feature variations, the results showed that only the Reeb graph
visualization had high sensitivity. For amplitude feature variations,
persistence diagrams and color maps demonstrated the highest sensitivity,
whereas isocontours showed only weak sensitivity. These results take an
important step toward understanding which topology-based tools are best for
various data and task scenarios and their effectiveness in conveying
topological variations as compared to conventional color mapping.
A discrete Reeb graph approach for the segmentation of human body scans
Segmentation of 3D human body (HB) scans is a very challenging problem in applications exploiting human scan data. To tackle this problem, we propose a topological approach based on the discrete Reeb graph (DRG), an extension of the classical Reeb graph to unorganized clouds of 3D points. The essence of the approach is to detect critical nodes in the DRG, thus permitting the extraction of branches that represent the body parts. Because the human body shape representation is built upon global topological features that are preserved so long as the whole structure of the human body does not change, our approach is quite robust against noise, holes, irregular sampling, moderate reference change, and posture variation. Experimental results on real scan data demonstrate the validity of our method.
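The essence of a discrete Reeb graph on a point cloud, slicing by a height function, clustering each slice, and linking clusters across adjacent slices, can be sketched as follows. The slicing axis, proximity-based clustering rule, and parameter values are illustrative assumptions, not the paper's exact procedure.

```python
import math
from collections import defaultdict

def discrete_reeb_graph(points, n_levels=4, eps=2.0):
    # slice the cloud by height (z), cluster each slice by proximity,
    # then link clusters of adjacent slices that contain nearby points
    zmin = min(p[2] for p in points)
    zmax = max(p[2] for p in points)
    h = (zmax - zmin) / n_levels or 1.0
    slices = defaultdict(list)
    for p in points:
        slices[min(int((p[2] - zmin) / h), n_levels - 1)].append(p)

    def clusters(pts):
        # epsilon-proximity connected components within one slice
        comps, pool = [], list(pts)
        while pool:
            comp, stack = [], [pool.pop()]
            while stack:
                p = stack.pop()
                comp.append(p)
                near = [q for q in pool if math.dist(p, q) <= eps]
                for q in near:
                    pool.remove(q)
                stack.extend(near)
            comps.append(comp)
        return comps

    by_level = {k: clusters(v) for k, v in slices.items()}
    nodes = [(k, i) for k in sorted(by_level) for i in range(len(by_level[k]))]
    edges = []
    for k in sorted(by_level):
        if k + 1 not in by_level:
            continue
        for i, ca in enumerate(by_level[k]):
            for j, cb in enumerate(by_level[k + 1]):
                if any(math.dist(p, q) <= eps for p in ca for q in cb):
                    edges.append(((k, i), (k + 1, j)))
    return nodes, edges
```

On a Y-shaped cloud (two "legs" merging into one "torso"), the node where two branches join has degree three, which is exactly the kind of critical node the segmentation exploits.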
Doctor of Philosophy dissertation
A broad range of applications capture dynamic data at an unprecedented scale. Independent of the application area, finding intuitive ways to understand the dynamic aspects of these increasingly large data sets remains an interesting and, to some extent, unsolved research problem. Generically, dynamic data sets can be described by some, often hierarchical, notion of feature of interest that exists at each moment in time, and those features evolve across time. Consequently, exploring the evolution of these features is considered to be one natural way of studying these data sets. Usually, this process entails the ability to: 1) define and extract features from each time step in the data set; 2) find their correspondences over time; and 3) analyze their evolution across time. However, due to the large data sizes, visualizing the evolution of features in a comprehensible manner and performing interactive changes are challenging. Furthermore, feature evolution details are often unmanageably large and complex, making it difficult to identify the temporal trends in the underlying data. Additionally, many existing approaches develop these components in a specialized and standalone manner, thus failing to address the general task of understanding feature evolution across time. This dissertation demonstrates that interactive exploration of feature evolution can be achieved in a non-domain-specific manner so that it can be applied across a wide variety of application domains. In particular, a novel generic visualization and analysis environment that couples a multiresolution unified spatiotemporal representation of features with progressive layout and visualization strategies for studying the feature evolution across time is introduced. This flexible framework enables on-the-fly changes to feature definitions, their correspondences, and other arbitrary attributes while providing an interactive view of the resulting feature evolution details.
Furthermore, to reduce the visual complexity within the feature evolution details, several subselection-based and localized, per-feature parameter value-based strategies are also enabled. The utility and generality of this framework are demonstrated using several large-scale dynamic data sets.
Geometry-Driven Detection, Tracking and Visual Analysis of Viscous and Gravitational Fingers
Viscous and gravitational flow instabilities cause a displacement front to
break up into finger-like fluids. The detection and evolutionary analysis of
these fingering instabilities are critical in multiple scientific disciplines
such as fluid mechanics and hydrogeology. However, previous methods for
detecting viscous and gravitational fingers are based on density thresholding,
which provides limited geometric information about the fingers. The geometric
structures of fingers and their evolution are important yet little studied in
the literature. In this work, we explore the geometric detection and evolution
of the fingers in detail to elucidate the dynamics of the instability. We
propose a ridge voxel detection method to guide the extraction of finger cores
from three-dimensional (3D) scalar fields. After skeletonizing finger cores
into skeletons, we design a spanning tree based approach to capture how fingers
branch spatially from the finger skeletons. Finally, we devise a novel
geometric-glyph augmented tracking graph to study how the fingers and their
branches grow, merge, and split over time. Feedback from earth scientists
demonstrates the usefulness of our approach for performing spatio-temporal
geometric analyses of fingers.
Comment: Published at IEEE Transactions on Visualization and Computer Graphics
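A plain overlap-based tracking step, the kind of baseline the geometric-glyph tracking graph is compared against, can be sketched as follows. The cell-to-label encoding and function names are assumptions for illustration.

```python
from collections import defaultdict

def tracking_step(labels_t, labels_t1):
    # labels_*: dict mapping cell id -> feature label (0 or missing = background)
    overlap = defaultdict(int)
    for cell, a in labels_t.items():
        b = labels_t1.get(cell)
        if a and b:
            overlap[(a, b)] += 1  # shared cells between feature a and feature b
    edges = sorted(overlap)
    succ, pred = defaultdict(set), defaultdict(set)
    for a, b in edges:
        succ[a].add(b)
        pred[b].add(a)
    events = {"split": sorted(a for a in succ if len(succ[a]) > 1),
              "merge": sorted(b for b in pred if len(pred[b]) > 1)}
    return edges, events
```

A feature whose cells overlap two features in the next timestep is reported as a split; the symmetric case is a merge. Chaining such steps over all consecutive timesteps yields the tracking graph.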
Lifted Wasserstein Matcher for Fast and Robust Topology Tracking
This paper presents a robust and efficient method for tracking topological
features in time-varying scalar data. Structures are tracked based on the
optimal matching between persistence diagrams with respect to the Wasserstein
metric. This fundamentally relies on solving the assignment problem, a special
case of optimal transport, for all consecutive timesteps. Our approach relies
on two main contributions. First, we revisit the seminal assignment algorithm
by Kuhn and Munkres which we specifically adapt to the problem of matching
persistence diagrams in an efficient way. Second, we propose an extension of
the Wasserstein metric that significantly improves the geometrical stability of
the matching of domain-embedded persistence pairs. We show that this
geometrical lifting has the additional positive side-effect of improving the
assignment matrix sparsity and therefore computing time. The global framework
implements a coarse-grained parallelism by computing persistence diagrams and
finding optimal matchings in parallel for every pair of consecutive
timesteps. Critical trajectories are constructed by associating successively
matched persistence pairs over time. Merging and splitting events are detected
with a geometrical threshold in a post-processing stage. Extensive experiments
on real-life datasets show that our matching approach is an order of magnitude
faster than the seminal Munkres algorithm. Moreover, compared to a modern
approximation method, our method provides competitive runtimes while yielding
exact results. We demonstrate the utility of our global framework by extracting
critical point trajectories from various simulated time-varying datasets and
compare it to existing methods based on volume-overlap association.
Robustness to noise and temporal resolution downsampling is empirically
demonstrated.
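The effect of the geometrical lifting can be illustrated with a brute-force matcher over tiny diagrams. The 5-tuple encoding of a domain-embedded persistence pair and the lifting coefficient `lam` are assumptions for illustration; a real implementation would use an assignment solver such as the adapted Kuhn-Munkres algorithm described above rather than enumerate permutations.

```python
import math
from itertools import permutations

def lifted_cost(p, q, lam):
    # p, q: (birth, death, x, y, z) domain-embedded persistence pairs
    d_diagram = abs(p[0] - q[0]) + abs(p[1] - q[1])
    d_domain = math.dist(p[2:], q[2:])
    return d_diagram + lam * d_domain  # lam controls the geometrical lifting

def match(D1, D2, lam):
    # brute-force optimal assignment between equal-size diagrams
    best, best_perm = float("inf"), None
    for perm in permutations(range(len(D2))):
        c = sum(lifted_cost(D1[i], D2[j], lam) for i, j in enumerate(perm))
        if c < best:
            best, best_perm = c, perm
    return best, list(best_perm)
```

With two pairs of identical persistence but distant locations, the purely diagram-based cost cannot disambiguate the matching; with lam > 0 the geometrically consistent assignment is recovered, which is the stability the lifting buys.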
Topology dictionary with Markov model for 3D video content-based skimming and description
This paper presents a novel approach to skim and describe 3D videos. 3D video is an imaging technology which consists of a stream of 3D models in motion captured by a synchronized set of video cameras. Each frame is composed of one or several 3D models, and therefore the acquisition of long sequences at video rate requires massive storage devices. In order to reduce the storage cost while keeping relevant information, we propose to encode 3D video sequences using a topology-based shape descriptor dictionary. This dictionary is either generated from a set of extracted patterns or learned from training input sequences with semantic annotations. It relies on an unsupervised 3D shape-based clustering of the dataset by Reeb graphs, and features a Markov network to characterize topological changes. The approach allows content-based compression and skimming with accurate recovery of sequences and can handle complex topological changes. Redundancies are detected and skipped based on a probabilistic discrimination process. Semantic description of video sequences is then automatically performed. In addition, forthcoming frame encoding is achieved using a multiresolution matching scheme and allows action recognition in 3D. Our experiments were performed on complex 3D video sequences. We demonstrate the robustness and accuracy of the 3D video skimming with dramatically low bitrate coding and high compression ratio.
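Under simplifying assumptions, the Markov-model idea can be illustrated by estimating first-order transition probabilities between the per-frame shape-cluster labels of a sequence; the encoding of frames as cluster labels is hypothetical and much coarser than the paper's Markov network.

```python
from collections import defaultdict

def markov_transitions(labels):
    # labels: per-frame topology-cluster label of a 3D video sequence
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(labels, labels[1:]):
        counts[a][b] += 1  # count observed label-to-label transitions
    # normalize each row into a probability distribution
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}
```

Frames whose label repeats with high probability are redundant and can be skipped during skimming, which is the intuition behind the probabilistic discrimination process.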
Task-based Augmented Contour Trees with Fibonacci Heaps
This paper presents a new algorithm for the fast, shared memory, multi-core
computation of augmented contour trees on triangulations. In contrast to most
existing parallel algorithms, our technique computes augmented trees, enabling
the full extent of contour tree based applications, including data segmentation.
Our approach completely revisits the traditional, sequential contour tree
algorithm to re-formulate all the steps of the computation as a set of
independent local tasks. This includes a new computation procedure based on
Fibonacci heaps for the join and split trees, two intermediate data structures
used to compute the contour tree, whose constructions are efficiently carried
out concurrently thanks to the dynamic scheduling of task parallelism. We also
introduce a new parallel algorithm for the combination of these two trees into
the output global contour tree. Overall, this results in superior time
performance in practice, both sequentially and in parallel, thanks to the
OpenMP task runtime. We report performance numbers that compare our approach to
reference sequential and multi-threaded implementations for the computation of
augmented merge and contour trees. These experiments demonstrate the run-time
efficiency of our approach and its scalability on common workstations. We
demonstrate the utility of our approach in data segmentation applications.
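The classical sequential join-tree sweep that the task-based approach revisits can be sketched with union-find, as a point of reference; the graph encoding and names are illustrative assumptions, and the paper's contribution is precisely to replace this global sweep with independent local tasks and Fibonacci heaps.

```python
def join_tree(values, edges):
    # sweep vertices in increasing scalar order; union-find tracks the
    # connected components of the sublevel sets as they appear and merge
    n = len(values)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    visited = [False] * n
    rep = {}      # component root -> critical vertex currently representing it
    arcs = []     # tree arcs (lower critical vertex, join vertex)
    for v in sorted(range(n), key=lambda u: (values[u], u)):
        comps = {find(u) for u in adj[v] if visited[u]}
        if not comps:
            rep[v] = v                      # local minimum: a component is born
        elif len(comps) == 1:
            c = comps.pop()
            parent[c] = v
            rep[v] = rep[c]                 # regular vertex: component continues
        else:
            for c in sorted(comps):
                arcs.append((rep[c], v))    # join saddle: components merge at v
                parent[c] = v
            rep[v] = v
        visited[v] = True
    return arcs
```

The split tree is obtained by the symmetric sweep in decreasing order, and combining the two yields the contour tree.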