The Topology ToolKit
This system paper presents the Topology ToolKit (TTK), a software platform
designed for topological data analysis in scientific visualization. TTK
provides a unified, generic, efficient, and robust implementation of key
algorithms for the topological analysis of scalar data, including: critical
points, integral lines, persistence diagrams, persistence curves, merge trees,
contour trees, Morse-Smale complexes, fiber surfaces, continuous scatterplots,
Jacobi sets, Reeb spaces, and more. TTK is easily accessible to end users due
to a tight integration with ParaView. It is also easily accessible to
developers through a variety of bindings (Python, VTK/C++) for fast prototyping
or through direct, dependency-free C++, to ease integration into pre-existing
complex systems. While developing TTK, we faced several algorithmic and
software engineering challenges, which we document in this paper. In
particular, we present an algorithm for the construction of a discrete gradient
that is consistent with the critical points extracted in the piecewise-linear setting.
This algorithm guarantees a combinatorial consistency across the topological
abstractions supported by TTK, and importantly, a unified implementation of
topological data simplification for multi-scale exploration and analysis. We
also present a cached triangulation data structure that supports time-efficient,
generic traversals, self-adjusts its memory usage on demand for input simplicial
meshes, and implicitly emulates a triangulation for regular grids with no memory
overhead. Finally, we describe an original
software architecture, which guarantees memory-efficient, direct access to TTK's
features, while still offering researchers powerful and easy-to-use bindings and
extensions. TTK is open source (BSD license); its code, online documentation,
and video tutorials are available on TTK's website.
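The persistence diagrams and merge trees listed above all rest on the same pairing of critical points by persistence. As a self-contained illustration of that idea (this is not TTK's implementation, which is generic, optimized, and operates on simplicial meshes; here we only sweep a 1D scalar field with a union-find), a minimal sketch:

```python
# Minimal 0-dimensional persistence of a 1D scalar field: sweep samples in
# increasing order, merge adjacent components, and pair each merged minimum
# (birth) with the saddle value at which it dies (elder rule).

def persistence_pairs_1d(values):
    """Return (birth, death) pairs of finite 0-dim persistence classes."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    parent = {}  # union-find forest over already-swept sample indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    pairs = []  # (birth_value, death_value)
    birth = {}  # component root -> value of its minimum
    for i in order:
        parent[i] = i
        birth[i] = values[i]
        for j in (i - 1, i + 1):
            if j in parent:  # neighbor already alive: merge components
                ri, rj = find(i), find(j)
                if ri != rj:
                    # elder rule: the younger component (larger birth) dies
                    young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                    if birth[young] < values[i]:  # skip zero-persistence pairs
                        pairs.append((birth[young], values[i]))
                    parent[young] = old
    return pairs  # the global minimum never dies (essential class), so it
                  # is absent from the finite pairs returned here

# Two minima (0.0 and 0.5); the shallower one dies at the saddle value 2.0:
print(persistence_pairs_1d([0.0, 2.0, 0.5, 3.0]))  # -> [(0.5, 2.0)]
```

Topological simplification, as described in the abstract, then amounts to cancelling the pairs whose persistence (death minus birth) falls below a user threshold.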
Interactive Visualization for Singular Fibers of Functions f : R^3 → R^2
Scalar topology in the form of Morse theory has provided computational tools that analyze and visualize data from scientific and engineering tasks. Contracting isocontours to single points encapsulates variations in isocontour connectivity in the Reeb graph. For multivariate data, isocontours generalize to fibers—inverse images of points in the range, and this area is therefore known as fiber topology. However, fiber topology is less fully developed than Morse theory, and current efforts rely on manual visualizations.
This paper shows how to accelerate and semi-automate this task through an interface for visualizing fiber singularities of multivariate functions R^3 → R^2. This interface exploits existing conventions of fiber topology, but also introduces a 3D view based on the extension of Reeb graphs to Reeb spaces. Using the Joint Contour Net, a quantized approximation of the Reeb space, this accelerates topological visualization and permits online perturbation to reduce or remove degeneracies in functions under study. Validation of the interface is performed by assessing whether the interface supports the mathematical workflow both of experts and of less experienced mathematicians.
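The quantization step behind a Joint Contour Net can be sketched compactly. The actual JCN subdivides simplices into fragments; the crude illustration below only rounds each bivariate sample to a joint bin and groups grid cells whose bins match into connected components, each of which approximates one quantized fiber component:

```python
# Crude JCN-style sketch: quantize a bivariate function sampled on a grid,
# then extract connected components of equal joint bins (4-connectivity).
from collections import deque

def quantize(f, g, step):
    """Round each bivariate sample (f, g) to a joint bin index pair."""
    return [[(round(fv / step), round(gv / step))
             for fv, gv in zip(frow, grow)]
            for frow, grow in zip(f, g)]

def joint_slabs(bins):
    """Label connected components of equal joint bins via BFS flood fill.
    Returns (component count, per-cell label grid)."""
    h, w = len(bins), len(bins[0])
    label = [[None] * w for _ in range(h)]
    slabs = 0
    for y in range(h):
        for x in range(w):
            if label[y][x] is not None:
                continue
            slabs += 1
            queue = deque([(y, x)])
            label[y][x] = slabs
            while queue:
                cy, cx = queue.popleft()
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and label[ny][nx] is None
                            and bins[ny][nx] == bins[cy][cx]):
                        label[ny][nx] = slabs
                        queue.append((ny, nx))
    return slabs, label
```

Adjacency between slabs (shared cell edges between components with different bins) would then give the edges of the net; that step is omitted here.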
Artifact-Based Rendering: Harnessing Natural and Traditional Visual Media for More Expressive and Engaging 3D Visualizations
We introduce Artifact-Based Rendering (ABR), a framework of tools,
algorithms, and processes that makes it possible to produce real, data-driven
3D scientific visualizations with a visual language derived entirely from
colors, lines, textures, and forms created using traditional physical media or
found in nature. A theory and process for ABR is presented to address three
current needs: (i) designing better visualizations by making it possible for
non-programmers to rapidly design and critique many alternative data-to-visual
mappings; (ii) expanding the visual vocabulary used in scientific
visualizations to depict increasingly complex multivariate data; (iii) bringing
a more engaging, natural, and human-relatable handcrafted aesthetic to data
visualization. New tools and algorithms to support ABR include front-end
applets for constructing artifact-based colormaps, optimizing 3D scanned meshes
for use in data visualization, and synthesizing textures from artifacts. These
are complemented by an interactive rendering engine with custom algorithms and
interfaces that demonstrate multiple new visual styles for depicting point,
line, surface, and volume data. A within-the-research-team design study
provides early evidence of the shift in visualization design processes that ABR
is believed to enable when compared to traditional scientific visualization
systems. Qualitative user feedback on applications to climate science and brain
imaging support the utility of ABR for scientific discovery and public
communication.

Comment: Published in IEEE VIS 2019; 9 pages of content with 2 pages of
references, 12 figures.
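Of the tools listed above, the artifact-based colormap applet admits the simplest sketch. The real applet is interactive; the minimal illustration below only shows the underlying idea of turning a handful of RGB colors sampled along a physical artifact into a continuous colormap by piecewise-linear interpolation (the `ochre_keys` values are hypothetical samples, not from the paper):

```python
# Build a continuous colormap from evenly spaced RGB keys sampled along an
# artifact, by piecewise-linear interpolation between consecutive keys.

def make_colormap(samples):
    """samples: list of RGB triples (floats in [0, 1]), evenly spaced along
    the artifact. Returns a function mapping a normalized data value
    t in [0, 1] to an interpolated RGB triple."""
    n = len(samples)

    def cmap(t):
        t = min(max(t, 0.0), 1.0)      # clamp out-of-range data values
        pos = t * (n - 1)              # fractional position among the keys
        i = min(int(pos), n - 2)       # index of the left key
        frac = pos - i                 # blend weight toward the right key
        return tuple((1 - frac) * a + frac * b
                     for a, b in zip(samples[i], samples[i + 1]))
    return cmap

# Hypothetical keys sampled from an ochre pigment swatch, light to dark:
ochre_keys = [(0.9, 0.8, 0.6), (0.7, 0.4, 0.2), (0.3, 0.1, 0.05)]
cmap = make_colormap(ochre_keys)
```

Evaluating `cmap(0.0)` returns the first key and `cmap(1.0)` the last, with smooth blends in between; a data-to-visual mapping then only needs to normalize its scalar field before lookup.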
Computational Linguistics and Natural Language Processing
This chapter provides an introduction to computational linguistics methods,
with focus on their applications to the practice and study of translation. It
covers computational models, methods and tools for collection, storage,
indexing and analysis of linguistic data in the context of translation, and
discusses the main methodological issues and challenges in this field. While an
exhaustive review of existing computational linguistics methods and tools is
beyond the scope of this chapter, we describe the most representative
approaches, and illustrate them with descriptions of typical applications.

Comment: This is the unedited author's copy of a text which appeared as a
chapter in "The Routledge Handbook of Translation and Methodology", edited
by F. Zanettin and C. Rundle (2022).
Integrating Multiple Sketch Recognition Methods to Improve Accuracy and Speed
Sketch recognition is the computer understanding of hand-drawn diagrams. Recognizing sketches instantaneously is necessary to build responsive interfaces with real-time feedback. There are various techniques to quickly recognize sketches into ten or twenty classes. However, for much larger datasets of sketches from a large number of classes, these existing techniques can take an extended period of time to accurately classify an incoming sketch and require significant computational overhead. Thus, to make classification of large datasets feasible, we propose using multiple stages of recognition.
In the initial stage, gesture-based feature values are calculated and the trained model is used to classify the incoming sketch. Sketches with a classification confidence below a threshold value go through a second stage of geometric recognition techniques. In this second, geometric stage, the sketch is segmented and sent to shape-specific recognizers. The sketches are matched against predefined shape descriptions, and confidence values are calculated. The system outputs a list of classes that the sketch could be classified as, along with the accuracy and precision for each sketch. This process both significantly reduces the time taken to classify such large datasets of sketches and increases both the accuracy and precision of the recognition.
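The two-stage pipeline described above can be sketched as follows. All names here (`fast_classify`, `geometric_recognizers`) are illustrative stand-ins, not the paper's actual components: a cheap feature-based classifier answers directly when confident, and low-confidence sketches fall through to slower, shape-specific matchers:

```python
# Two-stage sketch recognition: a fast classifier handles confident cases;
# otherwise every shape-specific geometric recognizer scores the sketch and
# the candidate classes are returned ranked by confidence.

def two_stage_recognize(sketch, fast_classify, geometric_recognizers,
                        threshold=0.8):
    """fast_classify: sketch -> (label, confidence).
    geometric_recognizers: dict mapping class name -> (sketch -> score)."""
    label, confidence = fast_classify(sketch)
    if confidence >= threshold:
        return [(label, confidence)]       # stage 1 is confident enough
    # Stage 2: score against every shape-specific recognizer.
    scores = [(name, recognize(sketch))
              for name, recognize in geometric_recognizers.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy usage with stub recognizers:
fast = lambda s: ("circle", 0.55)          # low confidence -> stage 2
geo = {"circle": lambda s: 0.9, "square": lambda s: 0.3}
print(two_stage_recognize("stroke-data", fast, geo))
# -> [('circle', 0.9), ('square', 0.3)]
```

The threshold trades speed against accuracy: raising it routes more sketches through the expensive geometric stage, lowering it trusts the fast classifier more often.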