SlicerAstro: a 3-D interactive visual analytics tool for HI data
SKA precursors are capable of detecting hundreds of galaxies in HI in a single 12-hour pointing. Deeper surveys will more easily probe faint HI structures typically located in the vicinity of galaxies, such as tails, filaments, and extraplanar gas. Interactive visualization has proven fundamental for the exploration of such data, as it gives users immediate feedback when manipulating the data. We have developed SlicerAstro, a 3-D interactive viewer with new analysis capabilities, based on traditional 2-D input/output hardware. These capabilities enhance data inspection, allowing faster analysis of complex sources than with traditional tools. SlicerAstro is an open-source extension of 3DSlicer, a multi-platform open-source software package for visualization and medical image processing. We demonstrate the capabilities of the current stable binary release of SlicerAstro, which offers the following features: i) handling of FITS files and astronomical coordinate systems; ii) coupled 2-D/3-D visualization; iii) interactive filtering; iv) interactive 3-D masking; and v) interactive 3-D modeling. In addition, SlicerAstro has been designed with a strong, stable and modular C++ core, and its classes are also accessible via Python scripting, allowing great flexibility for user-customized visualization and analysis tasks.
Comment: 18 pages, 11 figures. Accepted by Astronomy and Computing.
SlicerAstro link: https://github.com/Punzo/SlicerAstro/wiki#get-slicerastr
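As a rough illustration of the FITS and astronomical coordinate-system handling described above, the following minimal Python sketch loads an HI data cube and queries its world coordinate system using astropy. This is not SlicerAstro's own API (its Python-accessible classes are documented in the project wiki), and the file name "cube.fits" is a placeholder.

    # Minimal sketch using astropy (an assumption; not SlicerAstro's API), showing
    # the kind of FITS and WCS handling the abstract describes.
    from astropy.io import fits
    from astropy.wcs import WCS

    with fits.open("cube.fits") as hdul:      # "cube.fits" is a placeholder name
        cube = hdul[0].data                   # numpy array, e.g. (velocity, dec, ra) axes
        wcs = WCS(hdul[0].header)             # astronomical coordinate system

    print(cube.shape)
    # Pixel -> world coordinates (RA, Dec, velocity/frequency); pixel order is (x, y, spectral).
    print(wcs.pixel_to_world_values(10, 20, 5))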
Static/Dynamic Filtering for Mesh Geometry
The joint bilateral filter, which enables feature-preserving signal smoothing according to structural information from a guidance signal, has been applied to various tasks in geometry processing. Existing methods rely either on a static guidance that may be inconsistent with the input and lead to unsatisfactory results, or on a dynamic guidance that is automatically updated but sensitive to noise and outliers. Inspired by recent advances in image filtering, we propose a new geometry filtering technique, the static/dynamic filter, which utilizes both static and dynamic guidances to achieve state-of-the-art results. The proposed filter is based on a nonlinear optimization that enforces smoothness of the signal while preserving variations that correspond to features of certain scales. We develop an efficient iterative solver for the problem, which unifies existing filters based on static or dynamic guidances. The filter can be applied to mesh face normals, followed by a vertex position update, to achieve scale-aware and feature-preserving filtering of mesh geometry. It also works well for other types of signals defined on mesh surfaces, such as texture colors. Extensive experimental results demonstrate the effectiveness of the proposed filter for various geometry processing applications such as mesh denoising, geometry feature enhancement, and texture color filtering.
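As a simplified illustration of the joint bilateral idea underlying this work (not the paper's full static/dynamic optimization or its iterative solver), the Python sketch below performs one bilateral smoothing pass over mesh face normals: each neighboring normal is weighted by centroid distance and by the similarity of a guidance normal. The data layout, neighbor structure, and parameter values are assumptions for illustration only.

    import numpy as np

    def bilateral_normal_pass(centroids, normals, guidance, neighbors,
                              sigma_s=0.1, sigma_r=0.35):
        """One joint bilateral pass over face normals (simplified, static guidance).
        centroids[i], normals[i] : face centroid and unit normal of face i
        guidance[i]              : guidance normal of face i (e.g. a pre-smoothed normal)
        neighbors[i]             : indices of faces adjacent to face i
        """
        filtered = np.empty_like(normals)
        for i, nbrs in enumerate(neighbors):
            idx = np.asarray(list(nbrs) + [i])
            # Spatial weight: falls off with centroid distance.
            d = np.linalg.norm(centroids[idx] - centroids[i], axis=1)
            w_s = np.exp(-(d ** 2) / (2 * sigma_s ** 2))
            # Range weight: falls off with difference of guidance normals.
            r = np.linalg.norm(guidance[idx] - guidance[i], axis=1)
            w_r = np.exp(-(r ** 2) / (2 * sigma_r ** 2))
            w = w_s * w_r
            n = (w[:, None] * normals[idx]).sum(axis=0)
            filtered[i] = n / (np.linalg.norm(n) + 1e-12)
        return filtered

In the proposed filter the guidance is itself updated between iterations (the dynamic part) within a nonlinear optimization, and a subsequent vertex-position update reconstructs the mesh from the filtered normals.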
Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets
This report describes our project activities for the period Sep. 1991 - Oct. 1992. Our activities included stabilizing the software system STAR, porting STAR to IDL/widgets (improved user interface), targeting new visualization techniques for multi-dimensional data visualization (emphasizing 3D visualization), and exploring leading-edge 3D interface devices. During the past project year we emphasized high-end visualization techniques, by exploring new tools offered by state-of-the-art visualization software (such as AVS and IDL/widgets), by experimenting with tools still under research at the Department of Computer Science (e.g., the use of glyphs for multidimensional data visualization), and by researching current 3D input/output devices as they could be used to explore 3D astrophysical data. As always, any project activity is driven by the need to interpret astrophysical data more effectively.
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable: they should provide easy-to-interpret rationales for their behavior, so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles.

In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior.

In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of the controller and the explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment: strong and weak alignment.

These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match the training data. In Chapter 5, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and the control (steering and speed).

Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
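The sketch below is a hedged illustration of the kind of attention-based controller the abstract describes; the layer sizes, module names, and input resolution are assumptions rather than the thesis implementation. A CNN encodes the image into a spatial feature grid, a softmax attention map weights the grid, and the attended feature is regressed to a steering command; the attention map is the raw material for a visual explanation.

    import torch
    import torch.nn as nn

    class AttentionController(nn.Module):
        """Assumed illustrative architecture, not the thesis code."""
        def __init__(self, feat_dim=64):
            super().__init__()
            self.encoder = nn.Sequential(             # image -> spatial feature grid
                nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, feat_dim, 5, stride=2), nn.ReLU(),
            )
            self.attn = nn.Conv2d(feat_dim, 1, 1)     # per-cell attention logit
            self.head = nn.Linear(feat_dim, 1)        # attended feature -> steering angle

        def forward(self, img):
            f = self.encoder(img)                     # (B, C, H, W)
            b, c, h, w = f.shape
            a = torch.softmax(self.attn(f).view(b, -1), dim=1)    # (B, H*W) attention weights
            ctx = (f.view(b, c, -1) * a.unsqueeze(1)).sum(dim=2)  # (B, C) attended feature
            return self.head(ctx), a.view(b, h, w)    # steering + explanation map

    # Usage: steering, attn_map = AttentionController()(torch.randn(1, 3, 120, 160))

The causal filtering step of Chapter 3 could then, for example, mask highly attended regions and keep only those whose removal actually changes the steering output.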