Prototyping Information Visualization in 3D City Models: a Model-based Approach
When creating 3D city models, selecting relevant visualization techniques is
a particularly difficult user interface design task. A first obstacle is that
current geodata-oriented tools, e.g. ArcGIS, have limited 3D capabilities and
limited sets of visualization techniques. Another important obstacle is the
lack of unified description of information visualization techniques for 3D city
models. While many techniques have been devised for different types of data or
information (wind flows, air quality fields, historic or legal texts, etc.),
they are generally described in scattered articles and not formally specified. In this
paper we address the problem of visualizing information in (rich) 3D city
models by presenting a model-based approach for the rapid prototyping of
visualization techniques. We propose to represent visualization techniques as
the composition of graph transformations. We show that these transformations
can be specified with SPARQL construction operations over RDF graphs. These
specifications can then be used in a prototype generator to produce 3D scenes
that contain the 3D city model augmented with data represented using the
desired technique. Comment: Proc. of 3DGeoInfo 2014 Conference, Dubai, November 2014
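The abstract's core idea, representing a visualization technique as a graph transformation specified by a SPARQL CONSTRUCT operation, can be sketched in miniature. The snippet below is not the paper's system: it uses a toy in-memory set of triples instead of an RDF store, and predicates such as `ex:airQuality` and `ex:renderColor` are invented placeholders.

```python
# Toy illustration of a visualization technique expressed as a graph
# transformation in the spirit of SPARQL CONSTRUCT. Triples are plain
# Python tuples; the vocabulary (ex:airQuality, ex:renderColor) is a
# hypothetical example, not taken from the paper.

def construct(triples, threshold=50):
    """For every city object whose air-quality value exceeds `threshold`,
    construct new triples binding it to a red rendering color.
    Mirrors: CONSTRUCT { ?s ex:renderColor "red" }
             WHERE { ?s ex:airQuality ?v . FILTER(?v > threshold) }"""
    derived = set()
    for (s, p, o) in triples:
        if p == "ex:airQuality" and o > threshold:
            derived.add((s, "ex:renderColor", "red"))
    return derived

city = {
    ("ex:building1", "ex:airQuality", 72),
    ("ex:building2", "ex:airQuality", 31),
}
# The augmented scene graph is the union of the input and derived triples.
scene = city | construct(city)
```

A prototype generator along the lines described in the abstract would then map the derived `ex:renderColor` triples onto materials in the 3D scene.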
A review of data visualization: opportunities in manufacturing sequence management.
Data visualization now benefits from developments in technologies that offer innovative ways of presenting complex data. Potentially these have widespread application in communicating the complex information domains typical of manufacturing sequence management environments for global enterprises. In this paper the authors review the visualization functionalities, techniques and applications reported in the literature, map these to manufacturing sequence information presentation requirements, and identify the opportunities available and likely development paths. Current leading-edge practice in dynamic updating and communication with suppliers is not being exploited in manufacturing sequence management; it could provide significant benefits to manufacturing businesses. In the context of global manufacturing operations and broad-based user communities with differing needs served by common data sets, tool functionality is generally ahead of user application.
DeepICP: An End-to-End Deep Neural Network for 3D Point Cloud Registration
We present DeepICP - a novel end-to-end learning-based 3D point cloud
registration framework that achieves comparable registration accuracy to prior
state-of-the-art geometric methods. Unlike other keypoint-based methods,
which usually require a RANSAC procedure, we employ various deep neural
network structures to build an end-to-end trainable network.
Our keypoint detector is trained through this end-to-end structure, enabling
the system to avoid interference from dynamic objects, leverage sufficiently
salient features on stationary objects, and, as a result, achieve high
robustness. Rather than searching for corresponding points among existing
points, the key contribution is that we innovatively generate them based on
learned matching probabilities among a group of candidates, which can boost the
registration accuracy. Our loss function incorporates both the local similarity
and the global geometric constraints to ensure all above network designs can
converge towards the right direction. We comprehensively validate the
effectiveness of our approach using both the KITTI dataset and the
Apollo-SouthBay dataset. Results demonstrate that our method achieves
comparable or better performance than the state-of-the-art geometry-based
methods. Detailed ablation and visualization analysis are included to further
illustrate the behavior and insights of our network. The low registration
error and high robustness of our method make it attractive for many
applications that rely on point cloud registration. Comment: 10 pages, 6
figures, 3 tables, typos corrected, experimental results updated, accepted
by ICCV 2019
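The key contribution described above, generating a correspondence as a probability-weighted combination of candidates rather than selecting an existing point, can be sketched numerically. This is a simplification, not the paper's network: the similarity scores below are made up, whereas in DeepICP they come from learned feature matching.

```python
import numpy as np

def generate_corresponding_point(candidates, similarities):
    """Generate a virtual corresponding point for one source keypoint.

    candidates:   (N, 3) candidate coordinates in the target cloud.
    similarities: (N,) matching scores (here hypothetical; learned in
                  DeepICP) for the source keypoint against each candidate.
    """
    # Softmax turns raw scores into matching probabilities.
    p = np.exp(similarities - similarities.max())
    p /= p.sum()
    # The generated correspondence is the expectation over candidates,
    # so it need not coincide with any existing point in the cloud.
    return p @ candidates

cands = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0]])
scores = np.array([0.0, 0.0])  # equal scores -> midpoint between candidates
midpoint = generate_corresponding_point(cands, scores)
```

Because the generated point is differentiable with respect to the scores, a registration loss can be backpropagated through the matching step, which is what makes the pipeline end-to-end trainable.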
DIMAL: Deep Isometric Manifold Learning Using Sparse Geodesic Sampling
This paper explores a fully unsupervised deep learning approach for computing
distance-preserving maps that generate low-dimensional embeddings for a certain
class of manifolds. We use the Siamese configuration to train a neural network
to solve the problem of least squares multidimensional scaling for generating
maps that approximately preserve geodesic distances. By training with only a
few landmarks, we show a significantly improved local and nonlocal
generalization of the isometric mapping as compared to analogous non-parametric
counterparts. Importantly, the combination of a deep-learning framework with a
multidimensional scaling objective enables a numerical analysis of network
architectures to aid in understanding their representation power. This provides
a geometric perspective to the generalizability of deep learning. Comment: 10 pages, 11 figures
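The least-squares multidimensional scaling objective mentioned in the abstract can be written out concretely. The sketch below shows only the stress function on landmark pairs; the Siamese network that produces the embeddings (the `f(x_i)` values) is omitted, and the example data are invented.

```python
import numpy as np

def mds_stress(embeddings, pairs, geodesic):
    """Least-squares MDS stress over sparse landmark pairs.

    embeddings: (N, k) low-dimensional points f(x_i) (network outputs).
    pairs:      list of (i, j) landmark index pairs used for training.
    geodesic:   dict mapping (i, j) -> known geodesic distance d_ij.
    """
    loss = 0.0
    for (i, j) in pairs:
        d_embed = np.linalg.norm(embeddings[i] - embeddings[j])
        # Penalize deviation of embedded distance from geodesic distance.
        loss += (d_embed - geodesic[(i, j)]) ** 2
    return loss

# A distance-preserving (isometric) embedding incurs zero stress:
emb = np.array([[0.0, 0.0],
                [3.0, 4.0]])          # Euclidean distance = 5
zero = mds_stress(emb, [(0, 1)], {(0, 1): 5.0})
```

Training on only a few landmark pairs, as the abstract describes, corresponds to `pairs` being a sparse subset of all point pairs, with the network interpolating the map elsewhere.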
The boundary coefficient: a vertex measure for visualizing and finding structure in weighted graphs