Multiple-sensor integration for efficient reverse engineering of geometry
This paper describes a multi-sensor measuring system for reverse engineering applications. A sphere-plate artefact is developed for data unification of the hybrid system. With the coordinate data acquired by the optical system, intelligent feature recognition and segmentation algorithms can be applied to extract the global surface information of the object. The coordinate measuring machine (CMM) is then used to re-measure the geometric features with a small number of sampling points, and the resulting information is used to compensate the point-data patches measured by the optical system. The optimized point data can then be exploited for accurate reverse engineering of the CAD model. In this way, the limitations of each measurement system are compensated by the other. Experimental results validate the accuracy and effectiveness of this data optimization approach.
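The unification step can be pictured as registering the optical point cloud against a handful of accurate CMM re-measurements. The sketch below is an assumption-laden simplification, not the paper's sphere-plate procedure: `kabsch` is a hypothetical helper that fits a least-squares rigid transform between corresponding sparse points, which would then be applied to the full optical cloud.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R @ P + t.

    Hypothetical helper: P are sparse points from the optical system,
    Q the same features re-measured by the CMM. Illustrative only; the
    paper's unification is based on a sphere-plate artefact.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: recover a known rotation about z and a translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(10, 3))                   # "optical" reference points
th = np.pi / 6
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true                      # "CMM" re-measurements
R_est, t_est = kabsch(P, Q)
```

In practice the correction between the two sensors need not be rigid; a rigid fit is merely the simplest model that makes the idea concrete.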
DeepContext: Context-Encoding Neural Pathways for 3D Holistic Scene Understanding
While deep neural networks have led to human-level performance on computer
vision tasks, they have yet to demonstrate similar gains for holistic scene
understanding. In particular, 3D context has been shown to be an extremely
important cue for scene understanding - yet very little research has been done
on integrating context information with deep models. This paper presents an
approach to embed 3D context into the topology of a neural network trained to
perform holistic scene understanding. Given a depth image depicting a 3D scene,
our network aligns the observed scene with a predefined 3D scene template, and
then reasons about the existence and location of each object within the scene
template. In doing so, our model recognizes multiple objects in a single
forward pass of a 3D convolutional neural network, capturing both global scene
and local object information simultaneously. To create training data for this
3D network, we generate partly hallucinated depth images which are rendered by
replacing real objects with a repository of CAD models of the same object
category. Extensive experiments demonstrate the effectiveness of our algorithm
compared to the state of the art. Source code and data are available at
http://deepcontext.cs.princeton.edu.
Comment: Accepted by ICCV 2017
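As a toy illustration of the "single forward pass" idea, and emphatically not the authors' architecture, the numpy sketch below runs one 3D convolution over a voxel grid and feeds the features to two invented linear heads: one scoring the existence of each template object slot, one regressing its 3D location offset. All sizes and weights are made up for illustration.

```python
import numpy as np

def conv3d_valid(vol, kernels):
    """Naive valid-mode 3D convolution: vol (D,H,W), kernels (K,k,k,k)."""
    K, k = kernels.shape[0], kernels.shape[1]
    D, H, W = vol.shape
    out = np.empty((K, D - k + 1, H - k + 1, W - k + 1))
    for z in range(D - k + 1):
        for y in range(H - k + 1):
            for x in range(W - k + 1):
                patch = vol[z:z + k, y:y + k, x:x + k]
                out[:, z, y, x] = (kernels * patch).sum(axis=(1, 2, 3))
    return out

rng = np.random.default_rng(0)
vol = rng.normal(size=(8, 8, 8))                   # toy voxel occupancy grid
feat = np.maximum(conv3d_valid(vol, rng.normal(size=(4, 3, 3, 3))), 0)  # ReLU
flat = feat.reshape(-1)

# Two heads over shared features: existence logits and xyz offsets for
# each of n_objects template slots, computed in one forward pass.
n_objects = 5
W_exist = rng.normal(size=(n_objects, flat.size)) * 0.01
W_loc = rng.normal(size=(n_objects * 3, flat.size)) * 0.01
exist_logits = W_exist @ flat                      # one score per slot
loc_offsets = (W_loc @ flat).reshape(n_objects, 3) # xyz offset per slot
```

The actual system aligns the input scene with a learned 3D template before this kind of per-slot reasoning; the sketch only shows the shared-features/two-heads shape of the output.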
Parallel Hierarchical Affinity Propagation with MapReduce
The accelerated evolution and explosion of the Internet and social media are
generating voluminous quantities of data (on zettabyte scales). Paramount
among the requirements for extracting actionable intelligence from such vast
data volumes is the need for scalable, performance-conscious analytics
algorithms. To directly address this need, we propose a novel MapReduce
implementation of the exemplar-based clustering algorithm known as Affinity
Propagation. Our parallelization strategy extends to the multilevel
Hierarchical Affinity Propagation algorithm and enables tiered aggregation of
unstructured data with minimal free parameters, in principle requiring only a
similarity measure between data points. We detail the linear run-time
complexity of our approach, overcoming the limiting quadratic complexity of the
original algorithm. Experimental validation of our clustering methodology on a
variety of synthetic and real data sets (e.g. images and point data)
demonstrates our competitiveness against other state-of-the-art MapReduce
clustering techniques.
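For reference, the serial message-passing updates that the paper parallelizes can be sketched in plain numpy. This is standard Affinity Propagation (responsibility and availability updates with damping); the MapReduce and hierarchical aspects, which are the paper's actual contribution, are not reproduced here.

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Exemplar-based clustering from a similarity matrix S (n x n).

    Serial sketch of the classic message-passing updates; the diagonal
    of S holds the "preferences" that control how many exemplars emerge.
    """
    n = S.shape[0]
    R = np.zeros((n, n))   # responsibilities r(i,k)
    A = np.zeros((n, n))   # availabilities a(i,k)
    rows = np.arange(n)
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[rows, idx]
        AS[rows, idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[rows, idx] = S[rows, idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        # a(k,k) = sum_{i' != k} max(0, r(i',k))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0)[None, :] - Rp
        dA = Anew.diagonal().copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, dA)
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)   # exemplar index for each point

# Two well-separated 1-D groups; similarity = negative squared distance.
X = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
S = -(X[:, None] - X[None, :]) ** 2
np.fill_diagonal(S, np.median(S))     # median preference: moderate cluster count
labels = affinity_propagation(S)
```

The only free input is the similarity matrix (plus the preference values on its diagonal), which matches the abstract's "minimal free parameters" claim; each iteration is O(n^2), which is what motivates the distributed formulation.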
Automatic Objects Removal for Scene Completion
With the explosive growth of web-based cameras and mobile devices, billions
of photographs are uploaded to the internet. We can trivially collect a huge
number of photo streams for various goals, such as 3D scene reconstruction and
other big data applications. However, this is not an easy task, because the
retrieved photos are neither aligned nor calibrated. Furthermore, with the
occlusion of unexpected foreground objects such as people and vehicles, it is
even more challenging to find feature correspondences and reconstruct realistic
scenes. In this paper, we propose a structure based image completion algorithm
for object removal that produces visually plausible content with consistent
structure and scene texture. We use an edge matching technique to infer the
potential structure of the unknown region. Driven by the estimated structure,
texture synthesis is performed automatically along the estimated curves. We
evaluate the proposed method on different types of images, from highly
structured indoor environments to natural scenes. Our experimental results
demonstrate satisfactory performance that can potentially support subsequent
big data processing such as 3D scene reconstruction and location recognition.
Comment: 6 pages, IEEE International Conference on Computer Communications
(INFOCOM 14), Workshop on Security and Privacy in Big Data, Toronto, Canada,
2014
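The structure-plus-texture idea can be caricatured with a much simpler greedy exemplar fill. This toy is purely illustrative and is not the paper's algorithm (which infers structure by edge matching and synthesizes texture along the estimated curves): each unknown pixel on the hole boundary is filled with the center of the best-matching fully known patch.

```python
import numpy as np

def toy_inpaint(img, mask, p=1):
    """Greedy exemplar-based fill (toy stand-in, not the paper's method).

    Each masked pixel with at least one known neighbour is replaced by the
    centre of the fully-known (2p+1)x(2p+1) patch that best matches the
    known part of its own neighbourhood; the hole is filled outside-in.
    Assumes the mask does not touch the image border.
    """
    img = img.astype(float).copy()
    known = ~mask
    H, W = img.shape
    while not known.all():
        for y in range(p, H - p):
            for x in range(p, W - p):
                if known[y, x]:
                    continue
                kpatch = known[y - p:y + p + 1, x - p:x + p + 1]
                if not kpatch.any():
                    continue               # deep inside the hole; later pass
                tgt = img[y - p:y + p + 1, x - p:x + p + 1]
                best_v, best_d = img[y, x], np.inf
                for cy in range(p, H - p):
                    for cx in range(p, W - p):
                        if not known[cy - p:cy + p + 1, cx - p:cx + p + 1].all():
                            continue       # copy only from fully known patches
                        src = img[cy - p:cy + p + 1, cx - p:cx + p + 1]
                        d = ((src - tgt) ** 2 * kpatch).sum()
                        if d < best_d:
                            best_d, best_v = d, src[p, p]
                img[y, x] = best_v
                known[y, x] = True
    return img

# A vertically striped image with a 3x3 hole: the fill should continue
# the stripes through the hole.
stripes = np.tile(np.arange(10) % 2, (10, 1)).astype(float)
mask = np.zeros((10, 10), dtype=bool)
mask[4:7, 4:7] = True
corrupted = stripes.copy()
corrupted[mask] = 0.0
restored = toy_inpaint(corrupted, mask)
```

Matching only on the known part of the target patch is what lets the fill respect surrounding texture; the paper's contribution is to additionally constrain this search along structure curves recovered by edge matching.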