Data-Driven Shape Analysis and Processing
Data-driven methods play an increasingly important role in discovering
geometric, structural, and semantic relationships between 3D shapes in
collections, and applying this analysis to support intelligent modeling,
editing, and visualization of geometric data. In contrast to traditional
approaches, a key feature of data-driven approaches is that they aggregate
information from a collection of shapes to improve the analysis and processing
of individual shapes. In addition, they are able to learn models that reason
about properties and relationships of shapes without relying on hard-coded
rules or explicitly programmed instructions. We provide an overview of the main
concepts and components of these techniques, and discuss their application to
shape classification, segmentation, matching, reconstruction, modeling and
exploration, as well as scene analysis and synthesis, through reviewing the
literature and relating the existing works with both qualitative and numerical
comparisons. We conclude our report with ideas that can inspire future research
in data-driven shape analysis and processing.
Comment: 10 pages, 19 figures
FrameNet: Learning Local Canonical Frames of 3D Surfaces from a Single RGB Image
In this work, we introduce the novel problem of identifying dense canonical
3D coordinate frames from a single RGB image. We observe that each pixel in an
image corresponds to a surface in the underlying 3D geometry, where a canonical
frame can be identified, represented by three orthogonal axes: one along its
normal direction and two in its tangent plane. We propose an algorithm to
predict these axes from RGB. Our first insight is that canonical frames
computed automatically with recently introduced direction field synthesis
methods can provide training data for the task. Our second insight is that
networks designed for surface normal prediction provide better results when
trained jointly to predict canonical frames, and even better when trained to
also predict 2D projections of canonical frames. We conjecture this is because
projections of canonical tangent directions often align with local gradients in
images, and because those directions are tightly linked to 3D canonical frames
through projective geometry and orthogonality constraints. In our experiments,
we find that our method predicts 3D canonical frames that can be used in
applications ranging from surface normal estimation and feature matching to
augmented reality.
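The frame construction described above (one axis along the surface normal, two in the tangent plane) can be sketched in a few lines. This is a generic orthonormal-frame construction, not the paper's learned predictor; the `up` reference vector is a hypothetical seed, whereas true canonical tangent directions would come from direction-field synthesis.

```python
import numpy as np

def canonical_frame(normal, up=np.array([0.0, 1.0, 0.0])):
    """Build an orthonormal frame (t1, t2, n) from a surface normal.

    `up` is a hypothetical reference direction used to seed the tangent;
    a learned canonical frame would replace this arbitrary choice.
    """
    n = normal / np.linalg.norm(normal)
    # Pick a reference direction that is not parallel to the normal.
    ref = up if abs(np.dot(n, up)) < 0.9 else np.array([1.0, 0.0, 0.0])
    t1 = np.cross(ref, n)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)  # already unit length: cross of orthogonal unit vectors
    return t1, t2, n

# Example: frame at a point whose normal points along +z.
t1, t2, n = canonical_frame(np.array([0.0, 0.0, 1.0]))
```

The three returned axes are mutually orthogonal unit vectors, which is exactly the orthogonality constraint the abstract says ties 2D projections back to 3D frames.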
Curvature sensing by vision and touch
Brain representations of curvature may be formed on the basis of either
vision or touch. Experimental and theoretical work by the author and her
colleagues has shown that the processing underlying such representations
directly depends on specific two-dimensional geometric properties of the curved
object, and on the symmetry of curvature. Virtual representations of curves
with mirror symmetry were displayed in 2D on a computer screen to sighted
observers for visual scaling. For tactile (haptic) scaling, the physical
counterparts of these curves were placed in the two hands of sighted observers,
who were blindfolded during the sensing experiment, and of congenitally blind
observers, who never had any visual experience. All results clearly show that
curvature, whether haptically or visually sensed, is statistically linked to
the same curve properties. Sensation is expressed psychophysically as a power
function of any symmetrical curve's aspect ratio, a scale invariant geometric
property of physical objects. The results of the author's work support
biologically motivated models of sensory integration for curvature processing.
They also promote the idea of a universal power law for adaptive brain control
and balancing of motor responses to environmental stimuli across sensory
modalities.
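The power-law relation the abstract refers to can be written as a Stevens-style function of the aspect ratio, a scale-invariant property of the curve. The coefficient `k` and exponent `b` below are hypothetical placeholders; in the actual studies such parameters are fitted to the psychophysical scaling data.

```python
def aspect_ratio(height, width):
    """Aspect ratio of a mirror-symmetric curve: peak height over base width.
    Scale invariant: doubling both dimensions leaves it unchanged."""
    return height / width

def perceived_curvature(height, width, k=1.0, b=1.0):
    """Stevens-style power function of the aspect ratio.
    k and b are hypothetical placeholders, not fitted values."""
    return k * aspect_ratio(height, width) ** b

# Scale invariance: the same shape at twice the size gives the same response.
assert perceived_curvature(2.0, 10.0) == perceived_curvature(4.0, 20.0)
```

The assertion illustrates why a power function of the aspect ratio is a natural candidate for a representation shared across vision and touch: it depends on shape, not on the physical size of the stimulus.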
Quad Meshing
Triangle meshes have been nearly ubiquitous in computer graphics, and a large body of data structures and geometry processing algorithms based on them has been developed in the literature. At the same time, quadrilateral meshes, especially semi-regular ones, have advantages for many applications, and significant progress has been made in quadrilateral mesh generation and processing during the last several years. In this State of the Art Report, we discuss the advantages and problems of techniques operating on quadrilateral meshes, including surface analysis and mesh quality, simplification, adaptive refinement, alignment with features, parametrization, and remeshing.
Analysis and Manipulation of Repetitive Structures of Varying Shape
Self-similarity and repetitions are ubiquitous in man-made and natural objects. Such structural regularities often relate to form, function, aesthetics, and design considerations. Discovering structural redundancies along with their dominant variations from 3D geometry not only allows us to better understand the underlying objects, but is also beneficial for several geometry processing tasks including compact representation, shape completion, and intuitive shape manipulation. To identify these repetitions, we present a novel detection algorithm based on analyzing a graph of surface features. We combine general feature detection schemes with a RANSAC-based randomized subgraph searching algorithm in order to reliably detect recurring patterns of locally unique structures. A subsequent segmentation step based on a simultaneous region growing is applied to verify that the actual data supports the patterns detected in the feature graphs. We introduce our graph based detection algorithm on the example of rigid repetitive structure detection. Then we extend the approach to allow more general deformations between the detected parts. We introduce subspace symmetries whereby we characterize similarity by requiring the set of repeating structures to form a low dimensional shape space. We discover these structures based on detecting linearly correlated correspondences among graphs of invariant features. The found symmetries along with the modeled variations are useful for a variety of applications including non-local and non-rigid denoising. Employing subspace symmetries for shape editing, we introduce a morphable part model for smart shape manipulation. The input geometry is converted to an assembly of deformable parts with appropriate boundary conditions. 
Our method uses self-similarities from a single model or corresponding parts of shape collections as training input and also allows the user to reassemble the identified parts in new configurations, thus exploiting both the discrete and continuous learned variations while ensuring appropriate boundary conditions across part boundaries. We obtain an interactive yet intuitive shape deformation framework producing realistic deformations on classes of objects that are difficult to edit using repetition-unaware deformation techniques.
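The subspace-symmetry criterion (a set of repeating structures forming a low-dimensional shape space) can be illustrated with a PCA check on aligned part geometries. This is a simplified stand-in for the paper's graph-based detection of linearly correlated correspondences; the array layout and variance threshold are assumptions.

```python
import numpy as np

def subspace_dimension(parts, var_threshold=0.99):
    """Estimate the dimension of the shape space spanned by repeating parts.

    `parts`: (n_parts, n_coords) array of corresponding, flattened part
    geometries. Hypothetical sketch: PCA on aligned coordinates, not the
    paper's detection on graphs of invariant features.
    """
    X = parts - parts.mean(axis=0)          # center the shape samples
    s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    var = s**2 / np.sum(s**2)               # explained-variance ratios
    # Smallest number of principal axes reaching the variance threshold.
    return int(np.searchsorted(np.cumsum(var), var_threshold) + 1)

# Parts generated from two deformation modes span a 2D shape space.
rng = np.random.default_rng(0)
base, d1, d2 = rng.normal(size=(3, 30))
coeffs = rng.normal(size=(10, 2))
parts = base + coeffs[:, :1] * d1 + coeffs[:, 1:] * d2
```

A low estimated dimension is what licenses treating the detected repetitions as variations of one morphable part rather than unrelated geometry.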
Region-based saliency estimation for 3D shape analysis and understanding
The detection of salient regions is an important pre-processing step for many 3D shape analysis and understanding tasks. This paper proposes a novel method for saliency detection in 3D free-form shapes. Firstly, we smooth the surface normals by a bilateral filter. Such a method is capable of smoothing the surfaces while retaining the local details. Secondly, a novel method is proposed for the estimation of the saliency value of each vertex. To this end, two new features are defined: Retinex-based Importance Feature (RIF) and Relative Normal Distance (RND). They are based on the human visual perception characteristics and surface geometry respectively. Since the vertex-based method cannot guarantee that the detected salient regions are semantically continuous and complete, we propose to refine such values based on surface patches. The detected saliency is finally used to guide the existing techniques for mesh simplification, interest point detection, and overlapping point cloud registration. The comparative studies based on real data from three publicly accessible databases show that the proposed method usually outperforms five selected state-of-the-art methods both qualitatively and quantitatively for saliency detection and 3D shape analysis and understanding.
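The first step, bilateral filtering of surface normals, can be sketched as follows. This is a generic one-pass bilateral filter over vertex neighborhoods, with assumed parameter names and defaults, not the paper's exact formulation.

```python
import numpy as np

def bilateral_smooth_normals(positions, normals, neighbors,
                             sigma_s=1.0, sigma_r=0.3):
    """One smoothing pass of a bilateral filter on vertex normals.

    The spatial weight decays with vertex distance, the range weight with
    normal difference, so regions where normals change sharply (local
    details) are preserved. Parameters are hypothetical defaults.
    """
    out = np.empty_like(normals)
    for i, nbrs in enumerate(neighbors):
        idx = np.array(list(nbrs) + [i])  # neighborhood plus the vertex itself
        d = np.linalg.norm(positions[idx] - positions[i], axis=1)
        r = np.linalg.norm(normals[idx] - normals[i], axis=1)
        w = np.exp(-d**2 / (2 * sigma_s**2)) * np.exp(-r**2 / (2 * sigma_r**2))
        n = (w[:, None] * normals[idx]).sum(axis=0)
        out[i] = n / np.linalg.norm(n)    # re-normalize to unit length
    return out
```

On an already-flat region (identical normals) the filter is a fixed point, which is the behavior that lets it denoise without flattening genuine features.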
Perceiving environmental structure from optical motion
Generally speaking, one of the most important sources of optical information about environmental structure is known to be the deforming optical patterns produced by the movements of the observer (pilot) or environmental objects. As an observer moves through a rigid environment, the projected optical patterns of environmental objects are systematically transformed according to their orientations and positions in 3D space relative to those of the observer. The detailed characteristics of these deforming optical patterns carry information about the 3D structure of the objects and about their locations and orientations relative to those of the observer. The specific geometrical properties of moving images that may constitute visually detected information about the shapes and locations of environmental objects are examined.
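The systematic transformation of projected patterns under observer motion can be made concrete with the standard translational optical-flow equations. This is a textbook sketch under perspective projection with focal length `f`, not the article's own formalism; all symbols are assumptions.

```python
def image_motion(point, T, f=1.0):
    """Instantaneous image motion of a 3D point under observer translation T.

    Perspective projection maps (X, Y, Z) to (x, y) = f * (X/Z, Y/Z); the
    point moves at -T relative to the translating observer, and
    differentiating the projection gives the flow below. Note the 1/Z
    factor: nearer objects produce faster image motion.
    """
    X, Y, Z = point
    Tx, Ty, Tz = T
    x, y = f * X / Z, f * Y / Z        # image position of the point
    u = (-f * Tx + x * Tz) / Z         # horizontal image velocity
    v = (-f * Ty + y * Tz) / Z         # vertical image velocity
    return u, v
```

The inverse dependence on depth Z is one way deforming optical patterns carry 3D structure: relative image speeds of points order them in depth.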