Extracting curve-skeletons from digital shapes using occluding contours
Curve-skeletons are compact and semantically relevant shape descriptors, able to summarize both the topology and pose of a wide range of digital objects. Most state-of-the-art algorithms for their computation depend on the type of geometric primitives used and on the sampling frequency. In this paper we introduce a formally sound and intuitive definition of the curve-skeleton, and then propose a novel method for skeleton extraction that relies on the visual appearance of the shape. To achieve this result we inspect the properties of occluding contours, showing how information about the symmetry axes of a 3D shape can be inferred from a small set of its planar projections. The proposed method is fast, insensitive to noise and resolution, capable of working with different shape representations, and easy to implement.
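As a hypothetical illustration of the abstract's central claim, that symmetry-axis information can be inferred from a few planar projections, the sketch below projects a synthetic cylinder along two orthogonal view directions and lifts the two 2D medial lines back into a 3D axis. This is a toy NumPy construction, not the authors' algorithm:

```python
import numpy as np

# Synthetic cylinder of radius 1 around the vertical line x=3, y=5.
rng = np.random.default_rng(0)
n = 2000
theta = rng.uniform(0, 2 * np.pi, n)
z = rng.uniform(0, 10, n)
pts = np.stack([3 + np.cos(theta), 5 + np.sin(theta), z], axis=1)

def medial_line(u, z, bins=20):
    """Per z-slice mean of one projected coordinate: a crude 2D medial axis."""
    edges = np.linspace(z.min(), z.max(), bins + 1)
    idx = np.digitize(z, edges[1:-1])  # bin index 0..bins-1 for each point
    return np.array([u[idx == k].mean() for k in range(bins)])

mx = medial_line(pts[:, 0], pts[:, 2])  # view along +y: silhouette in (x, z)
my = medial_line(pts[:, 1], pts[:, 2])  # view along +x: silhouette in (y, z)

# Combining the two projected axes recovers samples of the 3D symmetry axis.
skeleton = np.stack([mx, my], axis=1)
```

With only two projections the recovered per-slice means cluster around the true axis (x=3, y=5); the paper's method generalizes this intuition to arbitrary shapes and view sets.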
Towards multiple 3D bone surface identification and reconstruction using few 2D X-ray images for intraoperative applications
This article discusses a possible method to use a small number, e.g. 5, of conventional 2D X-ray images to reconstruct multiple 3D bone surfaces intraoperatively. Each bone's edge contours in the X-ray images are automatically identified. Sparse 3D landmark points of each bone are automatically reconstructed by pairing the 2D X-ray images. The reconstructed landmark point distribution on a surface is approximately optimal, covering the main characteristics of the surface. A statistical shape model, the dense point distribution model (DPDM), is then used to fit the reconstructed optimal landmarks and reconstruct a full surface of each bone separately. The reconstructed surfaces can then be visualised and manipulated by surgeons or used by surgical robotic systems.
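The sparse-landmarks-to-full-surface step can be sketched with a plain PCA point distribution model; the article's DPDM is denser and richer, and `mean_shape`/`modes` below are synthetic stand-ins for trained model components:

```python
import numpy as np

# Synthetic stand-ins for a trained shape model: a mean shape of 50 3D
# points and 3 orthonormal variation modes over the flattened coordinates.
rng = np.random.default_rng(1)
n_pts, n_modes = 50, 3
mean_shape = rng.normal(size=(n_pts, 3))
modes = np.linalg.qr(rng.normal(size=(3 * n_pts, n_modes)))[0]

def fit_pdm(landmark_idx, landmarks):
    """Least-squares mode weights so the model matches sparse landmarks,
    then reconstruct the full dense surface from those weights."""
    A = modes.reshape(n_pts, 3, n_modes)[landmark_idx].reshape(-1, n_modes)
    b = (landmarks - mean_shape[landmark_idx]).ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return (mean_shape.ravel() + modes @ w).reshape(n_pts, 3)

# Synthetic ground truth observed at only 10 of the 50 points.
w_true = np.array([2.0, -1.0, 0.5])
truth = (mean_shape.ravel() + modes @ w_true).reshape(n_pts, 3)
idx = np.arange(0, n_pts, 5)
dense = fit_pdm(idx, truth[idx])
```

Because the landmarks constrain all three mode weights, the remaining 40 points are filled in by the model, which is the essence of fitting a statistical shape model to sparse intraoperative measurements.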
MVPNet: Multi-View Point Regression Networks for 3D Object Reconstruction from A Single Image
In this paper, we address the problem of reconstructing an object's surface from a single image using generative networks. First, we represent a 3D surface with an aggregation of dense point clouds from multiple views. Each point cloud is embedded in a regular 2D grid aligned on the image plane of a viewpoint, making the point cloud ordered and convolution-friendly so as to fit into deep network architectures. The point clouds can be easily triangulated by exploiting the connectivity of the 2D grids to form mesh-based surfaces. Second, we propose an encoder-decoder network that generates such multiple view-dependent point clouds from a single image by regressing their 3D coordinates and visibilities. We also introduce a novel geometric loss that measures discrepancy over 3D surfaces, as opposed to 2D projective planes, by resorting to the surface discretization on the constructed meshes. We demonstrate that the multi-view point regression network outperforms state-of-the-art methods with a significant improvement on challenging datasets.
Comment: 8 pages; accepted by AAAI 201
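The grid-connectivity triangulation mentioned in the abstract (each view-dependent point cloud lives on a regular 2D grid, so a mesh follows directly from grid adjacency) can be sketched as follows; the function name is illustrative, not from the authors' code:

```python
import numpy as np

def grid_to_faces(h, w):
    """Triangulate an h x w grid-aligned point cloud: two triangles per
    grid cell, indexing the flattened h*w point array row-major."""
    faces = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            faces.append((i, i + 1, i + w))          # upper-left triangle
            faces.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return np.array(faces)

faces = grid_to_faces(3, 3)  # a 3x3 grid has 4 cells, hence 8 triangles
```

No nearest-neighbor search or Delaunay step is needed: the 2D grid ordering alone determines the mesh, which is exactly what makes the representation convolution-friendly and easy to triangulate.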
Part Description and Segmentation Using Contour, Surface and Volumetric Primitives
The problem of part definition, description, and decomposition is central to shape recognition systems. The ultimate goal of segmenting range images into meaningful parts and objects has proved very difficult to realize, mainly due to the isolation of the segmentation problem from the issue of representation. We propose a paradigm for part description and segmentation by the integration of contour, surface, and volumetric primitives. Unlike previous approaches, we use geometric properties derived from both boundary-based (surface contours and occluding contours) and primitive-based (quadric patches and superquadric models) representations to define and recover part-whole relationships, without a priori knowledge about the objects or the object domain. The object shape is described at three levels of complexity, each contributing to the overall shape. Our approach can be summarized as answering the following question: given that we have three different modules for extracting volume, surface, and boundary properties, how should they be invoked, evaluated, and integrated? Volume and boundary fitting, and surface description, are performed in parallel to incorporate the best of the coarse-to-fine and fine-to-coarse segmentation strategies. The process involves feedback between the segmentor (the control module) and the individual shape description modules. The control module evaluates the intermediate descriptions and formulates hypotheses about parts. These hypotheses are further tested by the segmentor and the descriptors. The descriptions thus obtained are independent of position, orientation, scale, domain, and domain properties, and are based purely on geometric considerations. They are extremely useful for high-level, domain-dependent symbolic reasoning processes, which need not deal with a tremendous amount of data, but only with a rich description of the data in terms of primitives recovered at various levels of complexity.
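The hypothesize-and-test pattern between the control module and the primitive descriptors can be illustrated with the simplest volumetric primitive, a sphere, standing in for the paper's quadric and superquadric fits:

```python
import numpy as np

def fit_sphere(pts):
    """Algebraic least-squares sphere fit: |p|^2 = 2 c.p + (r^2 - |c|^2)."""
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    r = np.sqrt(x[3] + center @ center)
    return center, r

def part_hypothesis_ok(pts, tol=0.05):
    """Accept the 'this cluster is one primitive part' hypothesis only if
    the fitted primitive explains the data to within tol."""
    c, r = fit_sphere(pts)
    residual = np.abs(np.linalg.norm(pts - c, axis=1) - r)
    return residual.max() < tol

# Points exactly on a sphere of radius 2 centered at (1, 2, 3).
rng = np.random.default_rng(2)
dirs = rng.normal(size=(500, 3))
sphere_pts = 2.0 * dirs / np.linalg.norm(dirs, axis=1, keepdims=True) + [1, 2, 3]
```

A control module in this spirit would fit a primitive, inspect the residual, and either accept the part hypothesis or send the cluster back for re-segmentation; the paper's loop does this with far richer primitives and descriptions.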
Separate cortical stages in amodal completion revealed by functional magnetic resonance adaptation: research article
Background: Objects in our environment are often partly occluded, yet we effortlessly perceive them as whole and complete. This phenomenon is called visual amodal completion. Psychophysical investigations suggest that the process of completion starts from a representation of the (visible) physical features of the stimulus and ends with a completed representation of the stimulus. The goal of our study was to investigate both stages of the completion process by localizing the brain regions involved in processing the physical features of the stimulus as well as the brain regions representing the completed stimulus. Results: Using fMRI adaptation we reveal clearly distinct regions in the visual cortex of humans involved in the processing of amodal completion: early visual cortex, presumably V1, processes the local contour information of the stimulus, whereas regions in the inferior temporal cortex represent the completed shape. Furthermore, our data suggest that at the level of the inferior temporal cortex the original local contour information is not preserved but is replaced by the representation of the amodally completed percept. Conclusion: These findings provide neuroimaging evidence for a multiple-step theory of amodal completion and further insights into the neuronal correlates of visual perception.
Depth Enhancement and Surface Reconstruction with RGB/D Sequence
Surface reconstruction and 3D modeling is a challenging task that has been explored for decades by the computer vision, computer graphics, and machine learning communities. It is fundamental to many applications such as robot navigation, animation, scene understanding, industrial control, and medical diagnosis. In this dissertation, I take advantage of consumer depth sensors for surface reconstruction. Considering their limited ability to capture detailed surface geometry, a depth enhancement approach is first proposed to recover small and rich geometric details from the captured depth and color sequences. In addition to enhancing spatial resolution, I present a hybrid camera to improve the temporal resolution of the consumer depth sensor and propose an optimization framework to capture high-speed motion and generate high-speed depth streams. Given the partial scans from the depth sensor, we also develop a novel fusion approach that builds complete and watertight human models with a template-guided registration method. Finally, the problem of surface reconstruction for non-Lambertian objects, on which current depth sensors fail, is addressed by exploiting multi-view images captured with a hand-held color camera, for which we propose a visual-hull-based approach to recover the 3D model.
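A minimal shape-from-silhouette sketch, assuming three orthographic views of a unit sphere rather than the dissertation's calibrated hand-held images, shows the voxel-carving idea behind a visual-hull approach:

```python
import numpy as np

# Voxel grid over [-1, 1]^3.
n = 32
ax = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")

# A unit sphere's orthographic silhouette is the same unit disk along
# each of the three axes.
uu, vv = np.meshgrid(ax, ax, indexing="ij")
sil = uu ** 2 + vv ** 2 <= 1.0

# A voxel survives carving only if it projects inside every silhouette:
# hull[i, j, k] = sil[i, j] (z-view) & sil[i, k] (y-view) & sil[j, k] (x-view).
hull = sil[:, :, None] & sil[:, None, :] & sil[None, :, :]
sphere = X ** 2 + Y ** 2 + Z ** 2 <= 1.0
```

The hull necessarily contains the true shape (here it is the intersection of three cylinders, a strict superset of the sphere), which is why visual-hull methods give a conservative outer estimate that more views or photometric cues can then tighten.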
A topological solution to object segmentation and tracking
The world is composed of objects, the ground, and the sky. Visual perception of objects requires solving two fundamental challenges: segmenting visual input into discrete units, and tracking identities of these units despite appearance changes due to object deformation, changing perspective, and dynamic occlusion. Current computer vision approaches to segmentation and tracking that approach human performance all require learning, raising the question: can objects be segmented and tracked without learning? Here, we show that the mathematical structure of light rays reflected from environment surfaces yields a natural representation of persistent surfaces, and this surface representation provides a solution to both the segmentation and tracking problems. We describe how to generate this surface representation from continuous visual input, and demonstrate that our approach can segment and invariantly track objects in cluttered synthetic video despite severe appearance changes, without requiring learning.
Comment: 21 pages, 6 main figures, 3 supplemental figures, and supplementary material containing mathematical proof
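The "segmentation without learning" claim rests on combinatorial rather than trained machinery; a crude stand-in for the paper's light-ray construction is plain connected-component labeling over a binary surface mask, which likewise partitions input into discrete units with no training data:

```python
import numpy as np

def connected_components(mask):
    """4-connected flood fill; returns an int label image (0 = background)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # already claimed by an earlier component
        current += 1
        stack = [seed]
        while stack:
            r, c = stack.pop()
            if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
                continue
            if not mask[r, c] or labels[r, c]:
                continue
            labels[r, c] = current
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return labels

img = np.zeros((6, 6), dtype=bool)
img[0:2, 0:2] = True  # first object
img[4:6, 3:6] = True  # second object
lab = connected_components(img)
```

This toy only segments a single frame; the paper's contribution is a surface representation that additionally keeps those units' identities stable over time under deformation and occlusion.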