
    Perceiving environmental structure from optical motion

    Generally speaking, one of the most important sources of optical information about environmental structure is known to be the deforming optical patterns produced by the movements of the observer (pilot) or of environmental objects. As an observer moves through a rigid environment, the projected optical patterns of environmental objects are systematically transformed according to their orientations and positions in 3D space relative to those of the observer. The detailed characteristics of these deforming optical patterns carry information about the 3D structure of the objects and about their locations and orientations relative to the observer. The specific geometrical properties of moving images that may constitute visually detected information about the shapes and locations of environmental objects are examined.
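    As an aside, a minimal numerical sketch of this idea (not from the paper; it assumes a simple pinhole projection and a small forward translation of the observer): the image displacement of a static point shrinks with its depth, which is one of the cues such deforming optical patterns carry.

```python
import numpy as np

def project(points, f=1.0):
    """Pinhole projection of Nx3 camera-frame points onto the image plane."""
    return f * points[:, :2] / points[:, 2:3]

# Static scene points at different depths (camera frame, Z pointing forward).
points = np.array([[0.5, 0.2, 2.0],
                   [0.5, 0.2, 5.0],
                   [0.5, 0.2, 10.0]])

# The observer translates forward by 0.1 units; in the camera frame the
# scene appears to move backward by the same amount.
translation = np.array([0.0, 0.0, 0.1])
points_after = points - translation

flow = project(points_after) - project(points)
print(np.linalg.norm(flow, axis=1))  # displacement decreases with depth
```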

    3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks

    We propose a method for reconstructing 3D shapes from 2D sketches in the form of line drawings. Our method takes as input a single sketch, or multiple sketches, and outputs a dense point cloud representing a 3D reconstruction of the input sketch(es). The point cloud is then converted into a polygon mesh. At the heart of our method lies a deep encoder-decoder network. The encoder converts the sketch into a compact representation encoding shape information. The decoder converts this representation into depth and normal maps capturing the underlying surface from several output viewpoints. The multi-view maps are then consolidated into a 3D point cloud by solving an optimization problem that fuses depth and normals across all viewpoints. In our experiments, compared to other methods such as volumetric networks, our architecture offers several advantages, including more faithful reconstruction, higher output surface resolution, and better preservation of topology and shape structure. (Comment: 3DV 2017, oral)
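    As an illustration of the consolidation step (a sketch, not the paper's implementation; the intrinsics, poses, and constant depth maps below are made up, and the actual method also fuses the normal maps through an optimization), each view's predicted depth map can be back-projected into a common world frame and stacked into one point cloud:

```python
import numpy as np

def backproject(depth, K, cam_to_world):
    """Lift an HxW depth map to 3D points expressed in a common world frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T                 # camera-frame ray directions
    pts_cam = rays * depth.reshape(-1, 1)           # scale rays by predicted depth
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    return (pts_h @ cam_to_world.T)[:, :3]

# Toy example: two views with simple poses and constant depth maps.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0, 0.0, 1.0]])
pose_a = np.eye(4)
pose_b = np.eye(4)
pose_b[0, 3] = 0.5                                  # second camera shifted in x
cloud = np.vstack([backproject(np.full((64, 64), 2.0), K, pose_a),
                   backproject(np.full((64, 64), 2.0), K, pose_b)])
print(cloud.shape)                                  # fused point cloud
```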

    Scaled, patient-specific 3D vertebral model reconstruction based on 2D lateral fluoroscopy

    Background: Accurate three-dimensional (3D) models of lumbar vertebrae are required for image-based 3D kinematics analysis. MRI or CT datasets are frequently used to derive 3D models but have the disadvantages of being expensive, time-consuming, or involving ionizing radiation (e.g., CT acquisition). An alternative method using 2D lateral fluoroscopy was developed. Materials and methods: A technique was developed to reconstruct a scaled 3D lumbar vertebral model from a single two-dimensional (2D) lateral fluoroscopic image and a statistical shape model of the lumbar vertebrae. Four cadaveric lumbar spine segments and two statistical shape models were used for testing. Reconstruction accuracy was determined by comparing the surface models reconstructed from the single lateral fluoroscopic images to the ground-truth data from 3D CT segmentation. For each case, two different surface-based registration techniques were used to recover the unknown scale factor and the rigid transformation between the reconstructed surface model and the ground-truth model before the differences between the two discrete surface models were computed. Results: Successful reconstruction of scaled surface models was achieved for all test lumbar vertebrae based on single lateral fluoroscopic images. The mean reconstruction error was between 0.7 and 1.6 mm. Conclusions: A scaled, patient-specific surface model of a lumbar vertebra can be synthesized from a single lateral fluoroscopic image using the present approach. This new method for patient-specific 3D modeling has potential applications in spine kinematics analysis, surgical planning, and navigation.
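    A rough sketch of two ingredients named above, a statistical shape model and recovery of an unknown scale factor, using made-up landmark data and plain PCA (the paper's actual fitting to the fluoroscopic image and its surface-based registration are more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: M example shapes, each with N corresponding 3D landmarks.
M, N = 20, 50
shapes = rng.normal(size=(M, N * 3))

# Statistical shape model = mean shape + principal modes of variation (PCA).
mean_shape = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
modes = Vt[:5]                        # keep the first 5 modes
coeffs = rng.normal(size=5) * 0.1     # hypothetical coefficients from 2D/3D fitting
instance = (mean_shape + coeffs @ modes).reshape(N, 3)

# Recover the unknown scale factor against a ground-truth model by comparing
# centroid sizes (one piece of a similarity registration).
ground_truth = 1.3 * instance         # pretend the true vertebra is 1.3x larger
def centroid_size(X):
    return np.sqrt(((X - X.mean(axis=0)) ** 2).sum())
scale = centroid_size(ground_truth) / centroid_size(instance)
print(scale)                          # ~1.3
```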

    Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D Articulated Bodies

    In motion analysis and understanding, it is important to be able to fit a suitable model or structure to a temporal series of observed data, in order to describe motion patterns in a compact way and to discriminate between them. In an unsupervised context, i.e., when no prior model of the moving object(s) is available, such a structure has to be learned from the data in a bottom-up fashion. In recent times, volumetric approaches, in which the motion is captured from a number of cameras and a voxel-set representation of the body is built from the camera views, have gained ground due to attractive features such as inherent view-invariance and robustness to occlusions. Automatic, unsupervised segmentation of moving bodies along entire sequences, in a temporally coherent and robust way, has the potential to provide a means of constructing a bottom-up model of the moving body and of tracking motion cues that may later be exploited for motion classification. Spectral methods such as locally linear embedding (LLE) can be useful in this context, as they preserve the "protrusions" of articulated shapes, i.e., high-curvature regions of the 3D volume, while improving their separation in a lower-dimensional space, thereby making them easier to cluster. In this paper we therefore propose a spectral approach to unsupervised, temporally coherent body-protrusion segmentation along time sequences. Volumetric shapes are clustered in an embedding space, clusters are propagated in time to ensure coherence, and they are merged or split to accommodate changes in the body's topology. Experiments on both synthetic and real sequences of dense voxel-set data support the ability of the proposed method to cluster body parts consistently over time in a totally unsupervised fashion, its robustness to sampling density and shape quality, and its potential for bottom-up model construction. (Comment: 31 pages, 26 figures)
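    A toy sketch of the core idea, assuming scikit-learn's LocallyLinearEmbedding and KMeans as stand-ins for the paper's embedding and clustering, with temporal coherence approximated by matching cluster centroids between consecutive frames (the actual propagation and merge/split handling is more elaborate):

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

def segment(voxels, k=3):
    """Embed a voxel point set with LLE, then cluster in the embedding space."""
    emb = LocallyLinearEmbedding(n_neighbors=10, n_components=3).fit_transform(voxels)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(emb).labels_

def propagate(prev_voxels, prev_labels, voxels, labels):
    """Relabel the current frame so clusters match the previous frame's clusters
    by nearest centroid (a very simple form of temporal coherence)."""
    prev_centroids = np.array([prev_voxels[prev_labels == c].mean(axis=0)
                               for c in np.unique(prev_labels)])
    centroids = np.array([voxels[labels == c].mean(axis=0)
                          for c in np.unique(labels)])
    mapping = {c: np.argmin(np.linalg.norm(prev_centroids - centroids[c], axis=1))
               for c in np.unique(labels)}
    return np.array([mapping[c] for c in labels])

# Two toy "frames" of a voxelised articulated body (three blobs each).
frame0 = np.vstack([rng.normal(m, 0.1, size=(60, 3))
                    for m in ([0, 0, 0], [1, 0, 0], [0, 1, 0])])
frame1 = frame0 + rng.normal(0, 0.02, size=frame0.shape)   # slight motion
labels0 = segment(frame0)
labels1 = propagate(frame0, labels0, frame1, segment(frame1))
print(np.mean(labels0 == labels1))   # high agreement across frames
```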

    Mapping vesicle shapes into the phase diagram: A comparison of experiment and theory

    Phase-contrast microscopy is used to monitor the shapes of micron-scale, fluid-phase phospholipid-bilayer vesicles in aqueous solution. At fixed temperature, each vesicle undergoes thermal shape fluctuations. We are able experimentally to characterize the thermal shape ensemble by digitizing the vesicle outline in real time and storing the time sequence of images. Analysis of this ensemble using the area-difference-elasticity (ADE) model of vesicle shapes allows us to associate (map) each time sequence to a point in the zero-temperature (shape) phase diagram. Changing the laboratory temperature modifies the control parameters (area, volume, etc.) of each vesicle, so it sweeps out a trajectory across the theoretical phase diagram. It is a nontrivial test of the ADE model to check that these trajectories remain confined to regions of the phase diagram where the corresponding shapes are locally stable. In particular, we study the thermal trajectories of three prolate vesicles which, upon heating, experienced a mechanical instability leading to budding. We verify that the position of the observed instability and the geometry of the budded shape are in reasonable accord with the theoretical predictions. The inability of previous experiments to detect the "hidden" control parameters (relaxed area difference and spontaneous curvature) makes this the first direct quantitative confrontation between vesicle-shape theory and experiment. (Comment: submitted to PRE, LaTeX, 26 pages, 11 ps-figures)
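    For reference (not part of the abstract), a commonly quoted form of the energy underlying the ADE model is sketched below; prefactor conventions vary between papers. Here H is the mean curvature, A the membrane area, D the bilayer thickness, ΔA the actual area difference between the two leaflets, and ΔA₀ the relaxed area difference that acts as one of the "hidden" control parameters.

```latex
% Commonly quoted form of the area-difference-elasticity (ADE) energy
% (prefactor conventions vary between papers).
W_{\mathrm{ADE}} = \frac{\kappa}{2}\oint (2H)^{2}\,\mathrm{d}A
  \;+\; \frac{\bar{\kappa}\,\pi}{2\,A\,D^{2}}\,\bigl(\Delta A - \Delta A_{0}\bigr)^{2},
\qquad
\Delta A \simeq D \oint 2H\,\mathrm{d}A .
```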

    Virtual Visual Hulls: Example-Based 3D Shape Estimation from a Single Silhouette

    Recovering a volumetric model of a person, car, or other object of interest from a single snapshot would be useful for many computer graphics applications. 3D model estimation in general is hard, and currently requires active sensors, multiple views, or integration over time. For a known object class, however, 3D shape can be successfully inferred from a single snapshot. We present a method for generating a "virtual visual hull" -- an estimate of the 3D shape of an object from a known class, given a single silhouette observed from an unknown viewpoint. For a given class, a large database of multi-view silhouette examples from calibrated, though possibly varied, camera rigs is collected. To infer the virtual visual hull of a novel single-view input silhouette, we search for 3D shapes in the database that are most consistent with the observed contour. The input is matched to component single views of the multi-view training examples. A set of viewpoint-aligned virtual views is generated from the visual hulls corresponding to these examples. The 3D shape estimate for the input is then found by interpolating between the contours of these aligned views. When the underlying shape is ambiguous given a single-view silhouette, we produce multiple visual hull hypotheses; if a sequence of input images is available, a dynamic programming approach is applied to find the maximum-likelihood path through the feasible hypotheses over time. We show results of our algorithm on real and synthetic images of people.
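    A small sketch of silhouette-to-database matching using a chamfer distance on contour pixels (an illustrative stand-in; the paper's actual matching criterion and the subsequent visual hull interpolation are not reproduced here):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def contour(mask):
    """Boundary pixels of a binary silhouette mask (4-neighbourhood)."""
    interior = np.zeros_like(mask)
    interior[1:-1, 1:-1] = (mask[1:-1, 1:-1] & mask[:-2, 1:-1] & mask[2:, 1:-1]
                            & mask[1:-1, :-2] & mask[1:-1, 2:])
    return mask & ~interior

def chamfer(query_mask, example_mask):
    """Mean distance from query contour pixels to the example contour."""
    dist_to_example = distance_transform_edt(~contour(example_mask))
    return dist_to_example[contour(query_mask)].mean()

# Toy database of silhouettes (filled discs of different radii).
yy, xx = np.mgrid[:64, :64]
def disc(r):
    return ((xx - 32) ** 2 + (yy - 32) ** 2) < r ** 2
database = [disc(r) for r in (8, 12, 16, 20)]

query = disc(13)   # unknown input silhouette
costs = [chamfer(query, ex) for ex in database]
print(int(np.argmin(costs)))   # index of the best-matching example (radius 12)
```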

    Deformable Neural Radiance Fields using RGB and Event Cameras

    Modeling Neural Radiance Fields for fast-moving deformable objects from visual data alone is a challenging problem. A major issue arises due to the high deformation and low acquisition rates. To address this problem, we propose to use event cameras, which offer very fast, asynchronous acquisition of visual change. In this work, we develop a novel method to model deformable neural radiance fields using RGB and event cameras. The proposed method uses the asynchronous stream of events and calibrated sparse RGB frames. In our setup, the camera pose at the individual events, required to integrate them into the radiance field, remains unknown. Our method jointly optimizes these poses and the radiance field. This is done efficiently by leveraging the collection of events at once and actively sampling the events during learning. Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method over the state of the art and the compared baseline. This shows a promising direction for modeling deformable neural radiance fields in real-world dynamic scenes.
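    A minimal sketch of how an event stream can supervise rendered brightness, using the standard event-generation model in which the accumulated polarity between two timestamps approximates the change in log intensity (the function and contrast threshold C below are hypothetical, and the paper additionally optimizes the unknown per-event camera poses):

```python
import numpy as np

def event_loss(rendered_t0, rendered_t1, event_polarity_sum, C=0.2, eps=1e-6):
    """Penalize disagreement between the rendered log-intensity change and the
    change implied by the accumulated event polarities (contrast threshold C)."""
    predicted = np.log(rendered_t1 + eps) - np.log(rendered_t0 + eps)
    measured = C * event_polarity_sum
    return np.mean((predicted - measured) ** 2)

# Toy check: a pixel whose brightness doubles should emit ~log(2)/C events.
rendered_t0 = np.full((4, 4), 0.3)
rendered_t1 = np.full((4, 4), 0.6)
events = np.full((4, 4), np.log(2.0) / 0.2)   # consistent event count per pixel
print(event_loss(rendered_t0, rendered_t1, events))   # ~0
```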