10,655 research outputs found

    3D Modeling from Multiple Views with Integrated Registration and Data Fusion

    This paper presents an integrated modeling system capable of generating coloured three-dimensional representations of a scene observed from multiple viewpoints. Emphasis is given to the integration of the components and to the algorithms used for acquisition, registration and final surface mapping. First, a sensor operating with structured light is used to acquire 3D and colour data of a scene from multiple views. Second, a frequency-domain registration algorithm computes the transformation between pairs of views from the raw measurements, without a priori knowledge of the transformation parameters. Finally, the registered views are merged and refined to create a rich 3D model of the objects. Real-world modeling examples are presented and analyzed to validate the operation of the proposed integrated modeling system.
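
    The paper's frequency-domain registration method is not reproduced here; the sketch below only illustrates the general idea with plain phase correlation, which recovers a pure 2D translation between two overlapping views from their spectra and needs no initial guess on the transformation. All names and array shapes are illustrative assumptions.

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the cyclic 2D translation mapping `ref` onto `moved`
    by locating the peak of the normalised cross-power spectrum."""
    f_ref = np.fft.fft2(ref)
    f_mov = np.fft.fft2(moved)
    r = np.conj(f_ref) * f_mov
    r /= np.abs(r) + 1e-12                      # keep only the phase
    corr = np.fft.ifft2(r).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks in the upper half of an axis correspond to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Synthetic check: a view shifted by (3, -5) pixels is recovered exactly.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(ref, moved))      # -> (3, -5)
```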

    3D scanning of cultural heritage with consumer depth cameras

    Three-dimensional reconstruction of cultural heritage objects is an expensive and time-consuming process. Recent consumer real-time depth acquisition devices, such as the Microsoft Kinect, allow very fast and simple acquisition of 3D views. However, 3D scanning with such devices is a challenging task due to the limited accuracy and reliability of the acquired data. This paper introduces a 3D reconstruction pipeline suited to using consumer depth cameras as hand-held scanners for cultural heritage objects. Several new contributions have been made to achieve this result. They include an ad hoc filtering scheme that exploits a model of the error on the acquired data, and a novel algorithm for the extraction of salient points exploiting both depth and color data. The salient points are then used within a modified version of the ICP algorithm that exploits both geometry and color distances to precisely align the views, even when geometry information alone is not sufficient to constrain the registration. The proposed method, although applicable to generic scenes, has been tuned to the acquisition of sculptures, and the experimental results indicate promising performance in this setting.
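
    The abstract does not spell out the modified ICP, so the following is only a rough sketch of one iteration of a colour-assisted ICP: correspondences are scored with a joint geometry-plus-colour distance, then a standard closed-form (Kabsch) rigid alignment is computed. The array shapes and the `color_weight` parameter are assumptions, not the authors' values.

```python
import numpy as np

def color_icp_step(src_xyz, src_rgb, dst_xyz, dst_rgb, color_weight=0.1):
    """One ICP iteration with correspondences found in a joint
    geometry + colour space (brute force; fine for small clouds)."""
    # Joint feature: 3D position concatenated with weighted colour in [0, 1].
    src_feat = np.hstack([src_xyz, color_weight * src_rgb])
    dst_feat = np.hstack([dst_xyz, color_weight * dst_rgb])
    # Nearest neighbour of each source point in the joint space.
    d2 = ((src_feat[:, None, :] - dst_feat[None, :, :]) ** 2).sum(-1)
    matched = dst_xyz[d2.argmin(axis=1)]
    # Closed-form rigid alignment (Kabsch) on the geometric coordinates.
    src_c, dst_c = src_xyz.mean(0), matched.mean(0)
    H = (src_xyz - src_c).T @ (matched - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```

    In practice the step would be iterated, re-applying the estimated (R, t) to the source cloud until the alignment converges.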

    2.5D multi-view gait recognition based on point cloud registration

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space, enabling data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on an in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
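
    The Color Gait Curvature Image construction is specific to the paper, but the dimension-reduction stage it mentions (discrete cosine transform followed by 2D PCA) can be sketched generically; `dct_size` and `n_components` below are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.fft import dctn

def reduce_gait_images(images, dct_size=32, n_components=8):
    """Dimension-reduction sketch: keep the low-frequency 2D-DCT block of
    each gait image, then project its columns with 2D PCA."""
    # Low-frequency DCT block per image (images: iterable of equal-size 2D arrays).
    blocks = np.stack([dctn(img, norm='ortho')[:dct_size, :dct_size]
                       for img in images])
    # 2D PCA: image covariance over the columns of the mean-centred blocks.
    centred = blocks - blocks.mean(axis=0)
    G = np.einsum('nij,nik->jk', centred, centred) / len(blocks)
    eigvals, eigvecs = np.linalg.eigh(G)
    proj = eigvecs[:, -n_components:]           # leading eigenvectors
    return blocks @ proj                        # shape (N, dct_size, n_components)
```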

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.

    Three-dimensional reconstruction of the tissue-specific multielemental distribution within Ceriodaphnia dubia via multimodal registration using laser ablation ICP-mass spectrometry and X-ray spectroscopic techniques

    In this work, the three-dimensional elemental distribution profile within the freshwater crustacean Ceriodaphnia dubia was constructed at a spatial resolution down to 5 μm via a data fusion approach employing state-of-the-art laser ablation inductively coupled plasma time-of-flight mass spectrometry (LA-ICP-TOFMS) and laboratory-based absorption micro-computed tomography (μ-CT). C. dubia was exposed to elevated Cu, Ni, and Zn concentrations, then chemically fixed, dehydrated, stained, and embedded prior to μ-CT analysis. Subsequently, the sample was cut into 5 μm thin sections that were subjected to LA-ICP-TOFMS imaging. Multimodal image registration was performed to spatially align the 2D LA-ICP-TOFMS images relative to the corresponding slices of the 3D μ-CT reconstruction. Mass channels corresponding to the isotopes of a single element were merged to improve the signal-to-noise ratios within the elemental images. To aid the visual interpretation of the data, the LA-ICP-TOFMS data were projected onto the μ-CT voxels representing tissue. Additionally, the image resolution and elemental sensitivity were compared to those obtained with synchrotron-radiation-based 3D confocal μ-X-ray fluorescence imaging of a chemically fixed and air-dried C. dubia specimen.
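
    Two of the processing steps described, merging the isotope mass channels of one element to raise the signal-to-noise ratio and projecting the elemental maps only onto tissue voxels of the registered μ-CT slice, are simple enough to sketch; all arrays below are synthetic placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-isotope LA-ICP-TOFMS intensity maps for one element
# (counts per pixel on a common grid) and a binary tissue mask taken from
# the matching, already registered mu-CT slice.
cu63 = rng.poisson(5.0, size=(200, 200)).astype(float)
cu65 = rng.poisson(2.2, size=(200, 200)).astype(float)
tissue_mask = np.zeros((200, 200), dtype=bool)
tissue_mask[50:150, 40:160] = True

# Merge the isotope channels of one element; summing correlated signals
# improves the signal-to-noise ratio over any single channel.
cu_total = cu63 + cu65

# Project the elemental map onto tissue voxels only; background stays NaN.
cu_on_tissue = np.full(cu_total.shape, np.nan)
cu_on_tissue[tissue_mask] = cu_total[tissue_mask]
```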