
    Reconstruction of the 3D Object Model: A review

    The three-dimensional (3D) reconstruction model of a real object is useful in many applications, ranging from medical imaging, product design, parts inspection and reverse engineering to rapid prototyping. In the medical field, imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI) and single-photon emission computed tomography (SPECT) are applied to create 3D images from measured data for disease diagnosis and organ study. In manufacturing, reconstruction is widely used to redesign parts in order to save production cost and time. A typical reconstruction pipeline consists of three major steps: data acquisition, registration and integration, and surface fitting. Based on the nature of the captured data, 3D reconstruction methods can be categorized into two groups: those working on (i) two-dimensional (2D) images and (ii) sets of 3D points. This paper reviews different methods of 3D object model reconstruction and the techniques associated with each method.
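
    The second group of methods above (reconstruction from sets of 3D points) can be illustrated with a short, hedged sketch: two partially overlapping scans are aligned by ICP registration, merged, and a surface is fitted with Poisson reconstruction. This is a minimal example using the Open3D library; the file names and parameter values are placeholders, not material from the reviewed paper.

        # Minimal sketch: registration, integration and surface fitting of 3D point sets.
        import numpy as np
        import open3d as o3d

        source = o3d.io.read_point_cloud("scan_view_1.ply")   # hypothetical input scans
        target = o3d.io.read_point_cloud("scan_view_2.ply")

        # Registration: align the source scan onto the target scan with point-to-point ICP.
        icp = o3d.pipelines.registration.registration_icp(
            source, target, max_correspondence_distance=0.02, init=np.eye(4),
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
        source.transform(icp.transformation)

        # Integration: merge the aligned scans into a single point set with normals.
        merged = source + target
        merged.estimate_normals(
            search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

        # Surface fitting: Poisson reconstruction of a triangle mesh from the merged points.
        mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged, depth=9)
        o3d.io.write_triangle_mesh("reconstructed_model.ply", mesh)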

    Analysis of mobile laser scanning data and multi-view image reconstruction

    The combination of laser scanning (LS; active, direct 3D measurement of the object surface) and photogrammetry (high geometric and radiometric resolution) is widely applied for object reconstruction (e.g. architecture, topography, monitoring, archaeology). The results are usually a coloured point cloud or a textured mesh, where the geometry is typically generated from the laser scanning point cloud and the radiometric information comes from the image acquisition. In recent years, alongside significant developments in static (terrestrial LS) and kinematic (airborne and mobile LS) hardware and software, research in computer vision and photogrammetry has led to highly automated procedures for image orientation and image matching. These methods allow highly automated generation of 3D geometry from image data alone. Building on advanced feature detectors such as SIFT (Scale Invariant Feature Transform), very robust image orientation techniques have been established (cf. Bundler). In a subsequent step, dense multi-view stereo reconstruction algorithms generate very dense 3D point clouds that represent the scene geometry (cf. Patch-based Multi-View Stereo (PMVS2)). This paper studies the use of mobile laser scanning (MLS) and simultaneously acquired image data for advanced integrated scene reconstruction, with the scene geometry generated independently by both techniques. The paper then focuses on the quality assessment of both techniques. This includes a quality analysis of the individual surface models and a comparison of the direct georeferencing of the images, using position and orientation data of the on-board GNSS-INS system, with the indirect georeferencing of the imagery by automatic image orientation. For the practical evaluation, a dataset of an archaeological monument is utilised. Based on the gained knowledge, a discussion of the results is provided and a future strategy for the integration of both techniques is proposed.
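
    As a hedged illustration of the feature-based image orientation step mentioned above, the following minimal sketch detects SIFT keypoints in two overlapping images and matches them with Lowe's ratio test, which is the typical input to bundle-adjustment tools such as Bundler. It uses OpenCV; the image paths are placeholders and this is not the authors' own pipeline.

        # Minimal sketch: SIFT feature detection and ratio-test matching for image orientation.
        import cv2

        img1 = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical image pair
        img2 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Match descriptors and keep only matches that pass Lowe's ratio test.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        print(f"{len(good)} putative correspondences available for relative orientation")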

    Special Issue “Remote Sensing in Applied Geophysics”

    The Special Issue "Remote Sensing in Applied Geophysics" focuses on recent and upcoming advances in the combined application of remote sensing and applied geophysics techniques, which share the advantage of being non-invasive research methods suitable for surface and near-surface high-resolution investigations of even wide and remote areas. Applied geophysics analyzes the distribution of physical properties in the subsurface for a wide range of geological, engineering and environmental applications at different scales. Geophysical surveys are usually carried out by deploying or moving the appropriate instrumentation directly on the ground surface. However, recent technological advances have led to the development of innovative acquisition systems more typical of the remote sensing community (e.g., airborne surveys and unmanned aerial vehicle systems). At the same time, while applied geophysics mainly focuses on the subsurface, typical remote sensing techniques are able to accurately image the Earth's surface through high-resolution investigations carried out by means of terrestrial, airborne, or satellite-based platforms. The integration of surface and subsurface information is often crucial for several purposes, including the georeferencing and processing of geophysical data, the characterization and time-lapse monitoring of surface and near-surface targets, and the reconstruction of highly detailed and comprehensive 3D models of the investigated areas. Contributions showing the added value of surface reconstruction and/or monitoring in the processing and interpretation of geophysical data, as well as the integration and cross-comparison of geophysical and remote sensing techniques, were solicited from the research community. Contributions discussing the results of pioneering geophysical acquisitions by means of innovative remote systems were also welcomed. The Special Issue received great attention in the combined community of applied geophysicists and remote sensing researchers. A total of 15 papers are included, covering a wide range of applications. This is one of the highest numbers of papers among the Remote Sensing Special Issues, showing the great interest in the proposed topic. The number of contributions also highlights the relevance of, and increasing need for, integration between remote sensing and ground-based geophysical exploration or monitoring methods. In particular, archaeological exploration is one of the main fields of research showing the potential of integrating geophysical and remote sensing techniques.

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
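
    As a hedged illustration of one passive technique covered by such reviews, the following minimal sketch computes a dense disparity map from a rectified stereo laparoscopic image pair and reprojects it to a 3D surface with OpenCV. The image files, the disparity-to-depth matrix Q and the parameter values are assumptions for illustration, not a method from the reviewed literature.

        # Minimal sketch: dense stereo matching and reprojection to a 3D tissue surface.
        import cv2
        import numpy as np

        left = cv2.imread("laparo_left_rect.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("laparo_right_rect.png", cv2.IMREAD_GRAYSCALE)

        # Semi-global block matching produces a disparity map of the tissue surface.
        sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                                     P1=8 * 5 * 5, P2=32 * 5 * 5)
        disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

        # Q is the 4x4 disparity-to-depth matrix from stereo rectification (assumed known).
        Q = np.load("stereo_Q.npy")
        points_3d = cv2.reprojectImageTo3D(disparity, Q)  # per-pixel (X, Y, Z) surface points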

    Challenges in shallow target reconstruction by 3D elastic full-waveform inversion - Which initial model?

    Elastic full-waveform inversion (FWI) is a powerful tool for high-resolution subsurface multiparameter characterization. However, 3D FWI applied to land data for near-surface applications is particularly challenging because the seismograms are dominated by highly energetic, dispersive, and complex-scattered surface waves (SWs). In these conditions, a successful deterministic FWI scheme requires an accurate initial model. Our study, primarily focused on field data analysis for 3D applications, aims at enhancing the resolution of the imaging of complex shallow targets by integrating dedicated SW analysis techniques with a 3D spectral-element-based elastic FWI. From dispersion curves extracted from seismic data recorded over a sharp-interface shallow target, we build different initial S-wave (VS) and P-wave (VP) velocity models (laterally homogeneous and laterally variable) using a specific data transform. Starting from these models, we carry out 3D FWI tests on synthetic and field data using a relatively straightforward inversion scheme. The field data processing before FWI consists of band-pass filtering and muting of noisy traces. During FWI, a weighting function is applied to the far-offset traces. We test 2D and 3D acquisition layouts, with different source positions and variable offsets. The 3D FWI workflow enriches the information content of the initial models, allowing a reliable reconstruction of the shallow target, especially when laterally variable initial models are used. Moreover, a 3D acquisition layout guarantees a better reconstruction of the target's shape and lateral extension. In addition, integrating model-oriented (preliminary monoparametric FWI) and data-oriented (time windowing) strategies into the main optimization scheme further improves the FWI results.
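
    To make the idea of drafting an initial model from dispersion curves concrete, the sketch below builds a laterally homogeneous VS (and VP) profile from a picked Rayleigh-wave dispersion curve using the common half-wavelength rule of thumb (pseudo-depth of roughly wavelength/2.5, VS of roughly 1.1 times the phase velocity, Poisson-solid VP/VS). The file name, scaling factors and VP assumption are illustrative and are not the specific data transform used in the paper.

        # Minimal sketch: rule-of-thumb 1D VS/VP starting model from a dispersion curve.
        import numpy as np

        # Picked dispersion curve: frequency (Hz) and Rayleigh phase velocity (m/s).
        freq, v_rayleigh = np.loadtxt("dispersion_curve.txt", unpack=True)

        wavelength = v_rayleigh / freq          # lambda = V_R / f
        depth = wavelength / 2.5                # pseudo-depth assigned to each sample (assumption)
        vs = 1.1 * v_rayleigh                   # approximate S-wave velocity (assumption)
        vp = vs * np.sqrt(3.0)                  # Poisson-solid assumption (VP/VS = sqrt(3))

        # Sort by depth to obtain a 1D profile usable as a laterally homogeneous starting model.
        order = np.argsort(depth)
        initial_model = np.column_stack((depth[order], vs[order], vp[order]))
        np.savetxt("initial_vs_vp_profile.txt", initial_model, header="depth_m vs_mps vp_mps")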

    C-blox: A Scalable and Consistent TSDF-based Dense Mapping Approach

    In many applications, maintaining a consistent dense map of the environment is key to enabling robotic platforms to perform higher-level decision making. Several works have addressed the challenge of creating precise dense 3D maps from visual sensors providing depth information. However, during operation over longer missions, reconstructions can easily become inconsistent due to accumulated camera tracking error and delayed loop closure. Without explicitly addressing the problem of map consistency, recovery from such distortions tends to be difficult. We present a novel system for dense 3D mapping which addresses the challenge of building consistent maps while dealing with scalability. Central to our approach is the representation of the environment as a collection of overlapping TSDF subvolumes. These subvolumes are localized through feature-based camera tracking and bundle adjustment. Our main contribution is a pipeline for identifying stable regions in the map and for fusing the contributing subvolumes. This approach allows us to reduce map growth while still maintaining consistency. We evaluate the proposed system on a publicly available dataset and a simulation engine, demonstrating its efficacy for building consistent and scalable maps. Finally, we show our approach running in real time on board a lightweight MAV. Comment: 8 pages, 5 figures, conference
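
    As a hedged illustration of the volumetric representation used by such systems, the sketch below shows the standard weighted running-average TSDF update that fuses successive depth observations into a voxel grid. The grid size, truncation distance and class layout are illustrative assumptions and do not reproduce the C-blox implementation.

        # Minimal sketch: weighted running-average TSDF fusion into a voxel grid.
        import numpy as np

        class TsdfVolume:
            def __init__(self, shape=(64, 64, 64), truncation=0.05):
                self.tsdf = np.zeros(shape, dtype=np.float32)     # signed distance per voxel
                self.weight = np.zeros(shape, dtype=np.float32)   # accumulated observation weight
                self.truncation = truncation

            def integrate(self, sdf_observation, obs_weight=1.0):
                """Fuse one truncated SDF observation (array with the same shape as the grid)."""
                d = np.clip(sdf_observation, -self.truncation, self.truncation)
                new_weight = self.weight + obs_weight
                # Weighted running average: old value weighted by accumulated weight,
                # new observation weighted by its own weight.
                self.tsdf = (self.tsdf * self.weight + d * obs_weight) / new_weight
                self.weight = new_weight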

    Reconstruction of a function from its spherical (circular) means with the centers lying on the surface of certain polygons and polyhedra

    We present explicit filtration/backprojection-type formulae for the inversion of the spherical (circular) mean transform with the centers lying on the boundary of certain polyhedra (or polygons, in 2D). The formulae are derived using double layer potentials for the wave equation, for domains with certain symmetries. The formulae are valid for a rectangle and certain triangles in 2D, and for a cuboid, certain right prisms and a certain pyramid in 3D. All the present inversion formulae yield exact reconstruction within the domain surrounded by the acquisition surface, even in the presence of exterior sources. Comment: 9 figures
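
    For readers unfamiliar with the transform being inverted, the following is the standard definition of the spherical (circular) mean transform with centers restricted to the acquisition surface; it is a textbook formula, not one of the paper's new inversion formulae.

        \[
          (\mathcal{M}f)(x, r) \;=\; \frac{1}{\omega_{n-1}} \int_{\mathbb{S}^{n-1}} f(x + r\theta)\, d\sigma(\theta),
          \qquad x \in \partial\Omega,\ r > 0,
        \]
        % \omega_{n-1}: surface area of the unit sphere \mathbb{S}^{n-1};
        % \partial\Omega: boundary of the polygon/polyhedron carrying the centers.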

    Creating Simplified 3D Models with High Quality Textures

    This paper presents an extension to the KinectFusion algorithm which allows creating simplified 3D models with high-quality RGB textures. This is achieved through (i) creating model textures using images from an HD RGB camera that is calibrated with the Kinect depth camera, (ii) using a modified scheme to update model textures in an asymmetrical colour volume that contains a higher number of voxels than the geometry volume, (iii) simplifying the dense polygon mesh model using a quadric-based mesh decimation algorithm, and (iv) creating and mapping 2D textures to every polygon in the output 3D model. The proposed method is implemented in real time by means of GPU parallel processing. Visualization via ray casting of both the geometry and colour volumes provides users with real-time feedback of the currently scanned 3D model. Experimental results show that the proposed method is capable of preserving the model texture quality even for a heavily decimated model and that, when reconstructing small objects, photorealistic RGB textures can still be reconstructed. Comment: 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Page 1 -
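
    As a hedged illustration of step (iii), the sketch below applies quadric-based mesh decimation to a dense mesh using Open3D's implementation as a stand-in; the file name and target triangle count are placeholders, and the paper's GPU pipeline and texture handling are not reproduced.

        # Minimal sketch: quadric error metric decimation of a dense reconstruction mesh.
        import open3d as o3d

        mesh = o3d.io.read_triangle_mesh("dense_kinectfusion_mesh.ply")
        print(f"input: {len(mesh.triangles)} triangles")

        # Collapse edges by quadric error until the target triangle count is reached.
        simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=20000)
        simplified.compute_vertex_normals()
        o3d.io.write_triangle_mesh("simplified_mesh.ply", simplified)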