
    IMAGE BASED MODELING FROM SPHERICAL PHOTOGRAMMETRY AND STRUCTURE FOR MOTION. THE CASE OF THE TREASURY, NABATEAN ARCHITECTURE IN PETRA

    This research presents an efficient and low-cost methodology for obtaining a metric and photorealistic survey of a complex architecture. Photomodeling is an established interactive approach for producing a detailed 3D model reconstruction quickly. It proceeds by creating a rough surface onto which oriented images can be back-projected in real time; the model can then be refined by checking the coincidence between the surface and the projected texture. The challenge of this research is to combine the advantages of two technologies already established and used in many projects: spherical photogrammetry (Fangi, 2007, 2008, 2009, 2010) and structure from motion (the Photosynth web service and Bundler + CMVS2 + PMVS2). The input images are taken from the same points of view to form the set of panoramic photos, taking care to use well-suited projections: equirectangular for spherical photogrammetry and rectilinear for the Photosynth web service. The performance of spherical photogrammetry is already known in terms of metric accuracy and acquisition speed, but the restitution step is time-consuming because homologous points must be recognized manually across different panoramas. In Photosynth, by contrast, the restitution is quick and automated: the provided point clouds are useful benchmarks from which to start the model reconstruction, even though they lack detail and scale. The proposed workflow requires ad-hoc tools to capture high-resolution rectilinear panoramic images and to visualize Photosynth point clouds and camera orientation parameters; all of them are developed in the VVVV programming environment. The 3DStudio Max environment is then chosen for its performance in interactive modeling, UV-mapping parameter handling, and real-time visualization of the texture projected onto the model surface.
Experimental results show how a 3D photorealistic model can be obtained by using the scale of the spherical photogrammetry restitution to orient the web-provided point clouds. Moreover, the proposed research highlights how the model reconstruction can be sped up without losing metric or photometric accuracy. At the same time, using the same panorama dataset, it offers a useful opportunity to compare the orientations produced by the two technologies (spherical photogrammetry and structure from motion).
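The two projections named above differ only in how a viewing direction is mapped to pixel coordinates. The following is an illustrative sketch of that distinction, not the authors' VVVV tooling; the image dimensions and focal length used are hypothetical.

```python
import math

def equirectangular_project(dx, dy, dz, width, height):
    """Map a unit viewing direction (dx, dy, dz) to equirectangular pixels.

    Azimuth (longitude) spans [-pi, pi] across the image width;
    elevation (latitude) spans [-pi/2, pi/2] down the image height.
    """
    azimuth = math.atan2(dx, dz)   # angle around the vertical axis
    elevation = math.asin(dy)      # angle above the horizon
    u = (azimuth / math.pi + 1.0) * 0.5 * width
    v = (0.5 - elevation / math.pi) * height
    return u, v

def rectilinear_project(dx, dy, dz, focal, cx, cy):
    """Pinhole (rectilinear) projection of the same direction."""
    if dz <= 0:
        raise ValueError("direction behind the camera")
    return cx + focal * dx / dz, cy - focal * dy / dz
```

The equirectangular mapping preserves the full sphere (which suits spherical photogrammetry), while the rectilinear mapping keeps straight lines straight over a limited field of view (which suits feature matching in Photosynth-style structure from motion).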

    Photometric Depth Super-Resolution

    This study explores the use of photometric techniques (shape-from-shading and uncalibrated photometric stereo) for upsampling the low-resolution depth map from an RGB-D sensor to the higher resolution of the companion RGB image. A single-shot variational approach is first put forward, which is effective as long as the target's reflectance is piecewise constant. It is then shown that this dependency on a specific reflectance model can be relaxed by focusing on a specific class of objects (e.g., faces) and delegating reflectance estimation to a deep neural network. A multi-shot strategy based on randomly varying lighting conditions is eventually discussed; it requires no training or prior on the reflectance, but this comes at the price of a dedicated acquisition setup. Both quantitative and qualitative evaluations illustrate the effectiveness of the proposed methods on synthetic and real-world scenarios.
Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2019. First three authors contribute equally.
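The multi-shot strategy with varying lighting is, at its core, photometric stereo. As a minimal sketch (assuming known light directions and Lambertian reflectance, which simplifies away the uncalibrated setting the paper actually addresses), albedo-scaled normals can be recovered per pixel by linear least squares:

```python
import numpy as np

def photometric_stereo(intensities, lights):
    """Recover per-pixel normals and albedo from multiple lightings.

    intensities: (m, p) array, m lighting conditions, p pixels.
    lights:      (m, 3) array of unit light directions.
    Solves the Lambertian model I = L @ g, where g = albedo * normal.
    """
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)  # (3, p)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-12)  # guard flat/black pixels
    return normals, albedo
```

With at least three non-coplanar light directions the system is well posed per pixel; the recovered normal field can then be integrated into the high-resolution depth sought by the paper.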

    A Novel Framework for Highlight Reflectance Transformation Imaging

    We propose a novel pipeline and related software tools for processing multi-light image collections (MLICs) acquired in different application contexts, both to obtain shape and appearance information about the captured surfaces and to derive compact relightable representations of them. Our pipeline extends the popular Highlight Reflectance Transformation Imaging (H-RTI) framework, which is widely used in the Cultural Heritage domain. In particular, we support perspective camera modeling, per-pixel interpolated light-direction estimation, and light normalization correcting vignetting and uneven non-directional illumination. Furthermore, we propose two novel easy-to-use software tools to simplify all processing steps. The tools, in addition to supporting easy processing and encoding of pixel data, implement a variety of visualizations as well as multiple reflectance-model-fitting options. Experimental tests on synthetic and real-world MLICs demonstrate the usefulness of the novel algorithmic framework and the potential benefits of the proposed tools for end-user applications.
Funding: European Union (EU) Horizon 2020, action H2020-EU.3.6.3 (Reflective societies - cultural heritage and European identity), project Scan4Reco, grant number 665091; DSURF project (PRIN 2015) funded by the Italian Ministry of University and Research; Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa
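A typical reflectance-model-fitting option in RTI pipelines is the classic six-term Polynomial Texture Map, fitted per pixel by least squares over the measured light directions. The sketch below illustrates that step only and is not the proposed tools' implementation:

```python
import numpy as np

def fit_ptm(light_uv, intensities):
    """Fit the classic 6-term Polynomial Texture Map per pixel.

    light_uv:    (m, 2) projected light directions (lu, lv).
    intensities: (m, p) pixel intensities under each light.
    Returns (6, p) coefficients of
    I = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5.
    """
    lu, lv = light_uv[:, 0], light_uv[:, 1]
    basis = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, intensities, rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Evaluate the fitted PTM for a new light direction."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return basis @ coeffs
```

Storing six coefficients per pixel is what makes the representation compact and relightable: any new light direction is rendered by a single dot product.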

    Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging.

    This thesis reports on the incorporation of surface information into a probabilistic simultaneous localization and mapping (SLAM) framework used on an autonomous underwater vehicle (AUV) designed for underwater inspection. AUVs operating in cluttered underwater environments, such as ship hulls or dams, are commonly equipped with Doppler-based sensors, which---in addition to navigation---provide a sparse representation of the environment in the form of a three-dimensional (3D) point cloud. The goal of this thesis is to develop perceptual algorithms that take full advantage of these sparse observations for correcting navigational drift and building a model of the environment. In particular, we focus on three objectives. First, we introduce a novel representation of this 3D point cloud as collections of planar features arranged in a factor graph. This factor-graph representation probabilistically infers the spatial arrangement of each planar segment and can effectively model smooth surfaces (such as a ship hull). Second, we show how this technique can produce 3D models that serve as input to our pipeline, which produces the first-ever 3D photomosaics made with a two-dimensional (2D) imaging sonar. Finally, we propose a model-assisted bundle adjustment (BA) framework that allows robust registration between surfaces observed from a Doppler sensor and visual features detected in optical images. Throughout this thesis, we show methods that produce 3D photomosaics using a combination of triangular meshes (derived from our SLAM framework or given a priori), optical images, and sonar images. Overall, the contributions of this thesis greatly increase the accuracy, reliability, and utility of in-water ship hull inspection with AUVs despite the challenges they face in underwater environments.
We provide results using the Hovering Autonomous Underwater Vehicle (HAUV) for autonomous ship hull inspection, which serves as the primary testbed for the algorithms presented in this thesis. The sensor payload of the HAUV consists primarily of a Doppler velocity log (DVL) for underwater navigation and ranging, monocular and stereo cameras, and---for some applications---an imaging sonar.
PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120750/1/paulozog_1.pd
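Each planar feature in a factor graph of this kind reduces, per segment, to a plane fit over sparse DVL points plus a point-to-plane residual that the corresponding factor penalizes. A minimal sketch of those two ingredients (illustrative only, not the thesis code):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point cloud via SVD.

    Returns (unit normal n, offset d) with the plane n . x = d.
    """
    centroid = points.mean(axis=0)
    # The direction of least variance of the centered cloud is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, normal @ centroid

def point_to_plane_residuals(points, normal, d):
    """Signed point-to-plane distances a planar factor would penalize."""
    return points @ normal - d
```

In a full SLAM back end these residuals would enter the factor graph alongside odometry factors, so that optimizing the graph jointly refines vehicle poses and plane parameters.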