311 research outputs found

    Geometric calibration of Colour and Stereo Surface Imaging System of ESA's Trace Gas Orbiter

    There are many geometric calibration methods for "standard" cameras. These methods, however, cannot be used to calibrate telescopes with large focal lengths and complex off-axis optics, and specialized calibration methods for such telescopes are scarce in the literature. We describe the calibration method that we developed for the Colour and Stereo Surface Imaging System (CaSSIS) telescope, on board the ExoMars Trace Gas Orbiter (TGO). Although our method is described in the context of CaSSIS, with camera-specific experiments, it is general and can be applied to other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available online. Comment: Submitted to Advances in Space Research
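    At its core, geometric calibration of this kind reduces to minimising reprojection error over camera parameters. As a minimal illustration only (not the authors' actual CaSSIS pipeline), the sketch below recovers a single long focal length from known 3D-2D correspondences with a coarse 1-D grid search; the pure-pinhole model and the function names are assumptions made for the example.

    ```python
    import math

    def project(point, focal):
        # Pinhole projection of a camera-frame 3D point (X, Y, Z) -> (u, v),
        # with the focal length expressed in pixels.
        X, Y, Z = point
        return (focal * X / Z, focal * Y / Z)

    def reprojection_error(points3d, points2d, focal):
        # Mean Euclidean distance between observed and projected image points.
        err = 0.0
        for p3, p2 in zip(points3d, points2d):
            u, v = project(p3, focal)
            err += math.hypot(u - p2[0], v - p2[1])
        return err / len(points3d)

    def calibrate_focal(points3d, points2d, lo=100.0, hi=10000.0, steps=200):
        # Coarse 1-D grid search for the focal length that minimises the
        # mean reprojection error; real calibrations refine many more
        # parameters (distortion, principal point, misalignment).
        best_f, best_err = lo, float("inf")
        for i in range(steps + 1):
            f = lo + (hi - lo) * i / steps
            e = reprojection_error(points3d, points2d, f)
            if e < best_err:
                best_f, best_err = f, e
        return best_f
    ```

    A full pipeline would jointly optimise distortion and orientation terms; the grid search simply makes the error-minimisation idea concrete.
    
    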

    Ear-to-ear Capture of Facial Intrinsics

    We present a practical approach to capturing ear-to-ear face models comprising both 3D meshes and intrinsic textures (i.e. diffuse and specular albedo). Our approach is a hybrid of geometric and photometric methods and requires no geometric calibration. Photometric measurements made in a lightstage are used to estimate view-dependent high-resolution normal maps. We overcome the problem of having a single photometric viewpoint by capturing in multiple poses. We use uncalibrated multiview stereo to estimate a coarse base mesh to which the photometric views are registered. We propose a novel approach for robustly stitching surface normal and intrinsic texture data into a seamless, complete and highly detailed face model. The resulting relightable models provide photorealistic renderings in any view.
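    The photometric step can be illustrated with classic Lambertian photometric stereo: with at least three known light directions, per-pixel intensities determine the surface normal and albedo by solving a small linear system. The sketch below is a generic textbook formulation, not this paper's lightstage pipeline; the helper `solve3` and the calling convention are assumptions for the example.

    ```python
    import math

    def solve3(A, b):
        # Solve a 3x3 linear system A x = b by Cramer's rule.
        def det3(m):
            return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                  - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                  + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
        d = det3(A)
        xs = []
        for col in range(3):
            Ac = [row[:] for row in A]
            for r in range(3):
                Ac[r][col] = b[r]
            xs.append(det3(Ac) / d)
        return xs

    def normal_from_intensities(lights, intensities):
        # Lambertian model: I_k = rho * dot(l_k, n). Solving L g = I gives
        # g = rho * n, so the albedo is |g| and the unit normal is g / |g|.
        g = solve3(lights, intensities)
        rho = math.sqrt(sum(c * c for c in g))
        n = [c / rho for c in g]
        return n, rho
    ```

    Per-pixel, this yields the normal maps that are then registered to the multiview-stereo base mesh; specular albedo requires separating the reflectance components, which this sketch omits.
    
    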

    TwinTex: Geometry-aware Texture Generation for Abstracted 3D Architectural Models

    Coarse architectural models are often generated at scales ranging from individual buildings to scenes for downstream applications such as Digital Twin City, Metaverse, LODs, etc. Such piece-wise planar models can be abstracted as twins from 3D dense reconstructions. However, these models typically lack realistic texture relative to the real building or scene, making them unsuitable for vivid display or direct reference. In this paper, we present TwinTex, the first automatic texture mapping framework to generate a photo-realistic texture for a piece-wise planar proxy. Our method addresses most challenges occurring in such twin texture generation. Specifically, for each primitive plane, we first select a small set of photos with greedy heuristics considering photometric quality, perspective quality and facade texture completeness. Then, different levels of line features (LoLs) are extracted from the set of selected photos to generate guidance for later steps. With LoLs, we employ optimization algorithms to align texture with geometry from local to global. Finally, we fine-tune a diffusion model with a multi-mask initialization component and a new dataset to inpaint the missing regions. Experimental results on many buildings, indoor scenes and man-made objects of varying complexity demonstrate the generalization ability of our algorithm. Our approach surpasses state-of-the-art texture mapping methods in terms of high-fidelity quality and reaches a human-expert production level with much less effort. Project page: https://vcc.tech/research/2023/TwinTex. Comment: Accepted to SIGGRAPH Asia 2023
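    The greedy photo-selection step can be sketched generically: each candidate photo gets a quality score and a set of facade regions it covers, and photos are picked by best quality-weighted marginal coverage gain. This is a minimal stand-in for the paper's heuristics; the `quality`/`cells`/`name` fields and the scoring form are illustrative assumptions, not TwinTex's actual criteria.

    ```python
    def select_photos(photos, max_photos=3):
        # Greedy view selection: repeatedly pick the photo whose
        # quality-weighted marginal coverage gain is largest, until the
        # budget is spent or no photo adds new coverage.
        chosen, covered = [], set()
        candidates = list(photos)
        while candidates and len(chosen) < max_photos:
            def gain(p):
                # New facade cells this photo would cover, weighted by
                # its (photometric + perspective) quality score.
                return p["quality"] * len(p["cells"] - covered)
            best = max(candidates, key=gain)
            if gain(best) == 0:
                break
            chosen.append(best["name"])
            covered |= best["cells"]
            candidates.remove(best)
        return chosen, covered
    ```

    Note the greedy order can prefer a slightly lower-quality photo when it covers much more unseen facade, which matches the intuition of trading quality against completeness.
    
    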

    Accurate, fast, and robust 3D city-scale reconstruction using wide area motion imagery

    Multi-view stereopsis (MVS) is a core problem in computer vision: given a set of scene views together with known camera poses, it produces a geometric representation of the underlying 3D model. Using 3D reconstruction, one can determine any object's 3D profile as well as the 3D coordinates of any point on that profile. 3D reconstruction of objects is a general scientific problem and a core technology for a wide variety of fields, such as Computer Aided Geometric Design (CAGD), computer graphics, computer animation, computer vision, medical imaging, computational science, virtual reality, and digital media. However, although MVS problems have been studied for decades, many challenges remain in current state-of-the-art algorithms: many still lack accuracy and completeness when tested on city-scale datasets, and most available MVS algorithms require long execution times and/or specialized hardware and software, which results in high cost. This dissertation addresses these challenges and proposes multiple solutions. More specifically, it proposes multiple novel MVS algorithms to automatically and accurately reconstruct the underlying 3D scenes. With a novel volumetric voxel-based method, one of our algorithms achieves near real-time runtime speed, requires no special hardware or software, and can be deployed onto power-constrained embedded systems. By developing a new camera clustering module and a novel weighted voting-based surface likelihood estimation module, our algorithm generalizes to different datasets and achieves the best performance in terms of accuracy and completeness when compared with existing algorithms. This dissertation also performs the first quantitative evaluation in terms of precision, recall, and F-score using real-world LiDAR ground-truth data.
Last but not least, this dissertation proposes an automatic workflow that can stitch multiple point cloud models with limited overlapping areas into one larger 3D model for better geographical coverage. All the results presented in this dissertation have been evaluated on our wide area motion imagery (WAMI) dataset and improve the state-of-the-art performance by a large margin. The generated results have been successfully used in many areas, including city digitization, improving detection and tracking performance, real-time dynamic shadow detection, 3D change detection, visibility map generation, VR environments, and visualization combined with other information such as building footprints and roads.
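    The precision/recall/F-score evaluation against LiDAR ground truth follows a standard point-cloud recipe: a reconstructed point counts as correct if it lies within a distance threshold of the ground truth, and vice versa for recall. The brute-force sketch below shows the metric itself; the threshold value and function name are assumptions, and a real evaluation would use a spatial index rather than exhaustive nearest-neighbour search.

    ```python
    import math

    def f_score(reconstructed, ground_truth, tau):
        # Precision: fraction of reconstructed points within tau of some
        # ground-truth point. Recall: fraction of ground-truth points
        # within tau of some reconstructed point. F-score: their harmonic
        # mean. Brute-force O(N*M) nearest-neighbour search for clarity.
        def frac_within(src, dst):
            hits = sum(1 for p in src
                       if min(math.dist(p, q) for q in dst) <= tau)
            return hits / len(src)

        precision = frac_within(reconstructed, ground_truth)
        recall = frac_within(ground_truth, reconstructed)
        if precision + recall == 0:
            return 0.0, precision, recall
        return 2 * precision * recall / (precision + recall), precision, recall
    ```

    Precision penalises spurious geometry while recall penalises missing geometry, which is why the dissertation reports both alongside the combined F-score.
    
    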

    An in Depth Review Paper on Numerous Image Mosaicing Approaches and Techniques

    Image mosaicing is currently one of the most important research subjects in computer vision. Image mosaicing requires the integration of direct techniques and feature-based techniques. Direct techniques are found to be very useful for mosaicing large overlapping regions with small translations and rotations, while feature-based techniques are useful for small overlapping regions. Feature-based image mosaicing is a combination of corner detection, corner matching, motion parameter estimation, and image stitching. Furthermore, image mosaicing is the process of obtaining a wider field-of-view of a scene from a sequence of partial views, which has been an attractive research area because of its wide range of applications, including motion detection, resolution enhancement, monitoring global land usage, and medical imaging. Numerous algorithms for image mosaicing have been proposed over the last two decades. In this paper the authors present a review of different approaches to image mosaicing and the literature of the past few years on image mosaicing methodologies. This review also provides an in-depth survey of existing image mosaicing algorithms by classifying them into several groups. For each group, the fundamental concepts are first clearly explained. Finally, the paper reviews and discusses the strengths and weaknesses of all the mosaicing groups.
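    The motion-parameter-estimation step of a feature-based pipeline can be sketched in its simplest form: given matched corner coordinates from two overlapping images, vote for the translation supported by the most matches. This voting scheme is a minimal stand-in for robust estimators (RANSAC over homographies and the like) that real mosaicing systems use; the function name and tolerance are illustrative assumptions.

    ```python
    from collections import Counter

    def estimate_translation(corners_a, corners_b):
        # Given matched corner coordinates from two overlapping images,
        # vote for the integer (dx, dy) translation supported by the most
        # matches. Outlier matches each cast a lone, losing vote, so the
        # dominant motion wins even with some bad correspondences.
        votes = Counter()
        for (xa, ya), (xb, yb) in zip(corners_a, corners_b):
            votes[(round(xb - xa), round(yb - ya))] += 1
        return votes.most_common(1)[0][0]
    ```

    Once the motion parameters are known, stitching amounts to warping one image into the other's frame and blending the overlap; general mosaics replace the translation model with affine or projective motion.
    
    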

    IMAGE BASED MODELING FROM SPHERICAL PHOTOGRAMMETRY AND STRUCTURE FOR MOTION. THE CASE OF THE TREASURY, NABATEAN ARCHITECTURE IN PETRA

    This research deals with an efficient and low-cost methodology to obtain a metric and photorealistic survey of a complex architecture. Photomodeling is an already tested interactive approach to produce a detailed and quick 3D model reconstruction. Photomodeling goes along with the creation of a rough surface over which oriented images can be back-projected in real time; the model can then be enhanced by checking the coincidence between the surface and the projected texture. The challenge of this research is to combine the advantages of two technologies already set up and used in many projects: spherical photogrammetry (Fangi, 2007, 2008, 2009, 2010) and structure for motion (the Photosynth web service and Bundler + CMVS2 + PMVS2). The input images are taken from the same points of view to form the set of panoramic photos, taking care to use well-suited projections: equirectangular for spherical photogrammetry and rectilinear for the Photosynth web service. The performance of spherical photogrammetry is already known in terms of metric accuracy and acquisition quickness, but time is required in the restitution step because of the manual recognition of homologous points across different panoramas. In Photosynth, by contrast, the restitution is quick and automated: the provided point clouds are useful benchmarks to start the model reconstruction, even if they lack detail and scale. The proposed workflow needs ad-hoc tools to capture high-resolution rectilinear panoramic images and to visualize Photosynth point clouds and camera orientation parameters. All of them are developed in the VVVV programming environment. The 3DStudio Max environment is then chosen because of its performance in terms of interactive modeling, UV mapping parameter handling, and real-time visualization of the projected texture on the model surface.
Experimental results show how it is possible to obtain a 3D photorealistic model using the scale of the spherical photogrammetry restitution to orient web-provided point clouds. Moreover, the proposed research highlights how it is possible to speed up the model reconstruction without losing metric and photometric accuracy. At the same time, using the same panorama dataset, it offers a useful chance to compare the orientations coming from the two mentioned technologies (spherical photogrammetry and structure for motion).
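    The two "well-suited projections" mentioned above are related by a standard mapping: a rectilinear (gnomonic) view is obtained by projecting spherical directions from an equirectangular panorama onto a tangent plane. The sketch below shows that mapping in its textbook form; the function name, radian convention, and pixel scaling by a focal length are assumptions for the example, not the authors' tooling.

    ```python
    import math

    def gnomonic(lon, lat, lon0, lat0, focal):
        # Gnomonic (rectilinear) projection of a spherical direction
        # (lon, lat), in radians, onto the plane tangent to the sphere at
        # the view centre (lon0, lat0); output is scaled by a focal length
        # so the result reads as image-plane coordinates.
        cosc = (math.sin(lat0) * math.sin(lat)
                + math.cos(lat0) * math.cos(lat) * math.cos(lon - lon0))
        x = math.cos(lat) * math.sin(lon - lon0) / cosc
        y = (math.cos(lat0) * math.sin(lat)
             - math.sin(lat0) * math.cos(lat) * math.cos(lon - lon0)) / cosc
        return focal * x, focal * y
    ```

    Directions near the view centre map almost linearly, while angles approaching 90 degrees from the centre diverge, which is why rectilinear crops of a panorama must stay well under a hemisphere.
    
    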