
    Acceleration Techniques for Photo Realistic Computer Generated Integral Images

    The research work presented in this thesis has approached the task of accelerating the generation of photo-realistic integral images produced by integral ray tracing. Ray tracing is a computationally exhaustive algorithm that spawns one or more rays through each pixel of the image into the space containing the scene, and ray tracing integral images consumes more processing time than rendering conventional images. The unique characteristics of the 3D integral camera model have been analysed, and it has been shown that coherency aspects different from those of normal ray tracing can be exploited to accelerate the generation of photo-realistic integral images. The image-space coherence has been analysed, describing the relation between rays and projected shadows in the rendered scene. The shadow cache algorithm has been adapted to minimise shadow intersection tests in integral ray tracing, since shadow intersection tests make up the majority of the intersection tests in ray tracing. Novel pixel-tracing styles are developed specifically for integral ray tracing to improve the image-space coherence and the performance of the shadow cache algorithm. Accelerating the generation of photo-realistic integral images using the image-space coherence information between shadows and rays in integral ray tracing has achieved time savings of up to 41%. It has also been proven that applying the new pixel-tracing styles does not affect the scalability of integral ray tracing running on parallel computers. A novel integral reprojection algorithm has been developed through geometrical analysis of integral image generation in order to use the temporal-spatial coherence information within the integral frames. A new derivation of the integral projection matrix for projecting points through an axial model of a lenticular lens has been established. Rapid generation of 3D photo-realistic integral frames has been achieved at a speed four times faster than normal generation.
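    The shadow cache mentioned above is a well-known coherence optimisation; the following minimal, self-contained Python sketch illustrates the generic idea (a single cached occluder per light and toy sphere geometry, both assumptions for the example), not the thesis implementation or its integral pixel-tracing styles.

```python
# Minimal sketch of the shadow-cache idea used to exploit image-space
# coherence in ray tracing (illustrative only, not the thesis code).
import numpy as np

class Sphere:
    def __init__(self, center, radius):
        self.center = np.asarray(center, dtype=float)
        self.radius = radius

    def intersects(self, origin, direction, max_t):
        # Standard ray/sphere test for a normalised direction: |o + t*d - c|^2 = r^2
        oc = origin - self.center
        b = np.dot(oc, direction)
        c = np.dot(oc, oc) - self.radius ** 2
        disc = b * b - c
        if disc < 0:
            return False
        t = -b - np.sqrt(disc)
        return 1e-6 < t < max_t

class ShadowCache:
    """Per light source, remember the last occluder and test it before the scene."""
    def __init__(self):
        self.last_occluder = {}

    def in_shadow(self, objects, light_id, point, light_pos):
        direction = light_pos - point
        dist = np.linalg.norm(direction)
        direction /= dist
        # Coherent pixel-tracing orders make consecutive shadow rays likely to be
        # blocked by the same object, so the cached test usually succeeds cheaply.
        cached = self.last_occluder.get(light_id)
        if cached is not None and cached.intersects(point, direction, dist):
            return True
        # Fall back to a full traversal and refresh the cache.
        for obj in objects:
            if obj is not cached and obj.intersects(point, direction, dist):
                self.last_occluder[light_id] = obj
                return True
        return False
```

    The benefit of such a cache grows when consecutive shadow rays are traced in an order that keeps them spatially coherent, which is exactly what the pixel-tracing styles described in the abstract aim to improve.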

    Refractive Geometry for Underwater Domes

    Underwater cameras are typically placed behind glass windows to protect them from the water. Spherical glass, a dome port, is well suited to high water pressures at great depth, allows for a large field of view, and avoids refraction if a pinhole camera is positioned exactly at the sphere’s center. Adjusting a real lens perfectly to the dome center is a challenging task, in terms of how to guide the centering process (e.g. visual servoing), how to measure the alignment quality, and how to mechanically perform the alignment. Consequently, such systems are prone to being decentered by some offset, leading to challenging refraction patterns at the sphere that invalidate the pinhole camera model. We show that the overall camera system becomes an axial camera, even for thick domes as used in deep sea exploration, and provide a non-iterative way to compute the center of refraction without requiring knowledge of the exact air, glass or water properties. We also analyze the refractive geometry at the sphere, looking at effects such as forward vs. backward decentering and iso-refraction curves, and obtain a 6th-degree polynomial equation for the forward projection of 3D points in thin domes. We then propose a pure underwater calibration procedure to estimate the decentering from multiple images. This estimate can either be used during adjustment to guide the mechanical positioning of the lens, or be accounted for in photogrammetric underwater applications.
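    For context, the refraction the abstract describes at the dome surface follows the standard vector form of Snell's law. The sketch below is a textbook illustration with assumed refractive indices and an assumed hit point; it is not the paper's axial-camera derivation or its calibration procedure.

```python
# Vector-form Snell refraction at a spherical interface, as occurs where a
# camera ray crosses the dome-port glass.  Indices are assumptions
# (air ~1.0, glass ~1.49, water ~1.33); this is generic optics, not the paper's method.
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n pointing toward
    the incident medium. Returns None on total internal reflection."""
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Example: an obliquely incident ray at a point on the inner glass sphere
# (dome of radius 0.05 m centred at the origin, values chosen for illustration).
sphere_center = np.array([0.0, 0.0, 0.0])
hit_point = np.array([0.0, 0.0, 0.05])
inward_normal = sphere_center - hit_point
inward_normal /= np.linalg.norm(inward_normal)   # points back into the air-filled dome
ray_dir = np.array([0.1, 0.0, 1.0])
ray_dir /= np.linalg.norm(ray_dir)
print(refract(ray_dir, inward_normal, 1.0, 1.49))  # air -> glass refraction
```

    When the pinhole sits exactly at the sphere centre every ray meets the glass at normal incidence and this refraction vanishes, which is why decentering is what breaks the pinhole model.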

    A Study towards Real Time Camera Calibration

    Preliminary Report prepared for the project VISTEO.
    This report provides a detailed study of the problem of real-time camera calibration. This analysis, based on a study of the literature in the area as well as experiments carried out on real and synthetic data, is motivated by the requirements of the VISTEO project. VISTEO deals with the fusion of real images and synthetic environments, objects, etc. in TV video sequences. It thus addresses a challenging and fast-growing area of virtual reality research: augmented reality (AR). AR generates a composite view of the real scene viewed by the user and a virtual scene generated by the computer, which augments the real scene with additional information.
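    As a baseline against which any real-time calibration approach would be compared, the sketch below shows the classical offline chessboard calibration pipeline using OpenCV's standard routines; the image folder and board size are assumptions, and this is not the VISTEO implementation.

```python
# Classical offline camera calibration with OpenCV chessboard detection.
import glob
import cv2
import numpy as np

board = (9, 6)                                   # inner corners per chessboard row/column
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):     # hypothetical image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Estimate intrinsics K and distortion, plus one pose per calibration view.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("intrinsics:\n", K)
```

    A real-time system for AR cannot rely on a dedicated calibration target in every frame, which is what motivates the study of per-frame estimation discussed in the report.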

    Applications in Monocular Computer Vision using Geometry and Learning: Map Merging, 3D Reconstruction and Detection of Geometric Primitives

    As the dream of autonomous vehicles moving around in our world comes closer, the problem of robust localization and mapping is essential to solve. In this inherently structured and geometric problem we also want the agents to learn from experience in a data-driven fashion. How modern neural network models can be combined with Structure from Motion (SfM) is an interesting research question, and this thesis studies related problems in 3D reconstruction, feature detection, SfM and map merging.

    In Paper I we study how a Bayesian Neural Network (BNN) performs in Semantic Scene Completion, where the task is to predict a semantic 3D voxel grid for the field of view of a single RGBD image. We propose an extended task and evaluate the benefits of the BNN when encountering new classes at inference time. It is shown that the BNN outperforms the deterministic baseline.

    Papers II-III are about detection of points, lines and planes defining a Room Layout in an RGB image. Due to the repeated textures and homogeneous colours of indoor surfaces it is not ideal to use only point features for Structure from Motion. The idea is to complement the point features by detecting a Wireframe, a connected set of line segments, which marks the intersection of planes in the Room Layout. Paper II concerns a task for detecting a Semantic Room Wireframe and implements a neural network model utilizing a Graph Convolutional Network module. The experiments show that the method is more flexible than previous Room Layout Estimation methods and performs better than previous Wireframe Parsing methods. Paper III takes the task closer to Room Layout Estimation by detecting a connected set of semantic polygons in an RGB image. The end-to-end trainable model is a combination of a Wireframe Parsing model and a Heterogeneous Graph Neural Network. We show promising results by outperforming state-of-the-art models for Room Layout Estimation using synthetic Wireframe detections. However, the joint Wireframe and Polygon detector requires further research to compete with the state-of-the-art models.

    In Paper IV we propose minimal solvers for SfM with parallel cylinders. The problem may be reduced to estimating circles in 2D, and the paper contributes theory for the two-view relative motion and two-circle relative structure problems. Fast solvers are derived, and experiments show good performance both in simulation and on real data.

    Papers V-VII cover the task of map merging: given a set of individually optimized point clouds with camera poses from an SfM pipeline, how can the solutions be effectively merged without completely re-solving the Structure from Motion problem? Papers V-VI introduce an effective method for merging and show its effectiveness through experiments on real and simulated data. Paper VII considers the matching problem for point clouds and proposes minimal solvers that allow for deformation of each point cloud. Experiments show that the method robustly matches point clouds with drift in the SfM solution.
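    As a point of reference for the map-merging papers, the sketch below shows the simplest classical baseline: aligning two overlapping reconstructions with a similarity transform estimated from point correspondences (the Umeyama method). The thesis's minimal solvers and deformation-aware matching go beyond this; the example data are synthetic.

```python
# Baseline map merging: similarity alignment (Umeyama) of two point clouds
# that share correspondences.  Illustration only, not the thesis's solvers.
import numpy as np

def umeyama(src, dst):
    """Return s, R, t such that s * R @ src_i + t ~= dst_i in a least-squares sense."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                   # dst-src cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                           # avoid a reflection
    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t

# Toy usage: map B is a scaled, rotated, shifted copy of map A.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                           # make it a proper rotation
B = 1.7 * A @ R_true.T + np.array([0.3, -1.0, 2.0])
s, R, t = umeyama(A, B)
print(np.allclose(s * A @ R.T + t, B, atol=1e-6))  # True: maps aligned
```

    This baseline assumes each map is internally rigid; handling drift or deformation within the individual SfM solutions is precisely what Paper VII addresses.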

    Catadioptric stereo-vision system using a spherical mirror

    In the computer vision field, the reconstruction of target surfaces is usually achieved using 3D optical scanners assembled by integrating digital cameras and light emitters. However, these solutions are limited by a narrow field of view, which requires multiple acquisitions from different views to reconstruct complex free-form geometries. The combination of mirrors and lenses (catadioptric systems) can be adopted to overcome this issue. In this work, a stereo catadioptric optical scanner has been developed by assembling two digital cameras, a spherical mirror and a multimedia white-light projector. The adopted configuration defines a non-single-viewpoint system, so a non-central catadioptric camera model has been developed. An analytical solution to compute the projection of a scene point onto the image plane (forward projection) and vice versa (backward projection) is presented. The proposed optical setup allows omnidirectional stereo vision, thus allowing the reconstruction of target surfaces with a single acquisition. Preliminary results, obtained by measuring a hollow specimen, demonstrate the effectiveness of the described approach.
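    To illustrate the kind of backward projection such a non-central setup requires, the sketch below intersects a pixel's viewing ray with an assumed mirror sphere and reflects it. The intrinsics and mirror pose are placeholder values; this is a generic mirror-reflection computation, not the paper's analytical solution.

```python
# Backward projection for a pinhole camera viewing a spherical mirror:
# pixel -> viewing ray -> reflection point on the sphere -> outgoing scene ray.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                  # assumed pinhole intrinsics
sphere_c = np.array([0.0, 0.0, 0.5])             # mirror centre in the camera frame
sphere_r = 0.1                                   # mirror radius [m]

def backward_project(u, v):
    """Pixel (u, v) -> (origin, direction) of the reflected scene ray, or None."""
    # Viewing ray through the pixel, starting at the camera centre (origin).
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    d /= np.linalg.norm(d)
    # Nearest ray/sphere intersection: |t*d - c|^2 = r^2.
    b = -np.dot(d, sphere_c)
    disc = b * b - (np.dot(sphere_c, sphere_c) - sphere_r ** 2)
    if disc < 0:
        return None                              # this pixel does not see the mirror
    t = -b - np.sqrt(disc)
    p = t * d                                    # reflection point on the mirror
    n = (p - sphere_c) / sphere_r                # outward surface normal
    r = d - 2.0 * np.dot(d, n) * n               # mirror reflection of the viewing ray
    return p, r                                  # each pixel gets its own ray origin,
                                                 # hence the camera is non-central

print(backward_project(320.0, 240.0))
```

    Because every pixel's reflected ray starts at a different point on the mirror, the rays do not share a single viewpoint, which is why a non-central model is needed in place of the pinhole model.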