
    Advances in 3D reconstruction

    The thesis tackles the problem of 3D reconstruction of scenes from unstructured photo collections. The state of the art is advanced on several fronts: the first contribution is a robust formulation of the structure-and-motion problem based on a hierarchical approach, as opposed to the sequential one prevalent in the literature. This methodology reduces the overall computational cost by an order of magnitude, is inherently parallelizable, minimizes the error accumulation that causes drift, and eliminates the crucial dependence on the choice of the initial pair of views common to all competing approaches. A second contribution is a novel self-calibration procedure, very robust and tailored to the structure-and-motion task. The proposed solution is a closed-form procedure for recovering the plane at infinity given a rough estimate of the focal parameters of at least two cameras. This method is employed for an exhaustive search over the internal parameters, whose search space is inherently bounded by the finiteness of acquisition devices. Finally, we investigated how to visualize the obtained reconstructions efficiently and compellingly: to this end, several algorithms for the computation of stereo disparity are presented. Together with procedures for the automatic extraction of support planes, they are used to obtain a faithful, compact and semantically meaningful representation of the scene as a collection of textured planes, optionally augmented by depth information encoded in relief maps. Every result has been verified by a rigorous experimental validation, comprising both qualitative and quantitative comparisons.
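
    To make the hierarchical formulation concrete, the following is a minimal, hypothetical Python sketch of dendrogram-driven structure and motion: views are clustered agglomeratively by pairwise affinity (e.g. verified feature-match counts) and partial reconstructions are merged bottom-up along the tree, so independent subtrees can be processed in parallel. The affinity measure and the merge step are placeholder assumptions, not the thesis' actual algorithm; a real pipeline would align partial models with a similarity transform and bundle-adjust at every merge.

        # Minimal sketch of hierarchical (dendrogram-driven) structure and motion,
        # as opposed to sequential/incremental SfM. Here a "reconstruction" is just
        # the set of views it covers; the affinity measure and the merge step are
        # illustrative placeholders, not the thesis' exact algorithm.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, to_tree

        def build_view_tree(affinity):
            """Cluster views agglomeratively from a pairwise affinity matrix
            (e.g. number of verified feature matches between image pairs)."""
            dist = 1.0 / (1.0 + affinity)          # higher affinity -> smaller distance
            iu = np.triu_indices_from(dist, k=1)   # condensed upper-triangular form
            return to_tree(linkage(dist[iu], method="average"))

        def reconstruct(node):
            """Recursively reconstruct along the dendrogram: leaves are single views,
            internal nodes merge two partial models. Independent subtrees could be
            processed in parallel."""
            if node.is_leaf():
                return {node.id}                   # stub: a one-view "model"
            left = reconstruct(node.get_left())
            right = reconstruct(node.get_right())
            # Stub merge: in practice, register the two partial reconstructions
            # (similarity transform from common points) and bundle-adjust the result.
            return left | right

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            A = rng.integers(0, 500, size=(6, 6)).astype(float)
            A = (A + A.T) / 2                      # symmetric match counts
            model = reconstruct(build_view_tree(A))
            print("views covered by the final model:", sorted(model))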

    Statistical Models and Optimization Algorithms for High-Dimensional Computer Vision Problems

    Data-driven and computational approaches are showing significant promise in solving several challenging problems in various fields such as bioinformatics, finance and many branches of engineering. In this dissertation, we explore the potential of these approaches, specifically statistical data models and optimization algorithms, for solving several challenging problems in computer vision. In doing so, we contribute to the literatures of both statistical data models and computer vision. In the context of statistical data models, we propose principled approaches for solving robust regression problems, both linear and kernel, and the missing-data matrix factorization problem. In computer vision, we propose statistically optimal and efficient algorithms for solving the remote face recognition and structure from motion (SfM) problems. The goal of robust regression is to estimate the functional relation between two variables from a given data set which might be contaminated with outliers. Under the reasonable assumption that there are fewer outliers than inliers in a data set, we formulate the robust linear regression problem as a sparse learning problem, which can be solved using efficient polynomial-time algorithms. We also provide sufficient conditions under which the proposed algorithms correctly solve the robust regression problem. We then extend our robust formulation to the case of kernel regression, specifically to propose a robust version of relevance vector machine (RVM) regression. Matrix factorization is used for finding a low-dimensional representation of data embedded in a high-dimensional space. Singular value decomposition is the standard algorithm for solving this problem. However, when the matrix has many missing elements, this is a hard problem to solve. We formulate the missing-data matrix factorization problem as a low-rank semidefinite programming problem (essentially a rank-constrained SDP), which allows us to find accurate and efficient solutions for large-scale factorization problems. Face recognition from remotely acquired images is a challenging problem because of variations due to blur and illumination. Using the convolution model for blur, we show that the set of all images obtained by blurring a given image forms a convex set. We then use convex optimization techniques to compute the distances between a given blurred (probe) image and the gallery images and find the best match. Further, using a low-dimensional linear subspace model for illumination variations, we extend our theory in a similar fashion to recognize blurred and poorly illuminated faces. Bundle adjustment is the final optimization step of the SfM problem, where the goal is to obtain the 3-D structure of the observed scene and the camera parameters from multiple images of the scene. The traditional bundle adjustment algorithm, based on minimizing the l_2 norm of the image re-projection error, has cubic complexity in the number of unknowns. We propose an algorithm, based on minimizing the l_infinity norm of the re-projection error, that has quadratic complexity in the number of unknowns. This is achieved by reducing the large-scale optimization problem into many small-scale sub-problems, each of which can be solved using second-order cone programming.
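
    As an illustration of the sparse-learning view of robust regression described above, the sketch below models the observations as y = X·b + e + noise with a sparse outlier vector e, and alternates a least-squares update of b with soft-thresholding of e. The alternating solver and the parameter names are assumptions for illustration, not the dissertation's polynomial-time algorithm or its recovery conditions.

        # Minimal sketch of robust linear regression posed as a sparse-learning
        # problem: y = X @ b + e + noise, where the outlier vector e is assumed
        # sparse (fewer outliers than inliers). The alternating scheme below
        # (least squares for b, soft-thresholding for e) is only illustrative.
        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def robust_regression(X, y, lam=1.0, iters=100):
            """Minimize ||y - X b - e||^2 + lam * ||e||_1 over (b, e)."""
            e = np.zeros_like(y)
            for _ in range(iters):
                b, *_ = np.linalg.lstsq(X, y - e, rcond=None)   # update coefficients
                e = soft_threshold(y - X @ b, lam / 2.0)        # update sparse outliers
            return b, e

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            X = rng.normal(size=(200, 3))
            beta_true = np.array([2.0, -1.0, 0.5])
            y = X @ beta_true + 0.05 * rng.normal(size=200)
            y[:10] += 10.0                                      # gross outliers
            b, e = robust_regression(X, y, lam=1.0)
            print("estimated coefficients:", np.round(b, 2))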

    Uncertainty Minimization in Robotic 3D Mapping Systems Operating in Dynamic Large-Scale Environments

    This dissertation research is motivated by the potential and promise of 3D sensing technologies in safety and security applications. With specific focus on unmanned robotic mapping to aid clean-up of hazardous environments, under-vehicle inspection, automatic runway/pavement inspection and modeling of urban environments, we develop modular, multi-sensor, multi-modality robotic 3D imaging prototypes using localization/navigation hardware, laser range scanners and video cameras. While deploying our multi-modality complementary approach to pose and structure recovery in dynamic real-world operating conditions, we observe several data fusion issues that state-of-the-art methodologies are not able to handle. Different bounds on the noise models of heterogeneous sensors, the dynamism of the operating conditions and the interaction of the sensing mechanisms with the environment introduce situations where sensors can intermittently degenerate to accuracy levels lower than their design specification. This observation necessitates the derivation of methods to integrate multi-sensor data considering sensor conflict, performance degradation and potential failure during operation. This dissertation contributes to the data fusion literature a fault-diagnosis framework inspired by information complexity theory. We implement the framework as opportunistic sensing intelligence that evolves a belief policy over the sensors within the multi-agent 3D mapping systems, so that they can survive and counter sensor failure in challenging operating conditions. The implementation of the information-theoretic framework, in addition to eliminating failed/non-functional sensors and avoiding catastrophic fusion, is able to minimize uncertainty during autonomous operation by adaptively deciding to fuse or to choose believable sensors. We demonstrate our framework through experiments in multi-sensor robot state localization in large-scale dynamic environments and vision-based 3D inference. Our modular hardware and software design of robotic imaging prototypes, along with the opportunistic sensing intelligence, provides significant improvements towards autonomous, accurate, photo-realistic 3D mapping and remote visualization of scenes for the motivating applications.
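
    The following toy Python sketch is only a loose illustration of the idea of evolving a belief policy over sensors; it is not the information-theoretic fault-diagnosis framework described above. Each sensor's belief weight decays when its reading diverges from the belief-weighted consensus, and sensors whose belief falls below a threshold are excluded from fusion.

        # Illustrative sketch (not the dissertation's framework): keep a belief
        # weight per sensor, erode it when a sensor disagrees with the consensus,
        # and fuse only the sensors that are still believable.
        import numpy as np

        class BeliefFusion:
            def __init__(self, n_sensors, decay=0.5, drop_below=0.05):
                self.belief = np.ones(n_sensors) / n_sensors
                self.decay = decay            # how strongly disagreement erodes belief
                self.drop_below = drop_below  # belief threshold for exclusion

            def fuse(self, readings):
                readings = np.asarray(readings, dtype=float)
                consensus = np.dot(self.belief, readings) / self.belief.sum()
                err = (readings - consensus) ** 2
                # Penalize sensors proportionally to their squared disagreement.
                self.belief *= np.exp(-self.decay * err / (err.mean() + 1e-9))
                self.belief /= self.belief.sum()
                usable = self.belief > self.drop_below   # "believable" sensors only
                w = self.belief * usable
                return float(np.dot(w, readings) / w.sum())

        if __name__ == "__main__":
            fusion = BeliefFusion(n_sensors=3)
            for truth in np.linspace(0.0, 1.0, 20):
                obs = [truth + 0.01, truth - 0.02, truth + 2.0]  # sensor 3 has failed
                est = fusion.fuse(obs)
            print("final beliefs:", np.round(fusion.belief, 3))
            print("last fused estimate:", round(est, 3))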

    Image restoration from noisy and limited measurements with applications in 3D imaging

    The recovery of image data from noisy and limited measurements is an important problem in image processing with many practical applications. Despite great improvements in imaging devices over the past few years, the need for a fast and robust recovery method is still essential, especially in fields such as medical imaging or remote sensing. These methods are also important for new imaging modalities where the quality of data is still limited by the current state of technology. This thesis investigates novel methods to recover signals and images from noisy or sparse measurements, in new imaging modalities, for practical 3D imaging applications. In particular, the following problems are considered. First, the Tree-based Orthogonal Matching Pursuit (TOMP) algorithm is proposed to recover sparse signals with tree structure. This is an improvement over the Orthogonal Matching Pursuit method through the incorporation of a sparse-tree prior on the data. A theoretical condition on the recovery performance as well as a detailed complexity analysis are derived. Extensive experiments are carried out to compare the proposed method with other state-of-the-art algorithms. Second, a new point-cloud registration method is investigated and applied to 3D model reconstruction with a depth camera, which is a recently introduced device with many potential applications in 3D imaging and human-machine interaction. Currently, the depth camera is limited in resolution and suffers from complex types of noise. In the proposed method, the Implicit Moving Least Squares (IMLS) approach is employed to derive a more robust registration method that can deal with noisy point clouds. Given a good registration, information from multiple depth images can be integrated to help reduce the effects of noise and possibly increase the resolution. This method is essential to bring commodity depth cameras to new applications that demand accurate depth information. Third, a hybrid system which consists of a light-field camera and a depth camera rigidly attached together is proposed. The system can be applied for digital refocusing on an arbitrary surface and for recovering complex reflectance information of a surface. The light-field camera is a device that can sample the 4D spatio-angular light field and allows one to refocus the captured image digitally. Given light-field information, it is possible to rearrange the light rays appropriately to render novel views or to generate refocused photographs. In theory, it is possible to estimate the depth map from a light field. However, there is a trade-off between angular and spatial resolution in current designs of light-field cameras, which leads to low quality and resolution of the estimated depth map. Moreover, for advanced 3D imaging applications, it is important to have good-quality geometric and radiometric information. Thus, a depth camera is attached to the light-field camera to achieve this goal. The calibration of the system is presented in detail. The proposed system is demonstrated by creating a refocused image on an arbitrary surface. However, we believe that the proposed system has great potential in more advanced imaging applications.
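
    For context on the first contribution, the sketch below implements the standard Orthogonal Matching Pursuit baseline that TOMP builds on; the tree-structured prior on the support (which TOMP exploits when selecting candidate atoms) is deliberately omitted, so this is background material rather than the proposed algorithm.

        # Minimal Orthogonal Matching Pursuit (OMP) sketch: the standard baseline
        # that TOMP extends with a sparse-tree prior on the support. The
        # tree-structured selection rule itself is omitted here.
        import numpy as np

        def omp(A, y, k):
            """Greedy recovery of a k-sparse x with y = A @ x (approximately)."""
            residual = y.copy()
            support = []
            x = np.zeros(A.shape[1])
            for _ in range(k):
                # Pick the column most correlated with the current residual.
                j = int(np.argmax(np.abs(A.T @ residual)))
                if j not in support:
                    support.append(j)
                # Re-fit all selected coefficients jointly (the "orthogonal" step).
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                x[:] = 0.0
                x[support] = coef
                residual = y - A @ x
            return x

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            A = rng.normal(size=(64, 256))
            A /= np.linalg.norm(A, axis=0)          # unit-norm dictionary columns
            x_true = np.zeros(256)
            x_true[[3, 40, 199]] = [1.5, -2.0, 0.7]
            y = A @ x_true
            x_hat = omp(A, y, k=3)
            print("recovered support:", np.flatnonzero(x_hat))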