    Plenoptic Signal Processing for Robust Vision in Field Robotics

    This thesis proposes the use of plenoptic cameras for improving the robustness and simplicity of machine vision in field robotics applications. Dust, rain, fog, snow, murky water and insufficient light can cause even the most sophisticated vision systems to fail. Plenoptic cameras offer an appealing alternative to conventional imagery by gathering significantly more light over a wider depth of field, and capturing a rich 4D light field structure that encodes textural and geometric information. The key contributions of this work lie in exploring the properties of plenoptic signals and developing algorithms for exploiting them. It lays the groundwork for the deployment of plenoptic cameras in field robotics by establishing a decoding, calibration and rectification scheme appropriate to compact, lenslet-based devices. Next, the frequency-domain shape of plenoptic signals is elaborated and exploited by constructing a filter which focuses over a wide depth of field rather than at a single depth. This filter is shown to reject noise, improving contrast in low light and through attenuating media, while mitigating occluders such as snow, rain and underwater particulate matter. Next, a closed-form generalization of optical flow is presented which directly estimates camera motion from first-order derivatives. An elegant adaptation of this "plenoptic flow" to lenslet-based imagery is demonstrated, as well as a simple, additive method for rendering novel views. Finally, the isolation of dynamic elements from a static background is considered, a task complicated by the non-uniform apparent motion caused by a mobile camera. Two elegant closed-form solutions are presented dealing with monocular time-series and light field image pairs. This work emphasizes non-iterative, noise-tolerant, closed-form, linear methods with predictable and constant runtimes, making them suitable for real-time embedded implementation in field robotics applications.
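    The closed-form "plenoptic flow" estimate described above has the familiar least-squares structure of derivative-based motion estimation: every light-field sample contributes one linear constraint on the camera velocity. The Python sketch below illustrates only that structure; the coefficient rows `J`, whose exact form depends on the light-field parameterisation, are not reproduced from the thesis and are replaced by synthetic data.
```python
import numpy as np

# Each light-field sample contributes one linear constraint  j . v + L_t = 0
# on the 6-DOF camera velocity v, where j is a row of derivative-based
# coefficients and L_t is the temporal derivative at that sample.
# Synthetic values stand in for real light-field derivatives here.
rng = np.random.default_rng(0)
n_samples = 10_000
v_true = np.array([0.1, -0.05, 0.2, 0.01, -0.02, 0.005])   # vx, vy, vz, wx, wy, wz

J = rng.normal(size=(n_samples, 6))                         # coefficient rows (placeholder)
L_t = -J @ v_true + 0.01 * rng.normal(size=n_samples)       # noisy temporal derivatives

# Closed-form solve: one pass over the data, no iteration.
v_est, *_ = np.linalg.lstsq(J, -L_t, rcond=None)
print(v_est)
```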

    Gait recognition in the wild using shadow silhouettes

    Gait recognition systems allow identification of users relying on features acquired from their body movement while walking. This paper discusses the main factors affecting the gait features that can be acquired from a 2D video sequence, proposing a taxonomy to classify them across four dimensions. It also explores the possibility of obtaining users’ gait features from their shadow silhouettes by proposing a novel gait recognition system. The system includes novel methods for: (i) shadow segmentation, (ii) walking direction identification, and (iii) shadow silhouette rectification. The shadow segmentation is performed by fitting a line through the feet positions of the user obtained from the gait texture image (GTI). The direction of the fitted line is then used to identify the walking direction of the user. Finally, the shadow silhouettes thus obtained are rectified to compensate for the distortions and deformations resulting from the acquisition setup, using the proposed four-point correspondence method. The paper additionally presents a new database, consisting of 21 users moving along two walking directions, to test the proposed gait recognition system. Results show that the performance of the proposed system is equivalent to the state-of-the-art in a constrained setting, while it continues to perform well in the wild, where most state-of-the-art methods fail. The results also highlight the advantages of using rectified shadow silhouettes over body silhouettes under certain conditions.
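    The four-point correspondence rectification mentioned above amounts to a planar homography warp. A minimal sketch of that general idea follows, assuming OpenCV; all coordinates are placeholders, and the paper's actual point selection and rectification details are not reproduced.
```python
import cv2
import numpy as np

# Planar rectification from four point correspondences: map the (distorted)
# shadow plane to a fronto-parallel view. Placeholder coordinates only.
src_pts = np.float32([[102, 410], [384, 396], [420, 540], [88, 552]])   # shadow quad in the image
dst_pts = np.float32([[0, 0], [320, 0], [320, 160], [0, 160]])          # rectified rectangle

H = cv2.getPerspectiveTransform(src_pts, dst_pts)

# Synthetic stand-in for a binary shadow silhouette.
shadow_silhouette = np.zeros((600, 480), dtype=np.uint8)
cv2.fillPoly(shadow_silhouette, [src_pts.astype(np.int32)], 255)

rectified = cv2.warpPerspective(shadow_silhouette, H, (320, 160))
```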

    Quintessence in the Weyl-Gauss-Bonnet Model

    Quintessence models have been widely examined in the context of scalar-Gauss-Bonnet gravity, a subclass of Horndeski's theory, and were proposed as viable candidates for Dark Energy. However, the relatively recent observational constraints on the speed of gravitational waves $c_{\textrm{GW}}$ have resulted in many of those models being ruled out because they generally predict $c_{\textrm{GW}} \neq c$. While these were formulated in the metric formalism of gravity, it was later found that some Horndeski models could be rescued in the Palatini formalism, where the connection is independent of the metric and the underlying geometry no longer corresponds to the pseudo-Riemannian one. Motivated by this and by the relation between scalar-Gauss-Bonnet gravity and Horndeski's theory, we put forward a new quintessence model with the scalar-Gauss-Bonnet action but in Weyl geometry. We find the fixed points of the dynamical system under some assumptions and determine their stability via linear analysis. Although the past evolution of the Universe as we know it is correctly reproduced, the constraints on $c_{\textrm{GW}}$ are shown to be grossly violated for the coupling function under consideration. The case of $c_{\textrm{GW}} = c$ is also considered, but no evolution consistent with other cosmological observations is obtained. Comment: 36 pages, 2 figures.
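    The fixed-point and linear-stability analysis mentioned in the abstract follows a standard recipe: set the right-hand sides of the autonomous system to zero, then examine the eigenvalues of the Jacobian at each solution. Below is a generic sketch of that recipe on a toy two-variable system (not the paper's cosmological equations), using sympy.
```python
import sympy as sp

# Toy autonomous system  x' = f(x, y),  y' = g(x, y)  standing in for the
# actual Weyl-Gauss-Bonnet equations, which are not reproduced here.
x, y = sp.symbols("x y", real=True)
f = x * (1 - x**2 - y**2)
g = y * (1 - x**2 - y**2) - x * y

fixed_points = sp.solve([f, g], [x, y], dict=True)   # zeros of the system
J = sp.Matrix([f, g]).jacobian([x, y])               # Jacobian for linearisation

for fp in fixed_points:
    eigs = J.subs(fp).eigenvals()
    stable = all(sp.re(ev) < 0 for ev in eigs)       # all eigenvalues in the left half-plane
    print(fp, eigs, "stable" if stable else "not stable")
```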

    Plenoptic Modeling and Rendering of Complex Rigid Scenes

    Image-Based Rendering is the task of generating novel views from existing images. In this thesis, different new methods to solve this problem are presented. These methods are designed to fulfil specific goals such as scalability and interactive rendering performance. First, the theory of the Plenoptic Function is introduced as the mathematical foundation of image formation. Then a new taxonomy is introduced to categorise existing methods, and an extensive overview of known approaches is given. This is followed by a detailed analysis of the design goals and the requirements with regard to input data. It is concluded that, for perspectively correct image generation from sparse spatial sampling, geometry information about the scene is necessary. This leads to the design of three different Image-Based Rendering methods. The rendering results are analysed on different data sets. For this analysis, error metrics are defined to evaluate different aspects.
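    The thesis's own error metrics are not given in this abstract; as a stand-in, a common image-space metric such as PSNR illustrates the kind of rendered-versus-reference comparison involved.
```python
import numpy as np

def psnr(rendered: np.ndarray, reference: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a rendered novel view and a reference image.

    A standard image-error metric used here purely as an example; the thesis
    defines its own metrics, which are not reproduced in this sketch.
    """
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak**2 / mse)
```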

    Image-Based Rendering Of Real Environments For Virtual Reality

    Reconstruction of the surface of the Sun from stereoscopic images

    Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.

    Multiple View Geometry For Video Analysis And Post-production

    Multiple view geometry is the foundation of an important class of computer vision techniques for the simultaneous recovery of camera motion and scene structure from a set of images. There are numerous important applications in this area, including video post-production, scene reconstruction, registration, surveillance, tracking, and segmentation. In video post-production, the topic addressed in this dissertation, computer analysis of the camera's motion can replace the manual methods currently used to correctly align an artificially inserted object in a scene. However, existing single view methods typically require multiple vanishing points, and therefore fail when only one vanishing point is available. In addition, current multiple view techniques, which make use of either epipolar geometry or the trifocal tensor, do not fully exploit the properties of constant or known camera motion. Finally, there is no general solution to the problem of synchronizing N video sequences of distinct general scenes captured by cameras undergoing similar ego-motions, a necessary step for video post-production across different input videos. This dissertation proposes several advancements that overcome these limitations, and uses them to develop an efficient framework for video analysis and post-production with multiple cameras. In the first part of the dissertation, novel inter-image constraints are introduced that are particularly useful for scenes where minimal information is available. This result extends the current state-of-the-art in single view geometry to situations where only one vanishing point is available. The property of constant or known camera motion is also exploited in this dissertation for applications such as the calibration of a network of cameras in video surveillance systems and Euclidean reconstruction from turn-table image sequences in the presence of zoom and focus. We then propose a new framework for the estimation and alignment of camera motions, including both simple (panning, tracking and zooming) and complex (e.g. hand-held) camera motions. The accuracy of these results is demonstrated by applying our approach to video post-production applications such as video cut-and-paste and shadow synthesis. As realistic image-based rendering problems, these applications require extreme accuracy in the estimation of camera geometry, the position and orientation of the light source, and the photometric properties of the resulting cast shadows. In each case, the theoretical results are fully supported and illustrated by both numerical simulations and thorough experimentation on real data.
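    For context, the two-view epipolar constraint that the dissertation builds on can be sketched as follows with OpenCV's standard fundamental-matrix estimator; the synthetic cameras and points are placeholders for real feature matches, and this is the textbook baseline rather than the dissertation's extended constraints.
```python
import cv2
import numpy as np

# Estimate the fundamental matrix F so that x2^T F x1 = 0 for matched points.
rng = np.random.default_rng(1)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])       # assumed intrinsics

X = np.column_stack([rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50), rng.uniform(4, 8, 50)])
R, _ = cv2.Rodrigues(np.array([[0.0], [0.1], [0.02]]))            # small synthetic rotation
t = np.array([0.3, 0.0, 0.05])                                    # synthetic translation

x1 = (K @ X.T).T
x1 = (x1[:, :2] / x1[:, 2:]).astype(np.float32)                   # view-1 projections
x2 = (K @ (R @ X.T + t[:, None])).T
x2 = (x2[:, :2] / x2[:, 2:]).astype(np.float32)                   # view-2 projections

F, mask = cv2.findFundamentalMat(x1, x2, cv2.FM_8POINT)
print(F)
```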

    Single View Modeling and View Synthesis

    This thesis develops new algorithms to produce 3D content from a single camera. Today, amateurs can use hand-held camcorders to capture and display the 3D world in 2D using mature technologies; however, there is a strong desire to record and re-explore the 3D world in 3D. Current approaches to this goal usually rely on a camera array, which suffers from tedious setup and calibration processes as well as a lack of portability, limiting its application to lab experiments. In this thesis, I produce 3D content using a single camera, making the process as simple as shooting pictures. It requires a new front-end capture device rather than a regular camcorder, as well as more sophisticated algorithms. First, in order to capture highly detailed object surfaces, I designed and developed a depth camera based on a novel technique called light fall-off stereo (LFS). The LFS depth camera outputs color+depth image sequences at 30 fps, which is necessary for capturing dynamic scenes. Based on the output color+depth images, I developed a new approach that builds 3D models of dynamic and deformable objects. While the camera can only capture part of a whole object at any instant, partial surfaces are assembled into a complete 3D model by a novel warping algorithm. Inspired by the success of single view 3D modeling, I extended my exploration into 2D-to-3D video conversion that does not rely on a depth camera. I developed a semi-automatic system that converts monocular videos into stereoscopic videos via view synthesis. It combines motion analysis with user interaction, aiming to transfer as much of the depth-inference work as possible from the user to the computer. I developed two new methods that analyze the optical flow to provide additional qualitative depth constraints. The automatically extracted depth information is presented in the user interface to assist with user labeling. In summary, the algorithms developed in this thesis build high-fidelity 3D models of dynamic and deformable objects when depth maps are provided, and otherwise turn monocular video clips into stereoscopic videos.
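    The idea of drawing qualitative depth cues from optical flow can be sketched generically: for a largely translating camera, nearer scene points show larger apparent motion, so flow magnitude gives an ordinal depth hint. The sketch below uses OpenCV's Farnebäck flow on synthetic frames; it illustrates only this generic cue, not the thesis's two methods.
```python
import cv2
import numpy as np

# Synthetic stand-ins for two consecutive video frames.
rng = np.random.default_rng(0)
prev = rng.uniform(0, 255, size=(240, 320)).astype(np.uint8)
curr = np.roll(prev, shift=3, axis=1)        # fake 3-pixel horizontal motion

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
magnitude = np.linalg.norm(flow, axis=2)

# Larger motion -> assumed closer; normalise to an 8-bit pseudo-depth map.
pseudo_depth = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```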