
    Omni-Directional Catadioptric Acquisition System

    An omni-directional catadioptric acquisition system (ODCA system) is provided to address the problem of producing real-time, 360°, stereoscopic video of remote events for virtual reality (VR) viewing. The ODCA system is a video image-capture assembly that includes a cylinder with multiple apertures arranged around its circumference to admit light as the ODCA system rotates about a central axis. Inside the cylinder, mirrors on the left and right sides of each aperture reflect light rays into the cylinder from different angles. As the cylinder rotates, the light rays admitted through the apertures are reflected from these two mirrors to a curved mirror at the center of the cylinder. This curved mirror directs the rays down through a catadioptric lens assembly, which focuses them onto another curved mirror near the bottom of the ODCA system. This second mirror reflects the rays to a set of line-scan image sensors arranged around it. The line-scan image sensors capture the rays for later reproduction as stereoscopic video.
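
    As a rough illustration of the line-scan capture geometry, a short Python sketch follows; the rotation rate and sensor line rate below are assumed values for illustration only, not figures from the patent text. It simply shows that the horizontal angular resolution of a rotating line-scan assembly is the ratio of the two rates.

    # Hedged back-of-envelope sketch (illustrative values only, not from the ODCA text):
    # a rotating line-scan assembly samples one image column per line readout, so the
    # horizontal angular resolution is the rotation rate divided by the line rate.

    ROTATIONS_PER_SECOND = 30    # assumed: one full 360-degree sweep per video frame at 30 fps
    LINE_RATE_HZ = 50_000        # assumed line-scan sensor readout rate (lines per second)

    columns_per_rotation = LINE_RATE_HZ / ROTATIONS_PER_SECOND
    degrees_per_column = 360.0 / columns_per_rotation

    print(f"{columns_per_rotation:.0f} columns per 360-degree sweep")
    print(f"{degrees_per_column:.4f} degrees of horizontal resolution per column")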

    Stereoscopic wide field of view imaging system

    A stereoscopic imaging system incorporates a plurality of imaging devices or cameras to generate a high-resolution, wide-field-of-view image database from which images can be combined in real time to provide wide-field-of-view, panoramic, or omni-directional still or video images.
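
    A minimal Python sketch of the multi-camera panorama idea, using OpenCV's generic Stitcher as an off-the-shelf stand-in rather than the patented system's own pipeline; the file names are placeholders.

    # Minimal panorama-stitching sketch using OpenCV's generic Stitcher
    # (a stand-in for the patented multi-camera pipeline, not its actual implementation).
    import cv2

    # Placeholder file names for frames captured by several overlapping cameras.
    frames = [cv2.imread(name) for name in ("cam0.png", "cam1.png", "cam2.png")]

    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("panorama.png", panorama)
    else:
        print(f"Stitching failed with status {status}")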

    A Variational Wave Acquisition Stereo System for the 3-D Reconstruction of Oceanic Sea States

    We propose a novel remote sensing technique that infers the three-dimensional wave form and radiance of oceanic sea states via a variational stereo imagery formulation. In this setting, the shape and radiance of the wave surface are minimizers of a composite cost functional that combines a data fidelity term and smoothness priors on the unknowns. The solution of a system of coupled partial differential equations derived from the cost functional yields the desired ocean surface shape and radiance. The proposed method is naturally extended to study the spatio-temporal dynamics of ocean waves and is applied to three sets of video data. Statistical and spectral analyses are carried out. The results show that the omni-directional wavenumber spectrum S(k) of the reconstructed waves decays as k^{-2.5}, in agreement with Zakharov's theory (1999). Further, the three-dimensional spectrum of the reconstructed wave surface is exploited to estimate wave dispersion and currents.
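
    A hedged Python sketch (not the authors' code) of how an omni-directional wavenumber spectrum S(k) of this kind can be estimated from a gridded surface-elevation map, by radially binning the 2-D FFT power; the synthetic input and the 1 m grid spacing are placeholders.

    # Sketch: estimate an omni-directional wavenumber spectrum S(k) from a gridded
    # surface-elevation map by radially binning the 2-D FFT power. The random input
    # below is a synthetic stand-in for a reconstructed wave surface.
    import numpy as np

    def omnidirectional_spectrum(eta, dx, n_bins=64):
        """Radially averaged wavenumber spectrum of elevation map eta (m), grid step dx (m)."""
        ny, nx = eta.shape
        power = np.abs(np.fft.fftshift(np.fft.fft2(eta)))**2
        kx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx)) * 2 * np.pi
        ky = np.fft.fftshift(np.fft.fftfreq(ny, d=dx)) * 2 * np.pi
        kxx, kyy = np.meshgrid(kx, ky)
        k = np.hypot(kxx, kyy)                      # radial wavenumber at each FFT bin
        bins = np.linspace(0.0, k.max(), n_bins + 1)
        which = np.digitize(k.ravel(), bins)
        s_k = np.array([power.ravel()[which == i].mean() if np.any(which == i) else 0.0
                        for i in range(1, n_bins + 1)])
        centers = 0.5 * (bins[:-1] + bins[1:])
        return centers, s_k

    eta = np.random.randn(256, 256)                 # synthetic elevation map (placeholder)
    k, s_k = omnidirectional_spectrum(eta, dx=1.0)  # S(k) vs. radial wavenumber k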

    Sensors, SLAM and Long-term Autonomy: A Review

    Simultaneous Localization and Mapping, commonly known as SLAM, has been an active research area in the field of Robotics over the past three decades. For solving the SLAM problem, every robot is equipped with either a single sensor or a combination of similar or different sensors. This paper attempts to review, discuss, evaluate, and compare these sensors. Keeping an eye on the future, this paper also assesses the characteristics of these sensors against factors critical to the long-term autonomy challenge.

    Characterization of Energy and Performance Bottlenecks in an Omni-directional Camera System

    Generating real-world content for VR is challenging in terms of capturing and processing at high resolution and high frame rates. The content needs to represent a truly immersive experience, where the user can look around in a 360-degree view and perceive the depth of the scene. Existing solutions capture on the device and offload the compute load to a server, but offloading large amounts of raw camera data incurs long latencies and poses difficulties for real-time applications. By capturing and computing on the edge, we can closely integrate the subsystems and optimize for low latency. However, moving traditional stitching algorithms to a battery-constrained device requires at least a three-orders-of-magnitude reduction in power. We believe that close integration of the capture and compute stages will lead to reduced overall system power. We approach the problem by building a hardware prototype and characterizing the end-to-end system bottlenecks in power and performance. The prototype has six IMX274 cameras and uses an Nvidia Jetson TX2 development board for capture and computation. We found that capture is bottlenecked by sensor power and data rates across interfaces, whereas compute is limited by the total number of computations per frame. Our characterization shows that redundant capture and redundant computations lead to high power, a large memory footprint, and high latency. Existing systems lack hardware-software co-design, leading to excessive data transfers across interfaces and expensive computations within the individual subsystems. Finally, we propose mechanisms to optimize the system for low power and low latency. We emphasize the importance of co-design of the different subsystems to reduce and reuse data; for example, reusing the motion vectors of the ISP stage reduces the memory footprint of the stereo correspondence stage. Our estimates show that pipelining and parallelization on a custom FPGA can achieve real-time stitching.
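
    The bandwidth pressure described above can be illustrated with a hedged back-of-envelope Python calculation; the resolution, frame rate, and bit depth below are assumed values for a 4K-class sensor, not figures reported in the thesis.

    # Hedged back-of-envelope estimate of raw capture bandwidth for a multi-camera
    # 360-degree rig. Resolution, frame rate, and bit depth are assumed values for
    # a 4K-class sensor such as the IMX274, not numbers taken from the thesis.

    NUM_CAMERAS = 6
    WIDTH, HEIGHT = 3840, 2160      # assumed output resolution per camera
    FPS = 30                        # assumed capture frame rate
    BITS_PER_PIXEL = 10             # assumed raw Bayer bit depth

    bits_per_second = NUM_CAMERAS * WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
    print(f"Raw capture bandwidth: {bits_per_second / 1e9:.1f} Gbit/s")
    # Roughly 15 Gbit/s of raw data before any stitching, which is why offloading
    # unprocessed camera feeds to a server adds significant latency and power cost.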

    Visual Distortions in 360-degree Videos

    Omnidirectional (or 360°) images and videos are emerging signals used in many areas, such as robotics and virtual/augmented reality. In particular, for virtual reality applications, they allow an immersive experience in which the user can interactively navigate through a scene with three degrees of freedom, wearing a head-mounted display. Current approaches for capturing, processing, delivering, and displaying 360° content, however, present many open technical challenges and introduce several types of distortions in the visual signal. Some of these distortions are specific to the nature of 360° images and often differ from those encountered in classical visual communication frameworks. This paper provides a first comprehensive review of the most common visual distortions that alter 360° signals as they pass through the different processing elements of the visual communication pipeline. While their impact on viewers' visual perception and the immersive experience at large is still unknown, and thus an open research topic, this review serves the purpose of proposing a taxonomy of the visual distortions that can be encountered in 360° signals. Their underlying causes in the end-to-end 360° content distribution pipeline are identified. This taxonomy is essential as a basis for comparing different processing techniques, such as visual enhancement, encoding, and streaming strategies, and for allowing the effective design of new algorithms and applications. It is also a useful resource for the design of psycho-visual studies aiming to characterize human perception of 360° content in interactive and immersive applications.
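
    One concrete example of a distortion specific to 360° content is the non-uniform sampling of the equirectangular projection commonly used to store omnidirectional video; the short Python sketch below (not taken from the paper, frame height assumed) shows how the effective horizontal sampling density grows toward the poles.

    # Sketch (not from the paper): relative horizontal oversampling in an
    # equirectangular 360-degree frame. Each pixel row spans the same longitude
    # range, but the physical arc length shrinks by cos(latitude), so rows near
    # the poles are heavily oversampled relative to the equator.
    import math

    HEIGHT = 1080  # assumed equirectangular frame height in pixels

    for row in (540, 270, 100, 10):                      # from the equator toward a pole
        latitude = (0.5 - row / HEIGHT) * math.pi        # radians, 0 at the equator
        oversampling = 1.0 / max(math.cos(latitude), 1e-9)
        print(f"row {row:4d}: latitude {math.degrees(latitude):6.1f} deg, "
              f"oversampling x{oversampling:.1f}")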

    Localization of a mobile autonomous robot based on image analysis

    This paper introduces an innovative method to solve the problem of self-localization of a mobile autonomous robot; in particular, a case study is carried out for robot localization in a RoboCup field environment. The approach described here is completely different from other methods currently used in RoboCup, since it is based only on the use of images and does not involve techniques such as Monte Carlo or other probabilistic approaches. The method is simple, acceptably efficient for the purpose for which it was created, and requires relatively little computation time. Fundação para a Ciência e a Tecnologia (FCT) - POSI/ROBO/43892/200

    Variational Stereo Imaging of Oceanic Waves with Statistical Constraints

    An image-processing observational technique for the stereoscopic reconstruction of the wave form of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired wave form is obtained as the minimizer of a cost functional that combines image observations, smoothness priors, and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated, in combination with the other elements presented, on the reconstruction of experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.
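
    A hedged LaTeX sketch of the kind of composite cost functional described above: the surface Z is sought as the minimizer of a data fidelity term, a smoothness prior, and a weak statistical penalty. All symbols are illustrative; the exact terms, norms, and weights used in the paper may differ.

    % Hedged sketch of a composite cost functional of the type described above;
    % the exact terms, norms, and weights used in the paper may differ.
    E(Z) = \int_{\Omega} \rho\big( I_L(\mathbf{x}) - I_R( w(\mathbf{x}; Z) ) \big)\, d\mathbf{x}
         + \alpha \int_{\Omega} \lVert \nabla Z(\mathbf{x}) \rVert^{2} \, d\mathbf{x}
         + \beta \, d\big( p_{Z}, \, p_{\mathrm{ref}} \big)

    Here I_L and I_R denote the stereo image pair, w(.; Z) the warp induced by the candidate surface Z, \rho a robust photometric penalty, and d a discrepancy between the empirical elevation statistics p_Z and the prescribed quasi-Gaussian law p_ref; \alpha and \beta weight the smoothness prior and the weak statistical constraint.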