
    Mosaic-Based Panoramic Depth Imaging with a Single Standard Camera

    In this article we present a panoramic depth imaging system. The system is mosaic-based, which means that we use a single rotating camera and assemble the captured images into a mosaic. Because the camera’s optical center is offset from the rotational center of the system, we are able to capture the motion parallax effect, which enables stereo reconstruction. The camera rotates on a circular path with a step defined by an angle equivalent to one column of the captured image. The equation for depth estimation can be easily extracted from the system geometry. To find the corresponding points on a stereo pair of panoramic images, the epipolar geometry needs to be determined. It can be shown that the epipolar geometry is very simple if we perform the reconstruction on a symmetric pair of stereo panoramic images. We get a symmetric pair of stereo panoramic images when we take symmetric columns on the left and on the right side of the captured image’s center column. Epipolar lines of the symmetric pair of panoramic images are image rows. We focused mainly on the system analysis. Results of the stereo reconstruction procedure and the quality evaluation of the generated depth images are quite promising. The system performs well in the reconstruction of small indoor spaces. Our final goal is to develop a system for automatic navigation of a mobile robot in a room.
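    The depth equation is not spelled out in the abstract; the following minimal Python sketch shows one triangulation consistent with the geometry described (law of sines on the triangle formed by the rotation center, the optical center, and the scene point). The symbols r, theta0, and the disparity convention are our assumptions, not notation from the paper.

        import math

        def depth_from_symmetric_pair(r, theta0, disparity):
            """Distance of a scene point from the rotation center,
            reconstructed from a symmetric pair of stereo panoramas.

            Law of sines on the triangle (rotation center, optical
            center, scene point) gives:

                d = r * sin(theta0) / sin(theta0 - disparity / 2)

            r         -- offset of the optical center from the rotation axis
            theta0    -- angle between the chosen column's ray and the
                         optical axis (the symmetric columns sit at +/- theta0)
            disparity -- rotation angle between the two sightings of the
                         point (column difference times the per-column step)
            """
            theta = disparity / 2.0
            # theta = 0 puts the point on the camera circle (d = r);
            # theta approaching theta0 pushes it toward infinity.
            if not 0.0 <= theta < theta0:
                raise ValueError("disparity outside the reconstructible range")
            return r * math.sin(theta0) / math.sin(theta0 - theta)

        # Example: 30 cm arm, columns 15 degrees off-center, 28 degrees of
        # disparity -> roughly 4.4 m, a plausible indoor distance.
        print(depth_from_symmetric_pair(0.3, math.radians(15), math.radians(28)))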

    Capturing Panoramic Depth Images with a Single Standard Camera

    In this paper we present a panoramic depth imaging system. The system is mosaic-based, which means that we use a single rotating camera and assemble the captured images into a mosaic. Because the camera’s optical center is offset from the rotational center of the system, we are able to capture the motion parallax effect, which enables stereo reconstruction. The camera rotates on a circular path with a step defined by an angle equivalent to one column of the captured image. The equation for depth estimation can be easily extracted from the system geometry. To find the corresponding points on a stereo pair of panoramic images, the epipolar geometry needs to be determined. It can be shown that the epipolar geometry is very simple if we perform the reconstruction on a symmetric pair of stereo panoramic images. We get a symmetric pair of stereo panoramic images when we take symmetric columns on the left and on the right side of the captured image’s center column. Epipolar lines of the symmetric pair of panoramic images are image rows. We focused mainly on the system analysis. The system performs well in the reconstruction of small indoor spaces.

    Panoramic Depth Imaging: Single Standard Camera Approach

    In this paper we present a panoramic depth imaging system. The system is mosaic-based, which means that we use a single rotating camera and assemble the captured images into a mosaic. Because the camera’s optical center is offset from the rotational center of the system, we are able to capture the motion parallax effect, which enables stereo reconstruction. The camera rotates on a circular path with a step defined by an angle equivalent to one pixel column of the captured image. The equation for depth estimation can be easily extracted from the system geometry. To find the corresponding points on a stereo pair of panoramic images, the epipolar geometry needs to be determined. It can be shown that the epipolar geometry is very simple if we perform the reconstruction on a symmetric pair of stereo panoramic images. We get a symmetric pair of stereo panoramic images when we take symmetric pixel columns on the left and on the right side of the captured image’s center column. Epipolar lines of the symmetric pair of panoramic images are image rows. The search space on the epipolar line can be additionally constrained. The focus of the paper is mainly on the system analysis. Results of the stereo reconstruction procedure and the quality evaluation of the generated depth images are quite promising. The system performs well for the reconstruction of small indoor spaces. Our final goal is to develop a system for automatic navigation of a mobile robot in a room.
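    Because the epipolar lines of a symmetric pair are image rows, the correspondence search is one-dimensional and can be bounded. The sketch below is an illustrative sum-of-squared-differences scan along a row, not the paper's implementation; the window size, cost function, and disparity sign convention are our choices.

        import numpy as np

        def match_along_row(left, right, row, x, half_win=5, max_disp=80):
            """Find the column in `right` corresponding to left[row, x] by
            scanning the same row, since epipolar lines are image rows.
            max_disp encodes the additional constraint on the search space.
            Assumes the caller keeps (row, x) far enough from the borders."""
            w = left.shape[1]
            patch = left[row - half_win:row + half_win + 1,
                         x - half_win:x + half_win + 1].astype(np.float64)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                xr = x + d  # sign convention depends on rotation direction
                if xr + half_win >= w:
                    break
                cand = right[row - half_win:row + half_win + 1,
                             xr - half_win:xr + half_win + 1].astype(np.float64)
                cost = np.sum((patch - cand) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            return best_d  # disparity in columns; times the rotation step for radians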

    Coordinates and maps of the Apollo 17 landing site

    We carried out an extensive cartographic analysis of the Apollo 17 landing site and determined and mapped positions of the astronauts, their equipment, and lunar landmarks with accuracies better than ±1 m in most cases. To determine coordinates in a lunar body‐fixed coordinate frame, we applied least squares (2‐D) network adjustments to angular measurements made in astronaut imagery (Hasselblad frames). The measured angular networks were accurately tied to lunar landmarks provided by a 0.5 m/pixel, controlled Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) orthomosaic of the entire Taurus‐Littrow Valley. Furthermore, by applying triangulation to measurements made in Hasselblad frames providing stereo views, we were able to relate individual instruments of the Apollo Lunar Surface Experiment Package (ALSEP) to specific features captured in LROC imagery and, also, to determine coordinates of astronaut equipment or other surface features not captured in the orbital images, for example, the deployed geophones and Explosive Packages (EPs) of the Lunar Seismic Profiling Experiment (LSPE) or the Lunar Roving Vehicle (LRV) at major sampling stops. Our results were integrated into a new LROC NAC‐based Apollo 17 Traverse Map and also used to generate a series of large‐scale maps of all nine traverse stations and of the ALSEP area. In addition, we provide crater measurements, profiles of the navigated traverse paths, and improved ranges of the sources and receivers of the active seismic experiment LSPE.
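    The triangulation step described above can be illustrated with a minimal 2-D forward intersection: two known station positions and the measured azimuths to the same feature fix its ground coordinates. The function below is an illustrative sketch with our own conventions (east/north axes, azimuth clockwise from north), not the authors' adjustment code.

        import math

        def intersect_bearings(p1, az1, p2, az2):
            """Locate a feature from two horizontal sight lines.
            p1, p2   -- known station positions (x east, y north), meters
            az1, az2 -- azimuths of the sight lines, clockwise from north, radians
            Returns the (x, y) position of the sighted feature."""
            d1 = (math.sin(az1), math.cos(az1))  # unit direction of ray 1
            d2 = (math.sin(az2), math.cos(az2))  # unit direction of ray 2
            denom = d1[0] * d2[1] - d1[1] * d2[0]  # 2-D cross product
            if abs(denom) < 1e-12:
                raise ValueError("parallel sight lines: no unique intersection")
            dx, dy = p2[0] - p1[0], p2[1] - p1[1]
            t = (dx * d2[1] - dy * d2[0]) / denom  # distance along ray 1
            return (p1[0] + t * d1[0], p1[1] + t * d1[1])

        # Two stations 10 m apart sighting the same instrument at azimuths of
        # 45 and 315 degrees intersect at (5, 5).
        print(intersect_bearings((0.0, 0.0), math.radians(45),
                                 (10.0, 0.0), math.radians(315)))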

    From “Sapienza” to “Sapienza, State Archives in Rome”. A looping effect bringing back to the original source communication and culture by innovative and low cost 3D surveying, imaging systems and GIS applications

    Application of integrated low-cost measurement technologies, web GIS, computational photography techniques for the communication and sharing of data, cloud computing systems, and large-scale data archiving. High-quality survey models, realized by multiple low-cost methods and technologies, as a container for sharing cultural and archival heritage: this is the aim guiding our research, here described in its primary applications. The SAPIENZA building, a XVI century masterpiece that was the first unified headquarters of the University in Rome, has served since 1936, when the University moved to its newly built campus, as the main venue of the State Archives. With the collaboration of a group of students of the Architecture Faculty, several integrated survey methods were successfully applied to the monument. The work began with a topographic survey, creating a reference on the ground and along the monument for the subsequent applications; a GNSS RTK survey followed, georeferencing points in the internal courtyard. Dense stereo matching photogrammetry is nowadays an accepted method for generating accurate and scalable 3D survey models; it often substitutes for 3D laser scanning because of its low cost, and so it became our choice. Some 360° shots were planned to create panoramic views of the double portico from the courtyard, plus additional single shots of some lateral spans and of the pillars facing the court, as a single operation with a double aim: to create linked panotours with hotspots to web-linked databases, and 3D textured and georeferenced surface models, allowing study of the harmonic proportions of the classical architectural order. The use of free web GIS platforms to load the work into Google Earth, and the realization of low-cost 3D prototypes of some representative parts, were also carried out.
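    One standard way to tie such a locally surveyed model to GNSS RTK control points is a least-squares 2-D similarity (Helmert) transform; the sketch below is a generic illustration of that step under our own assumptions, not the workflow actually used in the project.

        import numpy as np

        def fit_similarity_2d(local_pts, global_pts):
            """Least-squares 2-D similarity (Helmert) transform mapping local
            survey coordinates onto GNSS-derived ones:
                X = a*x - b*y + tx,   Y = b*x + a*y + ty
            local_pts, global_pts -- (n, 2) arrays of matched control points,
            n >= 2. Returns (a, b, tx, ty); scale = hypot(a, b) and
            rotation = atan2(b, a)."""
            local_pts = np.asarray(local_pts, dtype=float)
            rhs = np.asarray(global_pts, dtype=float).ravel()  # X1, Y1, X2, ...
            n = len(local_pts)
            A = np.zeros((2 * n, 4))
            A[0::2, 0], A[0::2, 1], A[0::2, 2] = local_pts[:, 0], -local_pts[:, 1], 1.0
            A[1::2, 0], A[1::2, 1], A[1::2, 3] = local_pts[:, 1], local_pts[:, 0], 1.0
            params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
            return params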

    Characterization of Energy and Performance Bottlenecks in an Omni-directional Camera System

    Generating real-world content for VR is challenging in terms of capturing and processing at high resolution and high frame rates. The content needs to represent a truly immersive experience, where the user can look around in a 360-degree view and perceive the depth of the scene. Existing solutions only capture on the device and offload the compute load to a server, but offloading large amounts of raw camera feeds incurs long latencies and poses difficulties for real-time applications. By capturing and computing on the edge, we can closely integrate the systems and optimize for low latency. However, moving the traditional stitching algorithms to a battery-constrained device requires at least a three-orders-of-magnitude reduction in power. We believe that close integration of the capture and compute stages will lead to reduced overall system power. We approach the problem by building a hardware prototype and characterizing the end-to-end system bottlenecks of power and performance. The prototype has six IMX274 cameras and uses an Nvidia Jetson TX2 development board for capture and computation. We found that capture is bottlenecked by sensor power and data rates across interfaces, whereas compute is limited by the total number of computations per frame. Our characterization shows that redundant capture and redundant computations lead to high power, a huge memory footprint, and high latency. Existing systems lack hardware-software co-design, leading to excessive data transfers across the interfaces and expensive computations within the individual subsystems. Finally, we propose mechanisms to optimize the system for low power and low latency. We emphasize the importance of co-design of the different subsystems to reduce and reuse data. For example, reusing the motion vectors of the ISP stage reduces the memory footprint of the stereo correspondence stage. Our estimates show that pipelining and parallelization on a custom FPGA can achieve real-time stitching.
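    A back-of-envelope data-rate estimate makes the capture bottleneck concrete. The IMX274 is a 3840 x 2160 sensor; the frame rate and bit depth below are illustrative assumptions, not numbers from the thesis.

        def raw_capture_bandwidth(n_cams=6, width=3840, height=2160,
                                  fps=30, bits_per_pixel=10):
            """Aggregate uncompressed sensor data rate in Gbit/s."""
            return n_cams * width * height * fps * bits_per_pixel / 1e9

        # Six 4K streams at an assumed 30 fps and 10 bits/pixel come to about
        # 14.9 Gbit/s of raw data, which is why offloading raw feeds to a
        # server is impractical for real-time use.
        print(raw_capture_bandwidth())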

    Tools and Procedures for the CTA Array Calibration

    The Cherenkov Telescope Array (CTA) is an international initiative to build the next-generation ground-based very-high-energy gamma-ray observatory. Full sky coverage will be assured by two arrays, one located in each of the northern and southern hemispheres. Three different sizes of telescopes will cover a wide energy range from tens of GeV up to hundreds of TeV. These telescopes, prototypes of which are currently under construction or nearing completion, will have different mirror sizes and fields of view designed to access different energy regimes. Additionally, there will be groups of telescopes with different optics systems, camera designs, and electronics. Given this diversity of instruments, an overall coherent calibration of the full array is a challenging task. Moreover, the CTA requirements on calibration accuracy are much more stringent than those achieved with current Imaging Atmospheric Cherenkov Telescopes; for instance, the systematic error in the energy scale must not exceed 10%. In this contribution we present both the methods that, applied directly to the acquired observational CTA data, will ensure that the calibration is performed to the stringent required precision, and the calibration equipment, external to the telescopes, that is currently under development and testing. We also give some notes on the operating procedures to be followed with both the methods and the instruments. The methods applied to the observational CTA data include the analysis of muon ring images, of carefully selected cosmic-ray air shower images, of the reconstructed electron spectrum, and of the spectra of known gamma-ray sources, as well as the possible use of hardware-independent stereo techniques. These methods will be complemented by the use of calibrated light sources located on the ground or on board unmanned aerial vehicles.

    Comment: All CTA contributions at arXiv:1709.0348
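    As a concrete example of the first method named above, muon ring analysis starts by fitting a circle to the triggered pixel positions; the fitted ring can then be compared with expectation to monitor the telescope's optical throughput. Below is a minimal algebraic (Kasa) circle fit; production pipelines typically refine such a seed with likelihood fits, and the variable names here are ours, not CTA's.

        import numpy as np

        def fit_muon_ring(x, y):
            """Algebraic circle fit to triggered-pixel coordinates.
            Rewrites (x - cx)^2 + (y - cy)^2 = R^2 as the linear system
                x^2 + y^2 = 2*cx*x + 2*cy*y + (R^2 - cx^2 - cy^2)
            and solves it by linear least squares."""
            x = np.asarray(x, dtype=float)
            y = np.asarray(y, dtype=float)
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            b = x ** 2 + y ** 2
            (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
            radius = np.sqrt(c + cx ** 2 + cy ** 2)
            return cx, cy, radius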