
    Panoramic Cameras I've Made

    Brief history of one photographer's work in the area of panoramic cameras and strip enlargers.

    Capturing Panoramic Depth Images with a Single Standard Camera

    In this paper we present a panoramic depth imaging system. The system is mosaic-based, which means that we use a single rotating camera and assemble the captured images into a mosaic. Because the camera’s optical center is offset from the rotational center of the system, we are able to capture the motion parallax effect, which enables stereo reconstruction. The camera rotates on a circular path in steps of an angle equivalent to one column of the captured image. The equation for depth estimation can be extracted easily from the system geometry. To find corresponding points in a stereo pair of panoramic images, the epipolar geometry needs to be determined. It can be shown that the epipolar geometry is very simple if the reconstruction is based on a symmetric pair of stereo panoramic images. We get a symmetric pair of stereo panoramic images when we take columns symmetric about the center column of the captured image, one on the left and one on the right. The epipolar lines of such a symmetric pair of panoramic images are simply the image rows. We focus mainly on the analysis of the system. The system performs well in the reconstruction of small indoor spaces.
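
    The depth relation hinted at above can be sketched from the described geometry. The following is an illustrative reconstruction, not the authors' code: it assumes the optical center sits at radius r from the rotation center, that the symmetric columns lie at angle theta0 on either side of the image center, and the names depth_from_disparity, column_disparity and angle_per_column are hypothetical.

        import math

        def depth_from_disparity(r, theta0, column_disparity, angle_per_column):
            """Distance of a scene point from the rotation center (illustrative only).

            r                -- offset of the optical center from the rotation center
            theta0           -- angle (radians) between the optical axis and the
                                symmetric column used to build each panorama
            column_disparity -- horizontal disparity (in columns) between the point's
                                positions in the two panoramas
            angle_per_column -- rotation step per captured image column (radians)
            """
            # Half the rotation angle between the two captures of the same point.
            phi0 = 0.5 * column_disparity * angle_per_column
            # Sine rule in the triangle (rotation center, optical center, scene point),
            # whose interior angle at the optical center is pi - theta0:
            #   depth / sin(theta0) = r / sin(theta0 - phi0)
            return r * math.sin(theta0) / math.sin(theta0 - phi0)

        # Example: 30 cm offset, columns 15 degrees off-axis, 0.2 degree rotation steps.
        print(depth_from_disparity(0.3, math.radians(15.0), 40, math.radians(0.2)))

    As the disparity-derived angle phi0 approaches theta0 the denominator vanishes, which reflects the loss of depth resolution for distant points.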

    Parallel processing applied to object detection with a Jetson TX2 embedded system.

    Video streams from panoramic cameras are a powerful tool for automated surveillance systems, but naïve implementations typically impose very heavy computational loads when applying deep learning models for automated detection and tracking of objects of interest, since these models require relatively high resolution to perform object detection reliably. In this paper, we report a host of improvements to our previous state-of-the-art software system for reliably detecting and tracking objects in video streams from panoramic cameras, resulting in an increase in the processing framerate on a Jetson TX2 board with respect to our previous results. Depending on the number of processes and the load profile, we observe up to a five-fold increase in the framerate. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
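
    As an illustration of the general idea only (not the authors' software), the sketch below fans tiles of a high-resolution panoramic frame out to a pool of worker processes; detect_objects is a hypothetical stand-in for the deep learning detector, and the tile count and frame size are made-up values.

        from multiprocessing import Pool

        def detect_objects(tile):
            # Hypothetical placeholder for running the detection model on one tile;
            # it would return a list of (label, bounding_box) results.
            return []

        def split_into_tiles(frame, n_tiles):
            # Slice a wide panoramic frame (a list of pixel rows) into vertical strips.
            width = len(frame[0])
            step = width // n_tiles
            return [[row[i * step:(i + 1) * step] for row in frame] for i in range(n_tiles)]

        if __name__ == "__main__":
            frame = [[0] * 3840 for _ in range(640)]   # stand-in for one panoramic frame
            tiles = split_into_tiles(frame, n_tiles=4)
            with Pool(processes=4) as pool:            # one worker process per tile
                detections = pool.map(detect_objects, tiles)
            print(sum(len(d) for d in detections), "objects detected")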

    Panoramic 360° videos in virtual reality using two lenses and a mobile phone

    Cameras generally have about a 60° field of view and can capture only a portion of their surroundings. Panoramic cameras are used to capture the entire 360° view, known as a panoramic image. Virtual reality makes use of these panoramic images to provide a more immersive experience than viewing images on a 2D screen. Most panoramic cameras are expensive, and it is important for the camera to be affordable if virtual reality is to become a part of daily life. This is a comprehensive document about the successful implementation of the cheapest 360° video camera, using multiple lenses on a mobile phone. Nearly everyone now has a mobile phone; equipping these phones with the technology to capture panoramic images using multiple lenses will turn them into the most economical panoramic camera.
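
    A minimal sketch of the stitching step, assuming OpenCV is available; this is not the thesis implementation, and a full 360° pipeline would additionally calibrate the fisheye lenses and reproject to an equirectangular frame. The image file names are placeholders.

        import cv2

        left = cv2.imread("lens_front.jpg")    # hypothetical capture from the first lens
        right = cv2.imread("lens_back.jpg")    # hypothetical capture from the second lens

        # OpenCV's generic stitcher finds feature matches in the overlap region,
        # estimates the warps and blends the two views into one panorama.
        stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
        status, panorama = stitcher.stitch([left, right])

        if status == cv2.Stitcher_OK:
            cv2.imwrite("panorama.jpg", panorama)
        else:
            print("Stitching failed with status", status)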

    Panoramic Annular Localizer: Tackling the Variation Challenges of Outdoor Localization Using Panoramic Annular Images and Active Deep Descriptors

    Visual localization is an attractive problem that estimates the camera location from database images given a query image. It is a crucial task for various applications, such as autonomous vehicles, assistive navigation and augmented reality. The challenge lies in the various appearance variations between query and database images, including illumination variations, dynamic object variations and viewpoint variations. To tackle these challenges, this paper proposes the Panoramic Annular Localizer, which incorporates a panoramic annular lens and robust deep image descriptors. The panoramic annular images captured by the single camera are processed and fed into the NetVLAD network to form the active deep descriptor, and sequential matching is utilized to generate the localization result. Experiments carried out on public datasets and in the field demonstrate the validity of the proposed system. Comment: Accepted by ITSC 201
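
    A minimal NumPy sketch of descriptor-based place recognition with a simple sequential-matching score, assuming the NetVLAD descriptors are precomputed and L2-normalized; it illustrates the general idea only and is not the paper's implementation.

        import numpy as np

        def localize(query_seq, database, window=5):
            """Index of the database segment best matching a short query sequence.

            query_seq -- (window, d) array of consecutive query descriptors
            database  -- (n, d) array of descriptors for geotagged database images
            """
            best_idx, best_score = -1, -np.inf
            for start in range(database.shape[0] - window + 1):
                # Sum of cosine similarities between aligned query/database frames.
                score = np.einsum("ij,ij->i", query_seq, database[start:start + window]).sum()
                if score > best_score:
                    best_idx, best_score = start, score
            return best_idx

        rng = np.random.default_rng(0)
        db = rng.normal(size=(100, 4096))
        db /= np.linalg.norm(db, axis=1, keepdims=True)
        query = db[40:45] + 0.05 * rng.normal(size=(5, 4096))   # noisy revisit of frames 40-44
        query /= np.linalg.norm(query, axis=1, keepdims=True)
        print(localize(query, db))                              # expected to print 40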

    Vision-Based Navigation III: Pose and Motion from Omnidirectional Optical Flow and a Digital Terrain Map

    An algorithm for pose and motion estimation using corresponding features in omnidirectional images and a digital terrain map is proposed. In a previous paper, such an algorithm for a regular camera was considered. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables recovering the absolute position and orientation of the camera. To do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. In this paper, these constraints are extended to handle non-central projection, as is the case with many omnidirectional systems. The utilization of omnidirectional data is shown to improve the robustness and accuracy of the navigation algorithm. The feasibility of the algorithm is established through lab experimentation with two kinds of omnidirectional acquisition systems: the first is a polydioptric camera, while the second is a catadioptric camera. Comment: 6 pages, 9 figures
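
    One building block such a DTM-based formulation needs is intersecting a viewing ray with the terrain. The sketch below is an illustrative helper under simple assumptions (a regular height grid and a fixed marching step), not the paper's algorithm; all values are made-up examples.

        import numpy as np

        def intersect_ray_with_dtm(origin, direction, dtm, cell_size=1.0, step=0.5, max_range=500.0):
            """Return the first 3D point where the ray passes below the terrain, or None."""
            direction = direction / np.linalg.norm(direction)
            t = 0.0
            while t < max_range:
                p = origin + t * direction
                i, j = int(p[1] / cell_size), int(p[0] / cell_size)   # row = y, column = x
                if not (0 <= i < dtm.shape[0] and 0 <= j < dtm.shape[1]):
                    return None                  # ray left the mapped area
                if p[2] <= dtm[i, j]:            # ray dropped below the terrain surface
                    return p
                t += step
            return None

        dtm = np.zeros((100, 100))               # flat terrain at height 0 on a 1 m grid
        origin = np.array([10.0, 10.0, 50.0])    # camera 50 m above the ground
        direction = np.array([1.0, 0.0, -1.0])   # looking forward and 45 degrees down
        print(intersect_ray_with_dtm(origin, direction, dtm))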

    Basics of strip enlargers

    Enlarging the long negatives produced by rotating panoramic cameras capable of 360 degree coverage, as well as by photofinish and peripheral cameras, is a problem because of their size. This article describes the design and construction of an enlarger capable of making prints hundreds of feet long.

    Linear-Strip Photographs Using Cirkut and Hulcher Cameras

    This article describes how cameras originally designed as rotating, 360 degree coverage panoramic cameras can be used for applications such as photographing the full length of a passing train or the facades of all the buildings along a street.
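
    A back-of-the-envelope relation (not taken from the article) behind this linear-strip use: the film must travel past the slit at roughly the speed of the subject's image, v_film ≈ v_subject × focal_length / distance for a distant subject. The values below are made-up examples.

        def film_speed(v_subject_m_s, focal_length_m, distance_m):
            """Approximate film transport speed (m/s) needed to keep a moving subject sharp."""
            return v_subject_m_s * focal_length_m / distance_m

        # A train passing at 15 m/s, photographed from 30 m with a 250 mm lens:
        print(round(film_speed(15.0, 0.25, 30.0) * 1000, 1), "mm/s of film travel")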

    Rock Segmentation through Edge Regrouping

    Rockster is an algorithm that automatically identifies the locations and boundaries of rocks imaged by the rover hazard cameras (hazcams), navigation cameras (navcams), or panoramic cameras (pancams). The software uses edge detection and edge regrouping to identify closed contours that separate the rocks from the background.
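
    A sketch of the general edge-detection-and-closed-contour idea using off-the-shelf OpenCV operations; this is not the Rockster code, and the thresholds, kernel size and file name are placeholder assumptions.

        import cv2
        import numpy as np

        image = cv2.imread("pancam_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
        edges = cv2.Canny(image, 50, 150)

        # Bridge small gaps so fragmented edges regroup into closed boundaries.
        kernel = np.ones((5, 5), np.uint8)
        closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

        # Closed contours separate candidate rocks from the background.
        contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        rocks = [c for c in contours if cv2.contourArea(c) > 100]   # discard tiny speckles
        print(len(rocks), "candidate rock regions")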