
    Reconstruction of high dynamic range images with Poisson noise modeling and integrated denoising

    In this paper, we present a new method for High Dynamic Range (HDR) reconstruction based on a set of multiple photographs with different exposure times. While most existing techniques take a deterministic approach by assuming that the acquired low dynamic range (LDR) images are noise-free, we explicitly model the photon arrival process by assuming sensor data corrupted by Poisson noise. Taking the noise characteristics of the sensor data into account leads to a more robust estimate of the non-parametric camera response function (CRF) than existing techniques provide. To further improve the HDR reconstruction, we adopt the split-Bregman framework and use Total Variation for regularization. Experimental results on real camera images and ground-truth data show the effectiveness of the proposed approach.
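The noise-aware merging step the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation (which also estimates a non-parametric CRF and applies split-Bregman/TV regularization); it assumes already-linearised pixel values proportional to photon counts, so the maximum-likelihood radiance under a pure Poisson shot-noise model is total counts divided by total exposure time:

```python
import numpy as np

def merge_hdr_poisson(images, exposures):
    """Merge linear LDR exposures into one HDR radiance map.

    Under a Poisson shot-noise model the variance of a pixel count
    equals its mean, so the maximum-likelihood radiance estimate is
    (sum of counts) / (sum of exposure times): longer exposures collect
    more photons and are proportionally more reliable.  Saturated
    pixels carry no information and are masked out.
    """
    images = [np.asarray(im, dtype=np.float64) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposures):
        valid = im < 0.95 * 255.0           # mask near-saturated pixels
        num += np.where(valid, im, 0.0)     # accumulated counts
        den += np.where(valid, t, 0.0)      # accumulated exposure time
    return num / np.maximum(den, 1e-12)     # counts per unit time
```

The exposure-time weighting falls out of the Poisson likelihood; deterministic methods instead use heuristic hat-shaped weights over pixel value.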

    Capturing Panoramic Depth Images with a Single Standard Camera

    In this paper we present a panoramic depth imaging system. The system is mosaic-based, meaning that we use a single rotating camera and assemble the captured images into a mosaic. Because the camera’s optical center is offset from the rotational center of the system, we are able to capture the motion parallax effect, which enables stereo reconstruction. The camera rotates on a circular path with a step defined by an angle equivalent to one column of the captured image. The equation for depth estimation can be easily extracted from the system geometry. To find the corresponding points on a stereo pair of panoramic images, the epipolar geometry needs to be determined. It can be shown that the epipolar geometry is very simple if we perform the reconstruction on a symmetric pair of stereo panoramic images. We obtain a symmetric pair of stereo panoramic images when we take symmetric columns on the left and on the right side of the captured image’s center column. Epipolar lines of the symmetric pair of panoramic images are image rows. We focus mainly on the system analysis. The system performs well in the reconstruction of small indoor spaces.

    Mosaic-Based Panoramic Depth Imaging with a Single Standard Camera

    In this article we present a panoramic depth imaging system. The system is mosaic-based, meaning that we use a single rotating camera and assemble the captured images into a mosaic. Because the camera’s optical center is offset from the rotational center of the system, we are able to capture the motion parallax effect, which enables stereo reconstruction. The camera rotates on a circular path with a step defined by an angle equivalent to one column of the captured image. The equation for depth estimation can be easily extracted from the system geometry. To find the corresponding points on a stereo pair of panoramic images, the epipolar geometry needs to be determined. It can be shown that the epipolar geometry is very simple if we perform the reconstruction on a symmetric pair of stereo panoramic images. We obtain a symmetric pair of stereo panoramic images when we take symmetric columns on the left and on the right side of the captured image’s center column. Epipolar lines of the symmetric pair of panoramic images are image rows. We focus mainly on the system analysis. Results of the stereo reconstruction procedure and quality evaluation of the generated depth images are quite promising. The system performs well in the reconstruction of small indoor spaces. Our final goal is to develop a system for automatic navigation of a mobile robot in a room.

    Panoramic Depth Imaging: Single Standard Camera Approach

    In this paper we present a panoramic depth imaging system. The system is mosaic-based, meaning that we use a single rotating camera and assemble the captured images into a mosaic. Because the camera’s optical center is offset from the rotational center of the system, we are able to capture the motion parallax effect, which enables stereo reconstruction. The camera rotates on a circular path with a step defined by the angle equivalent to one pixel column of the captured image. The equation for depth estimation can be easily extracted from the system geometry. To find the corresponding points on a stereo pair of panoramic images, the epipolar geometry needs to be determined. It can be shown that the epipolar geometry is very simple if we perform the reconstruction on a symmetric pair of stereo panoramic images. We obtain a symmetric pair of stereo panoramic images when we take symmetric pixel columns on the left and on the right side of the captured image’s center column. Epipolar lines of the symmetric pair of panoramic images are image rows. The search space on the epipolar line can be additionally constrained. The focus of the paper is mainly on the system analysis. Results of the stereo reconstruction procedure and quality evaluation of the generated depth images are quite promising. The system performs well for reconstruction of small indoor spaces. Our final goal is to develop a system for automatic navigation of a mobile robot in a room.
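The depth equation these three abstracts mention can be illustrated with plain two-ray triangulation. The sketch below is a stand-in for the papers' closed-form expression, under our own assumptions: the optical centre moves on a circle of radius `r` about the rotation centre, and the same scene point is seen through the symmetric column pair (which views `theta` radians off the radial direction, sign convention ours) from two rotation angles:

```python
import numpy as np

def triangulate_symmetric_pair(r, phi_left, phi_right, theta):
    """Triangulate a planar scene point from a symmetric panoramic pair.

    The camera's optical centre sits at radius r from the rotation
    centre.  The point is seen from rotation angles phi_left and
    phi_right through the symmetric columns, i.e. +theta and -theta
    off the radial (outward) direction.  Depth follows from
    intersecting the two viewing rays in the plane.
    """
    def ray(phi, off):
        c = r * np.array([np.cos(phi), np.sin(phi)])          # camera centre
        d = np.array([np.cos(phi + off), np.sin(phi + off)])  # ray direction
        return c, d

    c1, d1 = ray(phi_left, +theta)
    c2, d2 = ray(phi_right, -theta)
    # Solve c1 + s*d1 = c2 + t*d2 for s, the distance along the first ray.
    A = np.column_stack([d1, -d2])
    s, _ = np.linalg.solve(A, c2 - c1)
    return c1 + s * d1                                        # world-frame point
```

Because epipolar lines of the symmetric pair are image rows, correspondence search reduces to a 1-D scan, and each matched column pair feeds one such triangulation.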

    Active vision for dexterous grasping of novel objects

    How should a robot direct active vision so as to ensure reliable grasping? We answer this question for the case of dexterous grasping of unfamiliar objects. By dexterous grasping we simply mean grasping by any hand with more than two fingers, such that the robot has some choice about where to place each finger. Such grasps typically fail in one of two ways: either unmodeled objects in the scene cause collisions, or object reconstruction is insufficient to ensure that the grasp points provide a stable force closure. These problems can be solved more easily if active sensing is guided by the anticipated actions. Our approach has three stages. First, we take a single view and generate candidate grasps from the resulting partial object reconstruction. Second, we drive the active vision approach to maximise surface reconstruction quality around the planned contact points. During this phase, the anticipated grasp is continually refined. Third, we direct gaze to improve the safety of the planned reach-to-grasp trajectory. We show, on a dexterous manipulator with a camera on the wrist, that our approach (80.4% success rate) outperforms a randomised algorithm (64.3% success rate). (Comment: IROS 2016. Supplementary video: https://youtu.be/uBSOO6tMzw)
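The second stage, directing the camera to improve reconstruction around planned contacts, is a next-best-view problem. The toy scorer below is purely illustrative and not the authors' method: every name is hypothetical, and it simply prefers the candidate view whose field of view covers the most poorly-observed contact points:

```python
import numpy as np

def next_best_view(candidate_views, contact_points, observed_counts, fov=0.5):
    """Toy next-best-view selection for grasp-driven active vision.

    candidate_views: list of (position, unit optical-axis) pairs.
    contact_points:  planned fingertip contact locations (3-vectors).
    observed_counts: how often each contact has been well observed.
    Returns the index of the view covering the most under-observed
    contacts within `fov` radians of its optical axis.  Illustrative
    sketch only, not the paper's algorithm.
    """
    best, best_score = None, -1.0
    for i, (pos, axis) in enumerate(candidate_views):
        score = 0.0
        for p, seen in zip(contact_points, observed_counts):
            v = p - pos
            v = v / np.linalg.norm(v)
            ang = np.arccos(np.clip(np.dot(v, axis), -1.0, 1.0))
            if ang < fov:
                score += 1.0 / (1.0 + seen)  # under-seen contacts weigh more
        if score > best_score:
            best, best_score = i, score
    return best
```

The key idea the abstract conveys survives even in this toy form: the sensing target is chosen by the anticipated grasp, not by a generic coverage criterion.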

    On-board and Ground Visual Pose Estimation Techniques for UAV Control

    In this paper, two techniques to control UAVs (Unmanned Aerial Vehicles) based on visual information are presented. The first is based on the detection and tracking of planar structures from an on-board camera, while the second is based on the detection and 3D reconstruction of the position of the UAV from an external camera system. Both strategies are tested with a VTOL (vertical take-off and landing) UAV, and results show good behavior of the visual systems (precision of the estimation and frame rate) when estimating the helicopter’s position and using the extracted information to control the UAV.
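Pose from a tracked planar structure, as in the first technique, classically comes from decomposing a homography. A minimal sketch, assuming a calibrated camera and a known plane at z = 0 (this is the textbook decomposition H ~ K[r1 r2 t], not necessarily the exact procedure used in the paper):

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover rotation R and translation t of a calibrated camera
    from the homography H mapping a planar target (z = 0) to the image.

    Standard decomposition: H ~ K [r1 r2 t], so inv(K) @ H gives the
    first two rotation columns and the translation up to scale.
    """
    A = np.linalg.inv(K) @ H
    s = np.linalg.norm(A[:, 0])          # |r1| must be 1: fix the scale
    A = A / s
    r1, r2, t = A[:, 0], A[:, 1], A[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    # Re-orthonormalise: with noisy H, R is only approximately a rotation.
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        R = -R
    return R, t
```

In practice the overall sign of the scale is disambiguated by requiring the plane to lie in front of the camera, and the pose is refined by tracking across frames.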

    Shape from inconsistent silhouette: Reconstruction of objects in the presence of segmentation and camera calibration error

    Silhouettes are useful features for reconstructing object shape when the object is textureless or the shape classes of objects are unknown. In this dissertation, we explore the problem of reconstructing the shape of challenging objects from silhouettes under real-world conditions such as the presence of silhouette and camera calibration error. This problem is called the Shape from Inconsistent Silhouettes problem. A pseudo-Boolean cost function is formalized for this problem, which penalizes differences between the reconstruction images and the silhouette images, and the Shape from Inconsistent Silhouette problem is cast as a pseudo-Boolean minimization problem. We propose a memory- and time-efficient method to find a local minimum solution to the optimization problem, including heuristics that take into account the geometric nature of the problem. Our methods are demonstrated on a variety of challenging objects including humans and large, thin objects. We also compare our methods to the state of the art by generating reconstructions of synthetic objects with induced error.

    We also propose a method for correcting camera calibration error given silhouettes with segmentation error. Unlike other existing methods, our method allows camera calibration error to be corrected without camera placement constraints and allows for silhouette segmentation error. This is accomplished by a modified Iterative Closest Point algorithm which minimizes the difference between an initial reconstruction and the input silhouettes. We characterize the degree of error that can be corrected with synthetic datasets with increasing error, and demonstrate the ability of the camera calibration correction method to improve reconstruction quality on several challenging real-world datasets.
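The pseudo-Boolean cost the abstract describes can be sketched concretely. Assuming a binary voxel occupancy vector and, per view, a binary pixel-voxel incidence matrix (an encoding we choose for illustration; the dissertation's representation and minimizer are more sophisticated), the cost counts pixels where the reprojected shape disagrees with the observed silhouette:

```python
import numpy as np

def silhouette_cost(voxels, projections, silhouettes):
    """Pseudo-Boolean cost for shape from inconsistent silhouettes.

    voxels:      flat 0/1 occupancy vector over the voxel grid.
    projections: per view, an (n_pixels, n_voxels) 0/1 matrix whose
                 entry (p, v) is 1 iff voxel v projects onto pixel p.
    silhouettes: per view, the observed 0/1 silhouette, flattened.
    A pixel of the reprojection is on iff any voxel projecting to it
    is occupied; the cost is the total number of disagreeing pixels.
    """
    cost = 0
    for P, S in zip(projections, silhouettes):
        reproj = (P @ voxels) > 0
        cost += np.count_nonzero(reproj != S.astype(bool).ravel())
    return cost
```

Because silhouettes may be inconsistent (no shape explains all of them exactly), the minimum of this cost is generally nonzero, which is precisely why it is posed as an optimization rather than an intersection of visual cones.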

    Updating and Revising Star Camera for Future Flights of Balloon Borne Experiment

    The BLAST (Balloon-borne Large Aperture Submillimeter Telescope) experiment surveys the galaxy from altitudes of 100,000 ft in order to answer important cosmological questions, such as how stars are formed. The experiment is conducted above Antarctica to minimize unwanted noise. Two star cameras are used in the navigation system to identify known stars. The cameras take pictures and match stars in the image to known star positions from a catalog stored in the star camera's computer. This is done using code written in C++. In order to modernize the system, the code needs to be updated. A camera that has flown multiple missions was switched from a legacy codebase used in past missions to the star tracking and attitude reconstruction (STARS) code, designed for the E and B Experiment (EBEX), a similar balloon-borne experiment. This switch required cataloging the parts of the camera, testing the camera with the legacy code, and adapting the new code for this particular camera. The result is that the camera takes pictures and identifies stars using the new code. The next step is to suggest physical changes to the camera's hardware to improve performance with the new code.
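The catalog-matching step the abstract mentions is commonly built on an attitude-invariant: the angular distance between two stars is the same in the image and in the catalog, whatever the camera's orientation. A toy version of that idea (illustrative only, not the STARS or legacy codebase's algorithm):

```python
import numpy as np

def match_star_pairs(detected, catalog, tol=1e-3):
    """Pair detected stars with catalog stars by angular distance.

    detected, catalog: sequences of unit vectors (camera frame and
    celestial frame respectively).  Inter-star angles are invariant to
    the unknown camera attitude, so a detected pair (i, j) can match a
    catalog pair (k, l) whenever their angular separations agree to
    within tol radians.  Returns all such (i, j, k, l) candidates.
    """
    def ang(a, b):
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

    matches = []
    for i in range(len(detected)):
        for j in range(i + 1, len(detected)):
            d = ang(detected[i], detected[j])
            for k in range(len(catalog)):
                for l in range(k + 1, len(catalog)):
                    if abs(ang(catalog[k], catalog[l]) - d) < tol:
                        matches.append((i, j, k, l))
    return matches
```

Real star trackers prune this quadratic search with triangle or pyramid voting and magnitude information; once identities are fixed, the camera attitude follows from the matched vector pairs.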