
    Supernova / Acceleration Probe: A Satellite Experiment to Study the Nature of the Dark Energy

    The Supernova / Acceleration Probe (SNAP) is a proposed space-based experiment designed to study dark energy and alternative explanations of the acceleration of the Universe's expansion by performing a series of complementary systematics-controlled measurements. We describe a self-consistent reference mission design for building a Type Ia supernova Hubble diagram and for performing a wide-area weak gravitational lensing study. A 2-m wide-field telescope feeds a focal plane consisting of a 0.7-square-degree imager tiled with equal areas of optical CCDs and near-infrared sensors, and a high-efficiency low-resolution integral field spectrograph. The SNAP mission will obtain high-signal-to-noise calibrated light curves and spectra for several thousand supernovae at redshifts between z=0.1 and 1.7. A wide-field survey covering one thousand square degrees resolves ~100 galaxies per square arcminute. If we assume we live in a cosmological-constant-dominated Universe, the matter density, dark energy density, and flatness of space can all be measured with SNAP supernova and weak-lensing measurements to a systematics-limited accuracy of 1%. For a flat universe, the pressure-to-density ratio of dark energy can be similarly measured to ~5% for its present value w0 and to ~0.1 for its time variation w'. The large survey area, depth, spatial resolution, time sampling, and nine-band optical-to-NIR photometry will support additional independent and/or complementary dark-energy measurement approaches as well as a broad range of auxiliary science programs. (Abridged) Comment: 40 pages, 18 figures, submitted to PASP, http://snap.lbl.go
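    The Hubble-diagram measurement above rests on comparing supernova magnitudes to a predicted distance modulus. The sketch below is a minimal illustration, not SNAP's analysis pipeline; the function name and the linear w(z) = w0 + w'z parametrization are assumptions made here to show how mu(z) depends on the parameters (matter density, w0, w') that the mission constrains:

```python
import numpy as np

def distance_modulus(z, omega_m=0.3, w0=-1.0, wp=0.0, h=0.7, n=2000):
    """Distance modulus mu(z) for a flat universe with a linear dark-energy
    equation of state w(z) = w0 + wp*z (an illustrative parametrization)."""
    c = 299792.458          # speed of light, km/s
    H0 = 100.0 * h          # Hubble constant, km/s/Mpc
    zs = np.linspace(0.0, z, n)
    # Dark-energy density evolution for w(z) = w0 + wp*z (closed form)
    rho_de = (1.0 + zs) ** (3.0 * (1.0 + w0 - wp)) * np.exp(3.0 * wp * zs)
    E = np.sqrt(omega_m * (1.0 + zs) ** 3 + (1.0 - omega_m) * rho_de)
    f = 1.0 / E
    dz = zs[1] - zs[0]
    d_c = (np.sum(f) - 0.5 * (f[0] + f[-1])) * dz * c / H0  # comoving distance, Mpc
    d_l = (1.0 + z) * d_c                                   # luminosity distance, Mpc
    return 5.0 * np.log10(d_l) + 25.0                       # mu = 5 log10(d_L / 10 pc)
```

    For w0 = -1 and w' = 0 this reduces to a cosmological-constant universe; a percent-level change in w0 shifts mu near z~1 by only a few hundredths of a magnitude, which is why the abstract stresses systematics-controlled, calibrated photometry.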

    Catadioptric stereo-vision system using a spherical mirror

    In the computer vision field, the reconstruction of target surfaces is usually achieved using 3D optical scanners that integrate digital cameras and light emitters. However, these solutions are limited by a narrow field of view, which requires multiple acquisitions from different views to reconstruct complex free-form geometries. The combination of mirrors and lenses (catadioptric systems) can be adopted to overcome this issue. In this work, a stereo catadioptric optical scanner has been developed by assembling two digital cameras, a spherical mirror, and a multimedia white-light projector. The adopted configuration defines a non-single-viewpoint system, so a non-central catadioptric camera model has been developed. An analytical solution to compute the projection of a scene point onto the image plane (forward projection) and vice versa (backward projection) is presented. The proposed optical setup enables omnidirectional stereo vision, thus allowing the reconstruction of target surfaces with a single acquisition. Preliminary results, obtained by measuring a hollow specimen, demonstrate the effectiveness of the described approach.
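    Because a spherical mirror yields no single viewpoint, each pixel's viewing ray must be reflected off the sphere individually. A minimal sketch of the backward projection under assumed conventions (pinhole at the origin, known mirror center and radius; names and conventions are illustrative, not the paper's actual formulation):

```python
import numpy as np

def reflect_off_sphere(d, center, radius):
    """Backward-projection sketch for a non-central catadioptric camera:
    reflect a camera viewing ray off a spherical mirror.
    The camera pinhole sits at the origin; d is the viewing-ray direction.
    Returns (point on mirror, reflected ray direction), or None on a miss."""
    d = np.asarray(d, float)
    d = d / np.linalg.norm(d)
    c = np.asarray(center, float)
    b = d @ c
    disc = b * b - (c @ c - radius ** 2)   # ray/sphere intersection discriminant
    if disc < 0:
        return None                        # ray misses the mirror
    t = b - np.sqrt(disc)                  # nearest intersection along the ray
    if t <= 0:
        return None                        # mirror is behind the pinhole
    p = t * d                              # reflection point on the sphere
    n = (p - c) / radius                   # outward surface normal
    r = d - 2.0 * (d @ n) * n              # mirror-reflected scene-ray direction
    return p, r
```

    The forward projection (scene point to pixel) is the harder direction for a non-central system, since the reflection point must be solved for; the paper's analytical solution addresses exactly that.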

    Vision-based Navigation and Mapping Using Non-central Catadioptric Omnidirectional Camera

    Omnidirectional catadioptric cameras find use in navigation and mapping owing to their wide field of view. A wider field of view, potentially a full 360 degrees, allows the user to see and move more freely in the navigation space. A catadioptric camera system is a low-cost system consisting of a mirror and a camera. A calibration method was developed to obtain the relative position and orientation between the two components so that they can be treated as one monolithic system. The position of the system within an environment was then determined using constraints derived from the reflective properties of the mirror. Object control points were set up and experiments were performed at different sites to test the mathematical models and the achievable location and mapping accuracy of the system. The obtained positions were then used to map the environment.
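    A core step in calibrating the relative position and orientation between mirror and camera is recovering a rigid transform from corresponding control points. The following least-squares (Kabsch/Procrustes) sketch is generic background for that step, not the paper's actual calibration method:

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid transform (R, t) with B ~= R @ A + t, via SVD
    (Kabsch/Procrustes). A and B are 3xN arrays of corresponding points,
    e.g. control points expressed in two coordinate frames."""
    A = np.asarray(A, float)
    B = np.asarray(B, float)
    ca = A.mean(axis=1, keepdims=True)     # centroids
    cb = B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T              # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

    With the relative pose fixed, mirror and camera can then be treated as the single monolithic system the abstract describes.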

    Panoramic Stereovision and Scene Reconstruction

    With the advancement of research in robotics and computer vision, an increasing number of applications require the understanding of a scene in three dimensions, and a variety of systems have been deployed for this purpose. This thesis explores a novel 3D imaging technique based on catadioptric cameras in a stereoscopic arrangement. A secondary system stabilizes the setup in the event that the cameras become misaligned during operation. The system offers a stark advantage as a cost-effective alternative to present-day state-of-the-art systems that achieve the same goal of 3D imaging. The compromise lies in the quality of depth estimation, which can be overcome with a different imager and calibration. The result is a panoramic disparity map generated by the system.
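    Once the omnidirectional views are unwrapped and rectified, a disparity map can be computed with standard block matching. The sum-of-absolute-differences sketch below is illustrative only; the thesis's actual matcher and parameters are not specified here:

```python
import numpy as np

def disparity_sad(left, right, max_disp=16, win=3):
    """Minimal block-matching sketch: for each left-image pixel, search the
    rectified right image along the same row for the window with the lowest
    sum of absolute differences, and record the winning horizontal shift."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    pad = win // 2
    L = np.pad(left.astype(float), pad, mode='edge')
    R = np.pad(right.astype(float), pad, mode='edge')
    for y in range(h):
        for x in range(w):
            patch = L[y:y + win, x:x + win]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cost = np.abs(patch - R[y:y + win, x - d:x - d + win]).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

    Real systems replace this brute-force search with semi-global or correlation-based matchers, but the disparity-map output is the same kind of product the thesis reports.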

    Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs)

    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is coaxially aligned with a pair of hyperboloidal mirrors (a vertically folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs, which cannot carry large payloads. The theoretical single-viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images, from which 3D information is computed via stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from the triangulation of back-projected rays. We validate the projection error of the design against ground-truth data using both synthetic and real-life images. Qualitatively, we show 3D point clouds (dense and sparse) obtained from a single image captured in a real-life experiment. We expect our sensor to be reproducible, since its model parameters can be optimized to suit other catadioptric-based omnistereo applications under different circumstances.
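    The triangulation of back-projected rays mentioned above can be sketched with the classic midpoint method; the residual gap between the two rays is one simple quantity a probabilistic uncertainty model could build on. Names and conventions here are assumptions, not the paper's formulation:

```python
import numpy as np

def midpoint_triangulate(o1, d1, o2, d2):
    """Midpoint triangulation of two back-projected rays, each given by an
    origin o and a direction d. Returns the 3D point midway between the
    closest points on the two rays, and the gap (skew distance) between them."""
    d1 = np.asarray(d1, float); d1 = d1 / np.linalg.norm(d1)
    d2 = np.asarray(d2, float); d2 = d2 / np.linalg.norm(d2)
    o1 = np.asarray(o1, float); o2 = np.asarray(o2, float)
    # Minimize ||(o1 + t1*d1) - (o2 + t2*d2)||^2 over t1, t2
    b = d1 @ d2
    w = o1 - o2
    denom = 1.0 - b * b                    # zero only for parallel rays
    t1 = (b * (w @ d2) - (w @ d1)) / denom
    t2 = ((w @ d2) - b * (w @ d1)) / denom
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    return 0.5 * (p1 + p2), np.linalg.norm(p1 - p2)
```

    For the vertically folded design, the two "origins" would be the effective viewpoints of the upper and lower mirrors; a large gap flags a poor stereo correspondence.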