
    Computational Imaging for Shape Understanding

    Geometry is an essential property of real-world scenes, and understanding the shape of objects is critical to many computer vision applications. In this dissertation, we explore computational imaging approaches to recover the geometry of real-world scenes. Computational imaging is an emerging technique that co-designs imaging hardware and computational software to expand the capabilities of traditional cameras. To tackle face recognition in uncontrolled environments, we combine 2D color images and 3D shape to deal with body movement and self-occlusion. In particular, we use multiple RGB-D cameras to fuse views across varying poses and register the frontal face in a unified coordinate system. Deep color features and geodesic distance features are then combined to perform face recognition. For underwater imaging applications, we study the angular-spatial encoding and polarization-state encoding of light rays using computational imaging devices. Specifically, we use a light field camera to tackle the challenging problem of underwater 3D reconstruction. We leverage the angular sampling of the light field for robust depth estimation, and we develop a fast ray marching algorithm to improve efficiency. To handle arbitrary reflectance, we investigate polarimetric imaging and develop polarimetric Helmholtz stereopsis, which uses reciprocal polarimetric image pairs for high-fidelity 3D surface reconstruction. We formulate new reciprocity and diffuse/specular polarimetric constraints to recover surface depths and normals within an optimization framework. To recover 3D shape under unknown, uncontrolled natural illumination, we use two circularly polarized spotlights to boost the polarization cues corrupted by environment lighting and to provide photometric cues. To mitigate the effect of uncontrolled environment light on the photometric constraints, we estimate a lighting proxy map and iteratively refine the normal and lighting estimates. Through extensive experiments on simulated and real images, we demonstrate that our proposed computational imaging methods outperform traditional imaging approaches.
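The depth-from-angular-sampling idea mentioned in the abstract can be illustrated with a standard plane-sweep over candidate disparities: shear the sub-aperture views of the light field to each candidate disparity and keep, per pixel, the disparity with the lowest angular variance (highest photo-consistency). This is a minimal sketch of the general technique, not the dissertation's actual algorithm; the function name, the variance cost, and the nearest-neighbor shearing are illustrative assumptions.

```python
import numpy as np

def lf_depth_from_angular_variance(lf, disparities):
    """Per-pixel disparity from a 4D light field (hypothetical sketch).

    lf          : (U, V, H, W) grayscale sub-aperture views, center at (U//2, V//2).
    disparities : candidate pixel shifts per unit angular step.
    """
    U, V, H, W = lf.shape
    uc, vc = U // 2, V // 2
    ys, xs = np.mgrid[0:H, 0:W]
    cost = np.empty((len(disparities), H, W))
    for i, d in enumerate(disparities):
        views = np.empty((U * V, H, W))
        for u in range(U):
            for v in range(V):
                # Shear view (u, v) toward the center view by d per angular step
                # (nearest-neighbor sampling, clamped at the image border).
                sy = np.clip(ys + d * (u - uc), 0, H - 1)
                sx = np.clip(xs + d * (v - vc), 0, W - 1)
                views[u * V + v] = lf[u, v, sy, sx]
        # Low variance across views = photo-consistent = correct disparity.
        cost[i] = views.var(axis=0)
    return np.asarray(disparities)[np.argmin(cost, axis=0)]

# Usage on a synthetic light field whose true disparity is 1 pixel per step.
rng = np.random.default_rng(0)
base = rng.random((20, 20))
lf = np.array([[np.roll(base, (u - 1, v - 1), axis=(0, 1))
                for v in range(3)] for u in range(3)])
disp = lf_depth_from_angular_variance(lf, [0, 1, 2])
```

In the interior of the image (away from clamped borders), the recovered disparity map matches the true constant disparity of 1.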

    SAR (Synthetic Aperture Radar). Earth observing system. Volume 2F: Instrument panel report

    The scientific and engineering requirements for the Earth Observing System (EOS) imaging radar are provided. The radar is based on the Shuttle Imaging Radar-C (SIR-C) and would include three frequencies: 1.25 GHz, 5.3 GHz, and 9.6 GHz; selectable polarizations for both transmit and receive channels; and selectable incidence angles from 15 to 55 deg. There would be three main viewing modes: a local high-resolution mode with typically 25 m resolution and a 50 km swath width; a regional mapping mode with 100 m resolution and up to a 200 km swath width; and a global mapping mode with typically 500 m resolution and up to a 700 km swath width. The last mode allows global coverage in three days. The EOS SAR would be the first orbital imaging radar to provide multifrequency, multipolarization, multiple-incidence-angle observations of the entire Earth. Together with planned Canadian and Japanese radar satellites, it would enable continuous radar observation capability. Major applications in glaciology, hydrology, vegetation science, oceanography, geology, and data and information systems are described.
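The three-day global coverage figure for the 700 km swath mode can be sanity-checked with a back-of-envelope area calculation. The ground-track speed below is an assumed typical low-Earth-orbit value, not a number from the report:

```python
# Back-of-envelope coverage check (assumed values, not from the report).
EARTH_AREA_KM2 = 510e6      # total surface area of Earth
GROUND_SPEED_KM_S = 6.8     # typical LEO ground-track speed (assumption)
SWATH_KM = 700              # global mapping mode swath width (from the abstract)

coverage_rate = SWATH_KM * GROUND_SPEED_KM_S        # km^2 imaged per second
days = EARTH_AREA_KM2 / coverage_rate / 86400.0
print(f"{days:.1f} days of continuous imaging")
```

This yields roughly 1.2 days of idealized continuous imaging; swath overlap at high latitudes, orbit geometry, and instrument duty-cycle limits plausibly stretch that toward the quoted three days.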