    Single-shot layered reflectance separation using a polarized light field camera

    We present a novel computational photography technique for single-shot separation of diffuse and specular reflectance, as well as novel angular-domain separation of layered reflectance. Our solution consists of a two-way polarized light field (TPLF) camera which simultaneously captures two orthogonal states of polarization. A single photograph of a subject acquired with the TPLF camera under polarized illumination then enables standard separation of diffuse (depolarizing) and polarization-preserving specular reflectance using light field sampling. We further demonstrate that the acquired data also enables novel angular separation of layered reflectance, including separation of specular reflectance and single scattering in the polarization-preserving component, and separation of shallow scattering from deep scattering in the depolarizing component. We apply our approach to efficient acquisition of facial reflectance, including diffuse and specular normal maps, and novel separation of photometric normals into layered reflectance normals for layered facial renderings. We demonstrate that our single-shot layered reflectance separation is comparable to an existing multi-shot technique that relies on structured lighting, while achieving separation results under a variety of illumination conditions.
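    The standard diffuse/specular separation step can be illustrated with polarization difference imaging (a simplified sketch of the general principle, not the paper's light-field sampling pipeline; the image values are illustrative):

```python
import numpy as np

def separate_reflectance(parallel, cross):
    """Polarization difference imaging: under polarized illumination,
    the cross-polarized image holds half of the depolarized diffuse
    reflectance, while the parallel-polarized image additionally holds
    the polarization-preserving specular reflectance."""
    diffuse = 2.0 * cross             # depolarized light splits evenly
    specular = parallel - cross       # polarization-preserving residual
    return diffuse, np.clip(specular, 0.0, None)

# toy example: a purely diffuse surface gives equal parallel/cross images
par = np.full((4, 4), 0.5)
crs = np.full((4, 4), 0.5)
d, s = separate_reflectance(par, crs)
```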

    Combining transverse field detectors and color filter arrays to improve multispectral imaging systems

    This work focuses on the improvement of a multispectral imaging sensor based on transverse field detectors (TFDs). We aimed to achieve higher color and spectral accuracy in the estimation of spectral reflectances from sensor responses. This improvement was achieved by combining these recently developed silicon-based sensors with color filter arrays (CFAs). Consequently, we sacrificed the filter-less full spatial resolution property of TFDs to narrow down the spectrally broad sensitivities of these sensors. We designed and performed several experiments to test the influence of different design features on the estimation quality (type of sensor, tunability, interleaved polarization, use of CFAs, type of CFAs, number of shots), some of which are exclusive to TFDs. We compared systems that use a TFD with systems that use normal monochrome sensors, both combined with multispectral CFAs as well as the common RGB filters present in commercial digital color cameras. Results showed that a system that combines TFDs and CFAs performs better than systems with the same type of multispectral CFA and other sensors, or even the same TFDs combined with different kinds of filters used in common imaging systems. We propose CFA+TFD-based systems with one or two shots, depending on whether longer capturing times are possible. Improved TFD systems thus emerge as an interesting possibility for multispectral acquisition, which overcomes the limited accuracy found in previous studies.
    Spanish Ministry of Economy and Competitiveness, research project DPI2011-2320
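    The core estimation task, recovering a full spectral reflectance from a handful of broad sensor responses, can be sketched with a least-squares recovery matrix learned from training spectra (all sensitivities and spectra below are random stand-ins, not actual TFD responses):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 31 spectral bands, 6 broad sensor channels
# (standing in for TFD depth-dependent responses combined with a CFA).
n_bands, n_channels, n_train = 31, 6, 200
sensitivities = rng.random((n_channels, n_bands))   # assumed sensor curves
train_refl = rng.random((n_train, n_bands))         # training reflectances

responses = train_refl @ sensitivities.T            # simulated captures

# Least-squares recovery matrix mapping responses back to spectra
W, *_ = np.linalg.lstsq(responses, train_refl, rcond=None)

test_refl = rng.random((10, n_bands))
estimate = (test_refl @ sensitivities.T) @ W        # estimated spectra
```

The quality of such an estimate depends on how well the training set spans real-world reflectances, which is why the choice of filters and sensor sensitivities matters.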

    Light field super resolution through controlled micro-shifts of light field sensor

    Light field cameras enable new capabilities, such as post-capture refocusing and aperture control, by capturing the directional and spatial distribution of light rays in space. Micro-lens array based light field camera designs are often preferred due to their light transmission efficiency, cost-effectiveness and compactness. One drawback of micro-lens array based light field cameras is low spatial resolution, due to the fact that a single sensor is shared to capture both spatial and angular information. To address the low spatial resolution issue, we present a light field imaging approach where multiple light fields are captured and fused to improve the spatial resolution. For each capture, the light field sensor is shifted by a pre-determined fraction of a micro-lens size using an XY translation stage for optimal performance.
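    The fusion idea can be sketched for the idealized case of four captures shifted by half a pixel each (a real light field pipeline must also handle the angular dimension and registration error):

```python
import numpy as np

def fuse_shifted_captures(captures):
    """Interleave four half-pixel-shifted captures into one image of
    twice the spatial resolution in each dimension."""
    (c00, c01), (c10, c11) = captures
    h, w = c00.shape
    fused = np.empty((2 * h, 2 * w), dtype=c00.dtype)
    fused[0::2, 0::2] = c00   # no shift
    fused[0::2, 1::2] = c01   # half-pixel shift in x
    fused[1::2, 0::2] = c10   # half-pixel shift in y
    fused[1::2, 1::2] = c11   # diagonal half-pixel shift
    return fused

# toy captures: each is constant, so the fused image interleaves them
caps = [[np.full((2, 2), i * 2 + j) for j in range(2)] for i in range(2)]
hi_res = fuse_shifted_captures(caps)
```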

    Neural Spectro-polarimetric Fields

    Modeling the spatial radiance distribution of light rays in a scene has been extensively explored for applications including view synthesis. Spectrum and polarization, the wave properties of light, are often neglected due to their integration into three RGB spectral bands and their non-perceptibility to human vision. Despite this, these properties encompass substantial material and geometric information about a scene. In this work, we propose to model spectro-polarimetric fields: the spatial Stokes-vector distribution of any light ray at an arbitrary wavelength. We present Neural Spectro-polarimetric Fields (NeSpoF), a neural representation that models the physically-valid Stokes vector at given continuous variables of position, direction, and wavelength. NeSpoF manages inherently noisy raw measurements, showcases memory efficiency, and preserves physically vital signals, factors that are crucial for representing the high-dimensional signal of a spectro-polarimetric field. To validate NeSpoF, we introduce the first multi-view hyperspectral-polarimetric image dataset, comprising both synthetic and real-world scenes. These were captured using our compact hyperspectral-polarimetric imaging system, which has been calibrated for robustness against system imperfections. We demonstrate the capabilities of NeSpoF on diverse scenes.
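    One constraint such a physically-valid representation must respect is that the polarized power of a Stokes vector can never exceed its total intensity. A minimal sketch of that validity constraint as a projection (the paper's actual parametrization may differ):

```python
import numpy as np

def project_to_valid_stokes(s):
    """Clamp a Stokes vector [S0, S1, S2, S3] to the physically valid
    cone, where the degree of polarization
    sqrt(S1^2 + S2^2 + S3^2) / S0 cannot exceed 1."""
    s = np.asarray(s, dtype=float)
    s0 = max(s[0], 0.0)              # intensity is non-negative
    pol = np.linalg.norm(s[1:])      # polarized power
    if pol > s0 > 0.0:
        # rescale the polarized components onto the cone boundary
        s = np.concatenate(([s0], s[1:] * (s0 / pol)))
    elif s0 == 0.0:
        s = np.zeros(4)
    return s

# an over-polarized vector is projected to degree of polarization 1
valid = project_to_valid_stokes([1.0, 2.0, 0.0, 0.0])
```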

    A review of dielectric optical metasurfaces for wavefront control

    During the past few years, metasurfaces have been used to demonstrate optical elements and systems with capabilities that surpass those of conventional diffractive optics. Here, we review some of these recent developments, with a focus on dielectric structures for shaping optical wavefronts. We discuss the mechanisms for achieving steep phase gradients with high efficiency, simultaneous polarization and phase control, controlling the chromatic dispersion, and controlling the angular response. Then, we review applications in imaging, conformal optics, tunable devices, and optical systems. We conclude with an outlook on future potential and the challenges that need to be overcome.

    A Vignetting Model for Light Field Cameras with an Application to Light Field Microscopy

    In standard photography, vignetting is considered mainly as a radiometric effect because it results in a darkening of the edges of the captured image. In this paper, we demonstrate that for light field cameras, vignetting is more than just a radiometric effect. It modifies the properties of the acquired light field and renders most of the calibration procedures from the literature inadequate. We address the problem by describing a model- and camera-agnostic method to evaluate vignetting in phase space. This enables the synthesis of vignetted pixel values that, applied to a range of pixels, yield images corresponding to the white images that are customarily recorded for calibrating light field cameras. We show that the commonly assumed reference points for microlens-based systems are incorrect approximations to the true optical reference, i.e., the image of the center of the exit pupil. We introduce a novel calibration procedure to determine this optically correct reference point from experimental white images. We describe the changes vignetting imposes on the light field sampling patterns and, therefore, the optical properties of the corresponding virtual cameras using the ECA model [1], and apply these insights to a custom-built light field microscope.
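    The ray-clipping origin of vignetting can be sketched with a Monte-Carlo estimate of the fraction of exit-pupil rays that also pass a second, displaced circular stop (radii and names are illustrative; this is not the paper's ECA model):

```python
import numpy as np

def vignetting_factor(offset, pupil_r=1.0, stop_r=1.0, n=200_000, seed=1):
    """Fraction of rays sampled uniformly over the exit-pupil disc that
    also pass a circular stop displaced laterally by `offset`."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-pupil_r, pupil_r, size=(n, 2))
    pts = pts[np.sum(pts**2, axis=1) <= pupil_r**2]   # keep pupil rays
    in_stop = (pts[:, 0] - offset) ** 2 + pts[:, 1] ** 2 <= stop_r**2
    return in_stop.mean()

center = vignetting_factor(0.0)   # on-axis pixel: stops coincide
edge = vignetting_factor(1.5)     # off-axis pixel: partially clipped
```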

    Polarized 3D: High-Quality Depth Sensing with Polarization Cues

    Coarse depth maps can be enhanced by using the shape information from polarization cues. We propose a framework to combine surface normals from polarization (hereafter polarization normals) with an aligned depth map. Polarization normals have not been used for depth enhancement before, because polarization normals suffer from physics-based artifacts such as azimuthal ambiguity, refractive distortion and fronto-parallel signal degradation. We propose a framework to overcome these key challenges, allowing the benefits of polarization to be used to enhance depth maps. Our results demonstrate improvement with respect to state-of-the-art 3D reconstruction techniques.
    Charles Stark Draper Laboratory (Doctoral Fellowship); Singapore Ministry of Education (Academic Research Foundation MOE2013-T2-1-159); Singapore National Research Foundation (Singapore University of Technology and Design
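    One of the named artifacts, azimuthal ambiguity, can be sketched directly: the surface azimuth recovered from polarization is only known up to pi, and a coarse depth normal can select the correct branch (a minimal sketch of the disambiguation idea only, not the full framework):

```python
import numpy as np

def disambiguate_azimuth(pol_azimuth, depth_azimuth):
    """Resolve the pi-ambiguity of a polarization azimuth angle by
    picking the candidate circularly closest to the azimuth of the
    coarse depth normal. Angles are in radians."""
    candidates = np.array([pol_azimuth, pol_azimuth + np.pi])
    # circular distance via the wrapped phase of a unit complex number
    diff = np.angle(np.exp(1j * (candidates - depth_azimuth)))
    return candidates[np.argmin(np.abs(diff))] % (2 * np.pi)

# the depth cue points to the opposite side, so the pi-flipped branch wins
az = disambiguate_azimuth(0.2, 3.0)
```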

    High-quality hyperspectral reconstruction using a spectral prior

    We present a novel hyperspectral image reconstruction algorithm, which overcomes the long-standing tradeoff between spectral accuracy and spatial resolution in existing compressive imaging approaches. Our method consists of two steps: First, we learn nonlinear spectral representations from real-world hyperspectral datasets; for this, we build a convolutional autoencoder, which allows reconstructing its own input through its encoder and decoder networks. Second, we introduce a novel optimization method, which jointly regularizes the fidelity of the learned nonlinear spectral representations and the sparsity of gradients in the spatial domain, by means of our new fidelity prior. Our technique can be applied to any existing compressive imaging architecture, and has been thoroughly tested both in simulation and by building a prototype hyperspectral imaging system. It outperforms the state-of-the-art methods from each architecture, both in terms of spectral accuracy and spatial resolution, while its computational complexity is reduced by two orders of magnitude with respect to sparse coding techniques. Moreover, we present two additional applications of our method: hyperspectral interpolation and demosaicing. Lastly, we have created a new high-resolution hyperspectral dataset containing sharper images of more spectral variety than existing ones, available through our project website.
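    The role of a learned spectral prior in compressive reconstruction can be sketched with its simplest linear analogue, a PCA basis standing in for the paper's convolutional autoencoder (all data below are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in spectral prior: a low-dimensional basis fitted to "training"
# spectra, playing the role of a learned encoder/decoder pair.
n_bands, n_train, n_code = 31, 300, 8
train = rng.random((n_train, n_bands))
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:n_code]                        # (n_code, n_bands)

phi = rng.random((10, n_bands))            # compressive sensing matrix
truth = mean + basis.T @ rng.random(n_code)  # spectrum obeying the prior
y = phi @ truth                            # coded measurement

# Reconstruct by solving for the low-dimensional code that explains the
# measurement, i.e. enforcing fidelity to the learned representation.
A = phi @ basis.T
code, *_ = np.linalg.lstsq(A, y - phi @ mean, rcond=None)
recon = mean + basis.T @ code
```

Because the prior shrinks the unknowns from 31 band values to 8 code values, far fewer measurements suffice; the paper's nonlinear autoencoder plays the same role with a far more expressive representation.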

    High sensitivity active flat optics optical phased array receiver with a two-dimensional aperture

    Optical phased arrays (OPAs) on integrated photonic platforms provide a low-cost chip-scale solution for many applications. Despite the numerous demonstrations of OPA transmitters, the realization of a functional OPA receiver presents a challenge due to the low received signal level in the presence of noise and interference, which necessitates high receiver sensitivity. In this paper, an integrated receiver system is presented that is capable of on-chip adaptive manipulation and processing of the captured waveform. The receiver includes an optoelectronic mixer that down-converts optical signals to radio frequencies while maintaining their phase and amplitude information. The optoelectronic mixer also provides conversion gain that enhances the system sensitivity and its robustness to noise and interference. Using this system, the first OPA receiver with a two-dimensional aperture of 8-by-8 receiving elements is demonstrated, which can selectively receive light from 64 different angles. The OPA receiver can form reception beams with a beamwidth of 0.75° over an 8° grating-lobe-free field of view.
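    Selective reception from a chosen angle follows textbook phased-array beam steering: a linear phase gradient across the aperture makes contributions from that direction add coherently. A sketch for an 8-by-8 array (the element pitch of half a wavelength is an assumed value, not taken from the paper):

```python
import numpy as np

def steering_phases(theta_x, theta_y, n=8, pitch_over_lambda=0.5):
    """Per-element phase shifts pointing an n-by-n array at the angles
    (theta_x, theta_y), in radians, for a given pitch/wavelength ratio."""
    k_d = 2 * np.pi * pitch_over_lambda
    m = np.arange(n)
    return -k_d * (np.sin(theta_x) * m[:, None] + np.sin(theta_y) * m[None, :])

def array_gain(phases, theta_x, theta_y, pitch_over_lambda=0.5):
    """Normalized coherent sum over all elements for a plane wave
    arriving from (theta_x, theta_y)."""
    n = phases.shape[0]
    k_d = 2 * np.pi * pitch_over_lambda
    m = np.arange(n)
    incident = k_d * (np.sin(theta_x) * m[:, None] + np.sin(theta_y) * m[None, :])
    return abs(np.exp(1j * (incident + phases)).sum()) / n**2

ph = steering_phases(0.05, -0.02)
on_target = array_gain(ph, 0.05, -0.02)   # steered direction: coherent
off_target = array_gain(ph, 0.0, 0.0)     # other direction: reduced
```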