10 research outputs found

    Image Quality Is Not All You Want: Task-Driven Lens Design for Image Classification

    In computer vision, it has long been taken for granted that high-quality images obtained through well-designed camera lenses lead to superior results. However, we find that this common perception is not a "one-size-fits-all" solution for diverse computer vision tasks. We demonstrate that task-driven, deep-learned simple optics can actually deliver better visual task performance. The task-driven lens design approach, which relies solely on a well-trained network model for supervision, is shown to be capable of designing lenses from scratch. Experimental results demonstrate that the designed image classification lens ("TaskLens") achieves higher accuracy than conventional imaging-driven lenses, even with fewer lens elements. Furthermore, we show that TaskLens is compatible with various network models while maintaining enhanced classification accuracy. We propose that TaskLens holds significant potential, particularly when physical dimensions and cost are severely constrained.
    Comment: Uses an image classification network to supervise lens design from scratch; the final designs achieve higher accuracy with fewer optical elements.
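
    The core idea above is an end-to-end, task-supervised optimization loop: a differentiable camera model renders the image a candidate lens would produce, a frozen classification network scores it, and the task loss is backpropagated into the optical parameters. Below is a minimal, hedged sketch of that loop in PyTorch; the single learnable blur width, the toy linear classifier, and the random data are stand-ins for the paper's ray-traced lens model, pre-trained network, and training set.

```python
import torch
import torch.nn.functional as F

def psf_from_width(width, size=11):
    """Hypothetical differentiable PSF model: an isotropic Gaussian whose
    width stands in for the real, ray-traced lens parameters."""
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    psf = torch.exp(-(xx ** 2 + yy ** 2) / (2 * width ** 2))
    return (psf / psf.sum()).view(1, 1, size, size)

lens_width = torch.tensor(3.0, requires_grad=True)        # "optical" design parameter
classifier = torch.nn.Sequential(                          # stand-in for the frozen, pre-trained network
    torch.nn.Flatten(), torch.nn.Linear(32 * 32, 10))
for p in classifier.parameters():
    p.requires_grad_(False)

images = torch.rand(8, 1, 32, 32)                          # placeholder scene batch
labels = torch.randint(0, 10, (8,))
opt = torch.optim.Adam([lens_width], lr=1e-2)

for _ in range(100):
    psf = psf_from_width(lens_width)
    sensor = F.conv2d(images, psf, padding=psf.shape[-1] // 2)  # differentiable imaging step
    loss = F.cross_entropy(classifier(sensor), labels)          # task loss supervises the optics
    opt.zero_grad()
    loss.backward()
    opt.step()
```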

    Computational Spectral Imaging: A Contemporary Overview

    Spectral imaging collects and processes information along spatial and spectral coordinates quantified in discrete voxels, which can be treated as a 3D spectral data cube. Spectral images (SIs) allow identifying objects, crops, and materials in a scene through their spectral behavior. Since most spectral optical systems can only employ 1D or at most 2D sensors, it is challenging to directly acquire the 3D information with available commercial sensors. As an alternative, computational spectral imaging (CSI) has emerged as a sensing tool in which the 3D data are obtained from 2D encoded projections, after which a computational recovery process retrieves the SI. CSI enables snapshot optical systems that reduce acquisition time and computational storage costs compared to conventional scanning systems. Recent advances in deep learning (DL) have allowed the design of data-driven CSI to improve SI reconstruction or even to perform high-level tasks such as classification, unmixing, or anomaly detection directly from the 2D encoded projections. This work summarises the advances in CSI, starting with SIs and their relevance, continuing with the most relevant compressive spectral optical systems, and then introducing CSI with DL and recent advances in combining physical optical design with computational DL algorithms to solve high-level tasks.
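
    One common instance of the "2D encoded projections" described above is coded-aperture snapshot spectral imaging (CASSI), where each spectral band of the cube is masked by a coded aperture and sheared by a dispersive element before being summed on the detector. The sketch below illustrates that forward model only; it is an assumption about one representative CSI architecture, not a description of any specific system from the overview.

```python
import numpy as np

def cassi_projection(cube, code):
    """cube: (H, W, L) spectral data cube; code: (H, W) binary coded aperture.
    Each band is masked by the code, sheared by one pixel per band, and summed
    on a 2D detector -- a single encoded snapshot of the 3D cube."""
    H, W, L = cube.shape
    detector = np.zeros((H, W + L - 1))
    for l in range(L):
        detector[:, l:l + W] += code * cube[:, :, l]   # code, then disperse band l
    return detector

cube = np.random.rand(64, 64, 8)                       # toy cube with 8 spectral bands
code = (np.random.rand(64, 64) > 0.5).astype(float)    # random binary coded aperture
y = cassi_projection(cube, code)                       # 2D measurement to be decoded computationally
```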

    Snapshot Multispectral Imaging Using a Diffractive Optical Network

    Multispectral imaging has been used for numerous applications in, e.g., environmental monitoring, aerospace, defense, and biomedicine. Here, we present a diffractive optical network-based multispectral imaging system trained using deep learning to create a virtual spectral filter array at the output image field-of-view. This diffractive multispectral imager performs spatially coherent imaging over a large spectrum and, at the same time, routes a pre-determined set of spectral channels onto an array of pixels at the output plane, converting a monochrome focal plane array or image sensor into a multispectral imaging device without any spectral filters or image recovery algorithms. Furthermore, the spectral responsivity of this diffractive multispectral imager is not sensitive to input polarization states. Through numerical simulations, we present different diffractive network designs that achieve snapshot multispectral imaging with 4, 9, and 16 unique spectral bands within the visible spectrum, based on passive, spatially structured diffractive surfaces, with a compact design that axially spans ~72 times the mean wavelength of the spectral band of interest. Moreover, we experimentally demonstrate a diffractive multispectral imager based on a 3D-printed diffractive network that creates at its output image plane a spatially repeating virtual spectral filter array with 2x2 = 4 unique bands in the terahertz spectrum. Due to their compact form factor and computation-free, power-efficient, and polarization-insensitive forward operation, diffractive multispectral imagers can be transformative for various imaging and sensing applications and be used at different parts of the electromagnetic spectrum where high-density and wide-area multispectral pixel arrays are not widely available.
    Comment: 24 pages, 9 figures
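
    A diffractive optical network of this kind is a cascade of free-space propagation segments and passive phase-modulating surfaces. The sketch below shows scalar angular-spectrum propagation through a few phase layers; the random masks, grid, and THz-scale numbers are illustrative placeholders, since the actual surfaces in the paper are obtained by deep-learning-based training to route spectral channels to sensor pixels.

```python
import numpy as np

def angular_spectrum(field, wavelength, dz, dx):
    """Scalar free-space propagation of a complex field over a distance dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = np.maximum(1.0 / wavelength ** 2 - FX ** 2 - FY ** 2, 0.0)  # drop evanescent components
    kz = 2 * np.pi * np.sqrt(arg)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

wavelength, dx, dz = 0.75e-3, 0.4e-3, 40e-3            # illustrative THz-scale numbers
field = np.ones((128, 128), dtype=complex)             # plane-wave input field
phase_layers = [np.random.rand(128, 128) * 2 * np.pi for _ in range(3)]  # stand-ins for trained surfaces
for phase in phase_layers:
    field = angular_spectrum(field, wavelength, dz, dx) * np.exp(1j * phase)
output_intensity = np.abs(angular_spectrum(field, wavelength, dz, dx)) ** 2  # what the sensor records
```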

    On design of hybrid diffractive optics for achromatic extended depth-of-field (EDoF) RGB imaging

    A hybrid imaging system is a simultaneous physical arrangement of a refractive lens and a multilevel phase mask (MPM) acting as a diffractive optical element (DOE). The favorable properties of the hybrid setup are improved extended-depth-of-field (EDoF) imaging and low chromatic aberrations. We built a fully differentiable image formation model in order to use neural network techniques to optimize imaging. In the first stage, the design framework relies on a model-based approach with numerical simulation and end-to-end joint optimization of both the MPM and the imaging algorithms. In the second stage, the MPM is fixed as found in the first stage, and the image processing is optimized experimentally using a CNN learning-based approach, with the MPM implemented by a spatial light modulator. The paper concentrates on a comparative analysis of imaging accuracy and quality for designs with varying basic optical parameters: aperture size, lens focal length, and distance between the MPM and the sensor. To our knowledge, this is the first time these parameters are varied within end-to-end optimization for EDoF. We numerically and experimentally compare the designs for the visible wavelength interval [400-700] nm and the following EDoF ranges: [0.5-100] m for simulations and [0.5-1.9] m for experimental tests. The study targets hybrid optics for compact cameras with apertures of [5-9] mm and MPM-to-sensor distances of [3-10] mm.
    Comment: 16 pages, 11 figures, 1 table
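
    The differentiable image formation model in such a hybrid design typically starts from the pupil function: the refractive lens and the MPM each contribute a phase term, and the point spread function (PSF) follows from the Fourier transform of the joint pupil. The sketch below shows that relationship with illustrative parameters and a random quantized mask standing in for the optimized MPM; it is not the paper's actual design.

```python
import numpy as np

# Illustrative sampling and optics parameters (not the paper's design values).
n, dx = 256, 5e-6                                  # pupil-plane samples and pitch
wavelength, focal = 550e-9, 10e-3                  # green light, 10 mm focal length
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
r2 = X ** 2 + Y ** 2

aperture = (np.sqrt(r2) <= 0.5e-3).astype(float)   # 1 mm aperture (the paper considers 5-9 mm)
phi_lens = -np.pi * r2 / (wavelength * focal)      # thin refractive-lens phase
levels = 4                                         # multilevel quantization of the MPM
phi_mpm = np.round(np.random.rand(n, n) * levels) / levels * 2 * np.pi  # stand-in for the optimized mask

pupil = aperture * np.exp(1j * (phi_lens + phi_mpm))
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
psf /= psf.sum()                                   # normalized PSF fed to the image formation model
```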

    Power-Balanced Hybrid Optics Boosted Design for Achromatic Extended-Depth-of-Field Imaging via Optimized Mixed OTF

    The power-balanced hybrid optical imaging system, introduced in this paper, is a special design of a diffractive computational camera with image formation by a refractive lens and a multilevel phase mask (MPM). This system provides a long focal depth with low chromatic aberrations thanks to the MPM, and a high light-energy concentration due to the refractive lens. We introduce the concept of optical power balance between the lens and the MPM, which controls the contribution of each element to the modulation of the incoming light. Additional unique features of our MPM design are the quantization of the MPM's shape over the number of levels and the Fresnel order (thickness) using a smoothing function. To optimize the optical power balance as well as the MPM, we build a fully differentiable image formation model for joint optimization of the optical and imaging parameters of the proposed camera using neural network techniques. Additionally, we optimize a single Wiener-like optical transfer function (OTF), invariant to depth, to reconstruct a sharp image. We numerically and experimentally compare the designed system with its counterparts, lensless and lens-only optical systems, for the visible wavelength interval (400-700) nm and the depth-of-field range 0.5 m to infinity for numerical tests and 0.5-2 m for experimental tests. The attained results demonstrate that the proposed system equipped with the optimal OTF outperforms its counterparts (even when they are used with an optimized OTF) in terms of reconstruction quality for off-focus distances. The simulation results also reveal that optimizing the optical power balance, the Fresnel order, and the number of levels is essential for system performance, attaining an improvement of up to 5 dB in PSNR with the optimized OTF compared with the counterpart lensless setup.
    Comment: 18 pages, 14 figures
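
    The depth-invariant, Wiener-like OTF mentioned above is applied in the Fourier domain to invert the blur introduced by the optics. A minimal sketch of that reconstruction step is given below, using the classic regularized Wiener form with an assumed regularization weight; the paper instead optimizes the filter jointly with the optics.

```python
import numpy as np

def wiener_restore(blurred, psf, reg=1e-3):
    """Restore a sharp image from a blurred observation; psf is assumed to have
    the same shape as the image and to be centered."""
    H = np.fft.fft2(np.fft.ifftshift(psf))                 # optical transfer function (OTF)
    W = np.conj(H) / (np.abs(H) ** 2 + reg)                # Wiener-like inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# usage (hypothetical inputs): restored = wiener_restore(sensor_image, depth_invariant_psf)
```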

    Compact Snapshot Hyperspectral Imaging with Diffracted Rotation

    Traditional snapshot hyperspectral imaging systems include various optical elements: a dispersive optical element (prism), a coded aperture, several relay lenses, and an imaging lens, resulting in an impractically large form factor. We seek an alternative, minimal form factor for snapshot spectral imaging based on recent advances in diffractive optical technology. We therefore present a compact, diffraction-based snapshot hyperspectral imaging method that uses only a novel diffractive optical element (DOE) in front of a conventional, bare image sensor. Our diffractive imaging method replaces the common optical elements of hyperspectral imaging with a single optical element. To this end, we tackle two main challenges. First, traditional diffractive lenses are not suitable for color imaging under incoherent illumination due to severe chromatic aberration, because the size of the point spread function (PSF) changes with wavelength. Leveraging this wavelength-dependent property for hyperspectral imaging instead, we introduce a novel DOE design that generates an anisotropic, spectrally varying PSF: the PSF size remains virtually unchanged, but its shape rotates as the wavelength of light changes. Second, since there is no dispersive element and no coded aperture mask, the ill-posedness of spectral reconstruction increases significantly. We therefore propose an end-to-end network solution based on the unrolled architecture of an optimization procedure with a spatial-spectral prior, specifically designed for deconvolution-based spectral reconstruction. Finally, we demonstrate hyperspectral imaging with a fabricated DOE attached to a conventional DSLR sensor. Results show that our method compares well with other state-of-the-art hyperspectral imaging methods in terms of spectral accuracy and spatial resolution, while our compact, diffraction-based spectral imaging method uses only a single optical element on a bare image sensor.
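
    An unrolled reconstruction network of the kind described above alternates data-fidelity updates against the measured image with a learned spatial-spectral prior step. The sketch below shows the plain unrolled (ISTA-style) structure with a soft-threshold in place of the trained prior network; the operator names A and At are hypothetical placeholders for the DOE's blur-and-project forward model and its adjoint.

```python
import numpy as np

def unrolled_reconstruction(y, A, At, K=8, step=0.5, tau=1e-3):
    """y: encoded 2D measurement; A, At: forward operator and its adjoint
    (hypothetical callables modeling the DOE blur and projection)."""
    x = At(y)                                              # back-projection as the initial estimate
    for _ in range(K):                                     # K unrolled iterations
        x = x - step * At(A(x) - y)                        # data-fidelity gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)  # prior step (soft-threshold stand-in
                                                           # for the trained spatial-spectral prior)
    return x
```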