
    InSPECtor: an end-to-end design framework for compressive pixelated hyperspectral instruments

    Classic designs of hyperspectral instrumentation densely sample the spatial and spectral information of the scene of interest; the data may be compressed only after acquisition. In this paper we introduce a framework for the design of an optimized, micro-patterned snapshot hyperspectral imager that acquires an optimized subset of the spatial and spectral information in the scene. The data are thereby compressed already at the sensor level, yet can be restored to the full hyperspectral data cube by the jointly optimized reconstructor. The framework is implemented in TensorFlow and uses its automatic differentiation to jointly optimize the layout of the micro-patterned filter array and the reconstructor. We explore the achievable compression ratio for different numbers of filter passbands, numbers of scanning frames, and filter layouts using data collected by the Hyperscout instrument. We show resulting instrument designs that take snapshot measurements without losing significant information while reducing the data volume, acquisition time, or detector space by a factor of 40 compared to classic, dense sampling. The joint optimization of a compressive hyperspectral imager design and the accompanying reconstructor provides an avenue to substantially reduce the data volume from hyperspectral imagers.
    Comment: 23 pages, 12 figures, published in Applied Optics
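The joint filter-layout/reconstructor optimization described above can be caricatured with a linear toy model. The paper relies on TensorFlow's automatic differentiation on a micro-patterned layout; the sketch below instead writes the gradients out by hand for a linear sensing matrix `A` (standing in for the filter layout) and a linear reconstructor `R`, with purely illustrative dimensions and random training spectra:

```python
import numpy as np

# Toy joint design of a compressive spectral sampler A and reconstructor R.
# All sizes and data are illustrative; the loss is the reconstruction MSE
# over a set of training spectra, minimized over A and R simultaneously.
rng = np.random.default_rng(0)
BANDS, MEAS, N = 8, 3, 200          # 8 spectral bands compressed to 3 measurements

X = rng.random((BANDS, N))                       # training spectra (columns)
A = rng.normal(scale=0.1, size=(MEAS, BANDS))    # sensing (filter) matrix
R = rng.normal(scale=0.1, size=(BANDS, MEAS))    # linear reconstructor

def loss(A, R):
    E = R @ A @ X - X
    return np.mean(E ** 2)

lr, history = 0.05, []
for _ in range(500):
    E = R @ A @ X - X                # residual, shape (BANDS, N)
    gR = 2 * E @ X.T @ A.T / E.size  # dL/dR of mean squared error
    gA = 2 * R.T @ E @ X.T / E.size  # dL/dA of mean squared error
    R -= lr * gR
    A -= lr * gA
    history.append(loss(A, R))
```

In the paper the sensing operator is constrained to a physically realizable filter layout and the reconstructor is far richer; the point here is only that both ends of the pipeline receive gradients from a single reconstruction loss.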

    Joint demosaicing and fusion of multiresolution coded acquisitions: A unified image formation and reconstruction method

    Novel optical imaging devices allow for hybrid acquisition modalities, such as compressed acquisitions with locally different spatial and spectral resolutions captured by a single focal plane array. In this work, we propose to model the capturing system of a multiresolution coded acquisition (MRCA) in a unified framework, which natively includes conventional systems such as those based on spectral/color filter arrays, compressed coded apertures, and multiresolution sensing. We also propose a model-based image reconstruction algorithm performing a joint demosaicing and fusion (JoDeFu) of any acquisition modeled in the MRCA framework. The JoDeFu reconstruction algorithm solves an inverse problem with a proximal splitting technique and is able to reconstruct an uncompressed image datacube at the highest available spatial and spectral resolution. An implementation of the code is available at https://github.com/danaroth83/jodefu.
    Comment: 15 pages, 7 figures; regular paper
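The proximal-splitting reconstruction can be illustrated with the simplest member of that algorithm family, ISTA (iterative soft thresholding), applied to a generic sparse recovery problem. The operator `A`, the sparsity prior, and all sizes below are assumptions for illustration, not JoDeFu's actual image-formation model:

```python
import numpy as np

# Minimal ISTA sketch: recover a sparse x from compressed y = A x by
# alternating a gradient step on the data-fidelity term with the proximal
# operator of the l1 prior (soft thresholding).
rng = np.random.default_rng(1)
n, m = 64, 32
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.normal(size=5)
y = A @ x_true

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)             # gradient of 0.5 * ||A x - y||^2
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
```

JoDeFu's inverse problem replaces `A` with the unified MRCA forward model and uses a more elaborate splitting, but the structure (data-fidelity gradient plus a proximal regularization step) is the same.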

    Universal Demosaicking of Color Filter Arrays

    A large number of color filter arrays (CFAs), periodic or aperiodic, have been proposed. To reconstruct images from all of these different CFAs and compare their imaging quality, a universal demosaicking method is needed. This paper proposes a new universal demosaicking method based on inter-pixel chrominance capture and optimal demosaicking transformation. It skips the commonly used step of estimating the luminance component at each pixel and thus avoids the associated estimation error. Instead, we directly use the acquired CFA color intensity at each pixel as an input component. Two independent chrominance components are estimated at each pixel based on the inter-pixel chrominance in the window, which is captured with the difference of CFA color values between the pixel of interest and its neighbors. Two mechanisms are employed for accurate estimation: distance-related and edge-sensing weighting to reflect the confidence levels of the inter-pixel chrominance components, and pseudoinverse-based estimation from the components in a window. Then, from the acquired CFA color component and the two estimated chrominance components, the three primary colors are reconstructed by a linear color transform, which is optimized for the least transform error. Our experiments show that the proposed method performs considerably better than other published universal demosaicking methods.
    Funding: National Key Basic Research Project of China (973 Program) [2015CB352303, 2011CB302400]; National Natural Science Foundation (NSF) of China [61071156, 61671027]
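The distance-weighted, pseudoinverse-based estimation step can be sketched in isolation: fit a local model to the samples in a window by weighted least squares and read off the value at the window centre. The 5x5 window, Gaussian distance weights, and planar signal model below are illustrative assumptions, not the paper's exact chrominance formulation:

```python
import numpy as np

# Weighted pseudoinverse estimation in a local window: nearer neighbours get
# higher confidence, and the centre value is estimated from a least-squares fit.
rng = np.random.default_rng(2)

# Synthetic 5x5 window of a single chrominance channel varying linearly.
yy, xx = np.mgrid[-2:3, -2:3]
chroma = 0.3 * xx + 0.1 * yy + 0.5 + 0.01 * rng.normal(size=(5, 5))

# Distance-related weights (the paper additionally uses edge-sensing weights).
w = np.exp(-(xx ** 2 + yy ** 2) / 4.0).ravel()

# Fit a local plane a*x + b*y + c by weighted least squares via the pseudoinverse.
Phi = np.column_stack([xx.ravel(), yy.ravel(), np.ones(25)])
W = np.diag(w)
coef = np.linalg.pinv(W @ Phi) @ (W @ chroma.ravel())
center_estimate = coef[2]   # plane value at the window centre (x = y = 0)
```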

    Snapshot Multispectral Imaging Using a Diffractive Optical Network

    Multispectral imaging has been used for numerous applications in, e.g., environmental monitoring, aerospace, defense, and biomedicine. Here, we present a diffractive optical network-based multispectral imaging system trained using deep learning to create a virtual spectral filter array at the output image field-of-view. This diffractive multispectral imager performs spatially-coherent imaging over a large spectrum and, at the same time, routes a pre-determined set of spectral channels onto an array of pixels at the output plane, converting a monochrome focal plane array or image sensor into a multispectral imaging device without any spectral filters or image recovery algorithms. Furthermore, the spectral responsivity of this diffractive multispectral imager is not sensitive to input polarization states. Through numerical simulations, we present different diffractive network designs that achieve snapshot multispectral imaging with 4, 9, and 16 unique spectral bands within the visible spectrum, based on passive spatially-structured diffractive surfaces, with a compact design that axially spans ~72 times the mean wavelength of the spectral band of interest. Moreover, we experimentally demonstrate a diffractive multispectral imager based on a 3D-printed diffractive network that creates at its output image plane a spatially-repeating virtual spectral filter array with 2x2=4 unique bands in the terahertz spectrum. Due to their compact form factor and computation-free, power-efficient, and polarization-insensitive forward operation, diffractive multispectral imagers can be transformative for various imaging and sensing applications and can be used in different parts of the electromagnetic spectrum where high-density and wide-area multispectral pixel arrays are not widely available.
    Comment: 24 pages, 9 figures
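The "virtual spectral filter array" output geometry can be illustrated with a toy de-interleaving step: if the diffractive network routed each band perfectly onto its position in a repeating 2x2 pixel pattern, the monochrome sensor frame would separate into the band images by simple subsampling. The sizes and the routing pattern below are illustrative assumptions (the real device's routing is approximate and learned, not an exact permutation):

```python
import numpy as np

# Ideal 2x2 virtual spectral filter array: band k lands on the sensor pixels
# at 2x2 offset (k // 2, k % 2), so the monochrome frame interleaves 4 bands.
rng = np.random.default_rng(3)
H, W = 8, 8                               # monochrome sensor size (multiple of 2)
bands = rng.random((4, H // 2, W // 2))   # 4 spectral-channel images

sensor = np.zeros((H, W))
for k in range(4):
    sensor[k // 2::2, k % 2::2] = bands[k]

# De-interleaving the virtual filter array recovers the band images.
recovered = np.stack([sensor[k // 2::2, k % 2::2] for k in range(4)])
```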

    Color Image Reconstruction via Sparse Signal Representation

    In a digital camera, each photosensitive unit measures only one of the three color components required to represent a digital color image. The operation of reconstructing the missing components is known as demosaicing. This thesis studies and implements a demosaicing algorithm recently proposed in the literature, based on the sparse representation, through a dictionary, of the natural images acquired by the camera.

    Pansharpening of images acquired with color filter arrays

    In remote sensing, a common scenario involves the simultaneous acquisition of a panchromatic (PAN) image, a broad-band image of high spatial resolution, and a multispectral (MS) image, which is composed of several spectral bands but has lower spatial resolution. The two sensors mounted on the same platform can be found in several very high spatial resolution optical remote sensing satellites for Earth observation (e.g., QuickBird, WorldView, and SPOT). In this work we investigate an alternative acquisition strategy, which combines the information from both images into a single-band image with the same number of pixels as the PAN. This operation significantly reduces the burden of data downlink by achieving a fixed compression ratio of 1/(1 + b/ρ²) compared to the conventional acquisition modes, where b and ρ denote the number of distinct bands in the MS image and the scale ratio between the PAN and MS, respectively (e.g., b = ρ = 4 in many commercial high spatial resolution satellites). Many strategies can be conceived to generate such a compressed image from a given set of PAN and MS sources. A simple option, presented here, is based on an application of color filter array (CFA) theory. Specifically, the value of each pixel in the spatial support of the synthetic image is taken from the corresponding sample either in the PAN or in a given band of the MS upsampled to the size of the PAN. The choice is deterministic and made according to a custom mask. Several works in the literature propose methods to construct masks that preserve as much spectral content as possible for conventional RGB images. However, those results are not directly applicable to the case at hand, since it deals with i) images with different spatial resolutions, ii) potentially more than three spectral bands, and iii) in general, different radiometric dynamics across bands.
    A tentative approach to address these issues is presented in this work. The compressed image resulting from the proposed acquisition strategy is processed to generate an image featuring both the spatial resolution of the PAN and the spectral bands of the MS. This final product allows a direct comparison with the result of any standard pansharpening algorithm; the latter refers to a specific instance of data fusion (i.e., fusion of a PAN and an MS image), which differs from our scenario since both sources are separately taken as input. In our setting, the fusion step performed at the ground segment jointly involves a fusion and a reconstruction problem (the latter known as demosaicing in the CFA literature). We propose to address this problem with a variational approach. We present preliminary results for the proposed scheme on real remotely sensed images, tested on two datasets acquired by the QuickBird and GeoEye-1 platforms, which show superior performance compared to applying a basic radiometric compression algorithm to both sources before performing a pansharpening protocol. The validation of the final products in both scenarios makes it possible to visually and numerically appreciate the tradeoff between the compression of the source data and the quality loss suffered.
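A quick check of the quoted compression ratio, together with a toy mask-based synthesis of the combined single-band image. The checkerboard-style mask pattern below is an arbitrary placeholder, not the mask design studied in the work:

```python
import numpy as np

# Compression ratio: the combined image has as many pixels as the PAN, while
# the conventional downlink carries PAN (N pixels) plus MS (b * N / rho^2).
b, rho = 4, 4                  # MS bands and PAN/MS scale ratio from the abstract
ratio = 1 / (1 + b / rho**2)   # combined volume / (PAN + MS) volume

# Toy mask-based synthesis: each output pixel takes its value from the PAN or
# from one MS band upsampled to the PAN grid, according to a deterministic mask.
rng = np.random.default_rng(4)
H = W = 8
pan = rng.random((H, W))
ms_up = rng.random((b, H, W))            # MS already upsampled to the PAN grid

mask = np.zeros((H, W), dtype=int)       # 0 = take PAN, k in 1..b = take MS band k
mask[::2, ::2] = 1 + (np.arange(H // 2)[:, None] + np.arange(W // 2)) % b
combined = np.where(mask == 0, pan, 0.0)
for k in range(1, b + 1):
    combined = np.where(mask == k, ms_up[k - 1], combined)
```

With b = ρ = 4 the ratio evaluates to 1/(1 + 4/16) = 0.8, i.e., the combined acquisition carries 80% of the conventional data volume.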