Simultaneous real-time visible and infrared video with single-pixel detectors
Conventional cameras rely upon a pixelated sensor to provide spatial resolution. An alternative approach replaces the sensor with a pixelated transmission mask encoded with a series of binary patterns. Combining knowledge of the series of patterns and the associated filtered intensities, measured by single-pixel detectors, allows an image to be deduced through data inversion. In this work we extend the concept of a "single-pixel camera" to provide continuous real-time video at 10 Hz, simultaneously in the visible and short-wave infrared, using an efficient computer algorithm. We demonstrate our camera for imaging through smoke, through a tinted screen, whilst performing compressive sampling and recovering high-resolution detail by arbitrarily controlling the pixel-binning of the masks. We anticipate real-time single-pixel video cameras to have considerable importance where pixelated sensors are limited, allowing for low-cost, non-visible imaging systems in applications such as night-vision, gas sensing and medical diagnostics.
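The measurement-and-inversion principle described above can be sketched numerically. A minimal simulation, assuming orthogonal Hadamard masks: each pattern yields one detector intensity, and because a full Hadamard set is orthogonal, inversion reduces to a scaled transpose. (Physical masks are binary 0/1 rather than ±1; in practice the ±1 values are obtained by differencing measurements from a pattern and its complement.)

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_measure(scene, patterns):
    """Simulate the detector: one total intensity per displayed pattern."""
    return patterns @ scene.ravel()

def reconstruct(measurements, patterns, shape):
    """For orthogonal Hadamard patterns, H @ H.T = n * I, so the
    inverse is simply a scaled transpose."""
    n = patterns.shape[0]
    return (patterns.T @ measurements / n).reshape(shape)

shape = (8, 8)                       # 64-pixel scene, 64 masks
H = hadamard(64)
scene = np.random.rand(*shape)
y = single_pixel_measure(scene, H)   # 64 scalar detector readings
recovered = reconstruct(y, H, shape)
print(np.allclose(recovered, scene))  # exact recovery with a full pattern set
```

Compressive sampling, as used in the paper, corresponds to displaying only a subset of the patterns and solving the resulting underdetermined inversion with a sparsity prior.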
Computational Cameras: Approaches, Benefits and Limits
A computational camera uses a combination of optics and software to produce images that cannot be taken with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras have been demonstrated, some designed to achieve new imaging functionalities and others to reduce the complexity of traditional imaging. In this article, we describe how computational cameras have evolved and present a taxonomy for the technical approaches they use. We explore the benefits and limits of computational imaging, and describe how it is related to the adjacent and overlapping fields of digital imaging, computational photography and computational image sensors.
A review of snapshot multidimensional optical imaging: Measuring photon tags in parallel
Multidimensional optical imaging has seen remarkable growth in the past decade. Rather than measuring only the two-dimensional spatial distribution of light, as in conventional photography, multidimensional optical imaging captures light in up to nine dimensions, providing unprecedented information about incident photons' spatial coordinates, emittance angles, wavelength, time, and polarization. Multidimensional optical imaging can be accomplished either by scanning or parallel acquisition. Compared with scanning-based imagers, parallel acquisition, also dubbed snapshot imaging, has a prominent advantage in maximizing optical throughput, particularly when measuring a datacube of high dimensions. Here, we first categorize snapshot multidimensional imagers based on their acquisition and image reconstruction strategies, then highlight the snapshot advantage in the context of optical throughput, and finally we discuss their state-of-the-art implementations and applications.
Aperture Diffraction for Compact Snapshot Spectral Imaging
We demonstrate a compact, cost-effective snapshot spectral imaging system, the Aperture Diffraction Imaging Spectrometer (ADIS), which consists only of an imaging lens with an ultra-thin orthogonal aperture mask and a mosaic filter sensor, requiring no additional physical footprint compared to common RGB cameras. We then introduce a new optical design in which each point in object space is multiplexed to discrete encoding locations on the mosaic filter sensor by diffraction-based spatial-spectral projection engineering generated by the orthogonal mask. The orthogonal projection is uniformly accepted to obtain a weakly calibration-dependent data form, enhancing modulation robustness. Meanwhile, the Cascade Shift-Shuffle Spectral Transformer (CSST), with strong perception of the diffraction degeneration, is designed to solve a sparsity-constrained inverse problem, realizing volume reconstruction from 2D measurements with a large amount of aliasing. Our system is evaluated by elaborating the imaging optical theory and reconstruction algorithm, and by demonstrating experimental imaging under a single exposure. Ultimately, we achieve sub-super-pixel spatial resolution and high-spectral-resolution imaging. The code will be available at: https://github.com/Krito-ex/CSST.
Comment: accepted by International Conference on Computer Vision (ICCV) 202
Compact single-shot hyperspectral imaging using a prism
We present a novel, compact single-shot hyperspectral imaging method. It enables capturing hyperspectral images using a conventional DSLR camera equipped with just an ordinary refractive prism in front of the camera lens. Our computational imaging method reconstructs the full spectral information of a scene from dispersion over edges. Our setup requires no coded aperture mask, no slit, and no collimating optics, which are necessary for traditional hyperspectral imaging systems. It is thus very cost-effective, while still highly accurate. We tackle two main problems: First, since we do not rely on collimation, the sensor records a projection of the dispersion information, distorted by perspective. Second, available spectral cues are sparse, present only around object edges. We formulate an image formation model that can predict the perspective projection of dispersion, and a reconstruction method that can estimate the full spectral information of a scene from sparse dispersion information. Our results show that our method compares well with other state-of-the-art hyperspectral imaging systems, both in terms of spectral accuracy and spatial resolution, while being orders of magnitude cheaper than commercial imaging systems.
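The core of such an image formation model can be illustrated with a heavily simplified forward simulation: each spectral band is shifted laterally by a wavelength-dependent dispersion before the bands sum on the sensor, so spectral structure appears only where bands differ, i.e. at edges. (This sketch assumes a linear, perspective-free dispersion; the paper's actual model additionally handles the perspective distortion described above.)

```python
import numpy as np

def disperse(cube, shift_per_band=1):
    """Sum spectral bands onto a monochrome sensor, shifting each band
    horizontally by a wavelength-dependent amount (simplified linear model)."""
    h, w, bands = cube.shape
    sensor = np.zeros((h, w + shift_per_band * (bands - 1)))
    for b in range(bands):
        s = b * shift_per_band        # dispersion grows with band index
        sensor[:, s:s + w] += cube[:, :, b]
    return sensor

# A 4x4 scene with 3 spectral bands; band 0 has energy in column 1.
cube = np.zeros((4, 4, 3))
cube[:, 1, 0] = 1.0
sensor = disperse(cube)
print(sensor.shape)                   # wider than the scene: (4, 6)
```

Inverting this model, i.e. unmixing the overlapped shifted bands from a single sensor image, is what the reconstruction method solves, using edge dispersion as the sparse spectral cue.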
Multispectral iris recognition analysis: Techniques and evaluation
This thesis explores the benefits of using multispectral iris information acquired using a narrow-band multispectral imaging system. Commercial iris recognition systems typically sense the iridal reflection pertaining to the near-infrared (NIR) range of the electromagnetic spectrum. While near-infrared imaging does give a very reasonable image of the iris texture, it only exploits a narrow band of spectral information. By incorporating other wavelength ranges (infrared, red, green, blue) in iris recognition systems, the reflectance and absorbance properties of the iris tissue can be exploited to enhance recognition performance. Furthermore, the impact of eye color on iris matching performance can be determined. In this work, a multispectral iris image acquisition system was assembled in order to procure data from human subjects. Multispectral images pertaining to 70 different eyes (35 subjects) were acquired using this setup. Three different iris localization algorithms were developed in order to isolate the iris information from the acquired images. While the first technique relied on the evidence presented by a single spectral channel (viz., near-infrared), the other two techniques exploited the information represented in multiple channels. Experimental results confirm the benefits of utilizing multiple channel information for iris segmentation. Next, an image enhancement technique using the CIE L*a*b* histogram equalization method was designed to improve the quality of the multispectral images. Further, a novel encoding method based on normalized pixel intensities was developed to represent the segmented iris images. The proposed encoding algorithm, when used in conjunction with the traditional texture-based scheme, was observed to result in very good matching performance. The work also explored the matching interoperability of iris images across multiple channels.
This thesis clearly asserts the benefits of multispectral iris processing, and provides a foundation for further research on this topic.
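The general shape of intensity-based iris coding and matching can be sketched as follows. This is an illustrative stand-in, not the thesis's exact encoding algorithm: normalize the pixel intensities of an unwrapped iris strip, binarize each pixel against its row mean, and compare two codes by their fractional Hamming distance, the standard iris dissimilarity measure.

```python
import numpy as np

def intensity_code(strip):
    """Encode an unwrapped iris strip as a binary code: 1 where a pixel
    exceeds its row mean after intensity normalization, else 0."""
    norm = (strip - strip.min()) / (np.ptp(strip) + 1e-9)  # normalize to [0, 1]
    return (norm > norm.mean(axis=1, keepdims=True)).astype(np.uint8)

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two iris codes."""
    return np.mean(code_a != code_b)

rng = np.random.default_rng(1)
strip = rng.random((16, 64))          # stand-in for a segmented, unwrapped iris
same = hamming_distance(intensity_code(strip), intensity_code(strip))
diff = hamming_distance(intensity_code(strip), intensity_code(rng.random((16, 64))))
print(same, diff)                     # identical strips agree; unrelated ones disagree on ~half the bits
```

In a multispectral setting, one such code per spectral channel could be computed and the distances fused, which is the kind of cross-channel interoperability the thesis examines.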