
    A Compressive Multi-Mode Superresolution Display

    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high-resolution image.

    Comment: Technical report
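The abstract above does not specify the splitting algorithm, so the following is only a hypothetical 1D sketch of the general idea: find two half-resolution, physically realizable (nonnegative, clamped) frames that, when shown time-multiplexed with a half-pixel offset and averaged by the eye, approximate a high-resolution target. The function name `split_superres_1d`, the rendering model, and the plain gradient-descent solver are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def split_superres_1d(target, iters=300, lr=0.5):
    """Toy splitting solver: compute two half-resolution frames a, b whose
    temporally averaged, half-pixel-shifted presentation approximates the
    high-resolution target (length 2n, values in [0, 1])."""
    n = len(target) // 2
    a = np.full(n, target.mean())
    b = np.full(n, target.mean())

    def render(a, b):
        ua = np.repeat(a, 2)               # frame A covers samples 2i, 2i+1
        ub = np.roll(np.repeat(b, 2), 1)   # frame B is offset by half a pixel
        return 0.5 * (ua + ub)             # the eye averages the two frames

    for _ in range(iters):
        r = render(a, b) - target
        # adjoint of the upsample(+shift) operator gives the gradients
        ga = 0.5 * r.reshape(n, 2).sum(axis=1)
        gb = 0.5 * np.roll(r, -1).reshape(n, 2).sum(axis=1)
        a = np.clip(a - lr * ga, 0.0, 1.0)  # pixels are physical: clamp
        b = np.clip(b - lr * gb, 0.0, 1.0)
    return a, b, render(a, b)
```

Because frame B samples the target on an interleaved grid, the optimized pair fits a smooth high-resolution signal better than naively repeating one downsampled frame.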

    A compressive light field projection system

    For about a century, researchers and experimentalists have strived to bring glasses-free 3D experiences to the big screen. Much progress has been made and light field projection systems are now commercially available. Unfortunately, available display systems usually employ dozens of devices making such setups costly, energy inefficient, and bulky. We present a compressive approach to light field synthesis with projection devices. For this purpose, we propose a novel, passive screen design that is inspired by angle-expanding Keplerian telescopes. Combined with high-speed light field projection and nonnegative light field factorization, we demonstrate that compressive light field projection is possible with a single device. We build a prototype light field projector and angle-expanding screen from scratch, evaluate the system in simulation, present a variety of results, and demonstrate that the projector can alternatively achieve super-resolved and high dynamic range 2D image display when used with a conventional screen.

    Funding: MIT Media Lab Consortium; Natural Sciences and Engineering Research Council of Canada (NSERC Postdoctoral Fellowship); National Science Foundation (U.S.) (NSF Grant 0831281)
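Nonnegative light field factorization, mentioned above, can be illustrated with standard Lee-Seung multiplicative updates: treat the light field as a nonnegative matrix of (view x pixel) intensities and factor it into a small number of rank-1 terms, each of which corresponds to one time-multiplexed frame that nonnegative optical hardware can physically display. This is a generic NMF sketch, not the paper's specific solver; the function name `nmf` and all parameters are illustrative.

```python
import numpy as np

def nmf(L, rank, iters=500, eps=1e-9):
    """Lee-Seung multiplicative updates: factor a nonnegative matrix
    L (views x pixels) into W (views x rank) @ H (rank x pixels).
    Multiplicative updates preserve nonnegativity automatically."""
    rng = np.random.default_rng(0)
    m, n = L.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ L) / (W.T @ (W @ H) + eps)
        W *= (L @ H.T) / (W @ (H @ H.T) + eps)
    return W, H
```

Each column of W weights one frame's contribution per view, and each row of H is one frame's pixel pattern; because both stay nonnegative, every term is physically realizable as emitted light.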

    Dual-mode optical microscope based on single-pixel imaging

    We demonstrate an inverted microscope that can image specimens in both reflection and transmission modes simultaneously with a single light source. The microscope utilizes a digital micromirror device (DMD) for patterned illumination together with two single-pixel photosensors for efficient light detection. The system, a scan-less device with no moving parts, works by sequential projection of a set of binary intensity patterns onto the sample, codified on a modified commercial DMD. Data to be displayed are geometrically transformed before being written into a memory cell, to cancel optical artifacts arising from the diamond-like structure of the micromirror array. The 24-bit color depth of the display is fully exploited to increase the frame rate by a factor of 24, which makes the technique practicable for real samples. Our commercial DMD-based LED illumination is cost effective and can be easily coupled as an add-on module to existing inverted microscopes. The reflection and transmission information provided by our dual microscope complement each other and can be useful for imaging non-uniform samples and for preventing self-shadowing effects.

    This work was supported by MINECO through project FIS2013-40666-P, the Generalitat Valenciana PROMETEO/2012/021, ISIC/2012/013, and by the Universitat Jaume I P1-1B2012-55. A. D. Rodríguez acknowledges grant PREDOC/2012/41 from Universitat Jaume I. Thanks also to Dr. Tatiana Pina and Dr. Josep Jaques from Universitat Jaume I for providing the biological samples.
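The core single-pixel imaging principle behind such microscopes can be sketched in a few lines: project a complete set of orthogonal binary patterns (here, Sylvester-Hadamard rows; in practice the +/-1 entries are realized as two complementary 0/1 DMD masks whose measurements are subtracted), record one scalar per pattern on the photosensor, and invert the orthogonal transform. This is a generic textbook sketch, not this paper's specific pattern set or pipeline.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def measure(scene, H):
    """One scalar detector reading per pattern: the inner product of each
    Hadamard row (illumination pattern) with the scene."""
    return H @ scene.ravel()

def reconstruct(y, H, shape):
    """Hadamard matrices satisfy H.T @ H = n * I, so inversion is a
    single transpose-multiply."""
    n = H.shape[0]
    return (H.T @ y / n).reshape(shape)
```

With a full set of n patterns for n pixels the reconstruction is exact; compressive variants use fewer patterns plus a sparsity prior.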

    Coded access optical sensor (CAOS) imager and applications

    Starting in 2001, we proposed and extensively demonstrated an agile-pixel Spatial Light Modulator (SLM)-based optical imager (also called a single-pixel camera) built on single-pixel photodetection with a DMD (Digital Micromirror Device); it is suited to operation with both coherent and incoherent light across broad spectral bands. This imager operates with the agile pixels programmed in an SNR-limited, staring, time-multiplexed mode, in which image irradiance (i.e., intensity) data are acquired one agile pixel at a time across the SLM plane on which the incident image radiation falls. Motivated by modern advances in RF wireless, optical wired communications, and electronic signal-processing technologies, and building on our prior SLM-based optical imager design, we describe, using a surprisingly simple approach, a new imager design called the Coded Access Optical Sensor (CAOS) that alleviates some of the key fundamental limitations of prior imagers. The agile pixel in the CAOS imager can operate in different time-frequency coding modes such as Frequency Division Multiple Access (FDMA), Code Division Multiple Access (CDMA), and Time Division Multiple Access (TDMA). Data from a first CAOS camera demonstration are described, along with novel designs of CAOS-based optical instruments for various applications.
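The CDMA mode mentioned above can be sketched generically: give each agile pixel its own orthogonal time code, let a single point detector record the coded sum of all pixels over time, and recover every pixel simultaneously by correlating the detector signal with each code. The Walsh (+/-1) codes, the function names, and the noiseless model below are illustrative assumptions, not the CAOS hardware's actual code set.

```python
import numpy as np

def walsh_codes(n):
    """Sylvester-Hadamard matrix: each row is one orthogonal +/-1 code
    (in hardware, +/-1 modulation is realized as complementary DMD states)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def caos_cdma(pixels):
    """CDMA-mode sketch: all agile pixels are read out at once through
    one point detector, then separated by code correlation."""
    n = len(pixels)
    codes = walsh_codes(n)              # one code (row) per agile pixel
    detector = codes.T @ pixels         # coded sum over n time chips
    return codes @ detector / n         # correlation decode, per pixel
```

Unlike the one-pixel-at-a-time staring mode, every pixel contributes light during every chip, which is the SNR advantage coded access aims for.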

    Light field image processing: an overview

    Light field imaging has emerged as a technology that captures richer visual information about our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision tasks such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings new challenges in data capture, data compression, content editing, and display. Taken together, these two aspects have made research in light field image processing increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We cover all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
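Post-capture refocusing, one of the capabilities listed above, is classically done by shift-and-add: shift each sub-aperture view in proportion to its angular offset and average. The sketch below is a minimal integer-shift version (real implementations interpolate fractional shifts); the function name `refocus` and the `alpha` parameterization are illustrative.

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field lf[u, v, y, x]:
    alpha controls the virtual focal depth by scaling how far each
    (u, v) view is shifted before averaging. Integer shifts via
    np.roll keep the sketch dependency-free."""
    U, V, Y, X = lf.shape
    out = np.zeros((Y, X))
    for u in range(U):
        for v in range(V):
            dy = int(round((u - U // 2) * alpha))
            dx = int(round((v - V // 2) * alpha))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

A scene point with per-view disparity d is brought into focus by alpha = -d: all views then align at that point, while points at other depths smear into defocus blur.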

    Image scanning lensless fiber-bundle endomicroscopy

    Fiber-based confocal endomicroscopy has shown great promise for minimally invasive deep-tissue imaging. Despite its advantages, confocal fiber-bundle endoscopy inherently suffers from undersampling due to the spacing between fiber cores, and from low collection efficiency when the target is not in proximity to the distal fiber facet. Here, we demonstrate an adaptation of image-scanning microscopy (ISM) to lensless fiber-bundle endoscopy, doubling the spatial sampling frequency and significantly improving collection efficiency. Our approach only requires replacing the confocal detector with a camera. It improves the spatial resolution for targets placed at a distance from the fiber tip, and addresses the fundamental challenge of aliasing/pixelization artifacts.
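The ISM principle behind the sampling-frequency doubling can be sketched in 1D: for each illumination (scan) position s and each detector pixel d, reassign the recorded signal to the midpoint (s + d) / 2 on a 2x-upsampled grid, instead of summing everything at s as a confocal pinhole would. This is a generic pixel-reassignment sketch, not the paper's fiber-bundle pipeline; the function name and grid conventions are assumptions.

```python
import numpy as np

def ism_reassign(frames, scan_positions, upsample=2):
    """1D pixel reassignment: frames[i] is the detector line image
    recorded at scan position scan_positions[i]. Each sample (s, d)
    is accumulated at (s + d) / 2 on an upsampled output grid,
    doubling the spatial sampling frequency."""
    n_det = frames.shape[1]
    grid = np.zeros(upsample * (int(max(scan_positions)) + n_det))
    for s, frame in zip(scan_positions, frames):
        for d, value in enumerate(frame):
            grid[int(round(upsample * (s + d) / 2))] += value
    return grid
```

All detected light is kept (no pinhole rejection), which is why the method also improves collection efficiency for targets away from the fiber facet.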

    Super-resolution imaging through a multimode fiber: the physical upsampling of speckle-driven

    Following recent advancements in multimode fiber (MMF) technology, the miniaturization of imaging endoscopes has proven crucial for minimally invasive in vivo surgery. Recent progress in super-resolution imaging methods based on data-driven deep learning (DL) frameworks has balanced the trade-off between core size and resolution. However, most DL approaches pay little attention to the physical properties of the speckle, which are crucial for reconciling the magnification of super-resolution imaging with reconstruction quality. In this paper, we find that the interferometric process of speckle formation is an essential basis for building DL models for super-resolution imaging: it physically realizes the upsampling of low-resolution (LR) images and enhances the perceptual capabilities of the models. Our experiments validate the role played by this speckle-driven physical upsampling, which effectively complements the information that purely data-driven approaches lack. Experimentally, we overcome the poor reconstruction quality at high magnification by feeding the model speckle patterns of the same size as the high-resolution (HR) images. Our findings may accelerate the further development of endoscopic imaging for minimally invasive surgery.

    Imaging for a Forward Scanning Automotive Synthetic Aperture Radar

