
    Spatio-Spectral Sampling and Color Filter Array Design

    Owing to the growing ubiquity of digital image acquisition and display, several factors must be considered when developing systems to meet future color image processing needs, including improved quality, increased throughput, and greater cost-effectiveness. In consumer still-camera and video applications, color images are typically obtained via a spatial subsampling procedure implemented as a color filter array (CFA), a physical construction whereby only a single component of the color space is measured at each pixel location. Substantial work in both industry and academia has been dedicated to post-processing this acquired raw image data as part of the so-called image processing pipeline, including in particular the canonical demosaicking task of reconstructing a full-color image from the spatially subsampled and incomplete data acquired using a CFA. However, as we detail in this chapter, the inherent shortcomings of contemporary CFA designs mean that subsequent processing steps often yield diminishing returns in terms of image quality. For example, though distortion may be masked to some extent by motion blur and compression, the loss of image quality resulting from all but the most computationally expensive state-of-the-art methods is unambiguously apparent to the practiced eye. … As the CFA represents one of the first steps in the image acquisition pipeline, it largely determines the maximal resolution and computational efficiencies achievable by subsequent processing schemes. Here, we show that the attainable spatial resolution yielded by a particular choice of CFA is quantifiable and propose new CFA designs to maximize it. In contrast to the majority of the demosaicking literature, we explicitly consider the interplay between CFA design and properties of typical image data and its implications for spatial reconstruction quality. 
Formally, we pose the CFA design problem as simultaneously maximizing the allowable spatio-spectral support of luminance and chrominance channels, subject to a partitioning requirement in the Fourier representation of the sensor data. This classical aliasing-free condition preserves the integrity of the color image data and thereby guarantees exact reconstruction when demosaicking is implemented as demodulation (demultiplexing in frequency).
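    The frequency-partitioning view can be checked directly. The sketch below uses a standard Bayer pattern as a generic stand-in for the CFAs under discussion (the variable names and decomposition are illustrative, not the chapter's notation): the mosaic is exactly a baseband luminance image plus chrominance channels modulated onto spatial carriers.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 8
img = rng.random((H, W, 3))                     # synthetic RGB scene
R, G, B = img[..., 0], img[..., 1], img[..., 2]
y, x = np.mgrid[0:H, 0:W]

# Bayer CFA: R at (even, even), B at (odd, odd), G on the remaining diagonal.
mR = ((y % 2 == 0) & (x % 2 == 0)).astype(float)
mB = ((y % 2 == 1) & (x % 2 == 1)).astype(float)
mG = 1.0 - mR - mB
mosaic = R * mR + G * mG + B * mB               # one color sample per pixel

# The same data, viewed spectrally: luma at DC, chroma on carriers at
# (pi, pi) and (pi, 0)/(0, pi) -- the bands the partitioning must keep apart.
L  = (R + 2 * G + B) / 4                        # luminance (baseband)
C1 = (R - 2 * G + B) / 4                        # chroma on carrier (-1)^(x+y)
C2 = (R - B) / 4                                # chroma on carriers (-1)^x and (-1)^y
decomp = L + C1 * (-1.0) ** (x + y) + C2 * ((-1.0) ** x + (-1.0) ** y)

assert np.allclose(mosaic, decomp)              # exact multiplexing identity
```

    Demosaicking-as-demodulation then amounts to isolating each band with filters; aliasing occurs exactly where the bands overlap, which is the overlap the proposed designs maximize room to avoid.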

    Optical Intensity Interferometry with the Cherenkov Telescope Array

    With its unprecedented light-collecting area for night-sky observations, the Cherenkov Telescope Array (CTA) also holds great potential for optical stellar astronomy, in particular as a multi-element intensity interferometer for realizing imaging with sub-milliarcsecond angular resolution. Such an order-of-magnitude increase in the spatial resolution achieved in optical astronomy will reveal the surfaces of rotationally flattened stars, structures in their circumstellar disks and winds, and the gas flows between close binaries. Image reconstruction is feasible from the second-order coherence of light, measured as the temporal correlations of arrival times between photons recorded in different telescopes. This technique (once pioneered by Hanbury Brown and Twiss) connects telescopes only with electronic signals and is practically insensitive to atmospheric turbulence and to imperfections in telescope optics. Detector and telescope requirements are very similar to those for imaging air Cherenkov observatories, the main difference being the signal processing (calculating cross correlations between single camera pixels in pairs of telescopes). Observations of brighter stars are not limited by sky brightness, permitting efficient CTA use also during bright-Moon periods. While other concepts have been proposed to realize kilometer-scale optical interferometers of the conventional amplitude (phase) type, both in space and on the ground, their complexity places them much further into the future than CTA, which thus could become the first kilometer-scale optical imager in astronomy.
    Comment: Astroparticle Physics, in press; 47 pages, 10 figures, 124 references
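    The correlation measurement itself is simple to illustrate. In the toy Python model below, thermal-light intensities are drawn as exponential variates (the textbook assumption for chaotic light, not a detail taken from this paper); the normalized cross-correlation g(2) rises from 1 toward 2 as the two telescopes' signals become coherent.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
I_common = rng.exponential(1.0, N)   # fluctuations shared by both telescopes
I_other  = rng.exponential(1.0, N)   # independent stream (baseline >> coherence scale)

def g2(i1, i2):
    """Normalized intensity cross-correlation <I1*I2> / (<I1><I2>)."""
    return (i1 * i2).mean() / (i1.mean() * i2.mean())

print(g2(I_common, I_common))   # ~2.0: full second-order coherence, |gamma|^2 = 1
print(g2(I_common, I_other))    # ~1.0: no correlation, |gamma|^2 = 0
```

    Mapping g(2) - 1 over many telescope pairs samples the squared visibility |γ|² across the interferometric Fourier plane, which is the raw material for image reconstruction.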

    High-efficiency WSi superconducting nanowire single-photon detectors for quantum state engineering in the near infrared

    We report on high-efficiency superconducting nanowire single-photon detectors based on amorphous WSi and optimized at 1064 nm. At an operating temperature of 1.8 K, we demonstrated a 93% system detection efficiency at this wavelength with a dark noise of a few counts per second. Combined with cavity-enhanced spontaneous parametric down-conversion, this fiber-coupled detector enabled us to generate narrowband single photons with a heralding efficiency greater than 90% and a high spectral brightness of 0.6×10^4 photons/(s·mW·MHz). Beyond single-photon generation at high rates, such high-efficiency detectors open the path to efficient multiple-photon heralding and complex quantum state engineering.

    A switchable light field camera architecture with Angle Sensitive Pixels and dictionary-based sparse coding

    We propose a flexible light field camera architecture that is at the convergence of optics, sensor electronics, and applied mathematics. Through the co-design of a sensor that comprises tailored Angle Sensitive Pixels and advanced reconstruction algorithms, we show that, contrary to light field cameras today, our system can use the same measurements captured in a single sensor image to recover either a high-resolution 2D image, a low-resolution 4D light field using fast, linear processing, or a high-resolution light field using sparsity-constrained optimization.
    Funding: National Science Foundation (U.S.) (NSF Grant IIS-1218411); National Science Foundation (U.S.) (NSF Grant IIS-1116452); MIT Media Lab Consortium; National Science Foundation (U.S.) (NSF Graduate Research Fellowship); Natural Sciences and Engineering Research Council of Canada (NSERC Postdoctoral Fellowship); Alfred P. Sloan Foundation (Research Fellowship); United States. Defense Advanced Research Projects Agency (DARPA Young Faculty Award).
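    The sparsity-constrained recovery path can be sketched with a generic iterative soft-thresholding (ISTA) solver; the Gaussian sensing matrix and problem sizes below are toy stand-ins for the actual Angle Sensitive Pixel optics and learned dictionary.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 40, 100, 5                           # measurements, coefficients, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for sensor * dictionary
z_true = np.zeros(n)
z_true[rng.choice(n, k, replace=False)] = 2.0 + rng.standard_normal(k)
y = A @ z_true                                 # single compressive sensor image

# ISTA: gradient step on ||y - A z||^2, then soft-threshold to enforce sparsity.
lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2
z = np.zeros(n)
for _ in range(1000):
    g = z + step * (A.T @ (y - A @ z))
    z = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

rel_err = np.linalg.norm(z - z_true) / np.linalg.norm(z_true)
print(rel_err)                                 # typically well below 0.1
```

    The "switchable" aspect corresponds to choosing the reconstruction operator at readout time: the same measurements y also admit a fast linear estimate when lower resolution suffices.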

    Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting

    We introduce tensor displays: a family of compressive light field displays comprising all architectures employing a stack of time-multiplexed, light-attenuating layers illuminated by uniform or directional backlighting (i.e., any low-resolution light field emitter). We show that the light field emitted by an N-layer, M-frame tensor display can be represented by an Nth-order, rank-M tensor. Using this representation we introduce a unified optimization framework, based on nonnegative tensor factorization (NTF), encompassing all tensor display architectures. This framework is the first to allow joint multilayer, multiframe light field decompositions, significantly reducing artifacts observed with prior multilayer-only and multiframe-only decompositions; it is also the first optimization method for designs combining multiple layers with directional backlighting. We verify the benefits and limitations of tensor displays by constructing a prototype using modified LCD panels and a custom integral imaging backlight. Our efficient, GPU-based NTF implementation enables interactive applications. Through simulations and experiments we show that tensor displays reveal practical architectures with greater depths of field, wider fields of view, and thinner form factors, compared to prior automultiscopic displays.
    Funding: United States. Defense Advanced Research Projects Agency (DARPA SCENICC program); National Science Foundation (U.S.) (NSF Grant IIS-1116452); United States. Defense Advanced Research Projects Agency (DARPA MOSAIC program); United States. Defense Advanced Research Projects Agency (DARPA Young Faculty Award); Alfred P. Sloan Foundation (Fellowship).
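    For N = 2 layers the tensor factorization reduces to nonnegative matrix factorization, which makes the core idea easy to sketch. The sizes, data, and plain Lee-Seung multiplicative updates below are illustrative, not the paper's GPU solver.

```python
import numpy as np

rng = np.random.default_rng(3)
I, J, M = 30, 30, 4                       # layer resolutions, time-multiplexed frames

# A 2-layer, M-frame display can emit any nonnegative rank-M light field:
# frame m shows pattern A[:, m] on layer 1 and B[:, m] on layer 2, and the
# eye averages the M frames of the multiplicative stack.
L_target = rng.random((I, M)) @ rng.random((J, M)).T   # a reachable target field

A = rng.random((I, M)) + 0.1              # layer patterns to optimize (nonnegative)
B = rng.random((J, M)) + 0.1
for _ in range(300):                      # multiplicative updates preserve nonnegativity
    A *= (L_target @ B) / (A @ (B.T @ B) + 1e-12)
    B *= (L_target.T @ A) / (B @ (A.T @ A) + 1e-12)

err = np.linalg.norm(L_target - A @ B.T) / np.linalg.norm(L_target)
print(err)                                # shrinks toward zero for rank-M targets
```

    The paper's NTF framework generalizes this to more layers (higher-order tensors) and to directional backlights, but the nonnegativity-preserving update structure is the same.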

    Super resolution and dynamic range enhancement of image sequences

    Camera producers try to increase the spatial resolution of a camera by reducing the size of the sites on the sensor array. However, shot noise causes the signal-to-noise ratio to drop as sensor sites get smaller. This motivates performing resolution enhancement in software. Super resolution (SR) image reconstruction aims to combine degraded images of a scene to form an image with higher resolution than any of the observations. High-resolution images are in demand in biomedical imaging, surveillance, aerial/satellite imaging, and high-definition TV (HDTV) technology. Although extensive research has been conducted in SR, little attention has been given to increasing the resolution of images under illumination changes. In this study, a unique framework is proposed to increase the spatial resolution and dynamic range of a video sequence using Bayesian and Projection onto Convex Sets (POCS) methods. Incorporating camera response function estimation into image reconstruction allows dynamic range enhancement along with spatial resolution improvement. Photometrically varying input images complicate the process of projecting observations onto a common grid by violating brightness constancy. A contrast-invariant feature transform is proposed in this thesis to register input images with high illumination variation. The proposed algorithm increases the repeatability rate of detected features among the frames of a video; the repeatability rate is increased by computing the autocorrelation matrix from the gradients of contrast-stretched input images. The presented contrast-invariant feature detection improves the repeatability rate of the Harris corner detector by around 25% on average. Joint multi-frame demosaicking and resolution enhancement is also investigated in this thesis. A color constancy constraint set is devised and incorporated into the POCS framework for increasing the resolution of color-filter-array-sampled images. The proposed method produces fewer demosaicking artifacts than the existing POCS method and higher visual quality in the final image.
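    The POCS machinery can be sketched in a few lines: each low-resolution sample defines an affine consistency set, and cyclically projecting the high-resolution estimate onto every set drives it into their intersection. The 1-D signal, 2-tap blur, and sampling shifts below are toy choices, not the thesis's actual imaging model.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 16
x_true = rng.random(n)                    # unknown high-resolution signal

# Two shifted low-resolution frames: blur with [0.7, 0.3] (circular), decimate by 2.
constraints = []                          # pairs (a, y) encoding the set {x : a.x = y}
for shift in (0, 1):
    for i in range(0, n, 2):
        a = np.zeros(n)
        a[(i + shift) % n] = 0.7
        a[(i + shift + 1) % n] = 0.3
        constraints.append((a, a @ x_true))

x = np.zeros(n)                           # initial high-resolution estimate
for _ in range(500):                      # cyclic projections (POCS)
    for a, yv in constraints:
        x += a * (yv - a @ x) / (a @ a)   # orthogonal projection onto {x : a.x = yv}

print(np.max(np.abs(x - x_true)))         # shrinks toward zero
```

    Extra convex sets (a color constancy set, as in the thesis, or amplitude bounds) slot into the same loop as additional projections.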

    Bioresorbable silicon electronics for transient spatiotemporal mapping of electrical activity from the cerebral cortex.

    Bioresorbable silicon electronics technology offers unprecedented opportunities to deploy advanced implantable monitoring systems that eliminate the risks, cost and discomfort associated with surgical extraction. Applications include postoperative monitoring and transient physiologic recording after percutaneous or minimally invasive placement of vascular, cardiac, orthopaedic, neural or other devices. We present an embodiment of these materials in both passive and actively addressed arrays of bioresorbable silicon electrodes with multiplexing capabilities, which record in vivo electrophysiological signals from the cortical surface and the subgaleal space. The devices detect normal physiologic and epileptiform activity, both in acute and chronic recordings. Comparative studies show sensor performance comparable to standard clinical systems and reduced tissue reactivity relative to conventional clinical electrocorticography (ECoG) electrodes. This technology offers general applicability in neural interfaces, with additional potential utility in the treatment of disorders where transient monitoring and modulation of physiologic function, implant integrity and tissue recovery or regeneration are required.

    Long-baseline optical intensity interferometry: Laboratory demonstration of diffraction-limited imaging

    A long-held vision has been to realize diffraction-limited optical aperture synthesis over kilometer baselines. This will enable imaging of stellar surfaces and their environments, and reveal interacting gas flows in binary systems. An opportunity is now opening up with the large telescope arrays primarily erected for measuring Cherenkov light in air induced by gamma rays. With suitable software, such telescopes could be electronically connected and also used for intensity interferometry. Second-order spatial coherence of light is obtained by cross correlating intensity fluctuations measured in different pairs of telescopes. With no optical links between them, the error budget is set by the electronic time resolution of a few nanoseconds. Corresponding light-travel distances are approximately one meter, making the method practically immune to atmospheric turbulence or optical imperfections, permitting both very long baselines and observing at short optical wavelengths. Previous theoretical modeling has shown that full images should be possible to retrieve from observations with such telescope arrays. This project aims at verifying diffraction-limited imaging experimentally with groups of detached and independent optical telescopes. In a large optics laboratory, artificial stars were observed by an array of small telescopes. Using high-speed photon-counting solid-state detectors, intensity fluctuations were cross-correlated over up to 180 baselines between pairs of telescopes, producing coherence maps across the interferometric Fourier-transform plane. These measurements were used to extract parameters about the simulated stars, and to reconstruct their two-dimensional images. As far as we are aware, these are the first diffraction-limited images obtained from an optical array only linked by electronic software, with no optical connections between the telescopes.
    Comment: 13 pages, 9 figures; Astronomy & Astrophysics, in press. arXiv admin note: substantial text overlap with arXiv:1407.599
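    The headline resolution gain is easy to verify from the λ/B diffraction limit alone; the wavelength and baseline below are representative values, not parameters from the paper.

```python
import math

lam = 400e-9                  # short optical wavelength [m]
baseline = 1000.0             # kilometer-scale telescope separation [m]

theta_rad = lam / baseline    # diffraction-limited angular resolution
theta_mas = theta_rad * (180.0 / math.pi) * 3600.0 * 1000.0  # rad -> milliarcsec

print(f"{theta_mas:.3f} mas") # ~0.083 mas: well below one milliarcsecond
```

    For comparison, a single 10 m class telescope at the same wavelength is limited to roughly 100× coarser angular scales, which is why electronically linked kilometer baselines are the enabling step.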