
    Programmable Spectrometry -- Per-pixel Classification of Materials using Learned Spectral Filters

    Many materials have distinct spectral profiles, which facilitates estimating the material composition of a scene at each pixel by first acquiring its hyperspectral image and then filtering it with a bank of spectral profiles. This process is inherently wasteful, since only a set of linear projections of the acquired measurements contributes to the classification task. We propose a novel programmable camera capable of producing images of a scene under an arbitrary spectral filter. We use this camera to optically implement the spectral filtering of the scene's hyperspectral image with the bank of spectral profiles needed to perform per-pixel material classification. This provides gains both in acquisition speed, since only the relevant measurements are acquired, and in signal-to-noise ratio, since we avoid light-inefficient narrowband filters. Given training data, we use a range of classical and modern techniques, including SVMs and neural networks, to identify the bank of spectral profiles that facilitates material classification. We verify the method in simulations on standard datasets as well as on real data using a lab prototype of the camera.
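    The pipeline the abstract describes, classifying each pixel from a few linear projections of its spectrum rather than the full hyperspectral cube, can be sketched as below. This is a minimal software simulation, not the paper's optical implementation: the filter bank `W` and the linear `classifier` are random stand-ins for the profiles the authors learn from training data, and all sizes are toy values.

```python
import numpy as np

# Hypothetical setup: a hyperspectral pixel is a vector of B band intensities.
# A learned filter bank W (K filters x B bands) reduces each pixel to K
# measurements, which a linear classifier maps to material labels. The
# programmable camera would acquire the K filtered images optically; here we
# simulate the same projection in software.

rng = np.random.default_rng(0)
B, K, n_classes = 31, 4, 3                    # bands, filters, materials (toy sizes)

W = rng.normal(size=(K, B))                   # stand-in for learned spectral filters
classifier = rng.normal(size=(n_classes, K))  # stand-in linear classifier

def classify_pixels(cube):
    """cube: (H, W_img, B) hyperspectral image -> (H, W_img) label map."""
    proj = cube @ W.T              # "optical" filtering: K measurements per pixel
    scores = proj @ classifier.T   # linear score per material class
    return scores.argmax(axis=-1)  # most likely material at each pixel

labels = classify_pixels(rng.random((8, 8, B)))
```

    The point of the optical version is that only `K` filtered images ever hit the sensor, instead of all `B` bands.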

    On evolution of CMOS image sensors

    CMOS image sensors have become the principal technology in the majority of digital cameras. They began replacing film and charge-coupled devices in the last decade with the promise of lower cost, lower power requirements, higher integration, and the potential for focal-plane processing. However, the principal factor behind their success has been the ability to exploit the shrinkage of CMOS technology to make smaller pixels, and thereby offer more resolution without increasing cost. With the market for image sensors exploding thanks to their integration with communication and computation devices, technology developers improved CMOS processes to achieve better optical performance. Nevertheless, the promises of focal-plane processing and on-chip integration have not been fulfilled. The market is still driven by the desire for higher pixel counts and better image quality, yet differentiation is becoming difficult for any image sensor manufacturer. In this paper, we explore potential disruptive growth directions for CMOS image sensors and ways to achieve them.

    Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second

    Full text link
    Recent advances in imaging sensors and digital light projection technology have driven rapid progress in 3D optical sensing, enabling 3D surfaces of complex-shaped objects to be captured with improved resolution and accuracy. However, due to the large number of projection patterns required for phase recovery and disambiguation, the maximum frame rates of current 3D shape measurement techniques are still limited to the range of hundreds of frames per second (fps). Here, we demonstrate a new 3D dynamic imaging technique, Micro Fourier Transform Profilometry (μFTP), which can capture 3D surfaces of transient events at up to 10,000 fps based on our newly developed high-speed fringe projection system. Compared with existing techniques, μFTP has the prominent advantage of recovering an accurate, unambiguous, and dense 3D point cloud with only two projected patterns. Furthermore, the phase information is encoded within a single high-frequency fringe image, thereby allowing motion-artifact-free reconstruction of transient events with a temporal resolution of 50 microseconds. To show μFTP's broad utility, we use it to reconstruct 3D videos of four transient scenes: vibrating cantilevers, rotating fan blades, a bullet fired from a toy gun, and a balloon's explosion triggered by a flying dart, all of which were previously difficult or impossible to capture with conventional approaches.
    Comment: This manuscript was originally submitted on 30th January 1
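    The core single-shot step of any Fourier Transform Profilometry variant, extracting a wrapped phase map from one high-frequency fringe image, can be sketched as below. This is a textbook FTP demodulation on a synthetic flat scene, not μFTP's full two-pattern unambiguous pipeline; the rectangular band-pass mask and the carrier frequency `f0` are simplifying assumptions.

```python
import numpy as np

# A fringe image I = a + b*cos(2*pi*f0*x + phi(x)) is band-pass filtered
# around the carrier frequency f0 in the Fourier domain; the angle of the
# filtered (analytic) signal, after removing the carrier, is the wrapped
# phase phi. Depth is later recovered from phi by triangulation (omitted).

def ftp_wrapped_phase(img, f0):
    """img: (H, W) fringe image, f0: carrier frequency in cycles/pixel."""
    H, W = img.shape
    F = np.fft.fft(img, axis=1)
    freqs = np.fft.fftfreq(W)
    # Keep only a window around the +f0 carrier lobe (drops DC and -f0).
    mask = np.abs(freqs - f0) < f0 / 2
    analytic = np.fft.ifft(F * mask, axis=1)
    x = np.arange(W)
    # Remove the carrier; what remains is the wrapped phase.
    return np.angle(analytic * np.exp(-2j * np.pi * f0 * x))

# Synthetic check: a flat scene (constant phase) should give ~zero phase.
x = np.arange(256)
img = np.tile(1.0 + 0.5 * np.cos(2 * np.pi * 0.125 * x), (16, 1))
phi = ftp_wrapped_phase(img, 0.125)
```

    Because only one fringe image is needed per phase map, the temporal resolution is set by the camera and projector alone, which is what makes the 50-microsecond figure above possible.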

    A two-band approach to nλ phase error corrections with LBTI's PHASECam

    PHASECam is the Large Binocular Telescope Interferometer's (LBTI) phase sensor, a near-infrared camera used to measure tip/tilt and phase variations between the two AO-corrected apertures of the Large Binocular Telescope (LBT). Tip/tilt and phase sensing are currently performed in the H (1.65 μm) and K (2.2 μm) bands at 1 kHz, and the K-band phase telemetry is used to send tip/tilt and optical path difference (OPD) corrections to the system. However, phase variations outside the range [-π, π] are not sensed, and thus are not fully corrected during closed-loop operation. PHASECam's phase unwrapping algorithm, which attempts to mitigate this issue, still occasionally fails in the case of fast, large phase variations. This can cause a fringe jump, in which case the unwrapped phase will be incorrect by a wavelength or more. Such jumps can currently be corrected manually by the observer, but this is inefficient. A more reliable and automated solution is desired, especially as the LBTI begins to commission further modes which require robust, active phase control, including controlled multi-axial (Fizeau) interferometry and dual-aperture non-redundant aperture masking interferometry. We present a multi-wavelength method of fringe jump capture and correction which involves direct comparison between the K-band and currently unused H-band phase telemetry.
    Comment: 17 pages, 10 figures
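    The idea of comparing two bands to resolve an nλ ambiguity can be illustrated with a generic two-wavelength fringe-order estimate. This is a sketch in the spirit of the abstract, not PHASECam's actual algorithm: the candidate-search formulation and the `resolve_opd` helper are assumptions, and only the two band centers come from the text.

```python
import numpy as np

# The K-band wrapped phase fixes the OPD only up to an integer number of
# K-band wavelengths. Each candidate fringe order predicts a different
# H-band phase; the order whose prediction best matches the measured H-band
# phase wins. The unambiguous range is the synthetic wavelength
# lam_K*lam_H/(lam_K - lam_H) ~ 6.6 microns, hence the small order search.

LAM_K, LAM_H = 2.2, 1.65   # band centers in microns

def wrap(phi):
    """Wrap phase into [-pi, pi)."""
    return (phi + np.pi) % (2 * np.pi) - np.pi

def resolve_opd(phi_k, phi_h, max_order=1):
    """Wrapped phases in K and H -> OPD in microns with fringe order resolved."""
    orders = np.arange(-max_order, max_order + 1)
    opd_candidates = (phi_k / (2 * np.pi) + orders) * LAM_K
    # Mismatch between predicted and measured H-band wrapped phase.
    mismatch = np.abs(wrap(2 * np.pi * opd_candidates / LAM_H - phi_h))
    return opd_candidates[np.argmin(mismatch)]

# Synthetic check: an OPD of 3.1 microns wraps in both bands individually,
# yet the pair of wrapped phases recovers it.
opd_true = 3.1
phi_k = wrap(2 * np.pi * opd_true / LAM_K)
phi_h = wrap(2 * np.pi * opd_true / LAM_H)
opd_est = resolve_opd(phi_k, phi_h)
```

    A fringe jump in the running unwrapper shows up as a disagreement between the unwrapped K-band phase and the two-band OPD estimate, which is what makes automated capture and correction possible.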

    CMOS Architectures and circuits for high-speed decision-making from image flows

    We present architectures, CMOS circuits, and CMOS chips to process image flows at very high speed. This is achieved by exploiting bio-inspiration and performing processing tasks in a parallel manner, concurrently with image acquisition. A vision system is presented which makes decisions within the sub-millisecond range. This is well suited for defense and security applications requiring segmentation and tracking of rapidly moving objects.

    Major Mergers Host the Most Luminous Red Quasars at z ~ 2: A Hubble Space Telescope WFC3/IR Study

    We used the Hubble Space Telescope WFC3 near-infrared camera to image the host galaxies of a sample of eleven luminous, dust-reddened quasars at z ~ 2, the peak epoch of black hole growth and star formation in the Universe, to test the merger-driven picture for the co-evolution of galaxies and their nuclear black holes. The red quasars come from the FIRST+2MASS red quasar survey and a newer, deeper UKIDSS+FIRST sample. These dust-reddened quasars are the most intrinsically luminous quasars in the Universe at all redshifts, and may represent the dust-clearing transitional phase in the merger-driven black hole growth scenario. Probing the host galaxies in rest-frame visible light, the HST images reveal that 8/10 of these quasars have actively merging hosts, while one source is reddened by an intervening lower-redshift galaxy along the line of sight. We study the morphological properties of the quasar hosts using parametric Sersic fits as well as non-parametric estimators (the Gini coefficient, M_{20}, and asymmetry). Their properties are heterogeneous but broadly consistent with the most extreme morphologies of local merging systems such as Ultraluminous Infrared Galaxies. The red quasars have a luminosity range of log(L_bol) = 47.8 - 48.3 (erg/s), and the merger fraction of their AGN hosts is consistent with merger-driven models of luminous AGN activity at z = 2, which supports the picture in which luminous quasars and galaxies co-evolve through major mergers that trigger both star formation and black hole growth.
    Comment: Submitted to ApJ. This version includes the response to the referee report
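    Two of the non-parametric morphology statistics named above can be sketched on a bare 2-D flux array. This is a toy version: the standard definitions involve segmentation maps, background subtraction, and a moment-minimizing center, all omitted here, and the flux-weighted centroid is a simplifying assumption.

```python
import numpy as np

# Gini: how unequally the flux is distributed over pixels
# (0 = perfectly uniform, 1 = all flux in a single pixel).
# M_20: log of the second-order moment of the brightest ~20% of the flux,
# normalized by the total moment; very negative = concentrated,
# closer to 0 = flux in off-center clumps (a merger signature).

def gini(flux):
    """Gini coefficient of the pixel flux distribution."""
    f = np.sort(np.abs(flux.ravel()))
    n = f.size
    i = np.arange(1, n + 1)
    return ((2 * i - n - 1) * f).sum() / (f.mean() * n * (n - 1))

def m20(flux):
    """M_20 of a background-free flux image (toy version)."""
    H, W = flux.shape
    y, x = np.indices((H, W))
    # Flux-weighted centroid as a stand-in for the moment-minimizing center.
    xc = (x * flux).sum() / flux.sum()
    yc = (y * flux).sum() / flux.sum()
    mu = flux * ((x - xc) ** 2 + (y - yc) ** 2)   # per-pixel second moment
    order = np.argsort(flux.ravel())[::-1]        # pixels, brightest first
    csum = np.cumsum(flux.ravel()[order])
    brightest = order[: np.searchsorted(csum, 0.2 * flux.sum()) + 1]
    return np.log10(mu.ravel()[brightest].sum() / mu.sum())
```

    Disturbed, clumpy merger remnants tend to combine a high Gini with a relatively high (less negative) M_20, which is why the pair is useful alongside parametric Sersic fits.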