
    Video-rate computational super-resolution and integral imaging at longwave-infrared wavelengths

    We report the first computational super-resolved, multi-camera integral imaging at long-wave infrared (LWIR) wavelengths. A synchronized array of FLIR Lepton cameras was assembled, and computational super-resolution and integral-imaging reconstruction were employed to generate video with light-field imaging capabilities, such as 3D imaging and recognition of partially obscured objects, while also providing a four-fold increase in effective pixel count. This approach to high-resolution imaging enables a fundamental reduction in the track length and volume of an imaging system, while also enabling the use of low-cost lens materials.
    Comment: Supplementary multimedia material at http://dx.doi.org/10.6084/m9.figshare.530302
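    The abstract does not include the reconstruction itself; the sketch below shows the basic shift-and-add idea behind multi-frame computational super-resolution, assuming the sub-pixel offsets between cameras are already known from calibration. The function name, frame/shift inputs, and the 2x factor are illustrative, not the authors' pipeline.

        import numpy as np

        def shift_and_add_sr(frames, shifts, scale=2):
            """Fuse sub-pixel-shifted low-resolution frames onto a finer grid.

            frames : list of 2D arrays of identical shape (low-res captures)
            shifts : list of (dy, dx) sub-pixel offsets, in low-res pixels,
                     e.g. from camera-array calibration (assumed known)
            scale  : integer upscaling factor; scale=2 gives a four-fold
                     increase in effective pixel count
            """
            h, w = frames[0].shape
            acc = np.zeros((h * scale, w * scale))
            weight = np.zeros_like(acc)
            for frame, (dy, dx) in zip(frames, shifts):
                # Map each low-res sample to its nearest high-res grid position.
                ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
                xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
                acc[np.ix_(ys, xs)] += frame
                weight[np.ix_(ys, xs)] += 1.0
            weight[weight == 0] = 1.0   # unobserved high-res pixels stay zero
            return acc / weight

    In practice a deconvolution or regularized inversion step would follow, but the grid fusion above is the core of the pixel-count gain.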

    Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging

    We report an imaging scheme, termed aperture-scanning Fourier ptychography, for 3D refocusing and super-resolution macroscopic imaging. The reported scheme scans an aperture at the Fourier plane of an optical system and acquires the corresponding intensity images of the object. The acquired images are then synthesized in the frequency domain to recover a high-resolution complex sample wavefront; no phase information is needed in the recovery process. We demonstrate two applications of the reported scheme. In the first example, we use an aperture-scanning Fourier ptychography platform to recover the complex hologram of extended objects. The recovered hologram is then digitally propagated to different planes along the optical axis to examine the 3D structure of the object. We also demonstrate a reconstruction resolution better than the detector pixel limit (i.e., pixel super-resolution). In the second example, we develop a camera-scanning Fourier ptychography platform for super-resolution macroscopic imaging. By simply scanning the camera over different positions, we bypass the diffraction limit of the photographic lens and recover a super-resolution image of an object placed in the far field. This platform's maximum achievable resolution is ultimately determined by the camera's traveling range, not the aperture size of the lens. The Fourier ptychography scheme reported in this work may find applications in 3D object tracking, synthetic aperture imaging, remote sensing, and optical/electron/X-ray microscopy.
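    As a rough illustration of the frequency-domain synthesis described above, the following is a minimal alternating-projection sketch: each aperture position selects a patch of the object's Fourier spectrum, the measured intensity is enforced in the spatial domain, and the updated patch is written back. The function name, circular-aperture model, and iteration count are assumptions for illustration, not the authors' implementation.

        import numpy as np

        def fp_recover(intensities, centers, pupil_radius, hr_shape, n_iters=20):
            """Recover a complex wavefront from aperture-scanned intensity images.

            intensities  : list of 2D low-res intensity images, one per aperture position
            centers      : (cy, cx) aperture centres in the high-res Fourier plane,
                           assumed to keep each patch inside the spectrum
            pupil_radius : aperture radius in Fourier-plane pixels
            hr_shape     : shape of the high-resolution spectrum to recover
            """
            lr_h, lr_w = intensities[0].shape
            spectrum = np.ones(hr_shape, dtype=complex)   # initial spectrum guess
            yy, xx = np.mgrid[:lr_h, :lr_w]
            pupil = ((yy - lr_h / 2) ** 2 + (xx - lr_w / 2) ** 2) <= pupil_radius ** 2

            for _ in range(n_iters):
                for I, (cy, cx) in zip(intensities, centers):
                    y0, x0 = cy - lr_h // 2, cx - lr_w // 2
                    patch = spectrum[y0:y0 + lr_h, x0:x0 + lr_w] * pupil
                    field = np.fft.ifft2(np.fft.ifftshift(patch))
                    # Keep the estimated phase, enforce the measured amplitude.
                    field = np.sqrt(I) * np.exp(1j * np.angle(field))
                    update = np.fft.fftshift(np.fft.fft2(field)) * pupil
                    spectrum[y0:y0 + lr_h, x0:x0 + lr_w] = (
                        spectrum[y0:y0 + lr_h, x0:x0 + lr_w] * (~pupil) + update)
            # Complex sample estimate; propagate it digitally to refocus in 3D.
            return np.fft.ifft2(np.fft.ifftshift(spectrum))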

    Random Lens Imaging

    We call a random lens one for which the function relating the input light ray to the output sensor location is pseudo-random. Imaging systems with random lenses can expand the space of possible camera designs, allowing new trade-offs in optical design and potentially adding new imaging capabilities. Machine learning methods are critical for both camera calibration and image reconstruction from the sensor data. We develop the theory and compare two different methods for calibration and reconstruction: an MAP approach, and basis pursuit from compressive sensing. We show proof-of-concept experimental results from a random lens made from a multi-faceted mirror, showing successful calibration and image reconstruction. We illustrate the potential for super-resolution and 3D imaging.
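    The abstract compares an MAP approach and basis pursuit from compressive sensing. The sketch below uses ISTA, a standard l1 solver, as a stand-in for the sparse-reconstruction step; it assumes the pseudo-random scene-to-sensor mapping has already been calibrated into a matrix A, and the names and parameters are illustrative.

        import numpy as np

        def ista_reconstruct(A, y, lam=0.1, n_iters=500):
            """Sparse reconstruction of x from y ~ A @ x with an l1 penalty (ISTA).

            A   : (n_sensor_pixels, n_scene_coeffs) calibrated response matrix,
                  i.e. the pseudo-random scene-to-sensor mapping
            y   : measured sensor values
            lam : sparsity weight (illustrative)
            """
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # gradient step from the spectral norm
            for _ in range(n_iters):
                grad = A.T @ (A @ x - y)
                z = x - step * grad
                # Soft threshold: the sparsity prior that basis pursuit exploits.
                x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
            return x

    Solving in a basis in which natural images are sparse (e.g. wavelets), rather than in the pixel basis, is the usual refinement.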

    Light field super resolution through controlled micro-shifts of light field sensor

    Light field cameras enable new capabilities, such as post-capture refocusing and aperture control, by capturing the directional and spatial distribution of light rays. Micro-lens array based light field camera designs are often preferred due to their light transmission efficiency, cost-effectiveness and compactness. One drawback of micro-lens array based light field cameras is low spatial resolution, which stems from the fact that a single sensor is shared to capture both spatial and angular information. To address the low spatial resolution issue, we present a light field imaging approach in which multiple light fields are captured and fused to improve the spatial resolution. For each capture, the light field sensor is shifted by a pre-determined fraction of a micro-lens size using an XY translation stage for optimal performance.
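    For a regular shift pattern, the fusion step can be viewed as interleaving the shifted captures onto a finer spatial grid, as in the minimal sketch below. The grid x grid pattern of 1/grid micro-lens shifts and the assumption of perfect registration are illustrative simplifications of the capture procedure described above.

        import numpy as np

        def interleave_captures(captures, grid=3):
            """Fuse grid*grid sensor-shifted captures into one higher-resolution image.

            captures : list of grid*grid 2D arrays (the same view rendered from each
                       shifted light field), ordered row-major by shift
                       (dy, dx) = (i/grid, j/grid) micro-lens pitches
            """
            h, w = captures[0].shape
            hr = np.zeros((h * grid, w * grid))
            for idx, img in enumerate(captures):
                i, j = divmod(idx, grid)
                hr[i::grid, j::grid] = img   # each capture fills one sub-grid phase
            return hr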

    Photographic graininess reduction by super-imposition

    Thesis (M.A.)--Boston University.
    A method of reducing the graininess of a photographic print and increasing resolution in low-contrast regions is described. The method involves the printing of more than one negative frame to produce one print. This requires a series of negatives with identical detail coverage in the area to be printed. The success of the method depends largely on the precision of the solution of the registration problem. Each negative is printed in turn, using the normal exposure partitioned into as many parts as there are negatives to be printed. Each negative must be registered as exactly as possible in the image area. Four different aerial emulsions were used to obtain the 35-mm negatives for the superimposition printing technique. Kodak films used were: Tri X-RP Aercon, Super XX-RP Aerial Recon, Plus X Aerocon (SO 1166), and SO-1213. The exposure versus resolution characteristics and the basic sensitometric curves were developed for these films prior to exposure of the final series of negative frames. The negatives were exposed under identical conditions with the exception of lens openings and shutter speeds, at an object-to-image ratio of 160 to 1. The camera was a Contax IIA with a 50-mm F/2 Sonnar lens. The camera exposure settings were: Tri-X, F/16, 1/250 second; Plus X, F/16, 1/100 second; SO-1213, F/11, 1/50 second. Due to the brightness level of the target, the camera lens was not used at its best aperture. No filter was used. [TRUNCATED]
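    The digital analogue of this superimposition procedure is frame averaging: printing N registered negatives, each contributing 1/N of the normal exposure, corresponds to averaging N registered images, which suppresses uncorrelated grain noise by roughly a factor of sqrt(N) while preserving the detail common to every frame. A minimal sketch (not part of the thesis):

        import numpy as np

        def superimpose(frames):
            """Average N registered frames of the same scene.

            Grain that is uncorrelated between frames averages down by ~1/sqrt(N);
            image detail, identical in every frame, is preserved.
            """
            stack = np.stack([np.asarray(f, dtype=float) for f in frames])
            return stack.mean(axis=0)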

    Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks

    Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA based light field cameras is low spatial resolution, which is due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning based light field enhancement approach in which both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
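    The abstract does not specify the network architecture; the sketch below is a small SRCNN-style convolutional model that sharpens one bicubically upsampled sub-aperture view, standing in only for the spatial-resolution part. The layer sizes are illustrative assumptions, and the angular (view-synthesis) part is not shown.

        import torch
        import torch.nn as nn

        class SubApertureSR(nn.Module):
            """Map an upsampled, blurry sub-aperture view to a sharpened one."""
            def __init__(self):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
                    nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.Conv2d(32, 1, kernel_size=5, padding=2),
                )

            def forward(self, x):            # x: (batch, 1, H, W) grayscale views
                return self.body(x)

        # Example: enhance one (placeholder) upsampled sub-aperture view.
        view = torch.randn(1, 1, 128, 128)
        print(SubApertureSR()(view).shape)   # torch.Size([1, 1, 128, 128])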