    Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks

    Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA based light field cameras is low spatial resolution, which is due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested on real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
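    The angular enhancement described in this abstract can be contrasted with a simple non-learned baseline. Below is a minimal sketch, assuming a hypothetical 4D light field array `L[u, v, y, x]` (angular indices `u, v`; spatial indices `y, x`), of synthesizing an intermediate sub-aperture view by bilinear interpolation between the four nearest captured views; this is the kind of baseline a CNN-based method aims to improve on, not the paper's actual network.

```python
import numpy as np

# Hypothetical 4D light field: 5x5 angular views, each 64x64 pixels
U, V, H, W = 5, 5, 64, 64
rng = np.random.default_rng(1)
L = rng.random((U, V, H, W))

def interp_view(L, u, v):
    """Synthesize a sub-aperture view at fractional angular position (u, v)
    by bilinear interpolation between the four nearest captured views."""
    u0, v0 = int(u), int(v)
    u1 = min(u0 + 1, L.shape[0] - 1)
    v1 = min(v0 + 1, L.shape[1] - 1)
    a, b = u - u0, v - v0
    return ((1 - a) * (1 - b) * L[u0, v0] + (1 - a) * b * L[u0, v1]
            + a * (1 - b) * L[u1, v0] + a * b * L[u1, v1])

view = interp_view(L, 1.5, 2.5)   # view midway between four captured views
```

    A learned approach replaces this fixed interpolation with convolutional layers trained to predict the intermediate view, which lets the synthesis account for scene depth and occlusions.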

    Surface heat transfer due to sliding bubble motion


    Fourier domain optical coherence tomography system with balance detection

    A Fourier domain optical coherence tomography system with two spectrometers in balanced detection is assembled, each using an InGaAs linear camera. Conditions and adjustments of spectrometer parameters are presented to ensure anti-phase channeled spectrum modulation across the two cameras for a majority of wavelengths within the optical source spectrum. By blocking the signal to one of the spectrometers, the setup was used to compare the operation of a single camera with that of a balanced configuration. Using multi-layer samples, the balanced detection technique is compared with techniques applied to conventional single-camera setups, based on sequential subtraction of averaged spectra collected with different on/off settings for the sample or reference beams. In terms of reducing the autocorrelation terms and fixed-pattern noise, it is concluded that balanced detection performs better than single-camera techniques, is more tolerant to movement, exhibits longer-term stability, and can operate dynamically in real time. The cameras used exhibit a larger saturation power than the power threshold at which excess photon noise exceeds shot noise; therefore, conditions to adjust the two cameras to reduce the noise when used in a balanced configuration are presented. It is shown that balanced detection can reduce the noise in real-time operation compared with single-camera configurations. However, simple subtraction of an averaged spectrum in single-camera configurations delivers less noise than balanced detection.
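    The benefit of balanced detection described above comes from subtracting two anti-phase copies of the channeled spectrum: the interference modulation adds, while common-mode excess photon noise cancels. A minimal numerical sketch (synthetic data; the signal shape and noise level are illustrative assumptions, not measured values):

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.linspace(0.0, 2 * np.pi, 2048)      # wavenumber axis (arbitrary units)
signal = np.cos(40 * k)                    # channeled spectrum modulation
common = 0.5 * rng.normal(size=k.size)     # common-mode excess photon noise

cam_a = common + signal                    # spectrometer A
cam_b = common - signal                    # spectrometer B, anti-phase modulation

balanced = cam_a - cam_b                   # common-mode noise cancels exactly
# balanced equals 2 * signal: the modulation doubles, the shared noise vanishes
```

    In a real instrument the cancellation is only partial, since it requires the anti-phase condition to hold across the source spectrum and the two cameras to be matched in gain, which is why the abstract emphasises the adjustment conditions.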

    Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts

    This tutorial summarises our uses of reflectance transformation imaging in archaeological contexts. It introduces the UK AHRC funded project Reflectance Transformation Imaging for Ancient Documentary Artefacts and demonstrates imaging methodologies.

    Dynamic programming for multi-view disparity/depth estimation


    Low-level processing for real-time image analysis

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map, and a microprocessor, which is integrated into the system, clusters the edges and represents them as chain codes. Image statistics, useful for higher-level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real-time image analysis that uses this system is given.
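    The chain-code representation mentioned in this abstract can be illustrated with a small sketch using Freeman 8-direction codes; the function name and the toy contour below are hypothetical examples, not taken from the described system.

```python
# Freeman 8-directional chain code: 0 = east, then counter-clockwise in 45° steps
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode a connected pixel path as a list of direction codes,
    one code per step between consecutive points."""
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

# A unit square traced counter-clockwise starting at the origin
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
codes = chain_code(square)   # [0, 2, 4, 6]
```

    Storing one 3-bit code per boundary step instead of full pixel coordinates is what makes chain codes attractive for a memory-constrained microprocessor such as the one in this system.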