
    Land classification of south-central Iowa from computer enhanced images

    The author has identified the following significant results. Enhanced LANDSAT imagery was most useful for land classification purposes, because these images could be photographically printed at large scales such as 1:63,360. The ability to see individual picture elements was no hindrance as long as general image patterns could be discerned. Low-cost photographic processing systems for color printing have proved effective in the utilization of computer-enhanced LANDSAT products for land classification purposes. The initial investment for this type of system was very low, ranging from $100 to $200 beyond a black-and-white photo lab. The technical expertise can be acquired from reading a color printing and processing manual.

    High dynamic range imaging for archaeological recording

    This paper notes the adoption of digital photography as a primary recording means within archaeology, and reviews some issues and problems that this presents. Particular attention is given to the problems of recording high-contrast scenes in archaeology, and High Dynamic Range (HDR) imaging using multiple exposures is suggested as a means of providing an archive of high-contrast scenes that can later be tone-mapped to provide a variety of visualisations. Exposure fusion is also considered, although it is noted that this has some disadvantages. Three case studies are then presented: (1) a very high contrast photograph taken from within a rock-cut tomb at Cala Morell, Menorca; (2) an archaeological test-pitting exercise requiring rapid acquisition of photographic records in challenging circumstances; and (3) legacy material consisting of three differently exposed colour positive (slide) photographs of the same scene. In each case, HDR methods are shown to significantly aid the generation of a high-quality illustrative record photograph, and it is concluded that HDR imaging could serve an effective role in archaeological photographic recording, although there remain problems of archiving and distributing HDR radiance map data.
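    The exposure-fusion alternative mentioned in the abstract can be sketched in a few lines. This is a deliberately simplified, single-scale stand-in (full exposure fusion also weights contrast and saturation and blends across an image pyramid), with the weighting parameter chosen arbitrarily:

    ```python
    import numpy as np

    def exposure_fusion(exposures, sigma=0.2):
        """Blend a bracketed exposure stack with per-pixel 'well-exposedness'
        weights (a Gaussian around mid-grey), in the spirit of exposure fusion.
        `exposures` is a list of float images scaled to [0, 1]."""
        stack = np.stack([np.asarray(e, dtype=np.float64) for e in exposures])
        weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
        weights /= weights.sum(axis=0, keepdims=True)   # normalise across the stack
        return (weights * stack).sum(axis=0)            # weighted per-pixel blend
    ```

    Pixels close to mid-grey in some exposure dominate the blend there, which is why fusion yields a directly viewable result without the tone-mapping step an HDR radiance map requires.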

    A Dual Sensor Computational Camera for High Quality Dark Videography

    Videos captured under low-light conditions suffer from severe noise. A variety of efforts have been devoted to image and video noise suppression and have made considerable progress. However, in extremely dark scenarios, extensive photon starvation hampers precise noise modeling. Instead, developing an imaging system that collects more photons is a more effective way to capture high-quality video under low illumination. In this paper, we propose to build a dual-sensor camera to additionally collect photons in the near-infrared (NIR) wavelength, and make use of the correlation between the RGB and NIR spectra to perform high-quality reconstruction from noisy dark video pairs. In hardware, we build a compact dual-sensor camera capturing RGB and NIR videos simultaneously. Computationally, we propose a dual-channel multi-frame attention network (DCMAN) utilizing spatial-temporal-spectral priors to reconstruct the low-light RGB and NIR videos. In addition, we build a high-quality paired RGB and NIR video dataset, based on which the approach can be applied to different sensors easily by training the DCMAN model with simulated noisy input following a physical-process-based CMOS noise model. Experiments on both synthetic and real videos validate the performance of this compact dual-sensor camera design and the corresponding reconstruction algorithm in dark videography.
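    The "physical-process-based CMOS noise model" used to synthesise training pairs can be illustrated with a toy shot-plus-read-noise simulator. The parameter values below are illustrative assumptions, not the paper's calibrated model:

    ```python
    import numpy as np

    def simulate_cmos_noise(clean, photons_per_unit=20.0, read_std=2.0, rng=None):
        """Toy physical-process noise model: scale radiance to expected photon
        counts, apply Poisson shot noise (signal-dependent), add Gaussian read
        noise (signal-independent), then rescale back to [0, 1].
        `clean` is a float image in [0, 1]."""
        rng = np.random.default_rng(rng)
        electrons = clean * photons_per_unit                    # expected counts
        shot = rng.poisson(electrons).astype(np.float64)        # shot noise
        noisy = shot + rng.normal(0.0, read_std, clean.shape)   # read noise
        return np.clip(noisy / photons_per_unit, 0.0, 1.0)
    ```

    Because the model is physical rather than learned, the same simulator can be recalibrated for a different sensor, which is what makes the "train once per sensor from simulation" workflow possible.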

    Gradient variation: A key to enhancing photographs across illumination

    Ph.D. thesis (Doctor of Philosophy)

    An Integrated Enhancement Solution for 24-hour Colorful Imaging

    The current industry practice for 24-hour outdoor imaging is to use a silicon camera supplemented with near-infrared (NIR) illumination. This results in color images with poor contrast in the daytime and an absence of chrominance at nighttime. To resolve this dilemma, all existing solutions try to capture RGB and NIR images separately. However, they need additional hardware support and suffer from various drawbacks, including short service life, high price, and narrow usage scenarios. In this paper, we propose a novel and integrated enhancement solution that produces clear color images, whether in abundant daytime sunlight or extremely low-light nighttime conditions. Our key idea is to separate the VIS and NIR information from mixed signals, and enhance the VIS signal adaptively with the NIR signal as assistance. To this end, we build an optical system to collect a new VIS-NIR-MIX dataset and present a physically meaningful CNN-based image processing algorithm. Extensive experiments show outstanding results, which demonstrate the effectiveness of our solution.
    Comment: AAAI 2020 (Oral)
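    The separation step ("separate the VIS and NIR information from mixed signals") can be illustrated with a linear-unmixing sketch. The paper learns this separation adaptively with a CNN; the known mixing matrix below is a hypothetical calibration input used only to make the idea concrete:

    ```python
    import numpy as np

    def unmix_vis_nir(mixed, mixing):
        """Per-pixel least-squares unmixing: model each observed channel as a
        known linear mixture of one VIS and one NIR component,
            mixed[..., c] = mixing[c, 0] * vis + mixing[c, 1] * nir,
        and invert the model per pixel.
        `mixed` is (H, W, C); `mixing` is (C, 2) with C >= 2."""
        h, w, c = mixed.shape
        flat = mixed.reshape(-1, c).T                       # (C, H*W)
        comps, *_ = np.linalg.lstsq(mixing, flat, rcond=None)
        vis, nir = comps.reshape(2, h, w)                   # split components
        return vis, nir
    ```

    In the noise-free, known-mixing case this recovers the components exactly; the learned approach is needed precisely because real mixing is spatially varying and noisy.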

    Enhancing Low-Light Images Using Infrared-Encoded Images

    Low-light image enhancement is an essential yet challenging task as it is intrinsically ill-posed. Previous work mainly focuses on low-light images captured in the visible spectrum using pixel-wise losses, which limits the capacity to recover brightness, contrast, and texture details due to the small number of incoming photons. In this work, we propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter, which allows for the capture of more photons and results in an improved signal-to-noise ratio due to the inclusion of information from the IR spectrum. To verify the proposed strategy, we collect a paired dataset of low-light images captured without the IR cut-off filter, with corresponding long-exposure reference images taken with an external filter. Experimental results on the proposed dataset demonstrate the effectiveness of the proposed method, showing better performance both quantitatively and qualitatively. The dataset and code are publicly available at https://wyf0912.github.io/ELIEI/
    Comment: The first two authors contribute equally. The work is accepted by ICIP 202
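    The signal-to-noise argument (more photons means better SNR) follows directly from photon statistics: shot noise is Poisson-distributed, so its standard deviation is the square root of the mean count and SNR grows as sqrt(N). A minimal numerical check, with arbitrary photon counts:

    ```python
    import numpy as np

    def empirical_snr(mean_photons, trials=200_000, seed=0):
        """Estimate the SNR of a shot-noise-limited measurement: draw Poisson
        photon counts and return mean / std. Theory predicts
        SNR = sqrt(mean_photons), so admitting extra (e.g. IR) photons raises
        SNR as the square root of the photon count."""
        counts = np.random.default_rng(seed).poisson(mean_photons, trials)
        return counts.mean() / counts.std()
    ```

    Quadrupling the photon count roughly doubles the SNR, which is the quantitative payoff of letting the IR photons through.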

    Programmable Spectrometry -- Per-pixel Classification of Materials using Learned Spectral Filters

    Many materials have distinct spectral profiles. This facilitates estimation of the material composition of a scene at each pixel by first acquiring its hyperspectral image, and subsequently filtering it using a bank of spectral profiles. This process is inherently wasteful since only a set of linear projections of the acquired measurements contribute to the classification task. We propose a novel programmable camera that is capable of producing images of a scene with an arbitrary spectral filter. We use this camera to optically implement the spectral filtering of the scene's hyperspectral image with the bank of spectral profiles needed to perform per-pixel material classification. This provides gains both in acquisition speed, since only the relevant measurements are acquired, and in signal-to-noise ratio, since we avoid narrowband filters that are light-inefficient. Given training data, we use a range of classical and modern techniques, including SVMs and neural networks, to identify the bank of spectral profiles that facilitate material classification. We verify the method in simulations on standard datasets as well as on real data using a lab prototype of the camera.
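    The per-pixel pipeline described above amounts to projecting each pixel's spectrum onto the filter bank and taking the best-scoring material. The camera performs these projections optically; numerically they are just a batch of dot products. The filters below are toy stand-ins for the learned spectral profiles:

    ```python
    import numpy as np

    def classify_materials(hypercube, filter_bank):
        """Project every pixel spectrum onto a bank of spectral filters and
        return the index of the best-matching material per pixel.
        `hypercube` is (H, W, B) over B spectral bands; `filter_bank` is (M, B),
        one filter per material class."""
        scores = np.einsum('hwb,mb->hwm', hypercube, filter_bank)  # (H, W, M)
        return scores.argmax(axis=-1)                              # (H, W) labels
    ```

    Since only these M projections matter for classification, measuring them directly (rather than the full B-band cube) is where the acquisition-speed gain comes from.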