
    An exploration of feature detector performance in the thermal-infrared modality

    Thermal-infrared images have superior statistical properties compared with visible-spectrum images in many low-light or no-light scenarios. However, a detailed understanding of feature detector performance in the thermal modality lags behind that of the visible modality. To address this, the first comprehensive study of feature detector performance on thermal-infrared images is conducted. A dataset is presented which explores ten different environments with a range of statistical properties. The effects of several digital and physical image transformations on detector repeatability in these environments are investigated. The effect of non-uniformity noise, unique to the thermal modality, is analyzed. The accumulation of sensor non-uniformities beyond the minimum possible level was found to have only a small negative effect. Limiting feature counts was found to improve the repeatability performance of several detectors. Most other image transformations had predictable effects on feature stability. The best-performing detector varied considerably depending on the nature of the scene and the test.
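As a rough illustration of the repeatability measure such detector studies typically report, the sketch below computes the fraction of keypoints from one view that reappear within a pixel tolerance in another view. The function name and the exact matching rule are illustrative assumptions, not the thesis's definition.

```python
import numpy as np

def repeatability(kps_a, kps_b, tol=2.0):
    """Detector repeatability between two views of the same scene
    (keypoints assumed already mapped into a common frame): the
    fraction of points, relative to the smaller set, that have a
    counterpart within `tol` pixels."""
    kps_a, kps_b = np.asarray(kps_a), np.asarray(kps_b)
    # Pairwise distances between the two keypoint sets.
    d = np.linalg.norm(kps_a[:, None, :] - kps_b[None, :, :], axis=-1)
    matched = int((d.min(axis=1) < tol).sum())
    return matched / min(len(kps_a), len(kps_b))
```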

    Color-decoupled photo response non-uniformity for digital image forensics

    The last few years have seen the use of photo response non-uniformity noise (PRNU), a unique fingerprint of imaging sensors, in various digital forensic applications such as source device identification, content integrity verification and authentication. However, the use of a colour filter array for capturing only one of the three colour components per pixel introduces colour interpolation noise, while the existing methods for extracting PRNU provide no effective means for addressing this issue. Because the artificial colours obtained through the colour interpolation process are not directly acquired from the scene by physical hardware, we expect that the PRNU extracted from the physical components, which are free from interpolation noise, should be more reliable than that from the artificial channels, which carry interpolation noise. Based on this assumption we propose a Colour-Decoupled PRNU (CD-PRNU) extraction method, which first decomposes each colour channel into 4 sub-images and then extracts the PRNU noise from each sub-image. The PRNU noise patterns of the sub-images are then assembled to get the CD-PRNU. This new method can prevent the interpolation noise from propagating into the physical components, thus improving the accuracy of device identification and image content integrity verification.
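The decomposition step can be sketched as follows. This is a minimal illustration assuming a 2x2 colour-filter-array layout, with a simple box blur standing in for the wavelet-based denoiser normally used for PRNU extraction; function names are hypothetical.

```python
import numpy as np

def box_blur(img):
    # 3x3 mean filter; a crude stand-in for a proper PRNU denoiser.
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def noise_residual(img):
    # PRNU-style residual: image minus its denoised estimate.
    return img - box_blur(img)

def cd_prnu_channel(channel):
    """Extract a colour-decoupled noise pattern from one colour channel
    by splitting it into its four 2x2 sub-images, so interpolated
    (artificial) pixels never mix with physical ones."""
    h, w = channel.shape
    h, w = h - h % 2, w - w % 2              # even dimensions
    channel = channel[:h, :w].astype(float)
    out = np.zeros_like(channel)
    for dy in (0, 1):
        for dx in (0, 1):
            sub = channel[dy::2, dx::2]      # one of the 4 sub-images
            out[dy::2, dx::2] = noise_residual(sub)
    return out
```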

    Sensor Non Uniformity Correction Algorithms and its Real Time Implementation for Infrared Focal Plane Array-based Thermal Imaging System

    The advancement in infrared (IR) detector technologies from the 1st to the 3rd generation and beyond has resulted in the improvement of infrared imaging systems due to the availability of IR detectors with a large number of pixels, smaller pitch, higher sensitivity and large F-number. However, it also brings several problems, the most serious of which is sensor non-uniformity, mainly attributed to differences in the photo-response of each detector in the infrared focal plane array. These spatial and temporal non-uniformities produce a slowly varying pattern on the image, usually called fixed pattern noise, which considerably degrades the temperature-resolving capability of a thermal imaging system. This paper describes two types of non-uniformity correction methodologies. The first type corrects sensor non-uniformities based upon a calibration method. The second type corrects sensor non-uniformities using scene information present in the acquired images. The proposed algorithms correct both additive and multiplicative non-uniformities. These algorithms are evaluated using simulated and actual infrared data, and results of the implementations are presented. Furthermore, the proposed algorithms are implemented in field programmable gate array based embedded hardware. Defence Science Journal, 2013, 63(6), pp. 589-598, DOI: http://dx.doi.org/10.14429/dsj.63.576
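The calibration-based variant is commonly realised as a two-point correction. The sketch below assumes two uniform blackbody reference frames and a linear per-pixel response; it is a generic illustration of the technique, not the paper's implementation, and all names are hypothetical.

```python
import numpy as np

def two_point_nuc(cold, hot):
    """Per-pixel gain and offset from two uniform blackbody frames
    (two-point calibration). Handles both multiplicative (gain) and
    additive (offset) non-uniformities under a linear pixel model."""
    gain = (hot.mean() - cold.mean()) / (hot - cold)
    offset = cold.mean() - gain * cold
    return gain, offset

def correct(raw, gain, offset):
    # Apply the per-pixel correction to a raw frame.
    return gain * raw + offset
```

With a linear detector model y = a*x + b per pixel, this maps every pixel onto the array-mean response, so uniform scenes come out spatially flat.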

    Calibrating the elements of a multispectral imaging system

    We describe a method to calibrate the elements of a multispectral system aimed at skylight imaging, which consists of a monochrome charge-coupled device (CCD) camera and a liquid-crystal tunable filter (LCTF). We demonstrate how to calibrate these two devices in order to build a multispectral camera that can obtain spectroradiometric measurements of skylight. Spectral characterizations of the tunable filter and the camera are presented, together with a complete study of correcting temporal and spatial noise, which is of key importance in CCDs. We describe all the necessary steps to undertake this work and all the additional instrumentation that must be used to calibrate the radiometric devices correctly. We show how this complete study of our multispectral system allows us to use it as an accurate, high-resolution spectroradiometer. This work was financed by the Spanish Red Temática “CIENCIA Y TECNOLOGÍA DEL COLOR” (FIS2005-25312-E), the Spanish Ministry of Education and Science, and the European Fund for Regional Development (FEDER) through grant FIS2007-60736. We thank our English colleague A. L. Tate for revising our English text. Peer reviewed

    Simultaneous temperature estimation and nonuniformity correction from multiple frames

    Infrared (IR) cameras are widely used for temperature measurements in various applications, including agriculture, medicine, and security. Low-cost IR cameras have immense potential to replace expensive radiometric cameras in these applications; however, low-cost microbolometer-based IR cameras are prone to spatially-variant nonuniformity and to drift in temperature measurements, which limits their usability in practical scenarios. To address these limitations, we propose a novel approach for simultaneous temperature estimation and nonuniformity correction from multiple frames captured by low-cost microbolometer-based IR cameras. We leverage the physical image acquisition model of the camera and incorporate it into a deep learning architecture called kernel estimation networks (KPN), which enables us to combine multiple frames despite imperfect registration between them. We also propose a novel offset block that incorporates the ambient temperature into the model and enables us to estimate the offset of the camera, which is a key factor in temperature estimation. Our findings demonstrate that the number of frames has a significant impact on the accuracy of temperature estimation and nonuniformity correction. Moreover, our approach achieves a significant improvement in performance compared to vanilla KPN, thanks to the offset block. The method was tested on real data collected by a low-cost IR camera mounted on a UAV, showing only a small average error of 0.27°C-0.54°C relative to costly scientific-grade radiometric cameras. Our method provides an accurate and efficient solution for simultaneous temperature estimation and nonuniformity correction, which has important implications for a wide range of practical applications.
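In spirit, the multi-frame idea can be illustrated with a toy linear acquisition model in which the per-pixel offset drifts with ambient temperature. The model, its coefficients, and the function names here are assumptions made for illustration only; they are not the paper's KPN architecture or its actual radiometric model.

```python
import numpy as np

# Toy acquisition model (an assumption, not the paper's):
# pixel value = gain * T_scene + offset(T_amb), with the offset
# drifting linearly with ambient temperature T_amb.
def simulate_frame(T_scene, gain, c0, c1, T_amb):
    return gain * T_scene + (c0 + c1 * T_amb)

def estimate_temperature(frames, T_ambs, gain, c0, c1):
    """Remove each frame's ambient-dependent offset, invert the gain,
    and average over frames -- the multi-frame idea in spirit."""
    ests = [(f - (c0 + c1 * Ta)) / gain for f, Ta in zip(frames, T_ambs)]
    return np.mean(ests, axis=0)
```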

    Study and simulation results for video landmark acquisition and tracking technology (Vilat-2)

    The results of several investigations and hardware developments which supported new technology for Earth feature recognition and classification are described. Data analysis techniques and procedures were developed for processing the Feature Identification and Location Experiment (FILE) data. This experiment was flown in November 1981 on the second Shuttle flight, and a second instrument, designed for aircraft flights, was flown over the United States in 1981. Ground tests were performed to provide the basis for designing a more advanced version (four spectral bands) of the FILE, which would be capable of classifying clouds and snow (and possibly ice) as distinct features, in addition to the features classified in the Shuttle experiment (two spectral bands). The Shuttle instrument classifies water, bare land, vegetation, and clouds/snow/ice (grouped).

    Band to Band Calibration and Relative Gain Analysis of Satellite Sensors Using Deep Convective Clouds

    Two calibration techniques were developed in this research. First, a technique in which calibration was transferred to the cirrus and coastal aerosol bands from well-calibrated reflective bands of Landsat 8 using SCIAMACHY Deep Convective Cloud (DCC) spectra. Second, a novel method to derive relative gains using DCCs and improve the image quality of cirrus band scenes. DCCs are very cold, bright clouds located in the tropopause layer. At small sun elevation and sensor viewing angles, they act as near-Lambertian solar reflectors. They have a very high signal-to-noise ratio and can easily be detected using a simple IR threshold. Thus, DCCs are an ideal calibration target. The cirrus band in Landsat 8 is centered at 1375 nm. Due to high water vapor absorption at this wavelength, it is difficult to calibrate the cirrus band using other standard vicarious calibration methods. Similarly, the coastal aerosol band has a short wavelength (443 nm), at which atmospheric scattering is at its maximum, making this band difficult to calibrate. Thus, DCCs were investigated to calibrate these two channels. DCC spectra measured by the SCIAMACHY hyperspectral sensor were used to transfer calibration. The gain estimate after band-to-band calibration using DCCs was 0.986 ± 0.0031 for the coastal aerosol band and 0.982 ± 0.0398 for the cirrus band. The primary target was to estimate gains with an uncertainty of less than 5%; the results are within the required precision levels, and the primary goal of the research was successfully accomplished. Non-uniformity in detector response can cause visible streaks in the image. To remove these streaks, a modified histogram equalization method was used in the second algorithm. A large number of DCC scenes were binned and relative gains were derived. Results were validated qualitatively by visual analysis and quantitatively by the streaking metric. The streaking metric was below 0.2 for most of the detectors, which was the required goal. Visible streaks were removed by applying DCC-derived gains, and in most cases the DCC gains outperform the default gains.
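A simplified stand-in for the relative-gain step: with many near-uniform DCC samples per detector (column), each detector's mean response relative to the array mean gives its relative gain, which can then divide out the streaks. The names and the plain averaging scheme are illustrative assumptions, not the paper's modified histogram equalization method.

```python
import numpy as np

def relative_gains(dcc_stack):
    """Per-detector (per-column) relative gains from a stack of
    near-uniform DCC samples: each column's mean response divided
    by the mean over all detectors."""
    col_means = dcc_stack.mean(axis=0)   # mean response per detector
    return col_means / col_means.mean()

def destreak(image, gains):
    # Dividing by the relative gains flattens column-wise streaks.
    return image / gains
```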

    Camera Spatial Frequency Response Derived from Pictorial Natural Scenes

    Get PDF
    Camera system performance is a prominent part of many aspects of imaging science and computer vision. Many aspects of camera performance determine how accurately the image represents the scene, including colour accuracy, tone reproduction, geometric distortion, and image noise. The research conducted in this thesis focuses on the Modulation Transfer Function (MTF), a widely used camera performance measurement employed to describe resolution and sharpness. Traditionally measured under controlled conditions with characterised test charts, the MTF is a measurement restricted to laboratory settings. The MTF is based on linear system theory, meaning the relationship between input and output must be linear. Established methods for measuring the camera system MTF include ISO 12233:2017 for measuring the edge-based Spatial Frequency Response (e-SFR), a sister measure of the MTF designed for discrete systems. Many modern camera systems incorporate non-linear, highly adaptive image signal processing (ISP) to improve image quality. As a result, system performance becomes scene- and processing-dependent, adapting to the scene contents captured by the camera. Established test-chart-based MTF/SFR methods do not describe this adaptive nature; they only provide the response of the camera to a test chart signal. Further, with the increased use of Deep Neural Networks (DNNs) for image recognition tasks and autonomous vision systems, there is an increased need for monitoring system performance outside laboratory conditions in real time, i.e. a live MTF. Such measurements would assist in monitoring camera systems to ensure they are fully operational for decision-critical tasks. This thesis presents research conducted to develop a novel automated methodology that estimates the standard e-SFR directly from pictorial natural scenes.
    This methodology has the potential to produce scene-dependent and real-time camera system performance measurements, opening new possibilities in imaging science and allowing live monitoring/calibration of systems for autonomous computer vision applications. The proposed methodology incorporates many well-established image processes, as well as others developed for specific purposes. It is presented in two parts. Firstly, the Natural Scene derived SFRs (NS-SFRs) are obtained from isolated captured scene step-edges, after verifying that these edges have the correct profile for implementing the slanted-edge algorithm. The resulting NS-SFRs are shown to be a function of both camera system performance and scene contents. The second part of the methodology uses a series of derived NS-SFRs to estimate the system e-SFR, as per the ISO 12233 standard. This is achieved by applying a sequence of thresholds to segment the data most likely corresponding to the system performance. These thresholds a) group the expected optical performance variation across the imaging circle within radial distance segments, b) obtain the highest-performance NS-SFRs per segment, and c) select the NS-SFRs with input edge and region of interest (ROI) parameter ranges shown to introduce minimal e-SFR variation. The selected NS-SFRs are averaged per radial segment to estimate system e-SFRs across the field of view, and a weighted average of these estimates provides an overall system performance estimation. This methodology is implemented for e-SFR estimation of three characterised camera systems, two near-linear and one highly non-linear. Investigations are conducted using large, diverse image datasets, as well as restricting scene content and the number of images used for the estimation. The resulting estimates are comparable to ISO 12233 e-SFRs derived from test chart inputs for the near-linear systems; the overall estimate stays within one standard deviation of the equivalent test chart measurement.
    Results from the highly non-linear system indicate scene and processing dependency, potentially leading to a more representative SFR measure than the current chart-based approaches for such systems. These results suggest that the proposed method is a viable alternative to the ISO technique.
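The core of the slanted-edge computation that these e-SFR methods build on can be sketched as follows, heavily simplified (no edge-angle estimation or 4x oversampled binning as in the full ISO 12233 procedure): the edge spread function (ESF) is differentiated into a line spread function (LSF), windowed, and Fourier-transformed.

```python
import numpy as np

def esf_to_sfr(esf):
    """Simplified slanted-edge core: ESF -> derivative (LSF) ->
    window -> |FFT|, normalised to 1 at zero spatial frequency."""
    lsf = np.gradient(esf)                # line spread function
    lsf = lsf * np.hanning(lsf.size)      # suppress truncation noise
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                   # SFR, DC-normalised
```

For a blurred step edge, the resulting SFR starts at 1 and falls off with spatial frequency, as expected of a band-limited system.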