17 research outputs found

    Simultaneous temperature estimation and nonuniformity correction from multiple frames

    Infrared (IR) cameras are widely used for temperature measurement in applications including agriculture, medicine, and security. Low-cost IR cameras have immense potential to replace expensive radiometric cameras in these applications; however, low-cost microbolometer-based IR cameras are prone to spatially variant nonuniformity and to drift in temperature measurements, which limits their usability in practical scenarios. To address these limitations, we propose a novel approach for simultaneous temperature estimation and nonuniformity correction from multiple frames captured by low-cost microbolometer-based IR cameras. We leverage the physical image acquisition model of the camera and incorporate it into a deep learning architecture based on kernel prediction networks (KPN), which enables us to combine multiple frames despite imperfect registration between them. We also propose a novel offset block that incorporates the ambient temperature into the model and enables us to estimate the offset of the camera, a key factor in temperature estimation. Our findings demonstrate that the number of frames has a significant impact on the accuracy of temperature estimation and nonuniformity correction. Moreover, our approach achieves a significant improvement in performance over a vanilla KPN, thanks to the offset block. The method was tested on real data collected by a low-cost IR camera mounted on a UAV, showing a small average error of 0.27 °C–0.54 °C relative to costly scientific-grade radiometric cameras. Our method provides an accurate and efficient solution for simultaneous temperature estimation and nonuniformity correction, which has important implications for a wide range of practical applications.
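    The two ingredients of this abstract — a linear per-pixel acquisition model with an ambient-temperature-dependent offset, and multi-frame merging — can be illustrated with a minimal simulation. This is a hedged sketch, not the paper's method: the gain/offset values, noise level, and the assumption of perfect registration and known calibration are all hypothetical, chosen only to show why averaging more frames improves the temperature estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, N = 32, 32, 8          # image size and number of frames

# Ground-truth scene temperature map (degrees C) -- hypothetical values
t_true = 20.0 + 5.0 * rng.random((H, W))

# Per-pixel fixed-pattern nonuniformity: a gain, plus an offset that also
# depends on ambient temperature (the quantity the paper's offset block models)
gain = 1.0 + 0.05 * rng.standard_normal((H, W))
t_amb = 25.0
offset = 0.5 * rng.standard_normal((H, W)) + 0.02 * t_amb

def capture():
    """One noisy frame under the linear acquisition model."""
    return gain * t_true + offset + 0.3 * rng.standard_normal((H, W))

frames = np.stack([capture() for _ in range(N)])

# With known gain/offset (ideal calibration), invert the model per frame;
# averaging N registered frames shrinks temporal noise roughly as 1/sqrt(N).
est_1 = (frames[0] - offset) / gain
est_n = (frames.mean(axis=0) - offset) / gain

err_1 = np.abs(est_1 - t_true).mean()
err_n = np.abs(est_n - t_true).mean()
print(f"1 frame: {err_1:.3f} C   {N} frames: {err_n:.3f} C")
```

    The paper's contribution is precisely that neither the offset nor perfect registration is available in practice; the KPN architecture and offset block estimate them jointly from the frames themselves.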

    Image Sensor Nonuniformity Correction by a Scene-Based Maximum Likelihood Approach

    Image sensors come with a spatial inhomogeneity, known as fixed pattern noise or image sensor nonuniformity, which degrades image quality. These nonuniformities are regarded as systematic errors of the image sensor; however, they change with sensor temperature and with time, which makes laboratory calibrations unsatisfactory. Scene-based nonuniformity correction methods are therefore necessary to correct for these sensor errors. In this thesis, a new maximum likelihood estimation method is developed that estimates a sensor’s nonuniformities from a given set of input images. The method follows a rigorous mathematical derivation that exploits the available sensor statistics and uses only well-motivated assumptions. While previous methods need to optimize a free parameter, the new method’s parameters are defined by the statistics of the input data. Furthermore, the new method reaches better performance than the previous methods. Specialized developments that include a row- or column-wise and a combined estimation of the nonuniformity parameters are introduced as well and are of relevance for typical industrial applications. Finally, it is shown that the previous methods can be regarded as simplifications of the newly developed method. This deliberation gives a new view of the problem of scene-based nonuniformity estimation and allows selection of the best method for a given application.
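    The thesis frames earlier scene-based methods as simplifications of its maximum likelihood estimator. One such classic simplification is the constant-statistics correction, which assumes every pixel observes the same signal distribution over time and normalizes each pixel's temporal mean and standard deviation to the global ones. The sketch below implements that simpler precursor (not the thesis's ML method) under entirely synthetic gain/offset values:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, N = 16, 16, 200

# Constant-statistics assumption: every pixel sees the same signal
# distribution over time (i.i.d. scene values per pixel per frame).
scene = rng.normal(100.0, 10.0, size=(N, H, W))
gain = 1.0 + 0.1 * rng.standard_normal((H, W))
offset = 5.0 * rng.standard_normal((H, W))
obs = gain * scene + offset

# Estimate the per-pixel response from temporal statistics, then map each
# pixel's mean/std onto the global mean/std.
mu, sigma = obs.mean(axis=0), obs.std(axis=0)

# Apply the learned correction to a fresh frame and compare against truth.
scene_new = rng.normal(100.0, 10.0, size=(H, W))
raw_new = gain * scene_new + offset
corr_new = (raw_new - mu) / sigma * sigma.mean() + mu.mean()

err_raw = np.abs(raw_new - scene_new).mean()
err_corr = np.abs(corr_new - scene_new).mean()
print(f"error before: {err_raw:.2f}  after: {err_corr:.2f}")
```

    The ML approach described in the abstract replaces this heuristic normalization with a likelihood derived from the sensor's actual statistics, which is why its parameters need no hand tuning.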


    Dark energy survey year 1 results: the photometric data set for cosmology

    We describe the creation, content, and validation of the Dark Energy Survey (DES) internal year-one cosmology data set, Y1A1 GOLD, in support of upcoming cosmological analyses. The Y1A1 GOLD data set is assembled from multiple epochs of DES imaging and consists of calibrated photometric zero-points, object catalogs, and ancillary data products—e.g., maps of survey depth and observing conditions, star–galaxy classification, and photometric redshift estimates—that are necessary for accurate cosmological analyses. The Y1A1 GOLD wide-area object catalog consists of ~137 million objects detected in co-added images covering ~1800 deg^2 in the DES grizY filters. The 10σ limiting magnitude for galaxies is g=23.4, r=23.2, i=22.5, z=21.8, and Y=20.1. Photometric calibration of Y1A1 GOLD was performed by combining nightly zero-point solutions with stellar locus regression, and the absolute calibration accuracy is better than 2% over the survey area. DES Y1A1 GOLD is the largest photometric data set at the achieved depth to date, enabling precise measurements of cosmic acceleration at z ≲ 1.

    Quanta Burst Photography

    Single-photon avalanche diodes (SPADs) are an emerging sensor technology capable of detecting individual incident photons and capturing their time of arrival with high timing precision. While these sensors were limited to single-pixel or low-resolution devices in the past, large (up to 1 MPixel) SPAD arrays have recently been developed. These single-photon cameras (SPCs) are capable of capturing high-speed sequences of binary single-photon images with no read noise. We present quanta burst photography, a computational photography technique that leverages SPCs as passive imaging devices for photography in challenging conditions, including ultra-low light and fast motion. Inspired by the recent success of conventional burst photography, we design algorithms that align and merge binary sequences captured by SPCs into intensity images with minimal motion blur and artifacts, high signal-to-noise ratio (SNR), and high dynamic range. We theoretically analyze the SNR and dynamic range of quanta burst photography and identify the imaging regimes where it provides significant benefits. We demonstrate, via a recently developed SPAD array, that the proposed method is able to generate high-quality images for scenes with challenging lighting, complex geometries, high dynamic range, and moving objects. With the ongoing development of SPAD arrays, we envision quanta burst photography finding applications in both consumer and scientific photography.
    Comment: A version with better-quality images can be found on the project webpage: http://wisionlab.cs.wisc.edu/project/quanta-burst-photography
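    The core merging idea can be shown in miniature. Each SPAD frame records at most one photon per pixel, so under an ideal model a pixel's detections are Bernoulli trials with probability 1 − exp(−φ), where φ is the photon flux per frame. Merging an aligned burst and inverting that model gives the maximum-likelihood flux estimate. This sketch assumes a single static pixel with a hypothetical flux; the paper's actual contribution, aligning the binary sequences under motion, is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
phi = 0.5                      # hypothetical photon flux per pixel per frame
N = 2000                       # binary frames in the burst

# Each SPAD frame detects at most one photon: P(detect) = 1 - exp(-phi)
binary = rng.random(N) < (1.0 - np.exp(-phi))

# Merge the (already aligned) burst and invert the Bernoulli model --
# the maximum-likelihood flux estimate for an ideal, static pixel.
p_hat = binary.mean()
phi_hat = -np.log(1.0 - p_hat)
print(f"true flux {phi}, estimate {phi_hat:.3f}")
```

    The nonlinearity of the inversion is also what gives SPCs their extended dynamic range: p_hat saturates toward 1 only logarithmically slowly in φ.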

    Investigating the effects of healthy cognitive aging on brain functional connectivity using 4.7 T resting-state functional Magnetic Resonance Imaging

    Functional changes in the aging human brain have been previously reported using functional magnetic resonance imaging (fMRI). Earlier resting-state fMRI studies revealed an age-associated weakening of intra-system functional connectivity (FC) and an age-associated strengthening of inter-system FC. However, the majority of such FC studies did not investigate the relationship between age and network amplitude, without which correlation-based measures of FC can be challenging to interpret. Consequently, the main aim of this study was to investigate how three primary measures of resting-state fMRI signal—network amplitude, network topography, and inter-network FC—are affected by healthy cognitive aging. We acquired resting-state fMRI data on a 4.7 T scanner for 105 healthy participants representing the entire adult lifespan (18–85 years of age). To study age differences in network structure, we combined ICA-based network decomposition with sparse graphical models. Older adults displayed lower blood-oxygen-level-dependent (BOLD) signal amplitude in all functional systems, with sensorimotor networks showing the largest age differences. Our age comparisons of network topography and inter-network FC demonstrated a substantial amount of age invariance in the brain’s functional architecture. Despite these architectural similarities, older adults displayed a loss of communication efficiency in our inter-network FC comparisons, driven primarily by the FC reduction in frontal and parietal association cortices. Together, our results provide a comprehensive overview of age effects on fMRI-based FC.

    Adaptive Nonlocal Signal Restoration and Enhancement Techniques for High-Dimensional Data

    The large number of practical applications involving digital images has motivated a significant interest towards restoration solutions that improve the visual quality of the data under the presence of various acquisition and compression artifacts. Digital images are the results of an acquisition process based on the measurement of a physical quantity of interest incident upon an imaging sensor over a specified period of time. The quantity of interest depends on the targeted imaging application. Common imaging sensors measure the number of photons impinging over a dense grid of photodetectors in order to produce an image similar to what is perceived by the human visual system. Different applications focus on the part of the electromagnetic spectrum not visible by the human visual system, and thus require different sensing technologies to form the image. In all cases, even with the advance of technology, raw data is invariably affected by a variety of inherent and external disturbing factors, such as the stochastic nature of the measurement processes or challenging sensing conditions, which may cause, e.g., noise, blur, geometrical distortion and color aberration. In this thesis we introduce two filtering frameworks for video and volumetric data restoration based on the BM3D grouping and collaborative filtering paradigm. In its general form, the BM3D paradigm leverages the correlation present within a nonlocal group composed of mutually similar basic filtering elements, e.g., patches, to attain an enhanced sparse representation of the group in a suitable transform domain where the energy of the meaningful part of the signal can be thus separated from that of the noise through coefficient shrinkage. We argue that the success of this approach largely depends on the form of the used basic filtering elements, which in turn define the subsequent spectral representation of the nonlocal group. 
    Thus, the main contribution of this thesis consists in tailoring specific basic filtering elements to the inherent characteristics of the processed data at hand. Specifically, we embed the local spatial correlation present in volumetric data through 3-D cubes, and the local spatial and temporal correlation present in videos through 3-D spatiotemporal volumes, i.e., sequences of 2-D blocks following a motion trajectory. The foundational aspect of this work is the analysis of the particular spectral representation of these elements. Specifically, our frameworks stack mutually similar 3-D patches along an additional fourth dimension, thus forming a 4-D data structure. By doing so, an effective group spectral description can be formed, as the phenomena acting along different dimensions in the data can be precisely localized along different spectral hyperplanes, and thus different filtering shrinkage strategies can be applied to different spectral coefficients to achieve the desired filtering results. This constitutes a decisive difference from the shrinkage traditionally employed in BM3D algorithms, where different hyperplanes of the group spectrum are shrunk subject to the same degradation model. Different image processing problems rely on different observation models and typically require specific algorithms to filter the corrupted data. As a further contribution of this thesis, we show that our high-dimensional filtering model allows us to target heterogeneous noise models, e.g., characterized by spatial and temporal correlation, signal-dependent distributions, spatially varying statistics, and non-white power spectral densities, without essential modifications to the algorithm structure. As a result, we develop state-of-the-art methods for a variety of fundamental image processing problems, such as denoising, deblocking, enhancement, deflickering, and reconstruction, which also find practical applications in consumer, medical, and thermal imaging.
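    The grouping and collaborative-filtering idea underlying this abstract can be demonstrated in miniature. Below, a "group" of mutually similar noisy 1-D patches is stacked, transformed jointly, and hard-thresholded: because the shared signal concentrates in a few joint-spectrum coefficients while the noise spreads over all of them, shrinkage separates the two. This is an illustrative toy, not the thesis's 4-D framework; the patch shape, noise level, and FFT-based transform are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
patch = np.sin(np.linspace(0, np.pi, 16))      # one clean "basic element"
group = np.stack([patch + 0.2 * rng.standard_normal(16) for _ in range(8)])

# Grouping + collaborative filtering, in miniature: a 2-D FFT over the
# stacked group concentrates the shared signal in few coefficients, so
# hard thresholding separates it from the noise spread across the spectrum.
spec = np.fft.fft2(group)
thresh = 3.0 * 0.2 * np.sqrt(group.size)       # ~3x the noise std per coeff
spec[np.abs(spec) < thresh] = 0.0
denoised = np.fft.ifft2(spec).real

err_noisy = np.abs(group - patch).mean()
err_filt = np.abs(denoised - patch).mean()
print(f"mean abs error: noisy {err_noisy:.3f} -> filtered {err_filt:.3f}")
```

    The thesis's point about tailored elements shows up even here: because all patches in the group are similar, the signal collapses onto the zero-frequency hyperplane along the stacking dimension, which is exactly the localization property its 4-D spectra exploit.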

    Applications and Experiences of Quality Control

    The rich palette of topics set out in this book provides a sufficiently broad overview of the developments in the field of quality control. By providing detailed information on various aspects of quality control, this book can serve as a basis for starting interdisciplinary cooperation, which has increasingly become an integral part of scientific and applied research.