23,718 research outputs found

    Global intensity correction in dynamic scenes

    Get PDF
    Changing image intensities cause problems for many computer vision applications operating in unconstrained environments. We propose generally applicable algorithms to correct for global differences in intensity between images recorded with a static or slowly moving camera, regardless of the cause of the intensity variation. The proposed intensity correction is based on intensity quotient estimation. Various intensity estimation methods are compared. Usability is evaluated with background classification as an example application. For this application we introduce the PIPE error measure, which evaluates performance and robustness to parameter settings. Our approach retains local intensity information, is always operational, and can cope with fast changes in intensity. We show that for intensity estimation, robustness to outliers is essential in dynamic scenes. For image sequences with changing intensity, the best-performing algorithm (MofQ) improves foreground-background classification results by a factor of two to four on real data.
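
    As a concrete illustration of quotient-based correction, the sketch below estimates a global intensity quotient as the median of per-pixel ratios (one plausible reading of "MofQ"; the function names and the exact estimator are illustrative, not the paper's implementation) and uses it to normalize a frame against a reference background:

        import numpy as np

        def median_of_quotients(frame, reference, eps=1e-6):
            # Estimate the global intensity quotient as the median of
            # per-pixel ratios; the median makes the estimate robust to
            # outliers such as moving foreground objects.
            q = (frame.astype(np.float64) + eps) / (reference.astype(np.float64) + eps)
            return np.median(q)

        def correct_intensity(frame, reference):
            # Divide the frame by the estimated quotient so its global
            # intensity matches the reference while local detail is kept.
            return frame / median_of_quotients(frame, reference)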

    Static Scene Statistical Non-Uniformity Correction

    Get PDF
    Non-Uniformity Correction (NUC) is required to normalize imaging detector Focal-Plane Array (FPA) outputs due to differences in the end-to-end photoelectric responses between pixels. Currently, multi-point NUC methods require static, uniform target scenes of a known intensity for calibration. Conversely, scene-based NUC methods do not require a priori knowledge of the target, but the target scene must be dynamic. The new Static Scene Statistical Non-Uniformity Correction (S3NUC) algorithm was developed to address an application gap left by current NUC methods. S3NUC requires two data sets of a static scene at different mean intensities but does not require a priori knowledge of the target. The S3NUC algorithm exploits the random noise in the output data, utilizing higher-order statistical moments to extract and correct fixed-pattern, systematic errors. The algorithm was tested in simulation and with measured data, and the results indicate that S3NUC is an accurate method of applying NUC. The algorithm was also able to track global array response changes over time in simulated and measured data. The results show that the variation tracking algorithm can be used to predict global changes in systems with known variation issues.
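
    The thesis derives its correction from higher-order statistical moments; the simpler second-moment (photon-transfer style) sketch below conveys the core idea under an assumed linear pixel model y = gain*x + offset with shot-noise-limited data. The names and the moment choice are assumptions, not the actual S3NUC derivation:

        import numpy as np

        def estimate_gain_offset(stack_lo, stack_hi):
            # Per-pixel gain and offset from two stacks of a static scene
            # at different mean intensities (frames along axis 0). With
            # shot noise, the temporal variance grows linearly with the
            # photon level, so two levels fix the per-pixel line.
            m1, v1 = stack_lo.mean(axis=0), stack_lo.var(axis=0)
            m2, v2 = stack_hi.mean(axis=0), stack_hi.var(axis=0)
            gain = (v2 - v1) / (m2 - m1)   # photon-transfer slope
            offset = m1 - v1 / gain        # removes the photon term gain*lambda
            return gain, offset

        def apply_nuc(frame, gain, offset):
            # Invert the per-pixel linear response to remove fixed-pattern
            # noise; the result is in photon-level units.
            return (frame - offset) / gain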

    Shift Estimation Algorithm for Dynamic Sensors With Frame-to-Frame Variation in Their Spectral Response

    Get PDF
    This study is motivated by the emergence of a new class of tunable infrared spectral-imaging sensors that offer the ability to dynamically vary the sensor's intrinsic spectral response from frame to frame in an electronically controlled fashion. A manifestation of this is when a sequence of dissimilar spectral responses is periodically realized, whereby in every period of acquired imagery, each frame is associated with a distinct spectral band. Traditional scene-based global shift estimation algorithms are not applicable to such spectrally heterogeneous video sequences, as a pixel value may change from frame to frame as a result of both global motion and varying spectral response. In this paper, a novel algorithm is proposed and examined to fuse a series of coarse global shift estimates between periodically sampled pairs of nonadjacent frames to estimate motion between consecutive frames; each pair corresponds to two nonadjacent frames of the same spectral band. The proposed algorithm outperforms three alternative methods, with the average error being one half of that obtained by using an equal weights version of the proposed algorithm, one-fourth of that obtained by using a simple linear interpolation method, and one-twentieth of that obtained by using a naïve correlation-based direct method.
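
    The "equal weights version" referenced above can be sketched as follows: each coarse shift, measured between same-band frames `period` frames apart, is spread uniformly over the consecutive frame intervals it spans, and overlapping estimates are averaged. This is a hypothetical rendering of that baseline, not the paper's optimized fusion:

        import numpy as np

        def fuse_shifts_equal_weights(period_shifts, period):
            # period_shifts[t] is the coarse global shift measured between
            # frames t and t + period (the same spectral band). Assuming
            # locally constant motion, each coarse shift contributes
            # s[t] / period to every interval it covers; intervals covered
            # by several coarse shifts average their contributions.
            s = np.asarray(period_shifts, dtype=float)
            n = len(s) + period - 1        # number of frame-to-frame intervals
            acc = np.zeros(n)
            cnt = np.zeros(n)
            for t, st in enumerate(s):
                acc[t:t + period] += st / period
                cnt[t:t + period] += 1
            return acc / cnt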

    Subjective and objective evaluation of local dimming algorithms for HDR images

    Get PDF

    Target recognitions in multiple camera CCTV using colour constancy

    Get PDF
    People tracking using colour features in crowded scenes through a CCTV network has been a popular and at the same time very difficult topic in computer vision. This is mainly because of the difficulty of acquiring intrinsic signatures of targets from a single view of the scene. Many factors, such as variable illumination conditions and viewing angles, induce illusory modifications of the intrinsic signatures of targets. The objective of this paper is to verify whether a colour constancy (CC) approach really helps people tracking in a CCTV network system. We have tested a number of CC algorithms together with various colour descriptors to assess the efficiency of people recognition on the real multi-camera i-LIDS data set via Receiver Operating Characteristics (ROC). It is found that when CC is applied together with some form of colour restoration mechanism, such as colour transfer, the recognition performance can be improved by at least a factor of two. An elementary luminance-based CC coupled with a pixel-based colour transfer algorithm, together with experimental results, is reported in the present paper.
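
    A minimal sketch of the kind of pipeline described, an elementary (grey-world) colour constancy step followed by a pixel-based, Reinhard-style colour transfer, is shown below; the paper's exact algorithms may differ, and the function names are illustrative:

        import numpy as np

        def grey_world(img):
            # Elementary colour constancy: scale each channel so its mean
            # matches the global mean (grey-world assumption). img is a
            # float H x W x 3 array.
            means = img.reshape(-1, 3).mean(axis=0)
            return img * (means.mean() / means)

        def colour_transfer(src, ref):
            # Pixel-based colour transfer: match the per-channel mean and
            # standard deviation of src to those of ref (applied in RGB
            # here for simplicity).
            s = src.reshape(-1, 3)
            r = ref.reshape(-1, 3)
            out = (s - s.mean(axis=0)) / (s.std(axis=0) + 1e-6)
            out = out * r.std(axis=0) + r.mean(axis=0)
            return out.reshape(src.shape)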

    Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli

    Get PDF
    In natural vision both stimulus features and task-demands affect an observer's attention. However, the relationship between sensory-driven (“bottom-up”) and task-dependent (“top-down”) factors remains controversial: Can task-demands counteract strong sensory signals fully, quickly, and irrespective of bottom-up features? To measure attention under naturalistic conditions, we recorded eye-movements in human observers while they viewed photographs of outdoor scenes. In the first experiment, smooth modulations of contrast biased the stimuli's sensory-driven saliency towards one side. In free-viewing, observers' eye-positions were immediately biased toward the high-contrast, i.e., high-saliency, side. However, this sensory-driven bias disappeared entirely when observers searched for a bull's-eye target embedded with equal probability on either side of the stimulus. When the target always occurred in the low-contrast side, observers' eye-positions were immediately biased towards this low-saliency side, i.e., the sensory-driven bias reversed. Hence, task-demands not only override sensory-driven saliency but also actively countermand it. In a second experiment, a 5-Hz flicker replaced the contrast gradient. Whereas the flicker-induced bias was less persistent in free viewing, its overriding and reversal took longer to deploy. Hence, insufficient sensory-driven saliency cannot account for the bias reversal. In a third experiment, subjects searched for a spot of locally increased contrast (“oddity”) instead of the bull's-eye (“template”). In contrast to the other conditions, a slight sensory-driven free-viewing bias prevailed in this condition. In a fourth experiment, we demonstrate that at known locations template targets are detected faster than oddity targets, suggesting that the former induce a stronger top-down drive when used as search targets. Taken together, task-demands can override sensory-driven saliency in complex visual stimuli almost immediately, and the extent of overriding depends on the search target and the overridden feature, but not on the latter's free-viewing saliency.
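
    A hypothetical analysis sketch for the kind of bias measure described, the fraction of fixations landing on the high-saliency half of the image within each time bin after stimulus onset (the names and binning scheme are assumptions, not the study's actual analysis):

        import numpy as np

        def side_bias(fix_x, fix_t, image_width, bin_edges):
            # Fraction of fixations on the right (high-saliency) half per
            # time bin; values above 0.5 indicate a bias toward that side.
            on_right = np.asarray(fix_x) > image_width / 2.0
            idx = np.digitize(np.asarray(fix_t), bin_edges)
            return np.array([
                on_right[idx == i].mean() if np.any(idx == i) else np.nan
                for i in range(1, len(bin_edges))
            ])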

    CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering

    Full text link
    Intrinsic image decomposition is a challenging, long-standing computer vision problem for which ground truth data is very difficult to acquire. We explore the use of synthetic data for training CNN-based intrinsic image decomposition models, which are then applied to real-world images. To that end, we present CGIntrinsics, a new, large-scale dataset of physically-based rendered images of scenes with full ground truth decompositions. The rendering process we use is carefully designed to yield high-quality, realistic images, which we find to be crucial for this problem domain. We also propose a new end-to-end training method that learns better decompositions by leveraging CGIntrinsics, and optionally IIW and SAW, two recent datasets of sparse annotations on real-world images. Surprisingly, we find that a decomposition network trained solely on our synthetic data outperforms the state-of-the-art on both IIW and SAW, and performance improves even further when IIW and SAW data is added during training. Our work demonstrates the surprising effectiveness of carefully-rendered synthetic data for the intrinsic images task.
    Comment: Published in ECCV 2018.
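
    Supervised losses for intrinsic decomposition are typically scale-invariant, since reflectance and shading are only recoverable up to a global scale; the sketch below shows such a loss in log space (a generic formulation, not necessarily the paper's exact loss, which also incorporates IIW/SAW annotation terms):

        import torch

        def scale_invariant_mse(pred_log, gt_log, mask):
            # Scale-invariant MSE between log-space predictions and ground
            # truth over valid pixels: subtracting the mean log difference
            # aligns the two up to the best global scale before comparing.
            diff = (pred_log - gt_log)[mask]
            alpha = diff.mean()
            return ((diff - alpha) ** 2).mean()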