26 research outputs found

    Global intensity correction in dynamic scenes

    Changing image intensities cause problems for many computer vision applications operating in unconstrained environments. We propose generally applicable algorithms to correct for global differences in intensity between images recorded with a static or slowly moving camera, regardless of the cause of the intensity variation. The proposed intensity correction is based on intensity quotient estimation, and various estimation methods are compared. Usability is evaluated with background classification as an example application; for this application we introduce the PIPE error measure, which evaluates both performance and robustness to parameter settings. Our approach retains local intensity information, is always operational, and can cope with fast changes in intensity. We show that for intensity estimation, robustness to outliers is essential in dynamic scenes. For image sequences with changing intensity, the best-performing algorithm (MofQ) improves foreground-background classification results by a factor of two to four on real data.
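The robust estimator named in the abstract, median of quotients (MofQ), can be sketched as follows. This is a minimal illustration, assuming the quotient is taken per pixel between a reference frame and the current frame and the median is used to suppress outliers such as moving foreground objects; the function name and parameters are placeholders, not the authors' implementation.

```python
import numpy as np

def mofq_intensity_correction(reference, current, eps=1e-6):
    """Estimate a global intensity quotient as the median of per-pixel
    quotients (MofQ) and use it to correct the current frame.

    The median makes the estimate robust to outliers such as moving
    foreground objects, which matters in dynamic scenes."""
    ref = reference.astype(np.float64)
    cur = current.astype(np.float64)
    # Per-pixel intensity quotients; eps avoids division by zero.
    quotients = (ref + eps) / (cur + eps)
    q = np.median(quotients)
    # Scale the current frame so its global intensity matches the reference.
    corrected = np.clip(cur * q, 0, 255)
    return corrected, q
```

If the current frame is a uniformly darkened copy of the reference, the estimated quotient recovers the darkening factor and the corrected frame matches the reference up to clipping.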

    CCD Color Camera Characterization for Image Measurements

    In this article, we analyze a range of different camera types for their use in measurements. We experimentally verify a general model of a charge-coupled device (CCD) camera. This model includes gain and offset, additive and multiplicative noise, and gamma correction. It is shown that the general model holds for several cameras, with the exception of a typical consumer webcam, and the associated model parameters are estimated. For most cameras the model can be simplified under normal operating conditions by neglecting the dark current. We further show that multiplicative noise exceeds additive noise at intensity values larger than 10%–30% of the intensity range.
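The camera model described above can be sketched in simulation. This is an illustrative sketch, not the authors' measured model: all parameter values (gain, offset, dark current, noise levels, gamma) are assumptions chosen only to show the model's structure and the additive/multiplicative noise crossover.

```python
import numpy as np

def simulate_ccd(irradiance, gain=1.0, offset=0.0, dark=0.0,
                 sigma_add=2.0, sigma_mult=0.02, gamma=1.0, rng=None):
    """Simulate a simple CCD camera model with gain/offset, dark current,
    additive and multiplicative (signal-dependent) noise, and gamma.

    All parameter defaults are illustrative assumptions, not measurements."""
    rng = np.random.default_rng() if rng is None else rng
    signal = gain * (irradiance + dark) + offset
    noisy = (signal * (1.0 + sigma_mult * rng.standard_normal(signal.shape))
             + sigma_add * rng.standard_normal(signal.shape))
    return np.clip(noisy, 0, None) ** gamma

def noise_crossover(sigma_add, sigma_mult):
    """Intensity at which multiplicative noise starts to exceed additive
    noise: sigma_mult * I = sigma_add  =>  I = sigma_add / sigma_mult."""
    return sigma_add / sigma_mult
```

For example, with an additive noise level of 5 gray values and 10% multiplicative noise, the crossover lies at intensity 50, i.e. roughly 20% of an 8-bit range, consistent with the 10%–30% band reported in the abstract.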

    Probabilistic Classification between Foreground Objects and Background

    Tracking deformable objects such as humans is a basic operation in many surveillance applications. Objects are detected as they enter the camera's field of view and are then tracked for as long as they remain visible. A difficulty with tracking deformable objects is that the object's shape must be re-estimated in every frame. We propose a probabilistic framework combining object detection, tracking, and shape deformation. It uses the probabilities that a pixel belongs to the background, to a new object, or to any of the known objects. Instead of using arbitrary thresholds to decide which class a pixel should be assigned to, we assign the pixel based on the Bayes criterion. Preliminary experiments show that the classification error drops to about half that of traditional approaches.
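The per-pixel decision rule can be sketched as follows. Assuming the framework produces, for each pixel, posterior probabilities over the classes background, new object, and each known object, the Bayes criterion under a 0/1 loss reduces to picking the class with the maximal posterior; the array layout below is an assumption for illustration.

```python
import numpy as np

def assign_pixels(posteriors):
    """Assign each pixel to the class with the maximal posterior
    probability (the Bayes criterion with 0/1 loss), replacing
    arbitrary per-class thresholds.

    posteriors: array of shape (n_classes, H, W) whose first axis
    indexes background (0), new object (1), and known objects (2...)."""
    # argmax over the class axis is the minimum-error-rate decision.
    return np.argmax(posteriors, axis=0)
```

Unlike a fixed threshold per class, this rule needs no tuning and always yields a decision, since the largest posterior wins even when all class probabilities are low.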