304 research outputs found

    Illuminant retrieval for fixed location cameras

    Fixed location cameras, such as panoramic cameras or surveillance cameras, are very common. Images taken with these cameras exhibit changes in lighting and dynamic image content, but they also contain constant objects in the background. We propose to solve for color constancy in this framework. We use a set of images to recover the scene's illuminants using only a few surfaces present in the scene. Our method retrieves the illuminant in every image by minimizing the difference between the reflectance spectra of the redundant elements' surfaces or, more precisely, between their corresponding sensor response values. These spectra are assumed to be constant across images taken under different illuminants. We also recover an estimate of the reflectance spectra of the selected elements. Experiments on synthetic and real images validate our method.
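    The key observation — constant background surfaces make relative illuminants recoverable — can be sketched with a diagonal (von Kries) sensor model. This model and all data below are illustrative assumptions of the sketch, not the paper's exact spectral formulation.

```python
import numpy as np

# Hypothetical data: rgb[i, s] is the RGB response of static surface s in
# image i. Under a diagonal (von Kries) model, rgb[i, s] = illum[i] * refl[s].
rng = np.random.default_rng(0)
illum = rng.uniform(0.5, 1.5, size=(4, 3))   # 4 images, per-channel illuminant gains
refl = rng.uniform(0.1, 0.9, size=(5, 3))    # 5 constant background surfaces
rgb = illum[:, None, :] * refl[None, :, :]

# Because the surfaces are constant, the per-channel response ratio between
# two images equals the illuminant ratio, independent of which surface is used:
ratios = rgb / rgb[0]                        # shape (4, 5, 3)
illum_ratio_est = ratios.mean(axis=1)        # average over the redundant surfaces
assert np.allclose(illum_ratio_est, illum / illum[0])

# Reflectances follow by dividing out the illuminant; they are recovered up to
# the unknown illuminant of the reference image.
refl_est = (rgb / illum_ratio_est[:, None, :]).mean(axis=0)
assert np.allclose(refl_est, illum[0] * refl)
```

    With noise, the per-surface ratios no longer agree exactly, which is why the paper poses the problem as a minimization over candidate illuminants rather than a direct ratio computation.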

    Daylight illuminant retrieval using redundant image elements

    We present a method for retrieving illuminant spectra from a set of images taken with a fixed location camera, such as a surveillance or panoramic one. In these images, there will be significant changes in lighting conditions and scene content, but there will also be static elements in the background. As color constancy is an under-determined problem, we propose to exploit the redundancy and constancy offered by the static image elements to reduce the dimensionality of the problem. Specifically, we assume that the reflectance properties of these objects remain constant across the images taken with a given fixed camera. We demonstrate that we can retrieve illuminant and reflectance spectra in this framework by modeling the redundant image elements as a set of synthetic RGB patches. We define an error function that takes the RGB patches and a set of test illuminants as input and returns a similarity measure over the redundant surfaces' reflectances. The test illuminants are then varied until the error function is minimized, returning the illuminants under which each image in the set was captured. This is achieved by gradient descent, providing an optimization method that is robust to shot noise.
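    The optimization loop can be sketched as follows, again under an assumed diagonal illuminant model with illustrative data (the paper's actual patch construction and spectral representation are not reproduced). The error here is the squared coefficient of variation of the illuminant-corrected patches: it is zero exactly when every image maps to the same reflectance estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
illum = rng.uniform(0.5, 1.5, size=(3, 3))     # 3 images, per-channel gains
refl = rng.uniform(0.1, 0.9, size=(6, 3))      # 6 static synthetic patches
rgb = illum[:, None, :] * refl[None, :, :]

def error(log_illum):
    # Correct each image by a candidate illuminant; measure how much the
    # resulting reflectance estimates still disagree across images.
    r = rgb * np.exp(-log_illum)[:, None, :]
    return ((r.std(axis=0) / r.mean(axis=0)) ** 2).sum()

def num_grad(x, eps=1e-6):
    # Central-difference gradient; a real implementation would use the
    # analytic gradient.
    g = np.zeros_like(x)
    for i in np.ndindex(*x.shape):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (error(x + d) - error(x - d)) / (2 * eps)
    return g

log_illum = np.zeros((3, 3))                   # start from a uniform test illuminant
e0 = error(log_illum)
for _ in range(500):                           # plain gradient descent
    log_illum -= 0.02 * num_grad(log_illum)

assert error(np.log(illum)) < 1e-12            # the true illuminant zeroes the error
assert error(log_illum) < e0                   # descent reduced the error
```

    Optimizing in log-illuminant space keeps the estimates positive; the remaining flat direction (a global per-channel scale) reflects the inherent ambiguity of the problem.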

    Video content analysis for intelligent forensics

    The networks of surveillance cameras installed in public places and private territories continuously record video data with the aim of detecting and preventing unlawful activities. This enhances the importance of video content analysis applications, either for real-time (i.e. analytic) or post-event (i.e. forensic) analysis. This thesis focuses on four key aspects of video content analysis: 1. moving object detection and recognition; 2. correction of colours in video frames and recognition of the colours of moving objects; 3. make and model recognition of vehicles and identification of their type; 4. detection and recognition of text information in outdoor scenes. To address the first issue, a framework is presented in the first part of the thesis that efficiently detects and recognizes moving objects in videos. The framework targets the problem of object detection in the presence of complex backgrounds. The object detection part of the framework relies on a background modelling technique and a novel post-processing step in which the contours of the foreground regions (i.e. moving objects) are refined by classifying edge segments as belonging either to the background or to the foreground region. Further, a novel feature descriptor is devised for the classification of moving objects into humans, vehicles and background. The proposed feature descriptor captures the texture information present in the silhouettes of foreground objects. To address the second issue, a framework for the correction and recognition of the true colours of objects in videos is presented, with novel noise reduction, colour enhancement and colour recognition stages. The colour recognition stage makes use of temporal information to reliably recognize the true colours of moving objects across multiple frames.
    The proposed framework is specifically designed to perform robustly on videos of poor quality caused by surrounding illumination, camera sensor imperfections and artefacts due to high compression. In the third part of the thesis, a framework for vehicle make and model recognition and type identification is presented. As part of this work, a novel feature representation technique for the distinctive representation of vehicle images was developed. The technique uses dense feature description and a mid-level feature encoding scheme to capture the texture in the frontal view of vehicles, and it is insensitive to minor in-plane rotation and skew within the image. The proposed framework can be extended to any number of vehicle classes without retraining. Another important contribution of this work is the publication of a comprehensive, up-to-date dataset of vehicle images to support future research in this domain. The problem of text detection and recognition in images is addressed in the last part of the thesis. A novel technique is proposed that exploits the colour information in the image to identify text regions. Apart from detection, the colour information is also used to segment characters from words. The identified characters are recognized using shape features and supervised learning. Finally, a lexicon-based alignment procedure is adopted to finalize the recognition of strings present in word images. Extensive experiments have been conducted on benchmark datasets to analyse the performance of the proposed algorithms. The results show that the proposed moving object detection and recognition technique outperformed well-known baseline techniques. The proposed framework for the correction and recognition of object colours in video frames achieved all the aforementioned goals.
    The performance analysis of the vehicle make and model recognition framework on multiple datasets has shown the strength and reliability of the technique across various scenarios. Finally, the experimental results for the text detection and recognition framework on benchmark datasets have revealed the potential of the proposed scheme for accurate detection and recognition of text in the wild.
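    The thesis' own detection pipeline is not reproduced here, but the background-modelling stage it builds on can be illustrated with the classical running-average baseline that such pipelines refine (all sizes, rates and thresholds below are illustrative):

```python
import numpy as np

# Minimal running-average background model: blend each incoming frame into
# the model, then flag pixels that differ strongly from it as foreground.
rng = np.random.default_rng(2)
background = rng.uniform(0.0, 1.0, size=(48, 64))   # synthetic static scene
model = np.zeros_like(background)
alpha = 0.2                                          # learning rate

for _ in range(50):                                  # frames with mild sensor noise
    frame = background + rng.normal(scale=0.01, size=background.shape)
    model = (1 - alpha) * model + alpha * frame      # update the background model

# A "moving object": a bright square pasted onto the scene.
frame = background.copy()
frame[10:20, 10:20] = 2.0
foreground = np.abs(frame - model) > 0.5             # threshold the difference
assert foreground[12, 12]                            # object pixel detected
assert not foreground[40, 40]                        # background pixel rejected
```

    The raw foreground mask from such a model is typically ragged at object boundaries, which is exactly the weakness the thesis addresses with its edge-segment classification step.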

    A Simple Approach in Digitising a Photographic Collection

    This paper reviews the processes involved in the digitisation, display and storage of medium-sized collections of photographs using simple and inexpensive, commercially available equipment. It also aims to provide a guideline for evaluating the performance of such imaging devices with respect to image quality. A collection of slides, representing first-generation analogue reproductions of a photographic collection from the nineteenth century, is treated as a case study. Constraints on the final image quality and the implications for the digital archive are discussed, along with a presentation of device characterisation and calibration procedures. Summary results from objective measurements carried out to assess the systems are presented. The issues of file format, physical storage and data migration are also addressed.

    Practical and continuous luminance distribution measurements for lighting quality

    A case study in digitizing a photographic collection

    This paper reviews the processes involved in the digitisation, display and storage of medium-sized collections of photographs using mid-range commercially available equipment. Guidelines are provided for evaluating the performance of these digitisation processes based on aspects of image quality. A collection of photographic slides, representing first-generation analogue reproductions of a photographic collection from the nineteenth century, is treated as a case study. Constraints on the final image quality and the implications of digital archiving are discussed. Full descriptions of device characterisation and calibration procedures are given, and results from objective measurements carried out to assess the digitisation system are presented. The important issues of file format, physical storage and data migration are also addressed.

    Particle Filters for Colour-Based Face Tracking Under Varying Illumination

    Automatic human face tracking is the basis of robotic and active vision systems used for facial feature analysis, automatic surveillance, video conferencing, intelligent transportation, human-computer interaction and many other applications. Superior human face tracking will enable future safety surveillance systems that monitor drowsy drivers, or patients and elderly people at risk of seizure or sudden falls, and will perform with a lower risk of failure in unexpected situations. This area has been actively researched in the current literature in an attempt to make automatic face trackers more stable in challenging real-world environments. To detect faces in video sequences, features such as colour, texture, intensity, shape or motion are used. Among these features, colour has been the most popular because of its insensitivity to orientation and size changes and its fast processability. The challenge for colour-based face trackers, however, has been their instability when colours change due to drastic variations in environmental illumination. Probabilistic tracking and the employment of particle filters as powerful Bayesian stochastic estimators, on the other hand, are gaining ground in the visual tracking field thanks to their ability to handle multi-modal distributions in cluttered scenes. Traditional particle filters use the transition prior as the importance sampling function, but this can result in poor posterior sampling. The objective of this research is to investigate and propose a stable face tracker capable of dealing with challenges such as rapid and random head motion, scale changes when people move closer to or further from the camera, the motion of multiple people with similar skin tones in the vicinity of the tracked person, the presence of clutter, and occlusion of the face. The main focus has been on investigating an efficient method to address the sensitivity of colour-based trackers to gradual or drastic illumination variations.
    The particle filter is used to overcome the instability of face trackers due to nonlinear and random head motions. To increase the traditional particle filter's sampling efficiency, an improved version of the particle filter is introduced that takes the latest measurements into account. This improved particle filter employs a new colour-based bottom-up approach that leads particles to generate an effective proposal distribution. The colour-based bottom-up approach is a classification technique for fast skin colour segmentation. This method is independent of the distribution shape and does not require excessive memory storage or exhaustive prior training. Finally, to make the colour-based face tracker adaptive to illumination changes, an original likelihood model is proposed based on spatial rank information, which considers both the illumination-invariant colour ordering of a face's pixels in an image or video frame and the spatial interaction between them. The original contribution of this work lies in the unique mixture of existing and proposed components to improve colour-based recognition and tracking of faces in complex scenes, especially where drastic illumination changes occur. Experimental results for the final version of the proposed face tracker, which combines the methods developed, are provided in the last chapter of this manuscript.
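    The predict / weight / resample loop underlying such a tracker can be sketched for a 1-D position. The colour-histogram likelihood of a real face tracker is replaced here by a simple Gaussian likelihood around a noisy measurement; all values are illustrative, and the proposal is the classical, measurement-blind transition prior that the abstract contrasts with its improved proposal.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
particles = rng.uniform(0.0, 100.0, size=n)       # spread over one image row
true_pos = 20.0

for _ in range(30):
    true_pos += 1.0                               # target drifts right
    measurement = true_pos + rng.normal(scale=2.0)

    # Predict: sample new states from the transition prior.
    particles += rng.normal(loc=1.0, scale=1.5, size=n)

    # Weight: likelihood of each particle under the current measurement.
    weights = np.exp(-0.5 * ((particles - measurement) / 2.0) ** 2)
    weights /= weights.sum()

    # Resample (systematic) to concentrate particles on likely states.
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    particles = particles[np.clip(idx, 0, n - 1)]  # guard the floating-point edge

estimate = particles.mean()                       # posterior mean as the track
assert abs(estimate - true_pos) < 5.0
```

    Sampling from the transition prior ignores the current measurement, which is exactly the inefficiency the improved, measurement-informed proposal distribution described above is meant to fix.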