
    Target recognitions in multiple camera CCTV using colour constancy

    People tracking using colour features in crowded scenes through a CCTV network has been a popular and, at the same time, very difficult topic in computer vision, mainly because of the difficulty of acquiring intrinsic signatures of targets from a single view of the scene. Many factors, such as variable illumination conditions and viewing angles, induce illusive modification of the intrinsic signatures of targets. The objective of this paper is to verify whether a colour constancy (CC) approach really helps people tracking in a CCTV network system. We have tested a number of CC algorithms together with various colour descriptors to assess the efficiency of people recognition on the real multi-camera i-LIDS data set via Receiver Operating Characteristic (ROC) analysis. It is found that when CC is applied together with some form of colour restoration mechanism, such as colour transfer, the recognition performance can be improved by at least a factor of two. An elementary luminance-based CC method coupled with a pixel-based colour transfer algorithm, together with experimental results, is reported in the present paper.
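
    The abstract does not give the authors' exact algorithms, so the following is a minimal Python/NumPy sketch of the kind of pipeline it describes: a grey-world style luminance-based colour constancy step followed by a per-pixel mean/std colour transfer between camera views. The function names and the RGB-space statistics matching are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def grey_world(img):
            # Elementary luminance-based colour constancy: scale each channel
            # so its mean matches the mean grey level of the image.
            img = img.astype(np.float64)
            means = img.reshape(-1, 3).mean(axis=0)
            return np.clip(img * (means.mean() / means), 0, 255).astype(np.uint8)

        def colour_transfer(source, reference):
            # Pixel-based colour transfer: match the per-channel mean and
            # standard deviation of the source view to the reference view.
            src = source.astype(np.float64)
            ref = reference.astype(np.float64)
            out = (src - src.mean((0, 1))) / (src.std((0, 1)) + 1e-8)
            out = out * ref.std((0, 1)) + ref.mean((0, 1))
            return np.clip(out, 0, 255).astype(np.uint8)

    A colour descriptor (e.g. a colour histogram) computed after these two steps would then be compared across camera views via ROC analysis, as in the paper.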

    Color Constancy Using CNNs

    In this work we describe a Convolutional Neural Network (CNN) to accurately predict the scene illumination. Taking image patches as input, the CNN works in the spatial domain without using the hand-crafted features employed by most previous methods. The network consists of one convolutional layer with max pooling, one fully connected layer and three output nodes. Within the network structure, feature learning and regression are integrated into one optimization process, which leads to a more effective model for estimating scene illumination. This approach achieves state-of-the-art performance on a standard dataset of RAW images. Preliminary experiments on images with spatially varying illumination demonstrate the stability of the local illuminant estimation ability of our CNN.
    Comment: Accepted at DeepVision: Deep Learning in Computer Vision 2015 (CVPR 2015 workshop).
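
    Based only on the architecture stated in the abstract (one convolutional layer with max pooling, one fully connected layer, three output nodes), a minimal PyTorch sketch might look as follows; the patch size, filter count, kernel size and pooling window are assumptions, since the abstract does not specify them.

        import torch
        import torch.nn as nn

        class IlluminantCNN(nn.Module):
            # One conv layer + max pooling + one fully connected layer with
            # three outputs (the estimated RGB scene illuminant). The
            # hyperparameters here are illustrative, not the paper's values.
            def __init__(self):
                super().__init__()
                self.conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
                self.pool = nn.MaxPool2d(4)            # 32x32 patch -> 8x8
                self.fc = nn.Linear(64 * 8 * 8, 3)     # regression to RGB

            def forward(self, x):                      # x: (N, 3, 32, 32)
                x = self.pool(torch.relu(self.conv(x)))
                return self.fc(x.flatten(1))

    Training would regress patch predictions against the ground-truth illuminant (for example with a Euclidean or angular loss), so that feature learning and regression are optimised jointly, as the abstract describes.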

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as low-latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
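
    As a concrete illustration of the output format described above, here is a small Python/NumPy sketch (an assumption for illustration; real event-camera SDKs use their own formats) of an event stream and one common way to accumulate it into a frame-like array that conventional vision algorithms can consume.

        import numpy as np

        # Each event encodes time (microseconds), pixel location and the
        # sign (polarity) of the brightness change: +1 increase, -1 decrease.
        events = np.array(
            [(1000, 12, 40, 1), (1003, 12, 41, 1), (1017, 90, 7, -1)],
            dtype=[('t', 'u8'), ('x', 'u2'), ('y', 'u2'), ('p', 'i1')])

        def accumulate(events, width, height):
            # Sum event polarities per pixel over a time window, producing
            # a simple 2D "event frame" representation.
            frame = np.zeros((height, width), dtype=np.int32)
            np.add.at(frame, (events['y'], events['x']), events['p'])
            return frame

        frame = accumulate(events, width=128, height=128)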

    Colour alignment for relative colour constancy via non-standard references

    Relative colour constancy is an essential requirement for many scientific imaging applications. However, most digital cameras differ in their image formation, and native sensor output is usually inaccessible, e.g., in smartphone camera applications. This makes it hard to achieve consistent colour assessment across a range of devices, which undermines the performance of computer vision algorithms. To resolve this issue, we propose a colour alignment model that treats the camera image formation as a black box and formulates colour alignment as a three-step process: camera response calibration, response linearisation, and colour matching. The proposed model works with non-standard colour references, i.e., colour patches whose true colour values are unknown, by utilising a novel balance-of-linear-distances feature. This is equivalent to determining the camera parameters through an unsupervised process. It also works with a minimum number of corresponding colour patches across the images to be colour aligned. Two challenging image datasets collected by multiple cameras under various illumination and exposure conditions were used to evaluate the model. Performance benchmarks demonstrated that our model achieved superior performance compared to other popular and state-of-the-art methods.
    Comment: 14 pages, 8 figures, 2 tables; accepted by IEEE Transactions on Image Processing.
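
    The abstract outlines a three-step process (camera response calibration, response linearisation, colour matching); a minimal Python/NumPy sketch of how the last two steps could be wired together is shown below. The fixed-gamma response and the 3x3 least-squares colour map are illustrative stand-ins; the paper's calibration procedure and balance-of-linear-distances feature are not reproduced here.

        import numpy as np

        def linearise(rgb, gamma=2.2):
            # Response linearisation: invert an assumed gamma-style camera
            # response (a placeholder for the calibrated response of step 1).
            return np.power(rgb.astype(np.float64) / 255.0, gamma)

        def fit_colour_matching(src_patches, ref_patches):
            # Colour matching: fit a 3x3 linear map taking the source
            # camera's linearised patch colours onto the reference camera's.
            M, *_ = np.linalg.lstsq(src_patches, ref_patches, rcond=None)
            return M  # apply as linearised_pixels @ M

        # Corresponding colour patches observed by two cameras (N x 3 each);
        # random values stand in for measured patch colours.
        src = linearise(np.random.randint(0, 256, (24, 3)))
        ref = linearise(np.random.randint(0, 256, (24, 3)))
        M = fit_colour_matching(src, ref)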