10 research outputs found

    An approach for Shadow Detection and Removal based on Multiple Light Sources

    Get PDF
    Shadows in images are natural, but they are often unwanted because they can degrade the results of computer vision algorithms. A shadow is produced by the interaction of light with the objects in a scene. Shadows can impair image analysis and reduce the quality of the extracted information, which in turn causes problems for the algorithms that depend on it. In this paper, a method is proposed to detect and remove shadows in scenes lit by multiple estimated light sources; in a stadium fitted with several floodlights, for example, multiple shadows can be observed originating from each target. To track individual targets successfully, it is essential to obtain an accurate foreground image, and the task becomes more complex when shadows from the background merge with the foreground objects. The paper also surveys several key techniques for shadow detection and removal. DOI: 10.17762/ijritcc2321-8169.150517
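The paper itself does not provide an implementation; purely as an illustration of the kind of per-pixel shadow test a foreground-extraction pipeline like this relies on, the sketch below reclassifies background-subtraction foreground pixels as shadow when their brightness drops but their chromaticity stays close to the background model. The function name and thresholds are hypothetical, not taken from the paper.

```python
import numpy as np

def suppress_shadows(frame, background, fg_mask,
                     lum_lo=0.4, lum_hi=0.9, chroma_tol=0.04):
    """Reclassify foreground pixels as shadow when their brightness drops
    relative to the background model while their chromaticity stays close
    to it (hypothetical thresholds; applied pixel-wise, so several shadows
    per target are handled the same way)."""
    frame = frame.astype(np.float32) + 1e-6
    background = background.astype(np.float32) + 1e-6

    # Luminance ratio: a cast shadow darkens a pixel without changing its colour much.
    lum_ratio = frame.sum(axis=2) / background.sum(axis=2)

    # Chromaticity (intensity-normalised colour) of the frame and the background.
    chroma_f = frame / frame.sum(axis=2, keepdims=True)
    chroma_b = background / background.sum(axis=2, keepdims=True)
    chroma_diff = np.abs(chroma_f - chroma_b).sum(axis=2)

    shadow = ((fg_mask > 0) & (lum_ratio > lum_lo) & (lum_ratio < lum_hi)
              & (chroma_diff < chroma_tol))
    cleaned = fg_mask.copy()
    cleaned[shadow] = 0          # drop shadow pixels from the foreground mask
    return cleaned, shadow
```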

    Physics-based extraction of intrinsic images from a single image

    Full text link
    A technique for extracting intrinsic images, including the reflectance and illumination images, from a single color image is presented. The technique first convolves the input image with a prescribed set of derivative filters. The pixels of filtered images are then classified into reflectance-related or illumination-related based on a set of chromatic characteristics of pixels calculated from the input image. Chromatic characteristics of pixels are defined by a photometric reflectance model based on the Kubelka-Munk color theory. From the classification results of the filtered images, the intrinsic images of the input image can be computed. Real images have been utilized in our experiments. The results have indicated that the proposed technique can effectively extract the intrinsic images from a single image.
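As a rough, hedged illustration of the derivative-classification idea described above (not the authors' Kubelka-Munk-based classifier), the sketch below attributes a log-luminance derivative to reflectance whenever the chromaticity changes across the same pixel pair, and reintegrates the classified gradients with an FFT Poisson solver. All names and thresholds are assumptions for illustration.

```python
import numpy as np

def reintegrate(gx, gy):
    """Least-squares reintegration of a (periodic) gradient field with an
    FFT Poisson solver; gx, gy are backward differences along x and y."""
    h, w = gx.shape
    rhs = (gx - np.roll(gx, -1, axis=1)) + (gy - np.roll(gy, -1, axis=0))
    fx = np.fft.fftfreq(w)[None, :]
    fy = np.fft.fftfreq(h)[:, None]
    lam = (2 - 2 * np.cos(2 * np.pi * fx)) + (2 - 2 * np.cos(2 * np.pi * fy))
    lam[0, 0] = 1.0                      # leave the (unknown) mean at zero
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / lam))

def intrinsic_split(rgb, chroma_thresh=0.02):
    """Toy version of derivative-classification intrinsic image extraction:
    a log-luminance derivative is attributed to reflectance when the
    chromaticity changes across the same pixel pair, and to illumination
    otherwise.  (The paper classifies filter outputs with a Kubelka-Munk
    based photometric model rather than this simple threshold.)"""
    rgb = rgb.astype(np.float32) + 1e-6
    lum = np.log(rgb.sum(axis=2))
    chroma = rgb / rgb.sum(axis=2, keepdims=True)

    # Backward differences of log luminance and chromaticity.
    gx = lum - np.roll(lum, 1, axis=1)
    gy = lum - np.roll(lum, 1, axis=0)
    cgx = np.abs(chroma - np.roll(chroma, 1, axis=1)).sum(axis=2)
    cgy = np.abs(chroma - np.roll(chroma, 1, axis=0)).sum(axis=2)

    refl_x = np.where(cgx > chroma_thresh, gx, 0.0)   # colour change -> reflectance
    refl_y = np.where(cgy > chroma_thresh, gy, 0.0)

    log_reflectance = reintegrate(refl_x, refl_y)
    log_illumination = lum - log_reflectance
    return np.exp(log_reflectance), np.exp(log_illumination)
```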

    Tracking people across disjoint camera views by an illumination-tolerant appearance representation

    Full text link
    Tracking single individuals as they move across disjoint camera views is a challenging task since their appearance may vary significantly between views. Major changes in appearance are due to different and varying illumination conditions and the deformable geometry of people. These effects are hard to estimate and take into account in real-life applications. Thus, in this paper we propose an illumination-tolerant appearance representation, which is capable of coping with the typical illumination changes occurring in surveillance scenarios. The appearance representation is based on an online k-means colour clustering algorithm, a data-adaptive intensity transformation and the incremental use of frames. A similarity measurement is also introduced to compare the appearance representations of any two arbitrary individuals. Post-matching integration of the matching decision along the individuals' tracks is performed in order to improve reliability and robustness of matching. Once matching is provided for any two views of a single individual, its tracking across disjoint cameras derives straightforwardly. Experimental results presented in this paper from a real surveillance camera network show the effectiveness of the proposed method. © Springer-Verlag 2007
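A minimal sketch of the general idea of a colour-cluster appearance model with a similarity measure is given below; it is not the paper's method (the online k-means, the data-adaptive intensity transformation and the incremental use of frames are omitted, and scikit-learn's batch KMeans is used instead). The function names and the distance definition are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def appearance_model(person_pixels_rgb, k=8):
    """Cluster a person's pixels in colour space; the model is the set of
    cluster centroids plus the fraction of pixels in each cluster.
    (Simplified, offline stand-in for the paper's online k-means.)"""
    km = KMeans(n_clusters=k, n_init=10).fit(person_pixels_rgb.reshape(-1, 3))
    weights = np.bincount(km.labels_, minlength=k) / km.labels_.size
    return km.cluster_centers_, weights

def appearance_distance(model_a, model_b):
    """Weighted nearest-centroid distance, symmetrised; a lower value means
    the two appearance models are more likely to be the same individual."""
    def one_way(src, dst):
        centres_s, w_s = src
        centres_d, _ = dst
        d = np.linalg.norm(centres_s[:, None, :] - centres_d[None, :, :], axis=2)
        return float((w_s * d.min(axis=1)).sum())
    return 0.5 * (one_way(model_a, model_b) + one_way(model_b, model_a))
```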

    Time-Lapse Photometric Stereo and Applications

    Full text link
    This paper presents a technique to recover geometry from time-lapse sequences of outdoor scenes. We build upon photometric stereo techniques to recover approximate shadowing, shading and normal components, allowing us to alter the material and normals of the scene. Previous work in analyzing such images has faced two fundamental difficulties: 1. the illumination in outdoor images consists of time-varying sunlight and skylight, and 2. the motion of the sun is restricted to a near-planar arc through the sky, making surface normal recovery unstable. We develop methods to estimate the reflection component due to skylight illumination. We also show that sunlight directions are usually non-planar, thus making surface normal recovery possible. This allows us to estimate approximate surface normals for outdoor scenes using a single day of data. We demonstrate the use of these surface normals for a number of image editing applications including reflectance, lighting, and normal editing.
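For readers unfamiliar with the underlying machinery, here is a minimal sketch of classic Lambertian photometric stereo, the building block the paper extends; it omits the skylight model and the time-lapse specifics that the paper contributes, and all names are assumptions.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Classic Lambertian photometric stereo: per pixel, solve
    I = L @ (albedo * n) in the least-squares sense.  `images` is a stack of
    grayscale frames (T, H, W); `light_dirs` is (T, 3) unit light directions."""
    t, h, w = images.shape
    L = np.asarray(light_dirs, dtype=np.float64)              # (T, 3)
    I = images.reshape(t, -1).astype(np.float64)              # (T, H*W)

    # Least-squares solve for the scaled normal g = albedo * n at every pixel.
    g, *_ = np.linalg.lstsq(L, I, rcond=None)                 # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)                        # (H*W,)
    normals = g / np.maximum(albedo, 1e-8)                    # unit normals
    return albedo.reshape(h, w), normals.T.reshape(h, w, 3)
```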

    Virtual image sensors to track human activity in a smart house

    Get PDF
    With the advancement of computer technology, demand for more accurate and intelligent monitoring systems has also risen. The use of computer vision and video analysis ranges from industrial inspection to surveillance. Object detection and segmentation are the first and fundamental tasks in the analysis of dynamic scenes. Traditionally, this detection and segmentation are done through temporal differencing or statistical modelling methods. One of the most widely used background modelling and segmentation algorithms is the Mixture of Gaussians method developed by Stauffer and Grimson (1999). During the past decade many such algorithms have been developed, ranging from parametric to non-parametric; many of them use pixel intensities to model the background, but some use texture properties such as Local Binary Patterns. These algorithms function quite well under normal environmental conditions and each has its own set of advantages and shortcomings. However, they share two drawbacks. The first is the stationary object problem: when moving objects become stationary, they get merged into the background. The second is that of light changes: when rapid illumination changes occur in the environment, these background modelling algorithms produce large areas of false positives. The algorithms are capable of adapting to the change; however, the quality of the segmentation is very poor during the adaptation phase. In this thesis, a framework to suppress these false positives is introduced. Image properties such as edges and textures are utilised to reduce the number of false positives during the adaptation phase. The framework is built on the idea of sequential pattern recognition. In any background modelling algorithm, the importance of multiple image features as well as of different spatial scales cannot be overlooked; failure to focus attention on these two factors makes it difficult to detect and reduce false alarms caused by rapid light change and other conditions. The use of edge features in false alarm suppression is also explored. Edges are somewhat more resistant to environmental changes in video scenes; the assumption here is that, regardless of environmental changes such as illumination change, the edges of the objects should remain the same. The edge-based approach is tested on several videos containing rapid light changes and shows promising results. Texture is then used to analyse video images and remove false alarm regions. A texture gradient approach and Laws Texture Energy Measures are used to find and remove false positives, and the Laws Texture Energy Measures are found to perform better than the gradient approach. The results of using edges, texture, and different combinations of the two for false positive suppression are also presented in this work. This false positive suppression framework is applied to a smart house scenario that uses cameras to model "virtual sensors" to detect interactions of occupants with devices. Results show that the accuracy of the virtual sensors, compared with the ground truth, is improved.
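As a hedged sketch of one ingredient mentioned above, the snippet below computes Laws Texture Energy Measures with OpenCV; how the energy maps are then compared against the background model to reject illumination-induced false positives is application-specific and not reproduced here. Function names, the window size and the kernel subset are assumptions.

```python
import cv2
import numpy as np

# 1-D Laws vectors: Level, Edge, Spot, Ripple.
LAWS_1D = {
    "L5": np.array([ 1,  4, 6,  4,  1], np.float32),
    "E5": np.array([-1, -2, 0,  2,  1], np.float32),
    "S5": np.array([-1,  0, 2,  0, -1], np.float32),
    "R5": np.array([ 1, -4, 6, -4,  1], np.float32),
}

def laws_energy_maps(gray, window=15):
    """Laws Texture Energy Measures: filter with outer products of the 1-D
    Laws vectors, then average the absolute response over a local window.
    A false positive caused purely by an illumination change should keep
    roughly the same texture energy as the modelled background, so regions
    whose energy matches the background can be discarded (the comparison
    step and its thresholds are left to the application)."""
    gray = gray.astype(np.float32)
    gray -= cv2.boxFilter(gray, cv2.CV_32F, (window, window))  # remove local mean
    maps = {}
    for name_a, va in LAWS_1D.items():
        for name_b, vb in LAWS_1D.items():
            kernel = np.outer(va, vb)
            response = cv2.filter2D(gray, cv2.CV_32F, kernel)
            maps[name_a + name_b] = cv2.boxFilter(np.abs(response), cv2.CV_32F,
                                                  (window, window))
    return maps
```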

    Illumination normalization with time-dependent intrinsic images for video surveillance

    No full text
    Cast shadows produce troublesome effects for video surveillance systems, typically for object tracking from a fixed viewpoint, since they cause the appearance of objects to vary depending on whether the objects are inside or outside the shadow. To robustly eliminate these shadows from image sequences as a preprocessing stage for robust video surveillance, we propose a framework based on the idea of intrinsic images. Unlike previous methods for deriving intrinsic images, we derive time-varying reflectance images and corresponding illumination images from a sequence of images. Using the obtained illumination images, we normalize the input image sequence in terms of incident lighting distribution to eliminate shadow effects. We also propose an illumination normalization scheme which can potentially run in real time, utilizing the illumination eigenspace, which captures the illumination variation due to weather, time of day, etc., and a shadow interpolation method based on shadow hulls. This paper describes the theory of the framework with simulation results, and shows its effectiveness with object tracking results on real scene data sets for traffic monitoring.
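A toy, heavily simplified stand-in for the normalization idea (not the paper's derivation of time-varying intrinsic images, nor its illumination eigenspace or shadow hulls) is sketched below: a static log-reflectance estimate is taken as the temporal median of the log sequence, and a smoothed per-frame residual is treated as the illumination image and divided out. All names and the smoothing parameter are assumptions.

```python
import cv2
import numpy as np

def normalize_sequence(frames, blur_sigma=15):
    """Illumination-normalization sketch: estimate a static reflectance image
    as the temporal median of the log sequence, treat the spatially smoothed
    per-frame residual as the illumination image, and subtract it in the log
    domain so cast shadows are flattened while object detail is kept."""
    logs = np.log(frames.astype(np.float32) + 1.0)             # (T, H, W)
    log_reflectance = np.median(logs, axis=0)                   # static scene estimate
    normalized = np.empty_like(logs)
    illum = np.empty_like(logs)
    for t in range(logs.shape[0]):
        residual = logs[t] - log_reflectance                    # shadows + lighting
        illum[t] = cv2.GaussianBlur(residual, (0, 0), blur_sigma)
        normalized[t] = np.exp(logs[t] - illum[t])              # lighting divided out
    return normalized, np.exp(illum)
```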

    Learning continuous models for estimating intrinsic component images

    Get PDF
    Thesis (Ph. D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (leaves 137-144). The goal of computer vision is to use an image to recover the characteristics of a scene, such as its shape or illumination. This is difficult because an image is the mixture of multiple characteristics. For example, an edge in an image could be caused by either an edge on a surface or a change in the surface's color. Distinguishing the effects of different scene characteristics is an important step towards high-level analysis of an image. This thesis describes how to use machine learning to build a system that recovers different characteristics of a scene from a single, gray-scale image of the scene. The goal of the system is to use the observed image to recover images, referred to as Intrinsic Component Images, that represent the scene's characteristics. The development of the system is focused on estimating two important characteristics of a scene, its shading and reflectance, from a single image. From the observed image, the system estimates a shading image, which captures the interaction of the illumination and shape of the scene pictured, and an albedo image, which represents how the surfaces in the image reflect light. Measured both qualitatively and quantitatively, this system produces state-of-the-art estimates of shading and albedo images. The system is also flexible enough to be used for the separate problem of removing noise from an image. Building the system requires algorithms for continuous regression and for learning the parameters of a Conditionally Gaussian Markov Random Field. Unlike previous work, the system is trained using real-world surfaces with ground-truth shading and albedo images, and the learning algorithms are designed to accommodate the large amount of data in this training set. By Marshall Friend Tappen, Ph.D.

    Previsão e identificação de eventos de quebra de segurança em vídeo-vigilância (Prediction and identification of security breach events in video surveillance)

    Get PDF
    Doctoral thesis in Information Technologies and Systems (field of knowledge: Computation and Communication Systems). This thesis has the purpose of detecting and forecasting behaviours that may lead to security breaches. These behaviours are recognized by observing human activity patterns extracted from digital image sequences acquired by a fixed, monocular colour video camera. The assessment of the behaviours is supported by the information acquired through the detection, classification and tracking of moving objects, while minimizing the use of context information from the observed scene and without relying on previously defined behaviour descriptions. In order to reach this goal, image processing and analysis techniques were developed and associated with artificial intelligence methods for behaviour pattern modelling. The segmentation of moving objects was based on an adaptive background subtraction approach capable of detecting regions of the image affected by shadows and highlights. A ghost-removal process was also developed, i.e. the elimination of false detections observed whenever an object belonging to the background starts moving and abandons the previously occupied space. Object tracking was ensured by a technique based on Appearance Models, which makes it possible to track deformable objects and proves effective in situations of occlusion, merging and splitting of objects. For the detection and automatic forecasting of behaviours, two classifiers (N-ary Trees and Dynamic Oriented Graph) were proposed; both use the data produced by the image processing and analysis functions and allow the modelling of temporal sequences. The overall system, built by combining the developed components and implemented in an intelligent video camera, was tested with a synthetic dataset and later evaluated in a real video-surveillance environment. The analysis of the experimental results has shown that the proposed system allows an efficient prediction of security breach behaviours. Fundação para a Ciência e a Tecnologia (FCT) - reference SFRH / BD / 17259 / 200
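As a hedged illustration of the adaptive background subtraction with shadow labelling described above (a stand-in for the thesis' own subtraction, shadow/highlight and ghost-removal stages), the sketch below uses OpenCV's MOG2 subtractor, which labels shadow pixels separately so they can be stripped from the foreground before tracking. The function name and parameter values are assumptions.

```python
import cv2

def segment_moving_objects(video_path):
    """Adaptive background subtraction with shadow labelling using OpenCV's
    MOG2 model.  MOG2 marks shadow pixels with the value 127, so they can be
    removed from the foreground mask before object tracking."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                    varThreshold=16,
                                                    detectShadows=True)
    masks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        raw = subtractor.apply(frame)            # 255 = foreground, 127 = shadow
        foreground = (raw == 255).astype('uint8') * 255
        masks.append(foreground)
    cap.release()
    return masks
```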

    Shadow segmentation and tracking in real-world conditions

    Get PDF
    Visual information, in the form of images and video, comes from the interaction of light with objects. Illumination is a fundamental element of visual information. Detecting and interpreting illumination effects is part of our everyday visual experience. Shading, for instance, allows us to perceive the three-dimensional nature of objects. Shadows are particularly salient cues for inferring depth information. However, we do not make any conscious or unconscious effort to avoid them as if they were an obstacle when we walk around. Moreover, when humans are asked to describe a picture, they generally omit the presence of illumination effects, such as shadows, shading, and highlights, and give a list of objects and their relative positions in the scene. Processing visual information in a way that is close to what the human visual system does, thus being aware of illumination effects, represents a challenging task for computer vision systems. Illumination phenomena in fact interfere with fundamental tasks in image analysis and interpretation applications, such as object extraction and description. On the other hand, illumination conditions are an important element to be considered when creating new and richer visual content that combines objects from different sources, both natural and synthetic. When taken into account, illumination effects can play an important role in achieving realism. Among illumination effects, shadows are often an integral part of natural scenes and one of the elements contributing to the naturalness of synthetic scenes. In this thesis, the problem of extracting shadows from digital images is discussed. A new analysis method for the segmentation of cast shadows in still and moving images, without the need for human supervision, is proposed. The problem of separating moving cast shadows from moving objects in image sequences is particularly relevant for an ever wider range of applications, ranging from video analysis to video coding, and from video manipulation to interactive environments. Therefore, particular attention has been dedicated to the segmentation of shadows in video. The validity of the proposed approach is, however, also demonstrated through its application to the detection of cast shadows in still color images. Shadows are a difficult phenomenon to model. Their appearance changes with changes in the appearance of the surface they are cast upon. It is therefore important to exploit multiple constraints derived from the analysis of the spectral, geometric and temporal properties of shadows to develop effective techniques for their extraction. The proposed method combines an analysis of color information and of photometric invariant features with a spatio-temporal verification process. With regard to the use of color information for shadow analysis, a complete picture of the existing solutions is provided, which points out the fundamental assumptions, the adopted color models and the link with research problems such as computational color constancy and color invariance. The proposed spatial verification does not make any assumption about scene geometry or object shape. The temporal analysis is based on a novel shadow tracking technique. On the basis of the tracking results, a temporal reliability estimate for shadows is proposed, which makes it possible to discard shadows that lack temporal coherence. The proposed approach is general and can be applied to a wide class of applications and input data.
The proposed cast shadow segmentation method has been evaluated on a number of different video sequences representing indoor and outdoor real-world environments. The obtained results have confirmed the validity of the approach, in particular its ability to deal with different types of content and its robustness to different physically important independent variables, and have demonstrated the improvement with respect to the state of the art. Examples of application of the proposed shadow segmentation tool to the enhancement of video object segmentation, tracking and description operations, and to video composition, have demonstrated the advantages of shadow-aware video processing.
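Purely as an illustration of how a photometric colour invariant can expose shadow boundaries (one cue among the spectral, geometric and temporal ones combined in the thesis, not the thesis' method itself), the sketch below flags pixels with a strong luminance edge but a weak edge in the c1c2c3 invariant. Thresholds and the function name are hypothetical.

```python
import cv2
import numpy as np

def shadow_edge_candidates(bgr, lum_thresh=30.0, inv_thresh=0.05):
    """Single-image shadow boundary candidates from a photometric invariant:
    shadow edges show up in luminance but (ideally) not in the c1c2c3 colour
    invariant, which is largely insensitive to shading and shadows."""
    bgr = bgr.astype(np.float32) + 1e-6
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]

    # c1c2c3 invariants: arctan of each channel over the max of the other two.
    c1 = np.arctan(r / np.maximum(g, b))
    c2 = np.arctan(g / np.maximum(r, b))
    c3 = np.arctan(b / np.maximum(r, g))

    lum = bgr.sum(axis=2)
    lum_edges = np.hypot(cv2.Sobel(lum, cv2.CV_32F, 1, 0),
                         cv2.Sobel(lum, cv2.CV_32F, 0, 1))
    inv_edges = sum(np.hypot(cv2.Sobel(c, cv2.CV_32F, 1, 0),
                             cv2.Sobel(c, cv2.CV_32F, 0, 1))
                    for c in (c1, c2, c3))

    # Strong luminance edge with a weak invariant edge -> likely shadow edge.
    return (lum_edges > lum_thresh) & (inv_edges < inv_thresh)
```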