
    Moving object detection unaffected by cast shadows, highlights and ghosts

    This paper describes a new approach to real-time segmentation of moving objects from images acquired by a fixed color video camera; it is the first tool of a larger project that aims to recognize abnormal human behavior in public areas. Moving object detection is based on background subtraction and is unaffected by changes in illumination, i.e., cast shadows and highlights. Furthermore, it requires no special attention during initialization, thanks to its ability to detect and rectify ghosts. The results show that, at an image resolution of 380x280 with 24 bits per pixel, the segmentation process takes around 80 ms on a 32-bit 3 GHz processor. Fundação para a Ciência e a Tecnologia (FCT)
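    The abstract does not spell out the detection rule; the sketch below only illustrates the general idea of background subtraction with chromaticity-based suppression of cast shadows and highlights. The HSV brightness-ratio test and all thresholds are assumptions for illustration, not the paper's algorithm.

import cv2
import numpy as np

def segment_foreground(frame_bgr, background_bgr,
                       alpha=0.55, beta=0.95, tau_s=0.15):
    """Illustrative sketch: pixels darker (or brighter) than the background by a
    bounded ratio, with similar saturation, are treated as shadow/highlight
    rather than foreground. All thresholds are hypothetical."""
    frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    bg = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)

    ratio = frame[..., 2] / (bg[..., 2] + 1e-6)            # brightness ratio
    sat_diff = np.abs(frame[..., 1] - bg[..., 1]) / 255.0  # saturation change

    changed = np.abs(frame[..., 2] - bg[..., 2]) > 30      # raw change mask
    shadow = changed & (ratio >= alpha) & (ratio <= beta) & (sat_diff < tau_s)
    highlight = changed & (ratio > 1.0) & (ratio <= 1.0 / alpha) & (sat_diff < tau_s)

    # Foreground = changed pixels that are neither shadow nor highlight.
    foreground = changed & ~shadow & ~highlight
    return foreground.astype(np.uint8) * 255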

    The OBSERVER: an intelligent and automated video surveillance system

    Communication presented at the 3rd ICIAR, Póvoa de Varzim, Portugal, 2006. In this work we present a new approach to learn, detect and predict unusual and abnormal behaviors of people, groups and vehicles in real-time. The proposed OBSERVER video surveillance system acquires images from a stationary color video camera and applies state-of-the-art algorithms to segment and track moving objects. The segmentation is based on a background subtraction algorithm with detection and removal of cast shadows, highlights and ghosts. To robustly track objects in the scene, a technique based on appearance models is used. The OBSERVER is capable of identifying three types of behaviors (normal, unusual and abnormal actions). This achievement was possible due to the novel N-ary tree classifier, which was successfully tested on synthetic data. Fundação para a Ciência e a Tecnologia (FCT)
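    The abstract mentions appearance models for tracking without detailing them. One common appearance-model idea is to match new detections to existing tracks by color-histogram similarity, as in the minimal sketch below; the histogram parameters and the Bhattacharyya-distance threshold are illustrative assumptions, not the OBSERVER's actual tracker.

import cv2
import numpy as np

def appearance_model(patch_bgr, bins=32):
    """Hue/saturation histogram of an object patch, used as its appearance model."""
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def match_detections(track_models, detection_patches, max_dist=0.4):
    """Greedy assignment of detections to tracks by histogram similarity."""
    assignments = {}
    for det_idx, patch in enumerate(detection_patches):
        model = appearance_model(patch)
        dists = [cv2.compareHist(np.float32(t), np.float32(model),
                                 cv2.HISTCMP_BHATTACHARYYA)
                 for t in track_models]
        if dists and min(dists) < max_dist:
            assignments[det_idx] = int(np.argmin(dists))
    return assignments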

    A Comprehensive Review of Vehicle Detection Techniques Under Varying Moving Cast Shadow Conditions Using Computer Vision and Deep Learning

    The design of a vision-based traffic analytics system for urban traffic video scenes has great potential in the context of Intelligent Transportation Systems (ITS). It offers useful traffic-related insights at much lower cost than conventional sensor-based counterparts. However, it remains a challenging problem due to complexity factors such as camera hardware constraints, camera movement, object occlusion, object speed, object resolution, traffic flow density, and lighting conditions. ITS has many applications, including but not limited to queue estimation, speed detection and anomaly detection. All of these applications depend primarily on sensing vehicle presence to form a basis for analysis. Moving cast shadows of vehicles are one of the major problems affecting vehicle detection, as they cause detection and tracking inaccuracies. Therefore, it is exceedingly important to distinguish dynamic objects from their moving cast shadows for accurate vehicle detection and recognition. This paper provides an in-depth comparative analysis of different traffic-paradigm-focused conventional and state-of-the-art shadow detection and removal algorithms. To date, there has been only one survey highlighting shadow removal methodologies specifically for the traffic paradigm. In this paper, a total of 70 research papers containing results on urban traffic scenes have been shortlisted from the last three decades to give a comprehensive overview of the work done in this area. The study reveals that the preferable way to make a comparative evaluation is to use the existing Highway I, II, and III datasets, which are frequently used for qualitative or quantitative analysis of shadow detection or removal algorithms. Furthermore, the paper not only provides cues to solve moving cast shadow problems, but also suggests that even after the advent of Convolutional Neural Network (CNN)-based vehicle detection methods, the problems caused by moving cast shadows persist. Therefore, this paper proposes a hybrid approach which uses a combination of conventional and state-of-the-art techniques as a pre-processing step for shadow detection and removal before using a CNN for vehicle detection. The results indicate a significant improvement in vehicle detection accuracy after using the proposed approach.
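    The abstract advocates shadow suppression as a pre-processing step before CNN-based detection but gives no algorithmic details; the sketch below only shows where such a step sits in the pipeline. The brightness-ratio shadow test, its thresholds, and the cnn_detector callable are illustrative assumptions, not the authors' method.

import numpy as np

def suppress_shadows(frame, background, low=0.5, high=0.95):
    """Replace probable cast-shadow pixels (darker than the background by a
    bounded ratio) with the background, so the detector sees a shadow-free
    frame. Thresholds are illustrative."""
    f = frame.astype(np.float32)
    b = background.astype(np.float32)
    ratio = f.mean(axis=2) / (b.mean(axis=2) + 1e-6)
    shadow = (ratio >= low) & (ratio <= high)
    out = frame.copy()
    out[shadow] = background[shadow]
    return out

def detect_vehicles(frame, background, cnn_detector):
    cleaned = suppress_shadows(frame, background)  # conventional pre-processing
    return cnn_detector(cleaned)                   # CNN-based vehicle detection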

    Cast shadow modelling and detection

    Computer vision applications are often confronted by the need to differentiate between objects and their shadows. A number of shadow detection algorithms have been proposed in the literature, based on physical, geometrical, and other heuristic techniques. While most existing approaches depend on the scene environment and object type, those that do not are regarded as superior, both conceptually and in terms of accuracy. Despite these efforts, the design of a generic, accurate, simple, and efficient shadow detection algorithm remains an open problem. In this thesis, based on a physically-derived hypothesis for shadow identification, novel, multi-domain shadow detection algorithms are proposed and tested in the spatial and transform domains. A novel "Affine Shadow Test Hypothesis" has been proposed, derived, and validated across multiple environments. Based on that, several new shadow detection algorithms have been proposed and modelled for short-duration video sequences, where a background frame is available as a reliable reference, and for long-duration video sequences, where the use of a dedicated background frame is unreliable. Finally, additional algorithms have been proposed to detect shadows in still images, where the use of a separate background frame is not possible. In this approach, the author shows that the proposed algorithms are capable of detecting cast and self shadows simultaneously. All proposed algorithms have been modelled and tested to detect shadows in the spatial (pixel) and transform (frequency) domains, and are compared against state-of-the-art approaches using popular test and novel videos covering a wide range of test conditions. It is shown that the proposed algorithms outperform most existing methods and effectively detect different types of shadows under various lighting and environmental conditions.

    Image-based Material Editing

    Photo editing software allows digital images to be blurred, warped or re-colored at the touch of a button. However, it is not currently possible to change the material appearance of an object except by painstakingly painting over the appropriate pixels. Here we present a set of methods for automatically replacing one material with another, completely different material, starting with only a single high dynamic range image and an alpha matte specifying the object. Our approach exploits the fact that human vision is surprisingly tolerant of certain (sometimes enormous) physical inaccuracies. Thus, it may be possible to produce a visually compelling illusion of material transformations without fully reconstructing the lighting or geometry. We employ a range of algorithms depending on the target material. First, an approximate depth map is derived from the image intensities using bilateral filters. The resulting surface normals are then used to map data onto the surface of the object to specify its material appearance. To create transparent or translucent materials, the mapped data are derived from the object's background. To create textured materials, the mapped data are a texture map. The surface normals can also be used to apply arbitrary bidirectional reflectance distribution functions to the surface, allowing us to simulate a wide range of materials. To facilitate the process of material editing, we generate the HDR image with a novel algorithm that is robust against noise in individual exposures. This ensures that any noise, which could have adversely affected the shape recovery of the object, is removed. We also present an algorithm to automatically generate alpha mattes. This algorithm requires as input two images, one where the object is in focus and one where the background is in focus, and then automatically produces an approximate matte indicating which pixels belong to the object. The result is then improved by a second algorithm to generate an accurate alpha matte, which can be given as input to our material editing techniques.
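    As a rough illustration of the "approximate depth from image intensities" step described above, the sketch below bilaterally filters the luminance as a crude depth proxy and derives surface normals from the depth gradients. The depth scale and filter parameters are illustrative assumptions, not the paper's values or its actual recovery algorithm.

import cv2
import numpy as np

def approximate_normals(image_bgr, depth_scale=50.0):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    # Edge-preserving smoothing of luminance, used here as a crude depth proxy.
    depth = cv2.bilateralFilter(gray, d=9, sigmaColor=0.1, sigmaSpace=15) * depth_scale

    # Normals from the depth gradient: n ~ normalize([-dz/dx, -dz/dy, 1]).
    dzdx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=5)
    dzdy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=5)
    normals = np.dstack((-dzdx, -dzdy, np.ones_like(depth)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals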

    Prediction of abnormal behaviors for intelligent video surveillance systems

    The OBSERVER is a video surveillance system that detects and predicts abnormal behaviors, aiming at the concept of intelligent surveillance. The system acquires color images from a stationary video camera and applies state-of-the-art algorithms to segment, track and classify moving objects. In this paper we present the behavior analysis module of the system. A novel method, called Dynamic Oriented Graph (DOG), is used to detect and predict abnormal behaviors using real-time unsupervised learning. The DOG method characterizes observed actions by means of a structure of unidirectionally connected nodes, each defining a region in the hyperspace of attributes measured from the observed moving objects and carrying an assigned probability of generating an abnormal behavior. An experimental evaluation with synthetic data was conducted, in which the DOG method outperforms the previously used N-ary tree classifier. Fundação para a Ciência e a Tecnologia (FCT) - SFRH/BD/17259/2004
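    The abstract only outlines the DOG idea (nodes covering regions of an attribute hyperspace, each with an abnormality probability). The toy sketch below captures that flavour in the simplest possible way: it grows a new node whenever an observation falls outside every node's radius and flags rarely visited regions as abnormal. It is an illustration under those assumptions, not the published DOG algorithm.

import numpy as np

class ToyBehaviorModel:
    def __init__(self, radius=1.0, abnormal_freq=0.01):
        self.radius = radius
        self.abnormal_freq = abnormal_freq
        self.centroids = []   # one centroid per node (region of attribute space)
        self.counts = []      # visit count per node
        self.total = 0

    def observe(self, attributes):
        """attributes: 1-D feature vector measured from a tracked object.
        Returns True if the observation is judged abnormal."""
        x = np.asarray(attributes, dtype=float)
        self.total += 1
        if self.centroids:
            dists = [np.linalg.norm(x - c) for c in self.centroids]
            i = int(np.argmin(dists))
            if dists[i] <= self.radius:
                self.counts[i] += 1
                # Abnormal if this region of attribute space is rarely visited.
                return self.counts[i] / self.total < self.abnormal_freq
        # Observation lies outside all known regions: create a new node.
        self.centroids.append(x)
        self.counts.append(1)
        return True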

    Vision-Based 2D and 3D Human Activity Recognition


    Seeking the Self in Pigment and Pixels: Postmodernism, Art, and the Subject

    In this study, I examine how works of art become vehicles for the postmodern inquiry into the nature of subjectivity. My thesis narrows the focus to those characters who attempt to ground themselves in works of art, especially representational paintings. I argue that, to cope with what they see as the chaos of a decentered postmodern world, these figures try to anchor their confused identities in what they wrongfully interpret as stable and mimetic artworks. Nostalgic for an imagined past when representation was transparent and corresponded to reality, they believe that traditional figurative art offers the promise of cohesive meaning otherwise lacking under postmodernism. Their views of art, therefore, underwrite a desire and nostalgia for absolutes that are non-existent. In their failure to ground themselves in images, we see the fundamental instability of both the subject and of art. The wayward individuals that I examine yearn for art objects to come to life in order to confirm their own selfhood. What they seek, then, is to transform art-objects into art-subjects; this Pygmalionesque project is grounded in the futile hope that the art-object can reciprocate their desires. We find literary examples of this trend in the characters I analyze in my first two chapters: notably the narrator(s) of John Banville’s Frames Trilogy and the gay spies of the fictionalized Cambridge Five. In my final chapter, I look to the clones and androids of popular culture and explore the real life example of Japanese love-doll owners. In each of these instances, artworks are strategically positioned as sites of ontological anchorage, but this foundation can never be secure under postmodernism. Despite their fervent hopes, these characters have misplaced their trust in a form of representation that is no more stable than any other aspect of the postmodern condition. I argue that Freddie, Victor, Tommy, and Tavo, among others, are particularly good examples of the vexed relationship between the image and the self

    Multi-sensor human action recognition with particular application to tennis event-based indexing

    The ability to automatically classify human actions and activities using visual sensors or by analysing body worn sensor data has been an active research area for many years. Only recently, with advancements in both fields and the ubiquitous nature of low cost sensors in our everyday lives, has automatic human action recognition become a reality. While traditional sports coaching systems rely on manual indexing of events from a single modality, such as visual or inertial sensors, this thesis investigates the possibility of capturing and automatically indexing events from multimodal sensor streams. In this work, we detail a novel approach to infer human actions by fusing multimodal sensors to improve recognition accuracy. State-of-the-art visual action recognition approaches are also investigated. Firstly, we apply these action recognition detectors to basic human actions in a non-sporting context. We then perform action recognition to infer tennis events in a tennis court instrumented with cameras and inertial sensing infrastructure. The system proposed in this thesis can use either visual or inertial sensors to automatically recognise the main tennis events during play. A complete event retrieval system is also presented to allow coaches to build advanced queries, which existing sports coaching solutions cannot facilitate without an inordinate amount of manual indexing. The event retrieval interface is evaluated against a leading commercial sports coaching tool in terms of both usability and efficiency.
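    The abstract describes fusing visual and inertial modalities to improve recognition without specifying the fusion scheme. A minimal sketch of one common option, late fusion of per-event classifier scores by a weighted sum, is shown below; the event labels and weight are hypothetical, not the thesis's actual configuration.

import numpy as np

EVENTS = ["serve", "forehand", "backhand", "other"]   # hypothetical labels

def fuse_scores(visual_scores, inertial_scores, w_visual=0.6):
    """Both inputs: probability vectors over EVENTS. Returns the fused label."""
    fused = (w_visual * np.asarray(visual_scores)
             + (1.0 - w_visual) * np.asarray(inertial_scores))
    return EVENTS[int(np.argmax(fused))], fused

# Example: camera favours "forehand", wearable sensor favours "serve".
label, scores = fuse_scores([0.1, 0.6, 0.2, 0.1], [0.5, 0.3, 0.1, 0.1])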