38 research outputs found

    An Improved Adaptive Background Mixture Model for Real-time Tracking with Shadow Detection

    Real-time segmentation of moving regions in image sequences is a fundamental step in many vision systems, including automated visual surveillance, human-machine interfaces, and very low-bandwidth telecommunications. A typical method is background subtraction. Many background models have been introduced to deal with different problems. One of the successful solutions to these problems is the multi-colour background model per pixel proposed by Grimson et al. [1, 2, 3]. However, the method suffers from slow learning at the beginning, especially in busy environments, and it cannot distinguish between moving shadows and moving objects. This paper presents a method which improves this adaptive background mixture model. By reinvestigating the update equations, we utilise different equations at different phases. This allows our system to learn faster and more accurately, as well as to adapt effectively to changing environments. A shadow detection scheme is also introduced; it is based on a computational colour space that makes use of our background model. A comparison has been made between the two algorithms. The results show the improved speed of learning and accuracy of the model using our update algorithm over Grimson et al.'s tracker. When incorporated with shadow detection, our method results in far better segmentation than that of Grimson et al.
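
    Below is a minimal, illustrative sketch of the phased-update idea: use 1/n (expected sufficient statistics) averaging during an initial learning phase so the model converges quickly, then switch to a fixed learning rate. The constants, the single isotropic-variance component, and the helper name are assumptions for illustration, not the authors' implementation.

        import numpy as np

        # Illustrative per-pixel Gaussian-mixture component update with two phases.
        ALPHA = 0.005    # steady-state learning rate (assumed value)
        N_INIT = 200     # frames in the initial fast-learning phase (assumed value)

        def update_component(mean, var, weight, pixel, matched, n_frames):
            """Update one mixture component for one pixel; returns (mean, var, weight)."""
            lr = 1.0 / (n_frames + 1) if n_frames < N_INIT else ALPHA
            weight = (1.0 - lr) * weight + (lr if matched else 0.0)
            if matched:
                rho = lr / max(weight, 1e-6)              # per-component learning rate
                diff = np.asarray(pixel, dtype=float) - np.asarray(mean, dtype=float)
                mean = mean + rho * diff
                var = (1.0 - rho) * var + rho * float(np.dot(diff, diff))  # isotropic variance
            return mean, var, weight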

    Evaluating visitor experiences with interactive art

    The Music Room is an interactive installation that allows visitors to compose classical music by moving throughout a space. The distance between visitors and their average speed map the emotionality of the music: distance influences the pleasantness of the music, while speed influences its intensity. This paper focuses on the evaluation of visitors' experience with The Music Room by examining log data, video footage, interviews, and questionnaires collected in two public exhibitions of the installation. We examined this data to identify the factors that fostered engagement and to understand how players appropriated the original design idea. Reconsidering our design assumptions against behavioural data, we noticed a number of unexpected behaviours, which prompted us to reflect on the design and evaluation of interactive art.
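
    As a purely illustrative sketch of the mapping described above, the snippet below turns inter-visitor distance into a pleasantness value and average speed into an intensity value; the ranges, the linear form, and the assumption that closer means more pleasant are illustrative choices, not the installation's actual implementation.

        def music_parameters(distance_m, avg_speed_ms, max_distance_m=8.0, max_speed_ms=2.0):
            """Map visitor distance and average speed to (pleasantness, intensity) in [0, 1]."""
            pleasantness = 1.0 - min(distance_m / max_distance_m, 1.0)  # closer -> more pleasant (assumed)
            intensity = min(avg_speed_ms / max_speed_ms, 1.0)           # faster -> more intense
            return pleasantness, intensity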

    Integrated monitoring of mola mola behaviour in space and time

    Over the last decade, ocean sunfish movements have been monitored worldwide using various satellite tracking methods. This study reports the near-real-time monitoring of fine-scale (< 10 m) behaviour of sunfish. The study was conducted in southern Portugal in May 2014 and involved satellite tags and underwater and surface robotic vehicles to measure both the movements and the contextual environment of the fish. A total of four individuals were tracked using custom-made GPS satellite tags providing geolocation estimates at fine-scale resolution. These accurate positions further informed sunfish areas of restricted search (ARS), which were directly correlated with steep thermal frontal zones. Simultaneously, and on two different occasions, an Autonomous Underwater Vehicle (AUV) video-recorded the path of the tracked fish and detected buoyant particles in the water column. Importantly, the densities of these particles were also directly correlated with steep thermal gradients. Thus, both sunfish foraging behaviour (ARS) and possibly prey densities were found to be influenced by analogous environmental conditions. In addition, the dynamic structure of the water transited by the tracked individuals was described by a Lagrangian modelling approach. The model informed the distribution of zooplankton in the region, both horizontally and in the water column, and the resulting simulated densities correlated positively with the sunfish ARS behaviour estimator (r(s) = 0.184, p < 0.001). The model also revealed that the tracked fish opportunistically displaced with respect to subsurface current flow. Thus, we show how physical forcing and current structure provide a rationale for a predator's fine-scale behaviour observed over two weeks in May 2014.
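
    The rank correlation reported above can be reproduced in form (not in data) with a short SciPy sketch; the arrays below are random placeholders standing in for the ARS behaviour estimator and the co-located simulated zooplankton densities.

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        ars_estimator = rng.random(500)                              # placeholder ARS estimator values
        zooplankton_density = 0.2 * ars_estimator + rng.random(500)  # placeholder modelled densities

        rho, p_value = spearmanr(ars_estimator, zooplankton_density)
        print(f"Spearman rho = {rho:.3f}, p = {p_value:.3g}")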

    Adaptive Visual System for Tracking Low Resolution Colour Targets

    This paper addresses the problem of using appearance and motion models to classify and track objects when detailed information about an object's appearance is not available. The approach relies upon motion, shape cues, and colour information to help associate objects temporally within a video stream. Unlike previous applications of colour in object tracking, where relatively large targets are tracked, our method is designed to track small colour targets. Our approach uses a robust background model based around Expectation Maximisation to segment moving objects with very low false detection rates. The system also incorporates a shadow detection algorithm which helps alleviate the standard environmental problems associated with such approaches. A colour transformation derived from anthropological studies is used to model the colour distributions of low-resolution targets, along with a probabilistic method of combining colour and motion information. This provides a robust visual tracking system capable of performing accurately and consistently within a real-world visual surveillance arena. The paper shows the system successfully tracking multiple people moving independently, and demonstrates the ability of the approach to recover tracks lost due to occlusions and background clutter.
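
    A minimal sketch of the probabilistic colour-and-motion fusion idea is given below: a colour likelihood (histogram similarity) and a motion likelihood (Gaussian on the prediction error) are multiplied under an independence assumption. The Bhattacharyya coefficient, the isotropic Gaussian, and the parameter values are illustrative choices, not the paper's exact formulation.

        import numpy as np

        def association_score(colour_hist_track, colour_hist_obs, predicted_pos, observed_pos, sigma=10.0):
            """Score how well a new detection matches an existing track."""
            # Colour likelihood: Bhattacharyya coefficient between normalised histograms.
            p_colour = float(np.sum(np.sqrt(colour_hist_track * colour_hist_obs)))
            # Motion likelihood: isotropic Gaussian on the prediction error (sigma in pixels, assumed).
            err = np.asarray(observed_pos, dtype=float) - np.asarray(predicted_pos, dtype=float)
            p_motion = float(np.exp(-0.5 * np.dot(err, err) / sigma**2))
            return p_colour * p_motion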

    Tracking Objects Across Uncalibrated Arbitrary Topology Camera Networks

    No full text
    Intelligent visual surveillance is an important application area for computer vision. In situations where networks of hundreds of cameras are used to cover a wide area, the obvious limitation becomes the users' ability to manage such vast amounts of information. For this reason, automated tools that can generalise about activities or track objects are important to the operator. Key to the users' requirements is the ability to track objects across (spatially separated) camera scenes. However, this typically requires extensive geometric knowledge of the site and camera positions. Such an explicit mapping from camera to world is infeasible for large installations, as it requires that the operator know which camera to switch to when an object disappears. To further compound the problem, the installation costs of CCTV systems outweigh those of the hardware. This means that geometric constraints, or any form of calibration (such as that which might be used with epipolar constraints), are simply not realistic for a real-world installation: the algorithms cannot afford to dictate to the installer. This work attempts to address this problem and outlines a method that allows objects to be related and tracked across cameras without any explicit calibration, be it geometric or colour-based.
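
    One common calibration-free way to relate cameras, sketched here as an assumption about the general family of approaches rather than this paper's specific method, is to accumulate co-occurrences of an object disappearing from one camera and an object appearing in another shortly afterwards; frequently co-occurring camera pairs then serve as learned links.

        from collections import defaultdict

        MAX_GAP_S = 30.0                 # assumed reappearance window in seconds
        link_counts = defaultdict(int)   # (camera_out, camera_in) -> co-occurrence count

        def accumulate_links(disappearances, appearances):
            """Count disappearance->appearance pairs falling within the time window.
            Events are (camera_id, object_id, timestamp) tuples; names are illustrative."""
            for cam_out, _, t_out in disappearances:
                for cam_in, _, t_in in appearances:
                    if cam_in != cam_out and 0.0 < t_in - t_out <= MAX_GAP_S:
                        link_counts[(cam_out, cam_in)] += 1
            return link_counts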

    Jeremiah: The face of computer vision

    No full text
    This paper presents a humanoid computer interface (Jeremiah) that is capable of extracting moving objects from a video stream and responding by directing the gaze of an animated head toward them. It further responds through changes of expression, reflecting the emotional state of the system in response to stimuli. As such, the system exhibits behaviour similar to that of a child. The system was originally designed as a robust visual tracking system capable of performing accurately and consistently within a real-world visual surveillance arena, and as such it operates reliably in any environment, both indoor and outdoor. Originally designed as a public interface to promote computer vision and the public understanding of science (exhibited in the British Science Museum), Jeremiah provides the first step toward a new form of intuitive computer interface. Copyright © ACM 2002
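
    A toy sketch of the behaviour described above: take the centroid of the foreground mask produced by the motion-extraction stage, convert it into pan/tilt angles for the animated head, and nudge a simple arousal value with the amount of motion. The field of view, gains, and decay constant are assumptions, not the system's actual parameters.

        import numpy as np

        FOV_X_DEG, FOV_Y_DEG = 60.0, 45.0   # assumed camera field of view
        AROUSAL_DECAY = 0.95                # assumed per-frame decay of the emotional state

        def gaze_and_arousal(fg_mask, arousal):
            """Map a binary foreground mask to (pan, tilt) in degrees and an updated arousal level."""
            ys, xs = np.nonzero(fg_mask)
            h, w = fg_mask.shape
            if xs.size == 0:                           # nothing moving: relax toward neutral
                return 0.0, 0.0, arousal * AROUSAL_DECAY
            pan = (xs.mean() / w - 0.5) * FOV_X_DEG    # centroid x -> pan angle
            tilt = (0.5 - ys.mean() / h) * FOV_Y_DEG   # centroid y -> tilt angle
            motion = xs.size / float(h * w)            # fraction of pixels in motion
            arousal = min(1.0, arousal * AROUSAL_DECAY + motion)
            return pan, tilt, arousal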