
    Semantic web technologies for video surveillance metadata

    Video surveillance systems are growing in size and complexity. Such systems typically consist of integrated modules from different vendors to cope with the increasing demands on network and storage capacity, intelligent video analytics, picture quality, and enhanced visual interfaces. Within a surveillance system, relevant information (such as technical details of the video sequences or analysis results of the monitored environment) is described using metadata standards. However, different modules typically use different standards, resulting in metadata interoperability problems. In this paper, we introduce the application of Semantic Web Technologies to overcome such problems. We present a semantic, layered metadata model and integrate it within a video surveillance system. Beyond addressing the metadata interoperability problem, we show the advantages of using Semantic Web Technologies and their inherent rule support. A practical use case scenario is presented to illustrate the benefits of our novel approach.
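    As an illustration of the general idea only (not the paper's actual layered model), the following Python sketch uses rdflib to describe one video clip with two hypothetical vendor vocabularies and bridge them through a shared property, so that a single SPARQL query sees both. All namespaces and property names are invented for the example.

# A minimal sketch of bridging two hypothetical vendor metadata vocabularies
# for the same surveillance clip via a shared vocabulary layer.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

VENDOR_A = Namespace("http://example.org/vendorA#")   # hypothetical vocabulary
VENDOR_B = Namespace("http://example.org/vendorB#")   # hypothetical vocabulary
CORE = Namespace("http://example.org/surveillance#")  # shared layer (assumed)

g = Graph()

# Each vendor module describes the same clip with its own terms.
clip = CORE.clip42
g.add((clip, RDF.type, CORE.VideoSequence))
g.add((clip, VENDOR_A.frameRate, Literal(25)))
g.add((clip, VENDOR_B.detectedEvent, Literal("person entered zone 3")))

# Mapping layer: declare the vendor properties as specialisations of shared ones.
g.add((VENDOR_A.frameRate, RDFS.subPropertyOf, CORE.frameRate))
g.add((VENDOR_B.detectedEvent, RDFS.subPropertyOf, CORE.analysisResult))

# A single SPARQL query (using a property path) now reaches both vendors' metadata.
q = """
PREFIX core: <http://example.org/surveillance#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?clip ?value WHERE {
  ?p rdfs:subPropertyOf* core:analysisResult .
  ?clip ?p ?value .
}
"""
for row in g.query(q):
    print(row.clip, row.value)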

    Search Tracker: Human-derived object tracking in-the-wild through large-scale search and retrieval

    Humans use context and scene knowledge to easily localize moving objects under complex illumination changes, scene clutter and occlusions. In this paper, we present a method to leverage human knowledge, in the form of annotated video libraries, in a novel search-and-retrieval setting to track objects in unseen video sequences. For every video sequence, a document that represents motion information is generated. Documents of the unseen video are queried against the library at multiple scales to find videos with similar motion characteristics. This provides a coarse localization of objects in the unseen video. We further adapt these retrieved object locations to the new video using an efficient warping scheme. The proposed method is validated on in-the-wild video surveillance datasets where we outperform state-of-the-art appearance-based trackers. We also introduce a new challenging dataset with complex object appearance changes. Comment: Under review with the IEEE Transactions on Circuits and Systems for Video Technology.
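    The sketch below illustrates the retrieval idea only, under the assumption that each video is reduced to a bag-of-"motion-words" histogram that is compared by cosine similarity; the paper's actual document construction, multi-scale querying and warping step are not reproduced, and all function names are illustrative.

# A minimal sketch: quantise motion vectors into a histogram "document" and
# retrieve the most similar documents from an annotated library.
import numpy as np

def motion_document(flow_vectors, n_bins=16):
    """Quantise per-frame motion vectors (dx, dy) into an orientation histogram."""
    angles = np.arctan2(flow_vectors[:, 1], flow_vectors[:, 0])   # in [-pi, pi]
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    hist = hist.astype(float)
    return hist / (hist.sum() + 1e-9)                              # normalise

def retrieve(query_doc, library_docs, k=3):
    """Return indices of the k library documents most similar to the query."""
    lib = np.stack(library_docs)
    sims = lib @ query_doc / (
        np.linalg.norm(lib, axis=1) * np.linalg.norm(query_doc) + 1e-9)
    return np.argsort(-sims)[:k]

# Toy usage: a library of random motion documents, queried with a new document.
rng = np.random.default_rng(0)
library = [motion_document(rng.normal(size=(200, 2))) for _ in range(10)]
query = motion_document(rng.normal(size=(150, 2)))
print(retrieve(query, library))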

    Towards automated visual surveillance using gait for identity recognition and tracking across multiple non-intersecting cameras

    Despite the fact that personal privacy has become a major concern, surveillance technology is now becoming ubiquitous in modern society. This is mainly due to the increasing number of crimes as well as the need to provide secure and safer environments. Recent research studies have now confirmed the possibility of recognizing people by the way they walk, i.e. their gait. The aim of this research study is to investigate the use of gait for people detection as well as identification across different cameras. We present a new approach for people tracking and identification between different non-intersecting, un-calibrated, stationary cameras based on gait analysis. A vision-based markerless extraction method is deployed to derive gait kinematics as well as anthropometric measurements in order to produce a gait signature. The novelty of our approach is motivated by recent research in biometrics and forensic analysis using gait. The experimental results affirm the robustness of our approach in detecting walking people, as well as its potency in extracting gait features across different camera viewpoints, achieving an identity recognition rate of 73.6% over 2270 processed video sequences. Furthermore, the experimental results confirm the potential of the proposed method for identity tracking in real surveillance systems, recognizing walking individuals across different views with an average recognition rate of 92.5% for cross-camera matching over two different non-overlapping views.
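    As a rough illustration of the cross-camera matching step only, the sketch below assumes each walking subject has already been reduced to a fixed-length gait signature (kinematic plus anthropometric features) and performs nearest-neighbour identification; the markerless feature extraction that produces the signatures, which is the core of the work, is not shown.

# A minimal sketch: identify probe signatures from camera B against a gallery
# of known signatures from camera A by nearest-neighbour matching.
import numpy as np

def match_across_cameras(gallery, probes):
    """gallery: {subject_id: signature} from camera A; probes: list of signatures
    from camera B. Returns the predicted subject id for each probe."""
    ids = list(gallery.keys())
    G = np.stack([gallery[i] for i in ids])            # (n_subjects, n_features)
    predictions = []
    for p in probes:
        d = np.linalg.norm(G - p, axis=1)              # Euclidean distance
        predictions.append(ids[int(np.argmin(d))])
    return predictions

# Toy usage with random 20-dimensional signatures for three subjects.
rng = np.random.default_rng(1)
gallery = {f"subject_{i}": rng.normal(size=20) for i in range(3)}
probes = [gallery["subject_1"] + rng.normal(scale=0.1, size=20)]
print(match_across_cameras(gallery, probes))           # -> ['subject_1']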

    Adaptive object segmentation and tracking

    Efficient tracking of deformable objects moving with variable velocities is an important current research problem. In this thesis a robust tracking model is proposed for the automatic detection, recognition and tracking of target objects which are subject to variable orientations and velocities and are viewed under variable ambient lighting conditions. The tracking model can be applied to efficiently track fast-moving vehicles and other objects in various complex scenarios. The tracking model is evaluated on both colour visible-band and infra-red band video sequences acquired from the air by the Sussex police helicopter and other collaborators. The observations made validate the improved performance of the model over existing methods. The thesis is divided into three major sections: the first details the development of an enhanced active contour for object segmentation, the second describes an implementation of a global active contour orientation model, and the third describes the tracking model and assesses its performance on the aerial video sequences.

    In the first part of the thesis an enhanced active contour snake model using the difference of Gaussian (DoG) filter is reported and discussed in detail. An acquisition method based on the enhanced active contour model is developed and tested to assist the proposed tracking system. The active contour model is further enhanced by a disambiguation framework designed to assist multiple object segmentation, which is used to demonstrate that the enhanced active contour model can be used for robust multiple object segmentation and tracking. The active contour model developed not only facilitates the efficient update of the tracking filter but also decreases the latency involved in tracking targets in real time. As far as computational effort is concerned, the active contour model presented reduces the computational cost by 85% compared to existing active contour models.

    The second part of the thesis introduces the global active contour orientation (GACO) technique for statistical measurement of contoured object orientation. It is an overall object orientation measurement method which uses the proposed active contour model along with statistical measurement techniques. The use of the GACO technique, incorporating the active contour model, to measure object orientation angle is discussed in detail. A real-time door surveillance application based on the GACO technique is developed and evaluated on the i-LIDS door surveillance dataset provided by the UK Home Office. The performance results demonstrate that using GACO to evaluate the door surveillance dataset gives a success rate of 92%.

    Finally, a combined approach involving the proposed active contour model and an optimal trade-off maximum average correlation height (OT-MACH) filter for tracking is presented. The implementation of methods for controlling the area of support of the OT-MACH filter is discussed in detail. Using the proposed active contour as the area of support for the OT-MACH filter is shown to significantly improve the filter's ability to track vehicles moving within highly cluttered visible and infra-red band video sequences.
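    The sketch below conveys only the general flavour of DoG-filtered snake segmentation, using scikit-image's stock difference-of-Gaussians and active-contour routines; the thesis' enhanced snake, disambiguation framework, GACO orientation measure and OT-MACH tracking are not reproduced, and the image, contour initialisation and parameters are placeholders.

# A minimal sketch: DoG pre-filtering followed by a standard snake fit, as a
# stand-in illustration of contour-based object segmentation on one frame.
import numpy as np
from skimage import data, filters, segmentation

frame = data.camera().astype(float) / 255.0            # stand-in for a video frame

# DoG filtering emphasises edges around the target's scale before segmentation.
dog = filters.difference_of_gaussians(frame, low_sigma=1, high_sigma=10)

# Initialise a circular snake around a region of interest and let it converge.
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([220 + 120 * np.sin(theta),     # (row, col) coordinates
                        250 + 120 * np.cos(theta)])
snake = segmentation.active_contour(dog, init, alpha=0.015, beta=10, gamma=0.001)

print(snake.shape)   # (200, 2): contour points outlining the segmented region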

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the first-person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, user-machine interaction and so on. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges and opportunities within the field. Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction.

    Calibration of audio-video sensors for multi-modal event indexing

    This paper addresses the coordinated use of video and audio cues to capture and index surveillance events with multimodal labels. The focus of this paper is the development of a joint-sensor calibration technique that uses audio-visual observations to improve the calibration process. One significant feature of this approach is the ability to continuously check and update the calibration status of the sensor suite, making it resilient to independent drift in the individual sensors. We present scenarios in which this system is used to enhance surveillance.
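    As a toy illustration of one aspect of joint calibration, the sketch below assumes the problem reduces to estimating a constant clock offset between audio and video timestamps of the same physical events, plus a residual-based drift check; the paper's full audio-visual calibration is considerably richer, and all names and values here are illustrative.

# A minimal sketch: estimate an audio-to-video clock offset from paired event
# timestamps and flag when the sensors have drifted apart.
import numpy as np

def estimate_offset(audio_times, video_times):
    """Least-squares estimate of o such that video_time ~= audio_time + o."""
    return float(np.mean(np.asarray(video_times) - np.asarray(audio_times)))

def drift_check(audio_times, video_times, offset, tol=0.05):
    """Flag when residuals exceed tol seconds, i.e. the calibration has drifted."""
    residuals = np.asarray(video_times) - (np.asarray(audio_times) + offset)
    return bool(np.max(np.abs(residuals)) > tol)

# Toy usage: a door slam observed at slightly different clock times per sensor.
audio = [1.00, 5.20, 9.75]
video = [1.42, 5.61, 10.18]
offset = estimate_offset(audio, video)
print(offset, drift_check(audio, video, offset))   # ~0.42 s offset, no drift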