27,826 research outputs found

    Video Surveillance


    Alignment of velocity fields for video surveillance

    Velocity fields play an important role in surveillance since they describe typical motion behaviors of video objects (e.g., pedestrians) in the scene. This paper presents an algorithm for the alignment of velocity fields acquired by different cameras, at different time intervals, and from different viewpoints. Velocity fields are aligned using a warping function that maps corresponding points and vectors in both fields. The warping parameters are estimated by minimizing a non-linear least-squares energy. Experimental tests show that the proposed model is able to compensate for significant misalignments, including translation, rotation and scaling.
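The paper estimates a general warping by non-linear least squares; as an illustration, the special case of a similarity transform (scale, rotation, translation) admits a closed-form least-squares fit when 2D points are encoded as complex numbers: points obey q = a·p + b while velocity vectors only rotate and scale, w = a·v. The function and data below are a sketch of that special case, not the paper's actual model.

```python
import cmath, math

def fit_similarity(pts_a, vecs_a, pts_b, vecs_b):
    """Least-squares estimate of a = s*exp(i*theta) and translation b that
    map field A onto field B.  Points use the full transform q = a*p + b;
    velocity vectors use only w = a*v (translation does not affect them)."""
    n = len(pts_a)
    Sp, Sq = sum(pts_a), sum(pts_b)
    Spp = sum(abs(p) ** 2 for p in pts_a)
    Spq = sum(p.conjugate() * q for p, q in zip(pts_a, pts_b))
    Svv = sum(abs(v) ** 2 for v in vecs_a)
    Svw = sum(v.conjugate() * w for v, w in zip(vecs_a, vecs_b))
    # normal equations of the combined point + vector energy
    a = (Spq + Svw - Sp.conjugate() * Sq / n) / (Spp - abs(Sp) ** 2 / n + Svv)
    b = (Sq - a * Sp) / n
    return abs(a), cmath.phase(a), b  # scale, rotation, translation

# synthetic example: rotate by 30 degrees, scale by 1.5, translate by (2, -1)
a_true = 1.5 * cmath.exp(1j * math.radians(30))
b_true = 2 - 1j
pts_a = [0 + 0j, 1 + 0j, 0 + 1j, 2 + 2j]
vecs_a = [0.5 + 0j, 0 + 0.5j, 0.3 + 0.3j, -0.2 + 0.1j]
pts_b = [a_true * p + b_true for p in pts_a]
vecs_b = [a_true * v for v in vecs_a]
s, theta, t = fit_similarity(pts_a, vecs_a, pts_b, vecs_b)
print(round(s, 3), round(math.degrees(theta), 1))  # 1.5 30.0
```

Because both points and vectors enter the energy, the fit stays well-conditioned even when the point correspondences alone are nearly collinear.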

    Semantic web technologies for video surveillance metadata

    Video surveillance systems are growing in size and complexity. Such systems typically consist of integrated modules from different vendors to cope with increasing demands on network and storage capacity, intelligent video analytics, picture quality, and enhanced visual interfaces. Within a surveillance system, relevant information (such as technical details of the video sequences, or analysis results of the monitored environment) is described using metadata standards. However, different modules typically use different standards, resulting in metadata interoperability problems. In this paper, we introduce the application of Semantic Web technologies to overcome such problems. We present a semantic, layered metadata model and integrate it within a video surveillance system. Besides addressing the metadata interoperability problem, we show the advantages of Semantic Web technologies and their inherent rule support. A practical use-case scenario is presented to illustrate the benefits of our novel approach.
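The core idea of a layered metadata model is that vendor-specific terms are lifted into a shared vocabulary, over which rules can then be written once. A real deployment would use RDF/OWL and a rule engine; the dictionary sketch below only illustrates the mapping idea, and every field name (`va:ObjectClass`, `core:objectType`, etc.) is hypothetical, not taken from any actual standard.

```python
# Hypothetical records from two vendor modules describing similar detections.
vendor_a = {"va:ObjectClass": "Human", "va:CameraId": "cam-3"}
vendor_b = {"vb:label": "Person", "vb:sensor": "cam-7"}

# Shared semantic layer: each vendor term maps onto a common concept,
# and equivalent literal values are normalised as well.
TERM_MAP = {
    "va:ObjectClass": "core:objectType",
    "vb:label": "core:objectType",
    "va:CameraId": "core:source",
    "vb:sensor": "core:source",
}
VALUE_MAP = {"Human": "core:Person", "Person": "core:Person"}

def to_core(record):
    """Lift a vendor-specific record into the shared vocabulary."""
    return {TERM_MAP[k]: VALUE_MAP.get(v, v) for k, v in record.items()}

def person_detected(record):
    """A rule written once against the shared layer, vendor-independent."""
    return to_core(record).get("core:objectType") == "core:Person"

print(person_detected(vendor_a), person_detected(vendor_b))  # True True
```

The benefit mirrors the paper's argument: the rule never mentions a vendor term, so adding a third module only requires extending the mappings, not the rules.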

    Silhouette coverage analysis for multi-modal video surveillance

    In order to improve the accuracy of video-based object detection, the proposed multi-modal video surveillance system takes advantage of the different kinds of information represented by visual, thermal and/or depth imaging sensors. The multi-modal object detector of the system can be split into two consecutive parts: the registration and the coverage analysis. The multi-modal image registration is performed using a three-step silhouette-mapping algorithm which detects the rotation, scale and translation between moving objects in the visual, (thermal) infrared and/or depth images. First, moving object silhouettes are extracted to separate the calibration objects, i.e., the foreground, from the static background. Key components are dynamic background subtraction, foreground enhancement and automatic thresholding. Then, 1D contour vectors are generated from the resulting multi-modal silhouettes using silhouette boundary extraction, Cartesian-to-polar transform and radial vector analysis. Next, to retrieve the rotation angle and the scale factor between the multi-sensor images, these contours are mapped onto each other using circular cross-correlation and contour scaling. Finally, the translation between the images is calculated using maximization of binary correlation. The silhouette coverage analysis also starts with moving object silhouette extraction. Then, it uses the registration information, i.e., the rotation angle, scale factor and translation vector, to map the thermal, depth and visual silhouette images onto each other. Finally, the coverage of the resulting multi-modal silhouette map is computed and analyzed over time to reduce false alarms and to improve object detection. Prior experiments on real-world multi-sensor video sequences indicate that automated multi-modal video surveillance is promising. This paper shows that merging information from multi-modal video further increases the detection results.
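The rotation-recovery step above can be sketched compactly: once each silhouette is reduced to a 1D radial signature (radius as a function of angle), circular cross-correlation finds the angular shift that best aligns the two signatures. The code below is a minimal illustration of that one step, with an artificial 36-bin signature; the paper's full pipeline (scaling, translation, coverage analysis) is not reproduced.

```python
def circular_xcorr(a, b):
    """Circular cross-correlation of two equal-length 1D signatures."""
    n = len(a)
    return [sum(a[i] * b[(i + k) % n] for i in range(n)) for k in range(n)]

def rotation_between(sig_a, sig_b):
    """Rotation (degrees) whose shift best maps signature A onto B."""
    scores = circular_xcorr(sig_a, sig_b)
    k = max(range(len(scores)), key=scores.__getitem__)
    return k * 360.0 / len(sig_a)

# 36-bin radial contour signature (10 degrees per bin) with one lobe
sig_a = [0.5] * 36
sig_a[3], sig_a[4] = 2.0, 1.5
# the same silhouette seen rotated by 90 degrees (9 bins)
sig_b = [sig_a[(i - 9) % 36] for i in range(36)]
print(rotation_between(sig_a, sig_b))  # 90.0
```

Because the signature is a function of angle only, this shift search is invariant to translation, which is why the paper can recover translation separately afterwards.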

    Video surveillance for monitoring driver's fatigue and distraction

    Fatigue and distraction in drivers represent a great risk for road safety. For both types of driver behavior problem, image analysis of the eyes, mouth and head movements gives valuable information. We present in this paper a system for monitoring fatigue and distraction in drivers by evaluating their performance using image processing. We extract visual features related to nodding, yawning, eye closure and opening, and mouth movements to detect fatigue as well as to identify diversion of attention from the road. When evaluating four video sequences with different drivers, we achieve an average sensitivity and specificity of 98.3% and 98.8% for detection of driver fatigue, and 97.3% and 99.2% for detection of driver distraction.
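Eye-closure features of this kind are commonly aggregated into a PERCLOS-style score: the fraction of frames within a sliding window in which the eyes are closed. The sketch below assumes a per-frame boolean from an upstream eye-state classifier (not shown); the window length and threshold are illustrative, not the paper's values.

```python
from collections import deque

class FatigueMonitor:
    """Sliding-window PERCLOS-style fatigue score over per-frame
    eye-closed booleans.  Thresholds are illustrative only."""

    def __init__(self, window=30, threshold=0.4):
        self.frames = deque(maxlen=window)  # old frames drop out automatically
        self.threshold = threshold

    def update(self, eye_closed):
        self.frames.append(1 if eye_closed else 0)
        perclos = sum(self.frames) / len(self.frames)
        return "fatigued" if perclos >= self.threshold else "alert"

m = FatigueMonitor(window=30, threshold=0.4)
for _ in range(20):          # eyes open: driver is alert
    state = m.update(False)
print(state)                 # alert
for _ in range(15):          # prolonged closures push PERCLOS past 0.4
    state = m.update(True)
print(state)                 # fatigued
```

A real system would combine this score with the yawn and head-nod features the paper describes before raising an alarm.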

    Lost in Time: Temporal Analytics for Long-Term Video Surveillance

    Video surveillance is a well-researched area of study, with substantial work done on object detection, tracking and behavior analysis. With the abundance of video data captured over long periods of time, we can understand patterns in human behavior and scene dynamics through data-driven temporal analytics. In this work, we propose two schemes to perform descriptive and predictive analytics on long-term video surveillance data. We generate heatmap and footmap visualizations to describe spatially pooled trajectory patterns with respect to time and location. We also present two approaches for anomaly prediction at the day-level granularity: a trajectory-based statistical approach, and a time-series-based approach. Experimentation with one year of data from a single camera demonstrates the ability to uncover interesting insights about the scene and to predict anomalies reasonably well.
    Comment: To appear in Springer LNE
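The two ingredients named above, spatial pooling of trajectories into a heatmap and a day-level statistical anomaly test, can each be sketched in a few lines. This is a generic illustration under assumed frame and grid sizes, not the paper's actual method or thresholds.

```python
def build_heatmap(trajectories, grid=(8, 6), frame=(640, 480)):
    """Pool trajectory points into a grid of visit counts (a heatmap)."""
    cols, rows = grid
    heat = [[0] * cols for _ in range(rows)]
    for traj in trajectories:
        for x, y in traj:
            gx = min(int(x * cols / frame[0]), cols - 1)
            gy = min(int(y * rows / frame[1]), rows - 1)
            heat[gy][gx] += 1
    return heat

def day_is_anomalous(history, today, k=2.0):
    """Flag a day whose trajectory count deviates from the historical
    mean by more than k standard deviations (illustrative threshold)."""
    mean = sum(history) / len(history)
    std = (sum((c - mean) ** 2 for c in history) / len(history)) ** 0.5
    return abs(today - mean) > k * std

trajs = [[(10, 10), (50, 40)], [(600, 470)]]
heat = build_heatmap(trajs)
print(sum(map(sum, heat)))                           # 3 points pooled
print(day_is_anomalous([100, 102, 98, 101, 99], 150))  # True
```

Visualized over a year, such a heatmap exposes the dominant walking routes, while the per-day test flags unusually busy or quiet days for inspection.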