
    A system for learning statistical motion patterns

    Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast and accurate fuzzy k-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information, and each motion pattern is then represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of the algorithms for anomaly detection and behavior prediction.
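    The abstract's anomaly-detection step can be illustrated with a minimal sketch: if a learned motion pattern is a chain of Gaussians (one per trajectory sample), a new trajectory can be scored by its average log-likelihood under that chain and flagged when the score falls below a threshold. The isotropic covariance, the per-sample alignment of trajectory points to Gaussians, and the threshold value are simplifying assumptions, not the paper's exact formulation.

    ```python
    import math

    def gaussian_logpdf(point, mean, var):
        # Log density of an isotropic 2-D Gaussian with per-axis variance `var`
        # (a simplification; the paper may use full covariances).
        dx, dy = point[0] - mean[0], point[1] - mean[1]
        return -math.log(2 * math.pi * var) - (dx * dx + dy * dy) / (2 * var)

    def trajectory_score(trajectory, pattern):
        # `pattern` is a chain of (mean, variance) Gaussians, one per sample;
        # the score is the trajectory's average log-likelihood under the chain.
        return sum(gaussian_logpdf(p, m, v)
                   for p, (m, v) in zip(trajectory, pattern)) / len(trajectory)

    def is_anomalous(trajectory, pattern, threshold=-10.0):
        # `threshold` is an illustrative value; in practice it would be tuned
        # on training trajectories.
        return trajectory_score(trajectory, pattern) < threshold
    ```

    A trajectory that tracks the pattern's means scores near the Gaussian mode, while one that strays several standard deviations away drops sharply in likelihood and is flagged.
    
    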
    Inference of Non-Overlapping Camera Network Topology using Statistical Approaches

    This work proposes an unsupervised learning model to infer the topological information of a camera network automatically. The algorithm works on both non-overlapping and overlapping camera fields of view (FOVs). The constructed model detects the entry/exit zones of moving objects across the camera FOVs using the data-spectroscopic method. The probabilistic relationships between each pair of entry/exit zones are learnt to localize the camera network nodes, and their certainty is increased by computer-generating additional Monte Carlo observations of entry/exit points. Our method requires no prior assumptions, no dedicated processor for each camera, and no communication among the cameras. The purpose is to determine the relationship between each pair of linked cameras using statistical approaches, which helps to track moving objects based on their present location. The output is a Markov chain model that represents the weighted links between each pair of camera FOVs.
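    The Markov chain in this abstract can be sketched as transition probabilities estimated from observed (or Monte Carlo generated) exit-to-entry zone pairs; each exit zone's counts are normalized into a distribution over entry zones, giving the weighted links between cameras. The zone names and the flat-count estimator here are illustrative assumptions.

    ```python
    from collections import Counter

    def learn_topology(transitions):
        # `transitions` is a list of (exit_zone, entry_zone) pairs observed
        # across the camera network. Counts are normalized per exit zone, so
        # each (src, dst) weight is P(entry zone | exit zone) -- the weighted
        # link of a Markov chain over the camera FOVs.
        counts = Counter(transitions)
        totals = Counter(src for src, _ in transitions)
        return {(src, dst): n / totals[src] for (src, dst), n in counts.items()}
    ```

    Generating extra Monte Carlo observations, as the abstract describes, simply adds more pairs to `transitions`, tightening these empirical probabilities.
    
    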

    Auto Detection of Number Plate of Person without Helmet

    An automated number plate recognition system would greatly enhance the ability of police to detect criminal activity that involves the use of motor vehicles. Automatic video analysis from traffic surveillance cameras is a fast-emerging field based on computer vision techniques. It is a key technology for public safety, intelligent transport systems (ITS), and the efficient management of traffic, such as detecting riders without helmets. In recent years, there has been increased scope for automatic analysis of traffic activity. Video analytics here means computer-vision-based surveillance algorithms and systems that extract contextual information from video. In traffic scenes, numerous surveillance objectives can be pursued by applying computer vision and pattern recognition techniques, including the detection of traffic violations (e.g., illegal turns and one-way streets) and the classification of road users (e.g., vehicles, motorbikes, and pedestrians). Currently, the most reliable approach is through the recognition of number plates, i.e., automatic number plate recognition (ANPR).

    Response to automatic speed control in urban areas: A simulator study.

    Speed affects both the likelihood and severity of an accident. Attempts to reduce speed have centred around road design and traffic calming, enforcement and feedback techniques, and public awareness campaigns. However, although these techniques have met with some success, they can be both costly and context specific. No single measure has proved to be a generic countermeasure effective in reducing speed, leading to the suggestion that speed needs to be controlled at the source, i.e. within the vehicle. An experiment carried out on the University of Leeds Advanced Driving Simulator evaluated the effects of speed limiters on driver behaviour. Safety was measured using following behaviour, gap acceptance and traffic violations, whilst subjective mental workload was recorded using the NASA RTLX. It was found that although safety benefits were observed in terms of lower speeds, longer headways and fewer traffic light violations, drivers compensated for loss of time by exhibiting riskier gap acceptance behaviour and delayed braking behaviour. When speed limited, drivers' self-reports indicated that their driving performance improved and less physical effort was required, but that they also experienced increases in feelings of frustration and time pressure. It is discussed that there is a need for a total integrated assessment of the long-term effects of speed limiters on safety, costs, energy, pollution and noise, in addition to investigation of issues of acceptability by users and car manufacturers.

    Tools for Advanced Video Metadata Modeling

    In this Thesis, we focus on problems in surveillance video analysis and propose advanced metadata modeling techniques to address them. First, we explore the problem of constructing a snapshot summary of people in a video sequence. We propose an algorithm based on the eigen-analysis of faces and present an evaluation of the method. Second, we present an algorithm to learn occlusion points in a scene using long observations of moving objects, provide an implementation and evaluate its performance. Third, to address the problem of availability and storage of surveillance videos, we propose a novel methodology to simulate video metadata. The technique is completely automated and can generate metadata for any scenario with minimal user interaction. Finally, a threat detection model using activity analysis and trajectory data of moving objects is proposed and implemented. The collection of tools presented in this Thesis provides a basis for higher level video analysis algorithms
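    The occlusion-learning idea in this abstract can be sketched in a simple form: a trajectory that terminates well inside the frame (rather than at an image border) suggests the object vanished behind an occluder, and grid cells where this happens repeatedly are candidate occlusion points. The grid cell size, border margin, and count threshold below are illustrative assumptions, not the thesis's actual parameters.

    ```python
    from collections import Counter

    def learn_occlusion_cells(trajectories, width, height, cell=10, min_count=3):
        # Count trajectory end points that fall inside the frame interior;
        # ends at the image border are normal exits and are ignored. Cells
        # where objects repeatedly disappear are reported as likely occluders.
        ends = Counter()
        margin = cell  # assumed border margin, one cell wide
        for traj in trajectories:
            x, y = traj[-1]
            if margin < x < width - margin and margin < y < height - margin:
                ends[(int(x // cell), int(y // cell))] += 1
        return {c for c, n in ends.items() if n >= min_count}
    ```

    Long observation of a scene, as the thesis describes, accumulates enough terminations for genuine occluders to stand out from tracking noise.
    
    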

    Scene Monitoring With A Forest Of Cooperative Sensors

    In this dissertation, we present vision based scene interpretation methods for monitoring of people and vehicles, in real-time, within a busy environment using a forest of co-operative electro-optical (EO) sensors. We have developed novel video understanding algorithms with learning capability, to detect and categorize people and vehicles, track them within a camera, and hand off this information across multiple networked cameras for multi-camera tracking. The ability to learn prevents the need for extensive manual intervention, site models and camera calibration, and provides adaptability to changing environmental conditions. For object detection and categorization in the video stream, a two-step detection procedure is used. First, regions of interest are determined using a novel hierarchical background subtraction algorithm that uses color and gradient information for interest region detection. Second, objects are located and classified from within these regions using a weakly supervised learning mechanism based on co-training that employs motion and appearance features. The main contribution of this approach is that it is an online procedure in which separate views (features) of the data are used for co-training, while the combined view (all features) is used to make classification decisions in a single boosted framework. The advantage of this approach is that it requires only a few initial training samples and can automatically adjust its parameters online to improve the detection and classification performance. Once objects are detected and classified they are tracked in individual cameras. Single camera tracking is performed using a voting based approach that utilizes color and shape cues to establish correspondence in individual cameras. The tracker has the capability to handle multiple occluded objects. Next, the objects are tracked across a forest of cameras with non-overlapping views. This is a hard problem for two reasons.
First, the observations of an object are often widely separated in time and space when viewed from non-overlapping cameras. Secondly, the appearance of an object in one camera view might be very different from its appearance in another camera view due to the differences in illumination, pose and camera properties. To deal with the first problem, the system learns the inter-camera relationships to constrain track correspondences. These relationships are learned in the form of multivariate probability density of space-time variables (object entry and exit locations, velocities, and inter-camera transition times) using Parzen windows. To handle the appearance change of an object as it moves from one camera to another, we show that all color transfer functions from a given camera to another camera lie in a low dimensional subspace. The tracking algorithm learns this subspace by using probabilistic principal component analysis and uses it for appearance matching. The proposed system learns the camera topology and subspace of inter-camera color transfer functions during a training phase. Once the training is complete, correspondences are assigned using the maximum a posteriori (MAP) estimation framework using both the location and appearance cues. Extensive experiments and deployment of this system in realistic scenarios has demonstrated the robustness of the proposed methods. The proposed system was able to detect and classify targets, and seamlessly tracked them across multiple cameras. It also generated a summary in terms of key frames and textual description of trajectories to a monitoring officer for final analysis and response decision. This level of interpretation was the goal of our research effort, and we believe that it is a significant step forward in the development of intelligent systems that can deal with the complexities of real world scenarios
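    The Parzen-window step described above can be illustrated in one dimension: the density of inter-camera transition times is estimated as the average of Gaussian kernels centred on observed samples. The dissertation uses a multivariate density over space-time variables; this 1-D version with a fixed bandwidth is a simplified sketch under assumed parameters.

    ```python
    import math

    def parzen_density(x, samples, bandwidth=1.0):
        # Parzen-window (kernel density) estimate with a Gaussian kernel:
        # the density at x is the mean of kernels centred on the observed
        # samples (here 1-D transition times; the full system conditions on
        # entry/exit locations and velocities as well).
        norm = 1.0 / (math.sqrt(2 * math.pi) * bandwidth)
        return sum(norm * math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                   for s in samples) / len(samples)
    ```

    During correspondence assignment, a candidate track pair whose observed transition time falls where this density is high contributes a large space-time likelihood to the MAP score.
    
    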