
    Autonomous real-time surveillance system with distributed IP cameras

    An autonomous Internet Protocol (IP) camera-based object tracking and behaviour identification system, capable of running in real time on an embedded system with limited memory and processing power, is presented in this paper. The main contribution of this work is the integration of processor-intensive image processing algorithms on an embedded platform capable of running in real time for monitoring the behaviour of pedestrians. The Algorithm Based Object Recognition and Tracking (ABORAT) system architecture presented here was developed on an Intel PXA270-based development board clocked at 520 MHz. The platform was connected to a commercial stationary IP-based camera in a remote monitoring station for intelligent image processing. The system is capable of detecting moving objects and their shadows in a complex environment with varying lighting intensity and moving foliage. Objects moving close to each other are also detected, and their trajectories are extracted and fed into an unsupervised neural network for autonomous classification. The novel intelligent video system presented is also capable of performing simple analytic functions such as tracking and generating alerts when objects enter/leave regions or cross tripwires superimposed on the live video by the operator.
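    The tripwire alert described in this abstract reduces to a geometric test: an alert fires when the segment between an object's tracked centroid in consecutive frames crosses the operator-drawn wire. A minimal sketch of that test follows; the function names and coordinates are illustrative, not taken from the paper.

```python
def _side(p, a, b):
    # Sign of the cross product (b-a) x (p-a): which side of line a->b point p lies on.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def segments_intersect(p1, p2, a, b):
    # True if segment p1-p2 properly crosses segment a-b (strict crossing, no touching).
    d1, d2 = _side(p1, a, b), _side(p2, a, b)
    d3, d4 = _side(a, p1, p2), _side(b, p1, p2)
    return d1 * d2 < 0 and d3 * d4 < 0

def crossed_tripwire(trajectory, wire):
    # Check each consecutive pair of tracked centroids against the wire endpoints.
    a, b = wire
    return any(segments_intersect(trajectory[i], trajectory[i + 1], a, b)
               for i in range(len(trajectory) - 1))
```

    The same per-frame test also supports the enter/leave-region alerts mentioned above, by testing the trajectory against each edge of a region polygon.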

    Framework for real time behavior interpretation from traffic video

    © 2005 IEEE. Video-based surveillance systems have a wide range of applications for traffic monitoring, as they provide more information than other sensors. In this paper, we present a rule-based framework for behavior and activity detection in traffic videos obtained from stationary video cameras. Moving targets are segmented from the images and tracked in real time. These are classified into different categories using a novel Bayesian network approach, which makes use of image features and image-sequence-based tracking results for robust classification. Tracking and classification results are used in a programmed context to analyze behavior. For behavior recognition, two types of interactions have mainly been considered. One is interaction between two or more mobile targets in the field of view (FoV) of the camera. The other is interaction between targets and stationary objects in the environment. The framework is based on two types of a priori information: 1) the contextual information of the camera's FoV, in terms of the different stationary objects in the scene, and 2) sets of predefined behavior scenarios, which need to be analyzed in different contexts. The system can recognize behavior from videos and give a lexical output of the detected behavior. It is also capable of handling uncertainties that arise due to errors in visual signal processing. We demonstrate successful behavior recognition results for pedestrian–vehicle and vehicle–checkpost interactions.
    Kumar, P.; Ranganath, S.; Huang, Weimin; Sengupta, K.
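    The core of such a rule-based framework is matching each classified track against predefined behavior scenarios in the context of the scene's stationary objects. A minimal sketch of that matching step is below; the zone geometry, rule table, and lexical outputs are hypothetical placeholders, not the paper's actual rule set.

```python
from dataclasses import dataclass

@dataclass
class Track:
    label: str        # classifier output, e.g. "car" or "pedestrian"
    position: tuple   # tracked centroid (x, y) in the camera's FoV

def in_zone(pos, zone):
    # Axis-aligned rectangle containment test; zone = (x0, y0, x1, y1).
    x, y = pos
    x0, y0, x1, y1 = zone
    return x0 <= x <= x1 and y0 <= y <= y1

# Hypothetical contextual knowledge: stationary scene objects as named zones.
ZONES = {"checkpost": (40, 0, 60, 100)}

# Hypothetical predefined scenarios: (target label, zone name) -> lexical output.
RULES = {("car", "checkpost"): "vehicle at checkpost",
         ("pedestrian", "checkpost"): "pedestrian at checkpost"}

def interpret(tracks):
    # Fire every rule whose (label, zone) context matches a current track.
    events = []
    for t in tracks:
        for name, zone in ZONES.items():
            if in_zone(t.position, zone) and (t.label, name) in RULES:
                events.append(RULES[(t.label, name)])
    return events
```

    A probabilistic front end (the paper's Bayesian network classifier) would supply the `label` field; the rule table itself stays deterministic and scene-specific.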

    Behavior interpretation from traffic video streams

    Copyright © 2003 IEEE. This paper considers video surveillance research applied to traffic video streams. We present a framework for analyzing and recognizing different possible behaviors from image sequences acquired from a fixed camera. Two types of interactions have mainly been considered. In one there is interaction between two or more mobile objects in the field of view (FOV) of the camera. The other is interaction between a mobile object and static objects in the environment. The framework is based on two types of a priori knowledge: (1) the contextual knowledge of the camera's FOV, in terms of the description of the different static objects of the scene, and (2) sets of predefined behaviors which need to be analyzed in different contexts. At present the system is designed to recognize behavior from stored videos and retrieve the frames in which the specific behaviors took place. We demonstrate successful behavior recognition results for pedestrian–vehicle and vehicle–checkpost interactions.

    Development of Automated Incident Detection System Using Existing ATMS CCTV

    The Indiana Department of Transportation (INDOT) has over 300 digital cameras along highways in populated areas in Indiana. These cameras are used to monitor traffic conditions around the clock, all year round. Currently, the videos from these cameras are observed by human operators. The main objective of this research is to develop an automatic real-time system that monitors traffic conditions using the INDOT CCTV video feeds, carried out by a collaborative research team of the Transportation Active Safety Institute (TASI) at Indiana University-Purdue University Indianapolis (IUPUI) and the Traffic Management Center (TMC) of INDOT. In this project, the research team developed the system architecture based on a detailed system requirement analysis, and the first prototype of the major system components has been implemented. Specifically, the team has successfully accomplished the following: the YOLOv3 deep learning algorithm was selected for vehicle detection, as it generated the best results for daytime videos; the tracking information of moving vehicles is used to derive the locations of roads and lanes; a database was designed as the central place to gather and distribute the information generated from all camera videos, and it provides all information needed for traffic incident detection; and a web-based Graphical User Interface (GUI) was developed. Automatic traffic incident detection will be implemented once the traffic flow information can be derived accurately. The research team is currently integrating the prototypes of all components into a complete system prototype.
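    The report states that lane locations are derived from the tracking information of moving vehicles. One simple way to do that, sketched here under the assumption of a roughly vertical roadway in the image, is to histogram the x coordinates of tracked vehicle centroids and treat well-populated local maxima as lane centres. The function and its parameters are illustrative, not taken from the report.

```python
from collections import Counter

def lane_centers(x_positions, bin_width=20, min_count=5):
    # Histogram tracked-vehicle x coordinates into fixed-width bins.
    bins = Counter(int(x // bin_width) for x in x_positions)
    centers = []
    for b, n in sorted(bins.items()):
        # Keep bins that are local maxima with enough support as lane centres.
        if n >= min_count and n >= bins.get(b - 1, 0) and n >= bins.get(b + 1, 0):
            centers.append((b + 0.5) * bin_width)
    return centers
```

    In practice the centroids would first be projected along the road direction (or the road rectified), since real lanes are rarely axis-aligned in the image.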

    Bayesian network based computer vision algorithm for traffic monitoring using video

    This paper presents a novel approach to estimating the 3D velocity of vehicles from video. We propose using a Bayesian network to classify objects into pedestrians and different types of vehicles, using 2D features extracted from video taken from a stationary camera. The classification allows us to estimate an approximate 3D model for the different classes. The height information is then used with the image coordinates of the object and the camera's perspective projection matrix to estimate the object's 3D world coordinates and hence its 3D velocity. Accurate velocity and acceleration estimates are both very useful parameters in traffic monitoring systems. We show results of highly accurate classification and measurement of vehicle motion from real-life traffic video streams.
    Kumar, P.; Ranganath, S.; Weimin, H.
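    For a point on the ground plane (e.g. where a vehicle's wheels touch the road), the back-projection step described above can be sketched with a calibrated image-to-ground homography standing in for the full perspective projection matrix, with velocity obtained by finite differences between frames. This is a simplification of the paper's method; the matrix H and the point values below are illustrative.

```python
def back_project(H, uv):
    # Apply a 3x3 image-to-ground homography H (row-major nested lists)
    # to an image point (u, v); returns world coordinates (X, Y).
    u, v = uv
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)

def velocity(H, uv_prev, uv_curr, dt):
    # World-frame velocity from two ground-contact points observed dt seconds apart.
    (x0, y0) = back_project(H, uv_prev)
    (x1, y1) = back_project(H, uv_curr)
    return ((x1 - x0) / dt, (y1 - y0) / dt)
```

    The class-dependent height estimate enters when the visible reference point is not on the ground: the point is first shifted down by the model height before back-projection.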