Autonomous real-time surveillance system with distributed IP cameras
An autonomous Internet Protocol (IP) camera-based object tracking and behaviour identification system, capable of running in real time on an embedded system with limited memory and processing power, is presented in this paper. The main contribution of this work is the integration of processor-intensive image processing algorithms on an embedded platform capable of monitoring pedestrian behaviour in real time. The Algorithm Based Object Recognition and Tracking (ABORAT) system architecture presented here was developed on an Intel PXA270-based development board clocked at 520 MHz. The platform was connected to a commercial stationary IP-based camera in a remote monitoring station for intelligent image processing. The system is capable of detecting moving objects and their shadows in a complex environment with varying lighting intensity and moving foliage. Objects moving close to each other are also detected, and their trajectories are extracted and fed into an unsupervised neural network for autonomous classification. The novel intelligent video system is also capable of performing simple analytic functions such as tracking and generating alerts when objects enter or leave regions, or cross tripwires superimposed on the live video by the operator.
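The tripwire alert described above reduces to a segment-intersection test between the object's motion vector across two frames and the operator-drawn wire. The abstract does not give the system's actual formulation; the function and names below are an illustrative sketch of the standard orientation-based test:

```python
def _orient(p, q, r):
    """Sign of the cross product (q-p) x (r-p): >0 left turn, <0 right turn, 0 collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def crosses_tripwire(prev_pos, cur_pos, wire_a, wire_b):
    """True if the motion segment prev_pos->cur_pos strictly intersects the wire a-b."""
    d1 = _orient(wire_a, wire_b, prev_pos)
    d2 = _orient(wire_a, wire_b, cur_pos)
    d3 = _orient(prev_pos, cur_pos, wire_a)
    d4 = _orient(prev_pos, cur_pos, wire_b)
    # The segments cross when each pair of endpoints straddles the other segment.
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)
```

In a deployed system the same test would run per tracked object per frame, raising an alert on the first `True`.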
Key-point based tracking for illegally parked vehicle detection
This research aims to develop a target detection and tracking system for real-time video surveillance. The purpose is a monitoring application that runs automatically and intelligently to detect and track illegally parked vehicles. Since the algorithm's application scenario is a real traffic environment, it must adapt to complex environmental interference, such as drastic changes in lighting conditions and frequent occlusion, while maintaining long-term stable tracking.
The thesis presents the detailed design process and test results of the system. The algorithm combines a target detection function based on a deep learning network with a multi-object tracking algorithm based on key-point matching. The method focuses on detecting and tracking stationary vehicles in no-parking areas. An object detection algorithm based on a deep learning network is used to recognize vehicles. Once a recognized vehicle is classified as illegally parked, based on its motion state and location, it is tracked with an algorithm based on key-point matching. If the target remains stationary in the no-parking area after a set period, the system generates an alarm.
The method was tested on more than 20 hours of video, drawn from public databases and our own recordings. All of the footage shows real surveillance scenes, covering different times of day and different locations. The test results show that the method achieves 100% precision (also called positive predictive value), 95% recall (also known as sensitivity), and 97% F1 score (the harmonic mean of precision and recall). The method also produces better detection and tracking results than other comparable methods.
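The reported 97% F1 follows directly from the stated precision and recall, since F1 is their harmonic mean. A one-line check (standard definition, not code from the thesis):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall: 2PR / (P + R)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# With the paper's figures: 2 * 1.00 * 0.95 / 1.95 = 0.974..., i.e. 97%.
```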
Video tracking of people under severe occlusions
University of Technology, Sydney. Faculty of Engineering and Information Technology. Video surveillance in dynamic scenes, especially of humans and vehicles, is currently one of the most active research topics in computer vision and pattern recognition. The goal of this research is to develop a real-time automatic tracking system that is both reliable and efficient. The literature presents many valuable methods for object tracking; however, most of those algorithms perform effectively only under simple scenarios. A few algorithms attempt object tracking in a complex dynamic scene and succeed as long as the scene is not too complex. However, no system yet accurately handles object tracking, especially human tracking, in a crowded environment with frequent and continuous occlusions. The goal of this research is therefore an effective human tracking algorithm that takes into account, and overcomes, the various factors involved in a complex dynamic scene. The founding idea is to divide the human figure into five main parts and to track each part individually under a constraint of integrity. Data association in new frames is performed on each part, and is inferred for the whole human figure through a fusion rule. This approach has proved a good trade-off between model complexity and actual computability. Experimental results have confirmed the effectiveness of the methodology.
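The abstract does not specify the fusion rule. One plausible sketch, illustrating why per-part tracking survives occlusion, is a confidence-weighted average of the per-part position estimates, with occluded parts contributing zero weight; the function and weighting scheme below are assumptions, not the thesis's method:

```python
def fuse_parts(part_estimates):
    """
    part_estimates: list of (x, y, confidence) tuples, one per body part
    (e.g. head, torso, two arms, legs). Occluded parts carry confidence 0.
    Returns the confidence-weighted centre of the whole figure, or None
    when every part is occluded.
    """
    total = sum(c for _, _, c in part_estimates)
    if total == 0:
        return None  # fully occluded: no whole-figure update this frame
    x = sum(px * c for px, _, c in part_estimates) / total
    y = sum(py * c for _, py, c in part_estimates) / total
    return (x, y)
```

As long as any one part remains visible, the fused estimate stays defined, which is the trade-off the thesis exploits in crowded scenes.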
Robust Mobile Object Tracking Based on Multiple Feature Similarity and Trajectory Filtering
This paper presents a new algorithm to track mobile objects under different scene conditions. The main ideas of the proposed tracker are state estimation, multi-feature similarity measures, and trajectory filtering. A feature set (distance, area, shape ratio, color histogram) is defined for each tracked object and used to search for its best matching object. The best matching object and the object's state estimated by a Kalman filter are combined to update the position and size of the tracked object. However, mobile object trajectories are usually fragmented by occlusions and misdetections. We therefore also propose a trajectory filter, named the global tracker, which removes noisy trajectories and fuses fragmented trajectories belonging to the same mobile object. The method has been tested on five videos with different scene conditions. Three of them are provided by the ETISEO benchmarking project (http://www-sop.inria.fr/orion/ETISEO), on which the proposed tracker's performance has been compared with seven other tracking algorithms. The advantages of our approach over the existing state of the art are: (i) no prior knowledge is required (e.g. no calibration and no contextual models are needed); (ii) the tracker is more reliable because it combines multiple feature similarities; (iii) the tracker performs under different scene conditions: single or several mobile objects, weak or strong illumination, indoor or outdoor scenes; (iv) a trajectory filter is defined and applied to improve tracker performance; (v) the tracker outperforms many state-of-the-art algorithms.
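The multi-feature matching step can be sketched as a weighted combination of normalised similarities over the paper's feature set (distance, area, shape ratio, color histogram). The normalisations and weights below are illustrative assumptions, not the paper's values:

```python
import math

def feature_similarity(obj, cand, weights=(0.4, 0.2, 0.2, 0.2)):
    """
    Combined similarity between a tracked object `obj` and a candidate
    detection `cand`, each a dict with centre (x, y), bounding-box `area`,
    width/height `ratio`, and a normalised color histogram `hist`.
    Every term lies in [0, 1], so the weighted sum does too.
    """
    w_d, w_a, w_s, w_h = weights
    # Distance similarity: decays with pixel distance between centres.
    dist = math.hypot(obj["x"] - cand["x"], obj["y"] - cand["y"])
    s_dist = 1.0 / (1.0 + dist)
    # Area similarity: ratio of smaller to larger bounding-box area.
    s_area = min(obj["area"], cand["area"]) / max(obj["area"], cand["area"])
    # Shape-ratio similarity: width/height ratios compared the same way.
    s_shape = min(obj["ratio"], cand["ratio"]) / max(obj["ratio"], cand["ratio"])
    # Histogram similarity: intersection of the normalised histograms.
    s_hist = sum(min(a, b) for a, b in zip(obj["hist"], cand["hist"]))
    return w_d * s_dist + w_a * s_area + w_s * s_shape + w_h * s_hist
```

The tracker would pick, for each object, the candidate maximising this score, then blend that match with the Kalman-filter prediction.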
Real-time Spatial Detection and Tracking of Resources in a Construction Environment
Construction accidents involving heavy equipment, and poor decision making, can both stem from inadequate knowledge of the site environment, and in either case may lead to work interruptions and costly delays. Supporting the construction environment with three-dimensional (3D) models generated in real time can help prevent accidents as well as support management by modeling infrastructure assets in 3D. Such models can be integrated into the path planning of construction equipment operations for obstacle avoidance, or into a 4D model that simulates construction processes. Detecting and guiding resources, such as personnel, machines, and materials, to the right place on time requires methods and technologies that supply information in real time. This paper presents research in real-time 3D laser scanning and modeling using scanning technology with a high frame-update rate. Existing and emerging sensors and techniques in three-dimensional modeling are explained. The presented research successfully developed computational models and algorithms for the real-time detection, tracking, and three-dimensional modeling of static and dynamic construction resources, such as workforce, machines, equipment, and materials, based on a 3D video range camera. In particular, the proposed algorithm for rapidly modeling three-dimensional scenes is explained. Laboratory and outdoor field experiments that were conducted to validate the algorithm's performance and results are discussed.
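Separating static from dynamic resources in a stream of range-camera frames can be sketched as per-pixel depth differencing between consecutive frames. This is a toy illustration under assumed thresholds; the paper's actual algorithm is not given in the abstract:

```python
def classify_motion(prev_depth, cur_depth, threshold=0.05, min_changed=0.1):
    """
    Crude static/dynamic classifier for a pair of range-camera frames,
    given as flat lists of per-pixel depths in metres. A region is
    'dynamic' when more than `min_changed` of its pixels moved by more
    than `threshold` metres between frames. Thresholds are assumptions.
    """
    changed = sum(1 for a, b in zip(prev_depth, cur_depth) if abs(a - b) > threshold)
    return "dynamic" if changed / len(prev_depth) > min_changed else "static"
```

A real pipeline would apply such a test per segmented resource region rather than per whole frame, and feed the dynamic regions into the tracker.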