    Low complexity object detection with background subtraction for intelligent remote monitoring

    Dual-rate background subtraction approach for estimating traffic queue parameters in urban scenes

    This study proposes traffic queue-parameter estimation based on background subtraction. A combination of two background models is used: a short-term model, very sensitive to moving vehicles, and a long-term model capable of retaining, as foreground, vehicles temporarily stopped at intersections or traffic lights. Experimental results in typical urban scenes demonstrate the suitability of the proposed approach. Its main advantage is its low computational cost, avoiding specific motion detection algorithms or post-processing operations after foreground vehicle detection. Funding: Ministerio de Educación y Ciencia DPI2010-19154; Consejería de Innovación, Ciencia y Empresa P07-TIC-0262
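
    A minimal sketch of how such a dual-rate scheme might look is given below, assuming grayscale frames held as NumPy arrays; the learning rates and the difference threshold are illustrative values, not those reported in the paper. Pixels flagged by the short-term model are treated as moving vehicles, while pixels flagged only by the long-term model correspond to temporarily stopped (queued) vehicles.

```python
# Illustrative dual-rate background subtraction step (not the authors' exact
# formulation): two exponential running averages with different learning rates.
import numpy as np

ALPHA_SHORT = 0.10   # fast adaptation: quickly absorbs stopped vehicles
ALPHA_LONG = 0.005   # slow adaptation: keeps stopped vehicles as foreground
THRESH = 25          # grey-level difference threshold (assumed value)

def update_model(model, frame, alpha):
    """Exponential running average: model <- (1 - alpha) * model + alpha * frame."""
    return (1.0 - alpha) * model + alpha * frame

def dual_rate_step(frame, bg_short, bg_long):
    """Process one grayscale frame; bg_* can be initialised from the first frame."""
    frame = frame.astype(np.float32)
    fg_short = np.abs(frame - bg_short) > THRESH   # sensitive to moving vehicles
    fg_long = np.abs(frame - bg_long) > THRESH     # also retains stopped vehicles
    moving = fg_short
    stopped = fg_long & ~fg_short                  # queued, not currently moving
    bg_short = update_model(bg_short, frame, ALPHA_SHORT)
    bg_long = update_model(bg_long, frame, ALPHA_LONG)
    return moving, stopped, bg_short, bg_long
```

    Counting the stopped pixels along a calibrated lane region would then yield a queue-length estimate.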

    A comprehensive review of vehicle detection using computer vision

    A crucial step in designing intelligent transport systems (ITS) is vehicle detection. The challenges of vehicle detection on urban roads arise from camera position, background variations, occlusion, multiple foreground objects and vehicle pose. The current study provides a synopsis of state-of-the-art vehicle detection techniques, categorized into motion-based and appearance-based approaches, ranging from frame differencing and background subtraction to feature extraction, a comparatively more complex model. The advantages and disadvantages of these techniques are also highlighted, with a conclusion as to the most accurate one for vehicle detection.
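
    The two motion-based baselines the review starts from, frame differencing and background subtraction, can be sketched with OpenCV roughly as follows; the video path, thresholds and minimum blob area are placeholder assumptions rather than values from the review.

```python
# Rough sketch of the two motion-based baselines: frame differencing and
# background subtraction (here via OpenCV's MOG2 model).
import cv2

cap = cv2.VideoCapture("traffic.mp4")          # placeholder input video
ok, prev = cap.read()
assert ok, "could not read the first frame"
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
bg_sub = cv2.createBackgroundSubtractorMOG2()  # mixture-of-Gaussians background

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Frame differencing: motion = large change between consecutive frames.
    diff = cv2.absdiff(gray, prev)
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    prev = gray

    # Background subtraction: motion = deviation from a learned background model.
    fg_mask = bg_sub.apply(frame)

    # Candidate vehicle blobs, filtered by a minimum area.
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]

cap.release()
```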

    Computer Vision Techniques for Background Modeling in Urban Traffic Monitoring

    Jose Manuel Milla, Sergio Luis Toral, Manuel Vargas and Federico Barrero (2010). Computer Vision Techniques for Background Modeling in Urban Traffic Monitoring, in Urban Transport and Hybrid Vehicles, Seref Soylu (Ed.), ISBN: 978-953-307-100-8, InTech, DOI: 10.5772/10179. Available from: http://www.intechopen.com/books/urban-transport-and-hybrid-vehicles/computer-vision-techniques-for-background-modeling-in-urban-traffic-monitoring
    In this chapter, several background modelling techniques have been described, analyzed and tested. In particular, different algorithms based on the sigma-delta filter have been considered because of their suitability for embedded systems, where computational limitations constrain real-time implementation. A qualitative and a quantitative comparison have been performed among the different algorithms. The results show that the sigma-delta algorithm with confidence measurement exhibits the best performance, both in adapting to the particular characteristics of urban traffic scenes and in computational requirements. A prototype based on an ARM processor has been implemented to test the different versions of the sigma-delta algorithm and to illustrate several vehicle-traffic-monitoring applications and implementation details.
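
    As a rough illustration, the basic sigma-delta background estimator the chapter builds on can be written per frame as below; the confidence-measurement variant adds a per-pixel reliability counter, which is omitted here, and the amplification factor N = 4 is a typical, assumed value.

```python
# Basic sigma-delta background estimation step (illustrative sketch).
import numpy as np

N = 4  # variance amplification factor (assumed typical value)

def sigma_delta_step(frame, M, V):
    """One update of the sigma-delta model.

    frame: current grayscale frame; M: background estimate; V: temporal
    activity (variance) estimate. M can be initialised from the first frame
    and V with ones. All arrays share the same shape.
    """
    frame = frame.astype(np.int16)
    # The background estimate moves one grey level toward the current frame.
    M = M + np.sign(frame - M)
    diff = np.abs(frame - M)
    # The activity estimate moves one step toward N times the difference.
    V = np.clip(V + np.sign(N * diff - V), 1, 255)
    foreground = diff > V
    return foreground, M, V
```

    Because each update uses only comparisons and increments, the filter maps naturally onto the embedded ARM prototype described in the chapter.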

    ViBe: A universal background subtraction algorithm for video sequences

    This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by randomly choosing which values to substitute from the background model. This approach differs from those based on the classical belief that the oldest values should be replaced first. Finally, when a pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full detail (including pseudocode and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of our algorithm downscaled to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques. There is a dedicated web page for ViBe at http://www.telecom.ulg.ac.be/research/vibe
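
    The mechanisms described above (per-pixel sample sets, a match-count test against the current value, and random in-place and neighbour updates) can be sketched, unoptimized, as follows; the parameter values used (20 samples, radius 20, 2 required matches, subsampling factor 16) are the commonly cited defaults and should be checked against the paper.

```python
# Unoptimized sketch of a ViBe-style, sample-based background subtractor.
import numpy as np

N_SAMPLES, RADIUS, MIN_MATCHES, PHI = 20, 20, 2, 16
rng = np.random.default_rng(0)

def init_model(first_frame):
    """Fill every pixel's sample set from the first frame plus small noise."""
    h, w = first_frame.shape
    noise = rng.integers(-10, 11, size=(N_SAMPLES, h, w))
    return np.clip(first_frame.astype(np.int16) + noise, 0, 255).astype(np.int16)

def vibe_step(frame, samples):
    frame = frame.astype(np.int16)
    h, w = frame.shape
    # A pixel is background if enough stored samples lie within RADIUS of it.
    matches = (np.abs(samples - frame) < RADIUS).sum(axis=0)
    background = matches >= MIN_MATCHES

    # Conservative, random-in-time update: each background pixel has a
    # 1/PHI chance of overwriting one of its own samples (chosen at random)...
    update = background & (rng.random((h, w)) < 1.0 / PHI)
    ys, xs = np.nonzero(update)
    samples[rng.integers(0, N_SAMPLES, size=ys.size), ys, xs] = frame[ys, xs]

    # ...and a 1/PHI chance of propagating its value into the sample set of a
    # randomly chosen neighbour (spatial diffusion of background information).
    spread = background & (rng.random((h, w)) < 1.0 / PHI)
    ys, xs = np.nonzero(spread)
    ny = np.clip(ys + rng.integers(-1, 2, size=ys.size), 0, h - 1)
    nx = np.clip(xs + rng.integers(-1, 2, size=xs.size), 0, w - 1)
    samples[rng.integers(0, N_SAMPLES, size=ys.size), ny, nx] = frame[ys, xs]

    return ~background, samples  # foreground mask, updated model
```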

    Automatic Vehicle Detection, Tracking and Recognition of License Plate in Real Time Videos

    Automatic video analysis from traffic surveillance cameras is a fast-emerging field based on computer vision techniques. It is a key technology for public safety, intelligent transport systems (ITS) and efficient traffic management. In recent years, there has been increased scope for automatic analysis of traffic activity. We define video analytics as computer-vision-based surveillance algorithms and systems that extract contextual information from video. In traffic scenarios, several monitoring objectives can be supported by the application of computer vision and pattern recognition techniques, including the detection of traffic violations (e.g., illegal turns and one-way street violations) and the identification of road users (e.g., vehicles, motorbikes and pedestrians). Currently, the most reliable approach is recognition of number plates, i.e., automatic number plate recognition (ANPR), also known as automatic license plate recognition (ALPR), or the use of radio frequency transponders. Here, a full-featured automatic system for vehicle detection, tracking and license plate recognition is presented. The system has many applications in pattern recognition and machine vision, ranging from complex security systems to common areas and from parking admission to urban traffic control. It must cope with diverse effects such as fog, rain, shadows, uneven illumination, occlusion, variable distances, vehicle velocity, the scene's angle in the frame, plate rotation, the number of vehicles in the scene and others. The main objective of this work is to present a system that solves the practical problem of car identification in real scenes. All steps of the process, from video acquisition to optical character recognition, are considered in order to achieve automatic identification of plates.
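
    A hypothetical sketch of this kind of detection-plus-OCR pipeline is shown below; the Haar cascade file shipped with OpenCV and the pytesseract dependency are illustrative assumptions, not necessarily the components used by the authors.

```python
# Hypothetical plate-detection and OCR sketch (illustrative only).
import cv2
import pytesseract  # Tesseract OCR wrapper, assumed to be installed

plate_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_russian_plate_number.xml")

def read_plates(frame_bgr):
    """Detect candidate plate regions in a frame and OCR their characters."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    plates = plate_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                            minNeighbors=4, minSize=(60, 20))
    results = []
    for (x, y, w, h) in plates:
        roi = gray[y:y + h, x:x + w]
        # Binarise to cope with uneven illumination before character recognition.
        _, roi = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(roi, config="--psm 7").strip()
        results.append(((x, y, w, h), text))
    return results
```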

    Real Time Extraction of Human Gait Features for Recognition

    Human motion analysis has received great attention from researchers in the last decade due to its potential use in applications such as automated visual surveillance. This field of research focuses on human activities, including people identification. Human gait is a new biometric indicator in visual surveillance systems: it can recognize individuals by the way they walk. During walking, the human body shows regular periodic variation, for example in the upper and lower limbs, knee point, thigh point, stride parameters (stride length, cadence, gait cycle) and height, reflecting an individual's unique movement pattern. In gait recognition, detecting moving people in a video is essential for feature extraction. Height is one of the important gait features because it is not influenced by camera performance, distance or the clothing style of the subject.

    Detection of people in video streams is the first relevant step, and background subtraction is a very popular approach for foreground segmentation. In this thesis, different background subtraction methods have been simulated to overcome the problems of illumination variation, repetitive motion from background clutter, shadows, long-term scene changes and camouflage. However, background subtraction alone cannot remove shadows, so different shadow detection methods using RGB, YCbCr and HSV color components have been tried to suppress them. These methods have been simulated and their performance evaluated quantitatively on different indoor video sequences. The shadow model has then been extended to optimize the threshold values of the HSV color space for shadow suppression with respect to the average intensity of the local shadow region, and a mathematical model relating the average intensity to the threshold values is developed.

    Further, a new method is proposed to calculate the variation of height during walking. The measurement of a person's height is not affected by clothing style or distance from the camera; height can be measured at any distance, but camera calibration is essential. The DLT (direct linear transformation) method is used to find the height of a moving person in each frame using both intrinsic and extrinsic parameters. Another parameter, stride, a function of height, is extracted using a bounding-box technique. Since human walking is periodic, accumulating the height and stride parameters gives periodic signals, and human identification is performed using these parameters. The height-variation and stride-variation signals are sampled and analyzed using DCT (Discrete Cosine Transform), DFT (Discrete Fourier Transform) and DHT (Discrete Hartley Transform) techniques. N harmonics are selected from the transformation coefficients; these coefficients form the feature vectors stored in the database. Euclidean distance and MSE are calculated on these feature vectors. When feature vectors of the same subject are compared, the maximum MSE value is selected as the Self-Recognition Threshold (SRT); its value differs between transformation techniques and is used to identify individuals. A model-based method for detecting the thigh angle is also discussed; since the thigh angle of one leg cannot be detected over a full walking cycle because it is occluded by the other leg, the stride parameter is used to estimate the thigh angle.
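
    The HSV shadow test underlying this kind of suppression can be sketched as follows; the threshold values here are fixed, illustrative choices, whereas the thesis tunes them against the average intensity of the local shadow region.

```python
# HSV-based cast-shadow test (illustrative thresholds): a foreground pixel is
# labelled as shadow if it darkens the background value within a band while
# hue and saturation stay close to the background's.
import cv2
import numpy as np

ALPHA, BETA = 0.4, 0.9   # allowed darkening ratio of the V channel
TAU_S, TAU_H = 60, 30    # max saturation / hue deviation (OpenCV 8-bit scales)

def shadow_mask(frame_bgr, background_bgr, fg_mask):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    bhsv = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)
    bh, bs, bv = cv2.split(bhsv)

    ratio = v / (bv + 1e-6)                          # how much the pixel darkened
    hue_diff = np.abs(h - bh)
    hue_diff = np.minimum(hue_diff, 180 - hue_diff)  # hue wraps around at 180

    shadow = (fg_mask > 0) \
        & (ratio >= ALPHA) & (ratio <= BETA) \
        & (np.abs(s - bs) <= TAU_S) \
        & (hue_diff <= TAU_H)
    return shadow  # True where foreground pixels are judged to be cast shadow
```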