    Novel statistical modeling methods for traffic video analysis

    Video analysis is an active and rapidly expanding research area in computer vision and artificial intelligence due to its broad applications in modern society. Many methods have been proposed to analyze videos, but many challenging factors remain untackled. In this dissertation, four statistical modeling methods are proposed to address some challenging traffic video analysis problems under adverse illumination and weather conditions. First, a new foreground detection method is presented to detect the foreground objects in videos. A novel Global Foreground Modeling (GFM) method, which estimates a global probability density function for the foreground and applies the Bayes decision rule for model selection, is proposed to model the foreground globally. A Local Background Modeling (LBM) method is applied by choosing the most significant Gaussian density in the Gaussian mixture model to model the background locally for each pixel. In addition, to mitigate the correlation effects of the Red, Green, and Blue (RGB) color space on the independence assumption among the color component images, other color spaces are investigated for feature extraction. To further enhance the discriminatory power of the input feature vector, the horizontal and vertical Haar wavelet features and the temporal information are integrated with the color features to define a new 12-dimensional feature vector space. Finally, the Bayes classifier is applied to classify the foreground and background pixels. Second, a novel moving cast shadow detection method is presented to detect and remove the cast shadows from the foreground. Specifically, a set of new chromatic criteria is presented to detect the candidate shadow pixels in the Hue, Saturation, and Value (HSV) color space. A new shadow region detection method is then proposed to cluster the candidate shadow pixels into shadow regions. A statistical shadow model, which uses a single Gaussian distribution to model the shadow class, is presented to classify shadow pixels. Additionally, an aggregated shadow detection strategy is presented to integrate the shadow detection results and remove the shadows from the foreground. Third, a novel statistical modeling method is presented to solve the automated road recognition problem for Region of Interest (RoI) detection in traffic video analysis. A temporal feature guided statistical modeling method is proposed for road modeling. Additionally, a model pruning strategy is applied to estimate the road model. Then, a new road region detection method is presented to detect the road regions in the video. The method applies discriminant functions to classify each pixel in the estimated background image into either a road class or a non-road class. The proposed method provides an intra-cognitive communication mode between the RoI selection and video analysis systems. Fourth, a novel anomalous driving detection method in videos, which can detect unsafe and anomalous driving behaviors, is introduced. A new Multiple Object Tracking (MOT) method is proposed to extract the velocities and trajectories of moving foreground objects in the video. The new MOT method is a motion-based tracking method that integrates the temporal and spatial features. Then, a novel Gaussian Local Velocity (GLV) modeling method is presented to model normal moving behavior in traffic videos. The GLV model is built for every location in the video frame and updated online. Finally, a discriminant function is proposed to detect anomalous driving behaviors. To assess the feasibility of the proposed statistical modeling methods, several popular public video datasets, as well as real traffic videos from the New Jersey Department of Transportation (NJDOT), are used. The experimental results show the effectiveness and feasibility of the proposed methods.
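
    The GFM/LBM pipeline above reduces to a per-pixel Bayes decision between a locally modeled background density and a globally modeled foreground density. The following is a minimal sketch of that decision rule only, not the dissertation's implementation: it assumes diagonal-covariance Gaussians for both classes, uses random data in place of the 12-dimensional feature images, and all function names and the foreground prior are illustrative.

    import numpy as np

    def gaussian_logpdf(x, mean, var):
        # Log density of an independent (diagonal-covariance) Gaussian.
        return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

    def classify_pixels(features, bg_mean, bg_var, fg_mean, fg_var, prior_fg=0.3):
        # Bayes decision rule: label a pixel foreground when its posterior under the
        # global foreground density exceeds that under its local background density.
        log_fg = gaussian_logpdf(features, fg_mean, fg_var) + np.log(prior_fg)
        log_bg = gaussian_logpdf(features, bg_mean, bg_var) + np.log(1.0 - prior_fg)
        return log_fg > log_bg

    # Toy usage: random data standing in for an H x W grid of 12-dimensional feature vectors.
    rng = np.random.default_rng(0)
    H, W, C = 120, 160, 12
    features = rng.normal(size=(H, W, C))
    bg_mean = np.zeros((H, W, C))          # per-pixel background mean (local model)
    bg_var = np.ones((H, W, C))            # per-pixel background variance
    fg_mean = np.full(C, 2.0)              # single global foreground mean (global model)
    fg_var = np.full(C, 1.5)               # single global foreground variance
    mask = classify_pixels(features, bg_mean, bg_var, fg_mean, fg_var)
    print("foreground fraction:", mask.mean())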

    Insignificant shadow detection for video segmentation

    To prevent moving cast shadows from being misclassified as parts of moving objects in change-detection-based video segmentation, this paper proposes a novel approach to cast shadow detection based on edge and region information in multiple frames. First, an initial change detection mask containing moving objects and cast shadows is obtained. Then a Canny edge map is generated. After that, the shadow region is detected and removed through multi-frame integration, edge matching, and region growing. Finally, a post-processing procedure is used to eliminate noise and refine the boundaries of the objects. Our approach can be used for video segmentation in indoor environments. The experimental results demonstrate its good performance.
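
    A minimal sketch of the first two stages described above, the initial change-detection mask and the Canny edge map, is given below; the multi-frame integration, edge matching, region growing, and post-processing stages are omitted, and the file names and thresholds are placeholders rather than the paper's settings.

    import cv2

    # Placeholder inputs: a background estimate and the current frame (grayscale).
    background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # Initial change-detection mask: moving objects together with their cast shadows.
    diff = cv2.absdiff(frame, background)
    _, change_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Canny edge map of the current frame, restricted to the changed region.
    edges = cv2.Canny(frame, 50, 150)
    object_edges = cv2.bitwise_and(edges, change_mask)

    cv2.imwrite("change_mask.png", change_mask)
    cv2.imwrite("object_edges.png", object_edges)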

    Shadow Detection using DWT with Multi-Wavelet Selection & User-Configurable Variance Parameters

    Moving cast shadows are a significant concern for a wide range of vision-based surveillance applications because they severely complicate the object classification task. Several shadow detection methods have been reported in the literature in recent years. They are mostly divided into two domains: the first usually works with static images, whereas the second uses image sequences, namely video content. Although both cases can be analyzed in a similar way, there is a difference in the application field. In the first case, shadow detection methods can be exploited to obtain additional geometric and semantic cues about the shape and position of the casting object ('shape from shadows') and the localization of the light source. In the second, the main purpose is usually change detection, scene matching, or surveillance (mostly in a background subtraction context). In our work we have mainly focused on detecting the shadows of moving objects in a video surveillance setting, facilitating multi-wavelet selection and user-configurable variance parameters. In our experiments the user can choose different wavelets and variance parameters. An edge-model-based super-resolution technique is used to improve the results. Additionally, the effect of digital watermarking is studied for the super-resolved Video Object Planes (VOPs). Various experiments have been carried out to determine the best system for video surveillance applications. Our proposed super-resolution (SR) technique gives better results than bilinear and bi-cubic methods.
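
    As a rough illustration of the configurable part of this idea only (not the paper's actual shadow decision logic), the sketch below computes a single-level 2-D DWT with a user-selectable wavelet and thresholds the detail-band energy with a variance-derived parameter; the function names and the threshold rule are assumptions.

    import numpy as np
    import pywt

    def detail_energy(image, wavelet="haar"):
        # Per-pixel detail energy of a single-level 2-D discrete wavelet transform.
        _, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
        return cH ** 2 + cV ** 2 + cD ** 2

    def low_texture_mask(image, wavelet="haar", variance_scale=0.5):
        # Flag low-texture regions, where cast shadows are typically suspected;
        # variance_scale plays the role of a user-configurable variance parameter.
        energy = detail_energy(image, wavelet)
        threshold = variance_scale * energy.var()
        return energy < threshold

    rng = np.random.default_rng(1)
    img = rng.normal(size=(128, 128))
    for wavelet in ("haar", "db2", "sym4"):    # multi-wavelet selection
        mask = low_texture_mask(img, wavelet)
        print(wavelet, round(float(mask.mean()), 3))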

    Advanced traffic video analytics for robust traffic accident detection

    Automatic traffic accident detection is an important task in traffic video analysis due to its key applications in developing intelligent transportation systems. Reducing the time delay between the occurrence of an accident and the dispatch of the first responders to the scene may help lower the mortality rate and save lives. Since 1980, many approaches have been presented for the automatic detection of incidents in traffic videos. In this dissertation, some challenging problems for accident detection in traffic videos are discussed and a new framework is presented in order to automatically detect single-vehicle and intersection traffic accidents in real time. First, a new foreground detection method is applied in order to detect the moving vehicles and subtract the ever-changing background in the traffic video frames captured by static or non-stationary cameras. For traffic videos captured during the day, cast shadows degrade the performance of the foreground detection and road segmentation. A novel cast shadow detection method is therefore presented to detect and remove the shadows cast by moving vehicles and also the shadows cast by static objects on the road. Second, a new method is presented to detect the region of interest (ROI), which uses the locations of the moving vehicles and the initial road samples, and extracts discriminating features to segment the road region. After detecting the ROI, the moving direction of the traffic is estimated based on the rationale that crashed vehicles often make rapid changes of direction. Lastly, single-vehicle traffic accidents and trajectory conflicts are detected using a first-order logic decision-making system. The experimental results using publicly available videos and a dataset provided by the New Jersey Department of Transportation (NJDOT) demonstrate the feasibility of the proposed methods. Additionally, the main challenges and future directions are discussed regarding (i) improving the performance of the foreground segmentation, (ii) reducing the computational complexity, and (iii) detecting other types of traffic accidents.
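
    The "rapid change of direction" cue mentioned above can be illustrated with a short trajectory check. The sketch below is a toy example with an invented angle threshold and a synthetic trajectory; it is not the dissertation's decision-making system.

    import numpy as np

    def heading_angles(trajectory):
        # Heading (radians) of each frame-to-frame displacement along an (N, 2) trajectory.
        d = np.diff(trajectory, axis=0)
        return np.arctan2(d[:, 1], d[:, 0])

    def rapid_direction_change(trajectory, angle_threshold=np.pi / 3):
        # True wherever the heading changes by more than the threshold between steps.
        angles = heading_angles(trajectory)
        wrapped = np.angle(np.exp(1j * np.diff(angles)))   # wrap differences to [-pi, pi]
        return np.abs(wrapped) > angle_threshold

    # Straight motion followed by a sharp swerve.
    traj = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [3.5, 1.5], [3.4, 3.0]])
    print(rapid_direction_change(traj))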

    Detecting Shadows in the HSV Color Space using Dynamic Thresholds

    The detection of moving objects in a video sequence is an essential step in almost all computer vision systems. However, because of dynamic changes in natural scenes, motion detection becomes a more difficult task. In this work, we propose a new method for detecting moving objects that is robust to shadows, noise, and illumination changes. For this purpose, the detection phase of the proposed method is an adaptation of the MOG approach, where the foreground is extracted by considering the HSV color space. To prevent the method from including shadows in the detection process, we developed a new shadow removal technique based on dynamic thresholding of the detected foreground pixels. The calculation model of the threshold is established by two statistical analysis tools that take into account the degree of shadow in the scene and the robustness to noise. Experiments undertaken on a set of video sequences showed that the proposed method provides better results compared to existing methods that are limited to using static thresholds.
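
    A rough sketch combining the two ingredients described above, an MOG-style background subtractor applied to HSV frames and a data-driven (rather than fixed) threshold on the brightness ratio, is shown below; the thresholding rule and the function name are assumptions standing in for the paper's statistical model.

    import cv2

    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    def foreground_without_shadows(frame_bgr, background_bgr):
        # Foreground extraction on HSV frames, then a dynamic shadow threshold.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        bg_hsv = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV)
        fg_mask = subtractor.apply(hsv) > 0

        # Shadows darken a surface while roughly preserving its chromaticity, so look
        # at the ratio of current to background brightness (V channel).
        v_ratio = hsv[:, :, 2].astype(float) / (bg_hsv[:, :, 2].astype(float) + 1e-6)
        ratios = v_ratio[fg_mask]
        if ratios.size == 0:
            return fg_mask
        low = max(0.0, float(ratios.mean() - ratios.std()))   # dynamic lower bound
        shadow = fg_mask & (v_ratio > low) & (v_ratio < 0.95)
        return fg_mask & ~shadow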

    Color models of shadow detection in video scenes

    In this paper we address the problem of appropriate modelling of shadows in color images. While previous works compared different approaches regarding their model structure, a comparative study of color models has still been missing. This paper addresses a continuing need for defining the appropriate color space for this key surveillance problem. We introduce a statistical and parametric shadow model framework, which can work with different color spaces, and perform a detailed comparison with it. We show experimental results regarding the following questions: (1) What is the gain of using color images instead of grayscale ones? (2) What is the gain of using uncorrelated spaces instead of the standard RGB? (3) Are chrominance (illumination invariant), luminance, or “mixed” spaces more effective? (4) In which scenes are the differences significant? We evaluated the metrics both in color based clustering of the individual pixels and in Bayesian foreground-background-shadow segmentation. Experimental results on real-life videos show that the CIE L*u*v* color space is the most efficient.
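
    The kind of color-space comparison discussed above can be illustrated by converting the same lit and shadowed surface color into several spaces and measuring how far apart they land. The sketch below uses synthetic sample colors and a plain Euclidean distance as a toy proxy, not the paper's evaluation protocol.

    import cv2
    import numpy as np

    lit = np.uint8([[[180, 150, 120]]])        # a lit surface color (BGR)
    shadowed = np.uint8([[[90, 75, 60]]])      # the same surface at roughly half brightness

    conversions = {
        "RGB": cv2.COLOR_BGR2RGB,
        "HSV": cv2.COLOR_BGR2HSV,
        "Lab": cv2.COLOR_BGR2Lab,
        "Luv": cv2.COLOR_BGR2Luv,
    }
    for name, code in conversions.items():
        a = cv2.cvtColor(lit, code).astype(float).ravel()
        b = cv2.cvtColor(shadowed, code).astype(float).ravel()
        # Euclidean distance between the lit and shadowed versions in each space.
        print(name, round(float(np.linalg.norm(a - b)), 1))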

    A new strategy of detecting traffic information based on traffic camera: modified inverse perspective mapping

    The development of Intelligent Transportation Systems (ITS) needs high-quality traffic information, for instance at intersections, but conventional image-based traffic detection methods have difficulties with perspective and background noise, shadows, and lighting transitions. In this paper, we propose a new traffic information detection method based on Modified Inverse Perspective Mapping (MIPM) to perform under these challenging conditions. In the proposed method, the perspective is first removed from the images using MIPM; afterward, the Hough transform is applied to extract structural information such as road lines and lanes; then, Gaussian Mixture Models are used to generate the binary image. Meanwhile, to tackle the shadow effect in car areas, we have applied a chromaticity-based strategy. To evaluate the performance of the proposed method, we used several video sequences as benchmarks. These videos are captured in normal weather from a highway, and contain different types of locations and occlusions between cars. Our simulation results indicate that the proposed algorithms and frameworks are effective, robust, and more accurate compared to other frameworks, especially in handling different kinds of occlusions.
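
    A sketch of a standard inverse perspective mapping followed by a Hough transform is given below as a stand-in for the first two stages of the pipeline; the paper's modified IPM is not reproduced, and the source quadrilateral, file name, and Hough parameters are illustrative assumptions.

    import cv2
    import numpy as np

    frame = cv2.imread("road.png")             # placeholder traffic-camera frame
    h, w = frame.shape[:2]

    # Illustrative source quadrilateral on the road plane and its bird's-eye-view target.
    src = np.float32([[0.40 * w, 0.60 * h], [0.60 * w, 0.60 * h], [0.95 * w, h], [0.05 * w, h]])
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    birds_eye = cv2.warpPerspective(frame, M, (w, h))

    # Extract straight structures (lane markings, road borders) with a Hough transform.
    gray = cv2.cvtColor(birds_eye, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=10)
    print("detected line segments:", 0 if lines is None else len(lines))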

    Feature-based image patch classification for moving shadow detection

    Moving object detection is a first step towards many computer vision applications, such as human interaction and tracking, video surveillance, and traffic monitoring systems. Accurate estimation of the target object’s size and shape is often required before higher-level tasks (e.g., object tracking or recognition) can be performed. However, these properties can be derived only when the foreground object is detected precisely. Background subtraction is a common technique to extract foreground objects from image sequences. The purpose of background subtraction is to detect changes in pixel values within a given frame. The main problem with background subtraction and other related object detection techniques is that cast shadows tend to be misclassified as either parts of the foreground objects (if objects and their cast shadows are bonded together) or independent foreground objects (if objects and shadows are separated). The reason for this phenomenon is the presence of similar characteristics between the target object and its cast shadow, i.e., shadows have similar motion, attitude, and intensity changes as the moving objects that cast them. Detecting shadows of moving objects is challenging because of problematic situations related to shadows, for example, chromatic shadows, shadow color blending, foreground-background camouflage, nontextured surfaces and dark surfaces. Various methods for shadow detection have been proposed in the literature to address these problems. Many of these methods use general-purpose image feature descriptors to detect shadows. These feature descriptors may be effective in distinguishing shadow points from the foreground object in a specific problematic situation; however, such methods often fail to distinguish shadow points from the foreground object in other situations. In addition, many of these moving shadow detection methods require prior knowledge of the scene conditions and/or impose strong assumptions, which make them excessively restrictive in practice. The aim of this research is to develop an efficient method capable of addressing possible environmental problems associated with shadow detection while simultaneously improving the overall accuracy and detection stability. In this research study, possible problematic situations for dynamic shadows are addressed and discussed in detail. On the basis of the analysis, a robust method, including change detection and shadow detection, is proposed to address these environmental problems. A new set of two local feature descriptors, namely, binary patterns of local color constancy (BPLCC) and light-based gradient orientation (LGO), is introduced to address the identified problematic situations by incorporating intensity, color, texture, and gradient information. The feature vectors are concatenated in a column-by-column manner to construct one dictionary for the objects and another dictionary for the shadows. A new sparse representation framework is then applied to find the nearest neighbor of the test image segment by computing a weighted linear combination of the reference dictionary. Image segment classification is then performed based on the similarity between the test image and the sparse representations of the two classes. The performance of the proposed framework on common shadow detection datasets is evaluated, and the method shows improved performance compared with state-of-the-art methods in terms of the shadow detection rate, discrimination rate, accuracy, and stability. By achieving these significant improvements, the proposed method demonstrates its ability to handle various problems associated with image processing and accomplishes the aim of this thesis.
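
    The dictionary-based classification step can be illustrated with a toy reconstruction-residual test. In the sketch below, plain least squares stands in for the sparse-representation solver described above, and the dictionaries are random matrices rather than BPLCC/LGO feature vectors.

    import numpy as np

    rng = np.random.default_rng(2)
    dim, atoms = 64, 40
    D_object = rng.normal(size=(dim, atoms))           # columns: object-patch feature vectors
    D_shadow = 0.3 * rng.normal(size=(dim, atoms))     # columns: shadow-patch feature vectors

    def residual(dictionary, x):
        # Reconstruct x as a linear combination of the dictionary columns and
        # return the reconstruction error (least squares instead of a sparse solver).
        coeffs, *_ = np.linalg.lstsq(dictionary, x, rcond=None)
        return float(np.linalg.norm(x - dictionary @ coeffs))

    # A test segment drawn from the shadow class should be reconstructed better
    # by the shadow dictionary than by the object dictionary.
    x = D_shadow @ rng.normal(size=atoms)
    label = "object" if residual(D_object, x) < residual(D_shadow, x) else "shadow"
    print(label)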