A Comprehensive Review of Vehicle Detection Techniques Under Varying Moving Cast Shadow Conditions Using Computer Vision and Deep Learning
Design of a vision-based traffic analytics system for urban traffic video scenes has great potential in the context of Intelligent Transportation Systems (ITS). It offers useful traffic-related insights at much lower cost than conventional sensor-based counterparts. However, it remains a challenging problem due to complexity factors such as camera hardware constraints, camera movement, object occlusion, object speed, object resolution, traffic flow density, and lighting conditions. ITS has many applications, including but not limited to queue estimation, speed detection, and the detection of various anomalies. All of these applications primarily depend on sensing vehicle presence to form a basis for analysis. Moving cast shadows of vehicles are one of the major problems affecting vehicle detection, as they can cause detection and tracking inaccuracies. It is therefore exceedingly important to distinguish dynamic objects from their moving cast shadows for accurate vehicle detection and recognition. This paper provides an in-depth comparative analysis of conventional and state-of-the-art shadow detection and removal algorithms focused on the traffic paradigm. To date, there has been only one survey highlighting shadow removal methodologies specifically for the traffic paradigm. In this paper, a total of 70 research papers containing results on urban traffic scenes have been shortlisted from the last three decades to give a comprehensive overview of the work done in this area. The study reveals that the preferred way to make a comparative evaluation is to use the existing Highway I, II, and III datasets, which are frequently used for qualitative and quantitative analysis of shadow detection and removal algorithms.
Furthermore, the paper not only provides cues for solving moving cast shadow problems, but also shows that even after the advent of Convolutional Neural Network (CNN)-based vehicle detection methods, the problems caused by moving cast shadows persist. Therefore, this paper proposes a hybrid approach that uses a combination of conventional and state-of-the-art techniques as a pre-processing step for shadow detection and removal before applying a CNN for vehicle detection. The results indicate a significant improvement in vehicle detection accuracy after applying the proposed approach.
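The abstract does not specify which conventional cue the hybrid pre-processing step uses; one widely used candidate is the HSV shadow cue (a shadow darkens the background's value channel while leaving hue and saturation nearly unchanged). The sketch below illustrates that cue under assumed, illustrative thresholds, with HSV channels normalized to [0, 1]; it is not the paper's exact method.

```python
import numpy as np

def hsv_shadow_mask(frame_hsv, bg_hsv, v_lo=0.4, v_hi=0.95,
                    s_tol=0.15, h_tol=0.1):
    """Classic HSV shadow cue: a pixel is shadow if it darkens the background
    (value ratio within [v_lo, v_hi]) while hue and saturation stay close to
    the background's. All thresholds here are illustrative assumptions."""
    h, s, v = frame_hsv[..., 0], frame_hsv[..., 1], frame_hsv[..., 2]
    hb, sb, vb = bg_hsv[..., 0], bg_hsv[..., 1], bg_hsv[..., 2]
    ratio = v / np.maximum(vb, 1e-6)          # how much the pixel darkened
    return ((ratio >= v_lo) & (ratio <= v_hi)
            & (np.abs(s - sb) <= s_tol)       # saturation barely changes
            & (np.abs(h - hb) <= h_tol))      # hue barely changes
```

Pixels flagged by such a mask would be removed from the foreground before the CNN detector sees the frame.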
Shadow removal utilizing multiplicative fusion of texture and colour features for surveillance image
Automated surveillance systems often identify shadows as parts of a moving object, which jeopardizes subsequent image processing tasks such as object identification and tracking. In this thesis, an improved shadow elimination method for an indoor surveillance system is presented. The method is a fusion of several image processing techniques. First, the image is segmented using the Statistical Region Merging algorithm to obtain the segmented potential shadow regions. Next, multiple shadow identification features, which include Normalized Cross-Correlation, Local Color Constancy, and Hue-Saturation-Value shadow cues, are applied to the images to generate feature maps. These feature maps are used to identify and remove cast shadows according to the segmented regions. The video dataset used is Autonomous Agents for On-Scene Networked Incident Management, which covers both indoor and outdoor video scenes. The benchmarking results indicate that the developed method is on par with several commonly used shadow detection methods. It yields a mean score of 85.17% for the video sequence in which the strongest shadow is present and a mean score of 89.93% for the video with the most complex textured background. This research contributes to the development and improvement of a functioning shadow elimination method that can cope with image noise and various illumination changes.
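The multiplicative-fusion idea in this abstract can be sketched as follows: each cue (NCC, Local Color Constancy, HSV) produces a per-pixel score in [0, 1], the scores are multiplied so that a pixel is strongly shadow-like only when every cue agrees, and the decision is taken per segmented region. The fusion rule and threshold below are illustrative assumptions, not the thesis's exact values.

```python
import numpy as np

def fuse_shadow_features(feature_maps, labels, thresh=0.5):
    """Multiplicatively fuse per-pixel shadow feature maps (each in [0, 1])
    and classify each segmented region (integer label map, e.g. from
    Statistical Region Merging) as shadow by its mean fused score."""
    fused = np.ones_like(feature_maps[0], dtype=float)
    for fm in feature_maps:
        fused *= fm  # product: any single low cue vetoes the shadow decision
    shadow = np.zeros(labels.shape, dtype=bool)
    for region in np.unique(labels):
        m = labels == region
        if fused[m].mean() >= thresh:
            shadow[m] = True
    return shadow
```

Deciding per region rather than per pixel is what lets the segmentation step suppress isolated noisy responses.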
Recommended from our members
Illumination Variance In Video Processing
Doctor of Philosophy; awarded by Brunel University London. In this thesis we focus on the impact of illumination changes in video and discuss how to minimize the impact of illumination variance in video processing systems. Automatically identifying and removing shadows is a well-established and important topic in image and video processing. Shadow-free image data would benefit many other systems, such as video surveillance, tracking, and object recognition algorithms. A novel approach to automatically detecting and removing shadows is presented. The method is based on the observation that, owing to the relative movement of the sun, the length and position of a shadow change linearly over a relatively long period of time in outdoor environments, which allows us to distinguish a shadow from other dark regions in an input video. We then identify the Reference Shadow as the region with the highest confidence of this linear change. Once one shadow is detected, the remaining shadows can also be identified and removed. Extensive experiments show that the method is fully capable of detecting and removing the shadows of both stationary and moving objects. Additionally, we explain how reference shadows can be used to detect surfaces that reflect light, such as metal, glass, and water.
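The Reference Shadow selection described above amounts to scoring each candidate dark region by how linearly its measured extent changes over time and picking the best-scoring one. A minimal sketch of such a score, using a least-squares line fit and an R²-style confidence (an assumed estimator, not necessarily the thesis's exact one):

```python
import numpy as np

def linearity_confidence(lengths):
    """Score how linearly a candidate dark region's length changes over time.
    Fits length(t) = a*t + b by least squares and returns an R^2-style
    confidence in [0, 1]; the region with the highest score would be taken
    as the Reference Shadow. Illustrative sketch of the idea."""
    lengths = np.asarray(lengths, dtype=float)
    t = np.arange(len(lengths), dtype=float)
    a, b = np.polyfit(t, lengths, 1)          # least-squares line fit
    resid = lengths - (a * t + b)
    ss_res = float(np.sum(resid ** 2))
    ss_tot = float(np.sum((lengths - lengths.mean()) ** 2))
    return 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0
```

A true sun-cast shadow, sampled over many minutes, should score near 1.0, while a flickering or static dark region scores much lower.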
ROBUST TECHNIQUES FOR VISUAL SURVEILLANCE
The work described here aims at improving the performance of three building blocks of visual surveillance systems: foreground detection, object tracking and event detection.
First, a new background subtraction algorithm is presented for foreground detection. The background model is built with a set of codewords for every pixel. Each codeword contains the pixel's principal color and a tangent vector that represents the color variation at that pixel. As the scene illumination changes, a pixel's color is predicted using a linear model of the codeword, and the codeword, in turn, is updated using the new observation. We carried out a number of experiments on sequences that have extensive lighting change and compared with previously developed algorithms.
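The codeword model above can be sketched as a principal color plus a tangent vector along which the color drifts as illumination changes: prediction is linear in an illumination parameter, and both components are nudged toward each new observation. Parameter names, the matching tolerance, and the update rule below are illustrative assumptions, not the work's exact formulation.

```python
import numpy as np

class Codeword:
    """Per-pixel codeword: principal RGB color plus a tangent vector
    modelling color drift under illumination change (sketch of the idea)."""
    def __init__(self, color):
        self.color = np.asarray(color, dtype=float)  # principal color
        self.tangent = np.zeros(3)                   # illumination-drift direction

    def predict(self, alpha):
        # Linear model: expected color at illumination level alpha.
        return self.color + alpha * self.tangent

    def matches(self, pixel, alpha, tol=20.0):
        # Background if the observation is near the predicted color.
        return np.linalg.norm(pixel - self.predict(alpha)) <= tol

    def update(self, pixel, alpha, rate=0.05):
        # Nudge the model toward the new observation.
        err = pixel - self.predict(alpha)
        self.color += rate * err
        if abs(alpha) > 1e-6:
            self.tangent += rate * err / alpha
```

Because the tangent vector absorbs gradual lighting drift, a pixel that merely brightens or dims with the scene stays classified as background instead of leaking into the foreground mask.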
Second, we describe a multi-resolution tracking framework developed with efficiency and robustness in mind. Efficiency is achieved by processing low-resolution data whenever possible. Robustness results from multi-level coarse-to-fine searching in the tracking state space. We combine sequential filtering in both time and resolution levels into a probabilistic framework. A color blob tracker is implemented, and the tracking results are evaluated in a number of experiments.
Third, we present a tracking algorithm based on motion analysis of regional affine invariant image features. The tracked object is represented with a probabilistic occupancy map. Using this map as support, regional features are detected and matched across frames. The motion of pixels is then established based on the feature motion. The object occupancy map is in turn updated according to the pixel motion consistency. We describe experiments to measure the sensitivity of our approach to inaccuracy in initialization, and compare it with other approaches.
Fourth, we address the problem of visual event recognition in surveillance, where noise and missing observations are serious problems. Common-sense domain knowledge is exploited to overcome them. The knowledge is represented as first-order logic production rules with associated weights indicating their confidence. These rules are used in combination with a relaxed deduction algorithm to construct a network of grounded atoms, the Markov Logic Network. The network is used to perform probabilistic inference for input queries about events of interest. The system's performance is demonstrated on a number of videos from a parking-lot domain containing complex interactions of people and vehicles.
Research on robust salient object extraction in image
Degree system: new; Ministry of Education report number: Kou 2641; Degree type: Doctor of Engineering; Date conferred: 2008/3/15; Waseda University degree number: Shin 480