ORGB: Offset Correction in RGB Color Space for Illumination-Robust Image Processing
Single materials have colors that form straight lines in RGB space. However, in severe shadow cases, those lines do not intersect the origin, which is inconsistent with the assumption made in most of the literature. This paper concerns the detection and correction of the offset between that intersection point and the origin. First, we analyze the cause of the offset via an optical imaging model. Second, we present a simple and effective way to detect and remove it. The resulting images, named ORGB, have almost the same appearance as the original RGB images while being more illumination-robust for color space conversion. Moreover, image processing using ORGB instead of RGB is free from the interference of shadows. Finally, the proposed offset correction method is applied to a road detection task, improving performance in both quantitative and qualitative evaluations.
Comment: Project website: https://baidut.github.io/ORGB
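The core idea — material colors lie on lines that should all pass through a common offset point, which is then subtracted — can be sketched on synthetic data. The line fitting via PCA and the least-squares intersection below are one plausible reading of the method, not the authors' exact algorithm:

```python
import numpy as np

def fit_line(points):
    """Fit a 3-D line to pixel colors via PCA: returns (centroid, unit direction)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[0]

def estimate_offset(lines):
    """Least-squares point closest to all lines (their common 'intersection')."""
    A = np.zeros((3, 3)); b = np.zeros(3)
    for c, d in lines:
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line
        A += P; b += P @ c
    return np.linalg.solve(A, b)

# synthetic data: two material lines sharing an assumed offset o = (10, 12, 8)
o = np.array([10.0, 12.0, 8.0])
t = np.linspace(0.2, 1.0, 50)[:, None]
mat1 = o + t * np.array([100.0, 60.0, 30.0])
mat2 = o + t * np.array([40.0, 90.0, 70.0])

offset = estimate_offset([fit_line(mat1), fit_line(mat2)])
orgb1 = mat1 - offset   # corrected pixels now lie on a line through the origin
```

After subtraction, the material lines intersect the origin, so normalized chromaticity becomes stable under illumination changes.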
Shadow Optimization from Structured Deep Edge Detection
Local structures of shadow boundaries as well as complex interactions of
image regions remain largely unexploited by previous shadow detection
approaches. In this paper, we present a novel learning-based framework for
shadow region recovery from a single image. We exploit the local structures of
shadow edges by using a structured CNN learning framework. We show that using
the structured label information in the classification can improve the local
consistency of the results and avoid spurious labelling. We further propose and
formulate a shadow/bright measure to model the complex interactions among image
regions. The shadow and bright measures of each patch are computed from the
shadow edges detected in the image. Using the global interaction constraints on
patches, we formulate a least-square optimization problem for shadow recovery
that can be solved efficiently. Our shadow recovery method achieves
state-of-the-art results on the major shadow benchmark databases collected
under various conditions.
Comment: 8 pages. CVPR 201
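The least-squares optimization over patch interactions can be illustrated with a toy quadratic objective: unary shadow/bright measures pull each patch toward its own evidence, while pairwise interaction weights enforce consistency. The measures, edges, and regularization strength below are invented for the example, not taken from the paper:

```python
import numpy as np

# Hypothetical patch graph: unary shadow measures m and pairwise weights w_ij.
m = np.array([0.9, 0.8, 0.2, 0.1])               # per-patch shadow measure in [0, 1]
edges = [(0, 1, 1.0), (1, 2, 0.2), (2, 3, 1.0)]  # (i, j, interaction weight)
lam = 2.0                                        # smoothness strength

n = len(m)
L = np.zeros((n, n))                             # graph Laplacian of the interactions
for i, j, w in edges:
    L[i, i] += w; L[j, j] += w
    L[i, j] -= w; L[j, i] -= w

# minimize  sum_i (y_i - m_i)^2 + lam * sum_ij w_ij (y_i - y_j)^2
# -> closed-form linear system (I + lam * L) y = m
y = np.linalg.solve(np.eye(n) + lam * L, m)
shadow_mask = y > 0.5
```

The weak edge between patches 1 and 2 lets the recovered labels change there, while the strong edges keep each side internally consistent.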
Research on the Traffic Event Discovery in Video Surveillance
The wide deployment of video surveillance systems provides great convenience for traffic management and safety supervision; however, this convenience requires enormous human and material resources for intervention and monitoring. With the development of science and technology, making video surveillance systems intelligent has become the research direction for solving this problem, but current intelligent surveillance aimed at discovering abnormal events still falls short of practical needs. Building on our laboratory group's earlier research on intelligent video surveillance technology, this thesis studies road traffic event detection techniques and constructs a road traffic event detection system. The main work is as follows: (1) Commonly used moving-object detection methods in video processing are introduced and their detection results compared across different scenes; the Gaussian mixture model, which performed best, is adopted. In the shadow detection algorithm, the statistical variation of pixels in shadow regions is modeled with a Gaussian distribution, so that shadow pixels are judged according to their probability. ...
Degree: Master of Engineering. Department: School of Information Science and Technology, Computer Science and Technology. Student ID: 2302013115315
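The shadow test described in step (1) — modeling the variation of shadowed pixels with a Gaussian distribution and judging by probability — can be sketched as follows. The ratio statistics and the classification values are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Assumed Gaussian statistics of the shadow intensity ratio (pixel / background),
# which in practice would be learned from labeled shadow samples.
mu, sigma = 0.55, 0.1

def shadow_probability(pixel, background):
    """Gaussian likelihood that `pixel` is `background` darkened by a cast shadow."""
    r = pixel / background
    return np.exp(-0.5 * ((r - mu) / sigma) ** 2)

bg = 180.0
p_shadow = shadow_probability(100.0, bg)  # ratio ~0.56: close to mu, likely shadow
p_object = shadow_probability(20.0, bg)   # ratio ~0.11: far too dark, a real object
```

A pixel whose darkening ratio is near the learned mean gets a high shadow probability, while a genuinely dark foreground object does not.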
Shadow Detection in Aerial Images using Machine Learning
Shadows are present in a wide range of aerial images, from forested scenes to urban environments. The presence of shadows degrades the performance of computer vision algorithms in a diverse set of applications such as image registration, object segmentation, object detection, and recognition. Therefore, detection and mitigation of shadows is of paramount importance and can significantly improve the performance of computer vision algorithms in the aforementioned applications. There are several existing approaches to shadow detection in aerial images, including chromaticity methods, texture-based methods, geometric methods, physics-based methods, and approaches using neural networks in machine learning.
In this thesis, we developed seven new approaches to shadow detection in aerial imagery. These include two new chromaticity-based methods (Shadow Detection using Blue Illumination (SDBI) and Edge-based Shadow Detection using Blue Illumination (Edge-SDBI)) and five machine learning methods consisting of two neural networks (SDNN and DIV-NN) and three convolutional neural networks (VSKCNN, SDCNN-ver1, and SDCNN-ver2). These algorithms were applied to five different aerial imagery data sets. Results were assessed using both qualitative (visual shadow masks) and quantitative techniques. Conclusions touch upon the various trade-offs between these approaches, including speed, training, accuracy, completeness, correctness, and quality.
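A chromaticity test in the spirit of SDBI can be sketched as follows: shadowed ground in aerial imagery is lit mainly by blue skylight, so shadow pixels are both dark and relatively blue. The thresholds here are illustrative placeholders, not values from the thesis:

```python
import numpy as np

def sdbi_mask(rgb, blue_ratio_thresh=0.40, intensity_thresh=90):
    """Flag pixels that are dark overall but have a high blue chromaticity share."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-6          # avoid division by zero on black pixels
    blue_ratio = b / total
    return (blue_ratio > blue_ratio_thresh) & (total / 3 < intensity_thresh)

img = np.array([[[40, 50, 80],                     # dark and bluish -> shadow
                 [120, 130, 110]]], dtype=float)   # bright and neutral -> lit
mask = sdbi_mask(img)
```

Real images would need the thresholds tuned per sensor and scene, which is where the thesis's learned methods have an advantage.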
A new strategy of detecting traffic information based on traffic camera: modified inverse perspective mapping
The development of Intelligent Transportation Systems (ITS) needs high-quality traffic information, such as at intersections, but conventional image-based traffic detection methods have difficulties with perspective distortion, background noise, shadows, and lighting transitions. In this paper, we propose a new traffic information detection method based on Modified Inverse Perspective Mapping (MIPM) that performs under these challenging conditions. In our proposed method, the perspective is first removed from the images using MIPM; afterward, the Hough transform is applied to extract structural information such as road lines and lanes; then, Gaussian mixture models are used to generate the binary image. Meanwhile, to tackle the shadow effect in car areas, we apply a chromaticity-based strategy. To evaluate the performance of the proposed method, we used several video sequences as benchmarks. These videos were captured in normal weather on a highway and contain different types of locations and occlusions between cars. Our simulation results indicate that the proposed algorithms and frameworks are effective, robust, and more accurate than other frameworks, especially when facing different kinds of occlusions.
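The inverse-perspective-mapping step can be illustrated with a plain planar homography: a 3x3 matrix maps road-plane pixels to bird's-eye ground coordinates, removing perspective before line extraction. The four point correspondences below are invented for the example and the solver is the generic exact DLT, not the paper's MIPM:

```python
import numpy as np

def homography(src, dst):
    """Solve for H with h22 = 1 from four exact point pairs (direct linear transform)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp(H, pt):
    """Apply a homography to one point (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# assumed image corners of a lane segment -> rectangle in ground metres
src = [(300, 400), (340, 400), (420, 600), (220, 600)]
dst = [(0.0, 20.0), (3.5, 20.0), (3.5, 0.0), (0.0, 0.0)]
H = homography(src, dst)
```

After warping, lane markings become parallel straight lines, which is what makes the subsequent Hough-transform step reliable.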
Assessment of Different Methods for Shadow Detection in High-Resolution Optical Imagery and Evaluation of Shadow Impact on Calculation of NDVI and Evapotranspiration
Significant efforts have been made recently in the application of high-resolution remote sensing imagery (i.e., sub-meter) captured by unmanned aerial vehicles (UAVs) for precision agricultural applications for high-value crops such as wine grapes. However, at such high resolution, shadows appear in the optical imagery, effectively reducing the reflectance and emission signal received by imaging sensors. To date, research that evaluates procedures to identify the occurrence of shadows in imagery produced by UAVs is limited. In this study, the performance of four different shadow detection methods used in satellite imagery was evaluated for high-resolution UAV imagery collected over a California vineyard during the Grape Remote sensing Atmospheric Profile and Evapotranspiration eXperiment (GRAPEX) field campaigns. The performance of the shadow detection methods is compared, and the impacts of shadowed areas on the normalized difference vegetation index (NDVI) and on evapotranspiration (ET) estimated using the Two-Source Energy Balance (TSEB) model are presented. The results indicated that two of the shadow detection methods, the supervised classification and index-based methods, performed better than the other two. Furthermore, assessment of shadowed pixels in the vine canopy revealed significant differences in the calculated NDVI and ET in areas affected by shadows in the high-resolution imagery. Shadows are shown to have the greatest impact on modeled soil heat flux, while net radiation and sensible heat flux are less affected. Shadows also affect the modeled Bowen ratio (ratio of sensible to latent heat), which can be used as an indicator of vine stress level.
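The effect of shadow masking on NDVI statistics can be shown with a few synthetic band values (NDVI = (NIR - Red) / (NIR + Red)); the reflectances and mask below are illustrative, not GRAPEX data:

```python
import numpy as np

# Synthetic per-pixel reflectances; the last two pixels are shadowed,
# which suppresses the NIR signal and drags NDVI down.
nir = np.array([0.50, 0.45, 0.12, 0.10])
red = np.array([0.10, 0.08, 0.05, 0.04])
shadow = np.array([False, False, True, True])

ndvi = (nir - red) / (nir + red)
mean_all = ndvi.mean()              # biased low by the shadowed pixels
mean_sunlit = ndvi[~shadow].mean()  # shadow-masked mean
```

Masking out the shadowed pixels before averaging gives a canopy NDVI that better reflects the sunlit vegetation, which is the motivation for shadow detection in this workflow.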
A statistical approach for shadow detection using spatio-temporal contexts
Background subtraction is an important step used to segment moving regions in surveillance videos. However, cast shadows are often falsely labeled as foreground objects, which may severely degrade the accuracy of object localization and detection. Effective shadow detection is necessary for accurate foreground segmentation, especially in outdoor scenes. Based on the characteristics of shadows, such as luminance reduction, chromaticity consistency, and texture consistency, we introduce a nonparametric framework for modeling surface behavior under cast shadows. To each pixel, we assign a potential shadow value with a confidence weight, indicating the probability that the pixel location is an actual shadow point. Given an observed RGB value for a pixel in a new frame, we use its recent spatio-temporal context to compute an expected shadow RGB value. The similarity between the observed and the expected shadow RGB values determines whether a pixel position is a true shadow. Experimental results demonstrate the performance of the proposed method on a suite of standard indoor and outdoor video sequences.
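The core test — predicting an expected shadow RGB from the pixel's spatio-temporal context and comparing it with the observation — can be sketched as follows. The context ratios, background value, and distance threshold are synthetic stand-ins for quantities the paper estimates online:

```python
import numpy as np

background = np.array([120.0, 130.0, 140.0])       # background model for this pixel
context_ratios = np.array([[0.52, 0.55, 0.60],     # recent shadow observations from
                           [0.50, 0.54, 0.62],     # the spatio-temporal context:
                           [0.54, 0.56, 0.58]])    # (shadow RGB / background RGB)

# expected appearance of this pixel if it were covered by a shadow
expected_shadow = background * context_ratios.mean(axis=0)

def is_shadow(observed, expected, thresh=15.0):
    """Classify by similarity between observed and expected shadow RGB values."""
    return np.linalg.norm(observed - expected) < thresh

shadow_pixel = is_shadow(np.array([63.0, 71.0, 84.0]), expected_shadow)
object_pixel = is_shadow(np.array([30.0, 30.0, 30.0]), expected_shadow)
```

A dark foreground object is darker than the context predicts a shadow would be, so it fails the similarity test even though simple luminance thresholds would confuse the two.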
Removing shadows from video
This paper presents a novel approach to automatic shadow identification and removal from video input. Based on the observation that the length and position of a shadow change linearly over a relatively long period in outdoor environments, due to the relative movement of the sun, we can distinguish a shadow from other dark regions in an input video. Subsequently, we identify the Reference Shadow as the candidate with the highest confidence of the aforementioned linear changes. This Reference Shadow is used to fit the shadow-free invariant model, with which shadow-free invariant images can be computed for all frames in the input video. Our method does not require camera calibration, and shadows from both stationary and moving objects are detected automatically.
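The Reference Shadow selection can be sketched by scoring each dark-region candidate on how well a straight line fits its length track over time. The length tracks and the confidence score below are invented for the example:

```python
import numpy as np

def linearity_confidence(lengths):
    """Score how linearly a region's length changes over time (1.0 = perfect line)."""
    t = np.arange(len(lengths), dtype=float)
    slope, intercept = np.polyfit(t, lengths, 1)
    residual = lengths - (slope * t + intercept)
    return 1.0 / (1.0 + residual.std())

t = np.arange(100, dtype=float)
true_shadow = 50.0 + 0.3 * t                    # drifts linearly with the sun
dark_object = 60.0 + 10.0 * np.sin(0.3 * t)     # fluctuates, not sun-driven

# the candidate with the highest confidence becomes the Reference Shadow
ref = max([true_shadow, dark_object], key=linearity_confidence)
```

Only a genuine cast shadow tracks the sun's slow linear drift, so the confidence cleanly separates it from other dark regions.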
Interactive removal and ground truth for difficult shadow scenes
A user-centric method for fast, interactive, robust, and high-quality shadow removal is presented. Our algorithm can perform detection and removal in a range of difficult cases, such as highly textured and colored shadows. To perform detection, an on-the-fly learning approach is adopted, guided by two rough user inputs for the pixels of the shadow and the lit area. After detection, shadow removal is performed by registering the penumbra to a normalized frame, which allows efficient estimation of nonuniform shadow illumination changes, resulting in accurate and robust removal. Another major contribution of this work is the first validated, multi-scene-category ground truth for shadow removal algorithms. This data set, containing 186 images, eliminates inconsistencies between shadow and shadow-free images and provides a range of different shadow types such as soft, textured, colored, and broken shadows. Using this data, the most thorough comparison of state-of-the-art shadow removal methods to date is performed, showing our proposed algorithm to outperform the state of the art across several measures and shadow categories. To complement our data set, an online shadow removal benchmark website is also presented to encourage future open comparisons in this challenging field of research.
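The removal step can be reduced to a minimal sketch: from user-marked lit and shadow samples of the same surface, estimate a per-channel illumination scale and relight the shadow pixels. This ignores the paper's penumbra registration and nonuniform illumination handling; the sample values are synthetic:

```python
import numpy as np

# User-marked samples of the same surface, lit and in shadow (assumed values).
lit_samples = np.array([[200.0, 180.0, 160.0],
                        [196.0, 184.0, 156.0]])
shadow_samples = np.array([[100.0,  99.0,  96.0],
                           [ 98.0, 101.0,  92.0]])

# per-channel attenuation of the shadow, estimated from the two sample means
scale = lit_samples.mean(axis=0) / shadow_samples.mean(axis=0)
recovered = shadow_samples * scale   # shadow pixels relit toward the lit appearance
```

A single global scale works only for a hard, uniform umbra; handling soft penumbra boundaries is exactly what the paper's normalized-frame registration adds.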