
    Fast Shadow Detection from a Single Image Using a Patched Convolutional Neural Network

    In recent years, various methods for detecting shadows from a single image have been proposed and used in vision systems; however, most of them are not suitable for robotic applications because of their high computational cost. This paper introduces a fast shadow detection method based on a deep learning framework, with a time cost appropriate for robotic applications. In our solution, we first obtain a shadow prior map with the help of a multi-class support vector machine using statistical features. Then, we use a semantic-aware patch-level Convolutional Neural Network that trains efficiently on shadow examples by combining the original image and the shadow prior map. Experiments on benchmark datasets demonstrate that the proposed method significantly decreases the time complexity of shadow detection, by one or two orders of magnitude compared with state-of-the-art methods, without losing accuracy. Comment: 6 pages, 5 figures, Submitted to IROS 201
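    To make the patch-level idea concrete, here is a minimal sketch of a CNN that classifies a patch as shadow or non-shadow from a 4-channel input (RGB patch plus the SVM-derived shadow prior map). The layer sizes, patch size, and single-logit output are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class PatchShadowCNN(nn.Module):
    """Illustrative patch-level classifier: RGB patch + shadow prior map (4 input channels)."""
    def __init__(self, patch_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (patch_size // 4) ** 2, 64), nn.ReLU(),
            nn.Linear(64, 1),  # shadow / non-shadow logit for the patch
        )

    def forward(self, rgb_patch, prior_patch):
        # Concatenate the RGB patch with the shadow prior map along the channel axis.
        x = torch.cat([rgb_patch, prior_patch], dim=1)
        return self.classifier(self.features(x))

# Example: a batch of 8 patches of size 32x32 with their prior-map patches.
logits = PatchShadowCNN()(torch.rand(8, 3, 32, 32), torch.rand(8, 1, 32, 32))
```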

    Direction-aware Spatial Context Features for Shadow Detection

    Shadow detection is a fundamental and challenging task, since it requires an understanding of global image semantics and shadows appear against varied backgrounds. This paper presents a novel network for shadow detection that analyzes image context in a direction-aware manner. To achieve this, we first formulate a direction-aware attention mechanism in a spatial recurrent neural network (RNN) by introducing attention weights when aggregating spatial context features in the RNN. By learning these weights through training, we can recover direction-aware spatial context (DSC) for detecting shadows. This design is developed into the DSC module and embedded in a CNN to learn DSC features at different levels. Moreover, a weighted cross-entropy loss is designed to make the training more effective. We employ two common shadow detection benchmark datasets and perform various experiments to evaluate our network. Experimental results show that our network outperforms state-of-the-art methods, achieving 97% accuracy and a 38% reduction in balanced error rate. Comment: Accepted for oral presentation at CVPR 2018. The journal version of this paper is arXiv:1805.0463
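    The weighted cross-entropy loss mentioned above addresses the class imbalance between shadow and non-shadow pixels. Below is a minimal sketch of one common class-balanced weighting (weights proportional to the opposite class's frequency); the exact weighting scheme used in the paper may differ, so treat this as an assumption.

```python
import torch
import torch.nn.functional as F

def weighted_bce(logits, target):
    """Class-balanced binary cross-entropy: shadow pixels (usually the minority)
    are up-weighted by the non-shadow frequency and vice versa.
    The weighting scheme is an illustrative assumption, not necessarily the paper's."""
    n_pos = target.sum()
    n_neg = target.numel() - n_pos
    w_pos = n_neg / target.numel()   # weight applied to shadow pixels
    w_neg = n_pos / target.numel()   # weight applied to non-shadow pixels
    weights = torch.where(target > 0.5, w_pos, w_neg)
    return F.binary_cross_entropy_with_logits(logits, target, weight=weights)

# Example on a random 64x64 prediction map and binary mask.
loss = weighted_bce(torch.randn(2, 1, 64, 64),
                    torch.randint(0, 2, (2, 1, 64, 64)).float())
```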

    Shadow Detection in Aerial Images using Machine Learning

    Shadows are present in a wide range of aerial images, from forested scenes to urban environments. The presence of shadows degrades the performance of computer vision algorithms in a diverse set of applications such as image registration, object segmentation, and object detection and recognition. Therefore, detecting and mitigating shadows is of paramount importance and can significantly improve the performance of computer vision algorithms in these applications. Existing approaches to shadow detection in aerial images include chromaticity methods, texture-based methods, geometric and physics-based methods, and machine learning approaches using neural networks. In this thesis, we developed seven new approaches to shadow detection in aerial imagery: two new chromaticity-based methods (Shadow Detection using Blue Illumination (SDBI) and Edge-based Shadow Detection using Blue Illumination (Edge-SDBI)) and five machine learning methods consisting of two neural networks (SDNN and DIV-NN) and three convolutional neural networks (VSKCNN, SDCNN-ver1, and SDCNN-ver2). These algorithms were applied to five different aerial imagery datasets. Results were assessed using both qualitative (visual shadow masks) and quantitative techniques. Conclusions touch upon the various trade-offs between these approaches, including speed, training, accuracy, completeness, correctness, and quality.
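    The abstract only names the SDBI family of methods, but the general chromaticity idea (shadowed regions in aerial images are dark yet relatively blue because they are lit mainly by skylight) can be sketched as a simple threshold rule. The function and threshold values below are hypothetical illustrations of that idea, not the thesis's actual algorithms.

```python
import numpy as np

def sdbi_like_mask(rgb, blue_ratio_thr=0.40, intensity_thr=0.35):
    """Hypothetical chromaticity-style shadow mask in the spirit of blue-illumination
    methods: flag pixels that are dark overall but have a relatively high blue share.
    Thresholds and the exact ratio are illustrative assumptions."""
    rgb = rgb.astype(np.float32) / 255.0
    intensity = rgb.mean(axis=2)                        # per-pixel brightness
    blue_ratio = rgb[..., 2] / (rgb.sum(axis=2) + 1e-6)  # blue share of total chromaticity
    return (blue_ratio > blue_ratio_thr) & (intensity < intensity_thr)
```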

    A Neural Network for Interpolating Light-Sources

    This study combines two novel deterministic methods with a Convolutional Neural Network to develop a machine learning method that is aware of the directionality of light in images. The first method detects shadows in terrestrial images using a sliding-window algorithm that extracts specific hue and value features from the image. The second method interpolates light sources using a line algorithm that detects the direction of light sources in the image. Both methods are single-image solutions and compute their values deterministically from the image alone, without the need for illumination models. They extract real-time geometry from the light source in an image rather than mapping an illumination model onto the image, which is the approach taken by existing models. Finally, these outputs are used to train a Convolutional Neural Network. The resulting network is more accurate than previous shadow detection methods and can accurately predict light-source direction, and thus orientation, which is a considerable innovation for an unsupervised CNN. It is also significantly faster than the deterministic methods. We also present a reference dataset for the problem of shadow and light-direction detection.
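    As a rough illustration of the first deterministic step, the sketch below runs a sliding window over the image and records mean hue and value per window, treating dark windows as crude shadow candidates. The specific features, window size, and threshold are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np
import cv2

def window_hue_value_features(bgr, win=16, stride=16):
    """Illustrative sliding-window pass: each window yields its mean hue and mean
    value from HSV; low-value windows are kept as crude shadow candidates that a
    later step could use to estimate light direction. Threshold is an assumption."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    feats, shadow_windows = [], []
    for y in range(0, bgr.shape[0] - win + 1, stride):
        for x in range(0, bgr.shape[1] - win + 1, stride):
            patch = hsv[y:y + win, x:x + win]
            mean_hue = patch[..., 0].mean()
            mean_val = patch[..., 2].mean() / 255.0
            feats.append((x, y, mean_hue, mean_val))
            if mean_val < 0.35:  # illustrative darkness threshold
                shadow_windows.append((x, y))
    return feats, shadow_windows
```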

    AdapterShadow: Adapting Segment Anything Model for Shadow Detection

    The Segment Anything Model (SAM) has shown spectacular performance in segmenting generic objects, especially when elaborate prompts are provided. However, SAM has two drawbacks. On the one hand, it fails to segment specific targets, e.g., shadows or lesions in medical images. On the other hand, manually specifying prompts is extremely time-consuming. To overcome these problems, we propose AdapterShadow, which adapts SAM for shadow detection. To adapt SAM to shadow images, trainable adapters are inserted into SAM's frozen image encoder, since training the full SAM model is both time- and memory-consuming. Moreover, we introduce a novel grid sampling method to generate dense point prompts, which helps to segment shadows automatically without any manual intervention. Extensive experiments on four widely used benchmark datasets demonstrate the superior performance of the proposed method. Code is publicly available at https://github.com/LeipingJie/AdapterShadow
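    To illustrate the two ingredients named in the abstract, the sketch below shows (a) a generic bottleneck adapter of the kind typically inserted into a frozen transformer encoder, and (b) a uniform grid of point prompts. Neither is AdapterShadow's exact design nor SAM's actual API; both are generic stand-ins under those assumptions.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic bottleneck adapter (down-project, nonlinearity, up-project, residual).
    Illustrative of the trainable modules inserted into a frozen encoder; the real
    AdapterShadow design may differ."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # residual keeps the frozen path intact

def grid_point_prompts(h, w, step=64):
    """Uniform grid of (x, y) point prompts over an h x w image: a simple stand-in
    for the paper's grid sampling of dense prompts."""
    ys, xs = torch.meshgrid(torch.arange(step // 2, h, step),
                            torch.arange(step // 2, w, step), indexing="ij")
    return torch.stack([xs.flatten(), ys.flatten()], dim=-1)  # (N, 2) pixel coordinates

# Example: adapt 768-dim tokens and build prompts for a 1024x1024 image.
tokens = BottleneckAdapter(768)(torch.rand(1, 196, 768))
points = grid_point_prompts(1024, 1024)
```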

    Road Quality Classification

    Automated evaluation of road quality can be helpful to authorities and also to road users who seek high-quality roads to maximize their driving pleasure. This thesis proposes a model that classifies road images into five qualitative categories based on overall appearance. We present a new manually annotated dataset collected from Google Street View. The dataset classes were designed for motorcyclists, but they are also applicable to other road users. We experimented with Convolutional Neural Networks, involving both custom architectures and pre-trained networks such as MobileNet and DenseNet, and we also ran many experiments with preprocessing methods such as shadow removal and contrast-limited adaptive histogram equalization (CLAHE). Our proposed classification model uses CLAHE and achieves 71% accuracy on a test set. A visual check showed that the model is usable for its designed purpose despite the modest accuracy, since the image data are often ambiguous and hard to label even for humans.
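    The two concrete ingredients in this abstract, CLAHE preprocessing and a pre-trained backbone with a five-way head, can be sketched as follows. Parameter values, the choice of MobileNetV2, and the LAB-channel application of CLAHE are illustrative assumptions, not the thesis's exact pipeline.

```python
import cv2
import torch.nn as nn
from torchvision import models

def clahe_preprocess(bgr, clip_limit=2.0, tile=(8, 8)):
    """Apply CLAHE to the lightness channel only, so local contrast is equalized
    while colour is preserved; parameter values are illustrative."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile).apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

def road_quality_model(num_classes=5):
    """Pre-trained MobileNetV2 with a new 5-way classification head:
    a typical transfer-learning setup, assumed here for illustration."""
    net = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    net.classifier[1] = nn.Linear(net.last_channel, num_classes)
    return net
```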