
    Development of Neural Network Based Adaptive Change Detection Technique for Land Terrain Monitoring with Satellite and Drone Images

    The role of satellite images in day-to-day life is increasing for both civil and defence applications. One major defence application is assessing the behaviour of the terrain ahead of troop movement, so that smooth transportation of the troops can be ensured. It is therefore important to identify the terrain in advance, which is quite possible with the use of satellite images. However, to achieve accurate results the data used must be precise and reliable, and this is challenging with a satellite image alone. Therefore, in this paper an attempt has been made to fuse images obtained from a drone and a satellite to extract precise terrain information such as bare land, dense vegetation and sparse vegetation. For this purpose, a test area near Roorkee, Uttarakhand, India has been selected, and drone and Sentinel-2 data have been acquired for the same dates. A neural network based technique has been proposed to obtain precise terrain information from the Sentinel-2 image, and a quantitative analysis was carried out to assess the terrain information using change detection. It is observed that the proposed technique has good potential to precisely identify bare land, dense vegetation, and sparse vegetation, which may be quite useful for defence as well as civilian applications.
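    A minimal sketch of the kind of per-pixel workflow the abstract describes is given below, assuming a small multilayer perceptron over Sentinel-2 band reflectances followed by a post-classification comparison between two dates; the band count, class labels, and synthetic data are illustrative and not taken from the paper.

```python
# Hypothetical sketch: per-pixel terrain classification on Sentinel-2 bands
# followed by a simple post-classification change comparison. Band indices,
# array shapes, and the synthetic training data are illustrative assumptions,
# not the paper's actual pipeline.
import numpy as np
from sklearn.neural_network import MLPClassifier

CLASSES = ["bare land", "sparse vegetation", "dense vegetation"]

def train_terrain_classifier(pixels, labels):
    """pixels: (n_samples, n_bands) reflectances; labels: (n_samples,) class ints."""
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    clf.fit(pixels, labels)
    return clf

def classify_scene(clf, scene):
    """scene: (rows, cols, n_bands) -> (rows, cols) class map."""
    rows, cols, bands = scene.shape
    return clf.predict(scene.reshape(-1, bands)).reshape(rows, cols)

def change_map(class_map_t1, class_map_t2):
    """Post-classification comparison: True where the terrain class changed."""
    return class_map_t1 != class_map_t2

# Example with synthetic data (4 Sentinel-2 bands per pixel).
rng = np.random.default_rng(0)
X = rng.random((300, 4))
y = rng.integers(0, 3, 300)
clf = train_terrain_classifier(X, y)
t1 = rng.random((64, 64, 4))
t2 = rng.random((64, 64, 4))
changed = change_map(classify_scene(clf, t1), classify_scene(clf, t2))
print("changed pixels:", int(changed.sum()))
```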

    Small-Object Detection in Remote Sensing Images with End-to-End Edge-Enhanced GAN and Object Detector Network

    The detection performance of small objects in remote sensing images is not satisfactory compared to large objects, especially in low-resolution and noisy images. A generative adversarial network (GAN)-based model called enhanced super-resolution GAN (ESRGAN) shows remarkable image enhancement performance, but the reconstructed images miss high-frequency edge information. Therefore, object detection performance degrades for small objects on recovered noisy and low-resolution remote sensing images. Inspired by the success of edge-enhanced GAN (EEGAN) and ESRGAN, we apply a new edge-enhanced super-resolution GAN (EESRGAN) to improve the image quality of remote sensing images and use different detector networks in an end-to-end manner, where the detector loss is backpropagated into the EESRGAN to improve the detection performance. We propose an architecture with three components: ESRGAN, an Edge Enhancement Network (EEN), and a detection network. We use residual-in-residual dense blocks (RRDB) for both the ESRGAN and the EEN, and for the detector network we use the faster region-based convolutional network (FRCNN) (two-stage detector) and the single-shot multi-box detector (SSD) (one-stage detector). Extensive experiments on a public (cars overhead with context) and a self-assembled (oil and gas storage tank) satellite dataset show superior performance of our method compared to standalone state-of-the-art object detectors. Comment: 27 pages; accepted for publication in the MDPI Remote Sensing journal. GitHub repository: https://github.com/Jakaria08/EESRGAN (implementation).
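    The end-to-end coupling described above, where the detector loss flows back into the enhancement network, can be sketched as a joint loss in PyTorch. The tiny Generator and Detector modules below are stand-ins, not the paper's RRDB-based EESRGAN, EEN, or FRCNN/SSD heads.

```python
# Minimal sketch of the end-to-end idea: the detector loss is backpropagated
# through the super-resolution generator so enhancement is optimised for
# detection, not only for pixel fidelity. Modules and losses are placeholders.
import torch
import torch.nn as nn

class Generator(nn.Module):          # stand-in for ESRGAN + edge-enhancement net
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class Detector(nn.Module):           # stand-in for the downstream detector
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(8, 4))
    def forward(self, x):
        return self.net(x)           # e.g. one box regression per image

gen, det = Generator(), Detector()
opt = torch.optim.Adam(list(gen.parameters()) + list(det.parameters()), lr=1e-4)

lr_img = torch.rand(2, 3, 64, 64)    # low-resolution / noisy input
hr_img = torch.rand(2, 3, 64, 64)    # high-resolution target
gt_box = torch.rand(2, 4)            # ground-truth boxes (illustrative)

sr = gen(lr_img)
sr_loss = nn.functional.l1_loss(sr, hr_img)        # image reconstruction loss
det_loss = nn.functional.l1_loss(det(sr), gt_box)  # detection loss on the SR image
loss = sr_loss + det_loss                          # joint loss reaches the generator
opt.zero_grad()
loss.backward()
opt.step()
```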

    Complete Model for Automatic Object Detection and Localisation on Aerial Images using Convolutional Neural Networks

    In this paper, a novel approach for automatic object detection and localisation on aerial images is proposed. The proposed model does not use ground control points (GCPs) and consists of three major phases. In the first phase, an optimal flight route is planned in order to capture the area of interest and aerial images are acquired using an unmanned aerial vehicle (UAV). A mosaic of the collected images is then created to obtain a larger field-of-view panoramic image of the area of interest, and the resulting image mosaic is used to create a georeferenced map. The image mosaic is also used to detect objects of interest using an approach based on convolutional neural networks.
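    As a rough illustration of the mosaic-then-detect stage, the following sketch stitches overlapping UAV frames with OpenCV and hands the mosaic to a detector placeholder; the file names and the use of OpenCV's Stitcher are assumptions, not the authors' implementation.

```python
# Hedged sketch: build a panoramic mosaic from overlapping UAV frames, then
# pass the mosaic to a CNN-based detector (represented here by a placeholder).
import cv2

def build_mosaic(image_paths):
    images = [cv2.imread(p) for p in image_paths]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # SCANS mode suits nadir aerial imagery
    status, mosaic = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return mosaic

def detect_objects(mosaic):
    # Placeholder for the CNN detector applied to the mosaic.
    raise NotImplementedError

if __name__ == "__main__":
    # Illustrative file names, not real data from the paper.
    mosaic = build_mosaic(["frame_001.jpg", "frame_002.jpg", "frame_003.jpg"])
    cv2.imwrite("mosaic.jpg", mosaic)
```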

    Evaluation of automated pavement defect detection with YOLOv3: the impact of data collection techniques

    This study involved training six neural networks with customised configurations to automatically detect pavement defects, using the YOLOv3 framework. Images and videos depicting pavement defects were acquired using smartphones and action cameras, leading to the organisation of six distinct datasets. Each neural network was trained and validated with the aim of reaching optimal precision in automated object detection. The application of YOLOv3 enabled efficient defect surveys, contributing to the diagnosis of pavement quality and providing support for decision-making in road transport management. At the end of the analysis, the most effective framing method reached a precision rate of 98%. The results demonstrate the effectiveness of YOLOv3 in identifying defects, highlighting the importance of collection and framing techniques and contributing to the existing body of knowledge on automated pavement defect detection.
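    For readers unfamiliar with running a trained YOLOv3 model, the sketch below shows inference on a pavement image with OpenCV's DNN module; the configuration and weights file names, the 416x416 input size, and the 0.5 confidence threshold are assumptions rather than the study's settings.

```python
# Illustrative YOLOv3 inference on a pavement image with OpenCV's DNN module.
# File names, input size, and threshold are assumptions for demonstration only.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3-pavement.cfg", "yolov3-pavement.weights")
layer_names = net.getUnconnectedOutLayersNames()

img = cv2.imread("pavement.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(layer_names)

h, w = img.shape[:2]
for out in outputs:
    for det in out:                      # det = [cx, cy, bw, bh, objectness, class scores...]
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = float(scores[class_id])
        if conf > 0.5:                   # assumed confidence threshold
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            x, y = int(cx - bw / 2), int(cy - bh / 2)
            print(f"defect class {class_id} at ({x},{y},{int(bw)},{int(bh)}) conf={conf:.2f}")
```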

    FastAER Det: Fast Aerial Embedded Real-Time Detection

    Automated detection of objects in aerial imagery is the basis for many applications, such as search and rescue operations, activity monitoring or mapping. However, in many cases it is beneficial to employ a detector on board the aerial platform in order to avoid latencies, make basic decisions within the platform and save transmission bandwidth. In this work, we address the task of designing such an on-board aerial object detector, which must meet certain requirements in accuracy, inference speed and power consumption. For this, we first outline a generally applicable design process for such on-board methods and then follow this process to develop our own set of models for the task. Specifically, we first optimise a baseline model with regard to accuracy without increasing runtime. We then propose a fast detection head to significantly improve runtime at little cost in accuracy. Finally, we discuss several aspects to consider during deployment and in the runtime environment. Our resulting four models, which operate at 15, 30, 60 and 90 FPS on an embedded Jetson AGX device, are published for future benchmarking and comparison by the community.
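    A minimal throughput check of the kind used when targeting fixed FPS budgets on an embedded device might look like the sketch below; the placeholder model, input size, and iteration counts are illustrative and unrelated to the published detectors.

```python
# Rough latency/FPS measurement sketch: warm up, time repeated forward passes,
# report mean latency and frames per second. CPU-only for simplicity; on GPU,
# synchronisation would be needed before reading the timer.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 16, 3, padding=1)).eval()
x = torch.rand(1, 3, 512, 512)

with torch.no_grad():
    for _ in range(10):                 # warm-up iterations
        model(x)
    iters = 100
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    elapsed = time.perf_counter() - start

print(f"mean latency: {1000 * elapsed / iters:.2f} ms, ~{iters / elapsed:.1f} FPS")
```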

    Graceful Degradation and Related Fields

    When machine learning models encounter data that is out of the distribution on which they were trained, they have a tendency to behave poorly, most prominently showing over-confidence in erroneous predictions. Such behaviours can have disastrous effects on real-world machine learning systems. In this field, graceful degradation refers to the optimisation of model performance as it encounters out-of-distribution data. This work presents a definition and discussion of graceful degradation and where it can be applied in deployed visual systems. Following this, a survey of relevant areas is undertaken, novelly splitting the graceful degradation problem into active and passive approaches. In passive approaches, graceful degradation is handled and achieved by the model in a self-contained manner; in active approaches, the model is updated upon encountering epistemic uncertainties. This work communicates the importance of the problem and aims to prompt the development of machine learning strategies that are aware of graceful degradation.
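    A passive approach can be as simple as abstaining when the model's confidence is low, for example by thresholding the maximum softmax probability; the classifier and threshold in the sketch below are illustrative, not taken from the survey.

```python
# Sketch of a passive strategy: flag likely out-of-distribution inputs by
# thresholding the maximum softmax probability, so the system can abstain
# instead of emitting an over-confident wrong prediction.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
THRESHOLD = 0.7   # assumed abstention threshold

def predict_or_abstain(x):
    with torch.no_grad():
        probs = torch.softmax(classifier(x), dim=1)
    conf, label = probs.max(dim=1)
    # Abstain (return -1) when confidence falls below the threshold.
    return torch.where(conf >= THRESHOLD, label, torch.full_like(label, -1))

batch = torch.rand(4, 3, 32, 32)
print(predict_or_abstain(batch))
```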