109 research outputs found

    Detection of sugarcane planting rows from UAV images using Semantic Segmentation and the Radon Transform

    In recent years, UAVs (Unmanned Aerial Vehicles) have become increasingly popular in the agricultural sector, promoting and enabling aerial image monitoring in both scientific and business contexts. Images captured by UAVs are fundamental for precision farming practices, as they enable activities that rely on low- and medium-altitude imagery. After sowing, the planted area may change drastically over time due to erosion, gaps, death and drying of part of the crop, animal interventions, etc. Detecting the crop rows is therefore strongly important for planning the harvest, estimating the use of inputs, controlling production costs, counting plant stands, correcting sowing failures early, watering more efficiently, etc. In addition, the geolocation information of the detected rows enables the use of autonomous machinery and a better application of inputs, reducing financial costs and harm to the environment. In this work we address the problem of detection and segmentation of sugarcane crop rows using UAV imagery. First, we experimented with an approach based on a Genetic Algorithm (GA) combined with Otsu's method to produce binarized images. Then, motivated by the recent relevance of Semantic Segmentation in the literature, its levels of abstraction, and the unsatisfactory results of Otsu's method combined with the GA, we proposed a new approach based on a Semantic Segmentation Network (SSN), divided into two steps. First, we use a Convolutional Neural Network (CNN) to automatically segment the images, classifying their regions as crop rows or as non-planted soil. Then, we use the Radon transform to reconstruct and improve the already segmented rows, making them more uniform and grouping fragments of rows and loose plants belonging to the same planting row.
We compare our results with segmentation performed manually by experts, and the results demonstrate the efficiency and feasibility of our approach for the proposed task. Master's dissertation (Dissertação de Mestrado).
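The Radon-transform step can be illustrated with a minimal pure-NumPy sketch (the function and synthetic mask below are illustrative, not the dissertation's implementation): projecting a binary segmentation mask at every angle, the dominant crop-row orientation appears as the strongest peak of the sinogram.

```python
import numpy as np

def radon_naive(mask, angles_deg):
    """Naive Radon transform: for each angle, rotate the mask about its
    centre (nearest-neighbour inverse mapping) and sum along columns."""
    h, w = mask.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    sinogram = np.zeros((len(angles_deg), w))
    for i, a in enumerate(angles_deg):
        t = np.deg2rad(a)
        xr = np.cos(t) * (xs - cx) - np.sin(t) * (ys - cy) + cx
        yr = np.sin(t) * (xs - cx) + np.cos(t) * (ys - cy) + cy
        xi = np.clip(np.rint(xr).astype(int), 0, w - 1)
        yi = np.clip(np.rint(yr).astype(int), 0, h - 1)
        sinogram[i] = mask[yi, xi].sum(axis=0)  # project onto the x axis
    return sinogram

# synthetic segmentation mask: one vertical "crop row"
mask = np.zeros((64, 64))
mask[:, 32] = 1.0
angles = np.arange(0, 180, 1)
sino = radon_naive(mask, angles)
best_angle = angles[np.unravel_index(sino.argmax(), sino.shape)[0]]
```

Peaks in the sinogram identify both the orientation and the offset of each row, which is what allows fragmented detections along the same line to be grouped back together.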

    Image forgery detection using textural features and deep learning

    The exponential growth and advancement of technology have made it quite convenient to share visual data, imagery, and video through a vast array of available platforms. With the rapid development of Internet and multimedia technologies, efficient storage and management, fast transmission and sharing, and real-time analysis and processing of digital media resources have gradually become an indispensable part of many people's work and life. Undoubtedly, such technological growth has made forging visual data relatively easy and realistic without leaving any obvious visual clues. Abuse of such tampered data can deceive the public and spread misinformation amongst the masses. Considering the facts mentioned above, image forensics must be used to authenticate and maintain the integrity of visual data. For this purpose, we propose a passive image forgery detection technique based on the textural and noise inconsistencies introduced into an image by the tampering operation. The proposed Image Forgery Detection Network (IFD-Net) uses a Convolutional Neural Network (CNN) based architecture to classify images as forged or pristine. The textural and noise residual patterns are extracted from the images using the Local Binary Pattern (LBP) and the Noiseprint model. The images classified as forged are then used in experiments that analyze the difficulties of localizing the forged parts in these images with different deep learning segmentation models. Experimental results show that the IFD-Net performs comparably to other image forgery detection methods on the CASIA v2.0 dataset. The results also discuss the reasons behind the difficulties of segmenting the forged regions in the images of the CASIA v2.0 dataset.
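The LBP descriptor used for the textural residual can be sketched in a few lines of NumPy (a minimal 3x3 variant for illustration, not the exact configuration used by IFD-Net):

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel receives an
    8-bit code, one bit per neighbour, set when that neighbour is >= the
    centre pixel. Tampered regions tend to disturb these local codes."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from top-left; each contributes one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= center).astype(np.uint8) << bit
    return code

flat = np.full((5, 5), 7, dtype=np.uint8)
codes = lbp_3x3(flat)  # flat region: every neighbour ties the centre
```

Histograms of these codes over image blocks give the texture features that a CNN can contrast against the noise residual from Noiseprint.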

    Semantic Segmentation based deep learning approaches for weed detection

    The global increase in herbicide use to control weeds has led to issues such as the evolution of herbicide-resistant weeds, off-target herbicide movement, etc. Precision agriculture advocates Site-Specific Weed Management (SSWM) to achieve precise application of the right amount of herbicide and reduce off-target herbicide movement. Recent advancements in Deep Learning (DL) have opened possibilities for adaptive and accurate weed recognition in field-based SSWM applications with traditional and emerging spraying equipment; however, challenges exist in identifying the DL model structure and training the model appropriately for accurate and rapid application over varying crop/weed growth stages and environments. In our study, an encoder-decoder based DL architecture was proposed that performs pixel-wise Semantic Segmentation (SS) classification of crop, soil, and weed patches in the field. The objective of this study was to develop a robust weed detection algorithm using DL techniques that can accurately and reliably locate weed infestations in low-altitude Unmanned Aerial Vehicle (UAV) imagery at acceptable application speed. Two encoder-decoder based SS models, LinkNet and UNet, were developed using transfer learning techniques. We applied measures such as backpropagation optimization and refinement of the training dataset to address the class-imbalance problem, a common issue in developing weed detection models. The LinkNet model with ResNet18 as the encoder and the 'Focal loss' loss function achieved the highest mean and class-wise Intersection over Union scores across class categories when predicting on an unseen dataset.
The developed state-of-the-art model did not require a large amount of training data, and the techniques used to develop it provide a promising approach that performs better than existing SS-based weed detection models. The proposed model could be used for weed detection on aerial UAV imagery and for real-time SSWM applications. Advisor: Yeyin Sh
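The focal loss behind the best LinkNet result can be sketched for the binary case (a generic NumPy version for illustration, not the authors' training code; `gamma` and `alpha` follow the commonly used defaults):

```python
import numpy as np

def binary_focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss: cross-entropy scaled by (1 - p_t)**gamma, so
    well-classified (easy) pixels contribute little and the rare
    weed class is not swamped by abundant soil/crop pixels."""
    p = np.clip(p, 1e-7, 1.0 - 1e-7)
    p_t = np.where(y == 1, p, 1.0 - p)          # probability of the true class
    a_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balancing weight
    return float(np.mean(-a_t * (1.0 - p_t) ** gamma * np.log(p_t)))

easy = binary_focal_loss(np.array([0.95]), np.array([1]))  # confident, correct
hard = binary_focal_loss(np.array([0.30]), np.array([1]))  # poorly classified
```

Because the `(1 - p_t)**gamma` factor shrinks toward zero for confident correct predictions, training gradients concentrate on the minority weed pixels, which is why this loss helps with the class-imbalance problem the abstract describes.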

    Automated High-resolution Earth Observation Image Interpretation: Outcome of the 2020 Gaofen Challenge

    In this article, we introduce the 2020 Gaofen Challenge and its relevant scientific outcomes. The 2020 Gaofen Challenge is an international competition organized by the China High-Resolution Earth Observation Conference Committee and the Aerospace Information Research Institute, Chinese Academy of Sciences, and technically cosponsored by the IEEE Geoscience and Remote Sensing Society and the International Society for Photogrammetry and Remote Sensing. It aims to promote the academic development of automated high-resolution earth observation image interpretation. Six independent tracks were organized in this challenge, covering challenging problems in object detection and semantic segmentation. With the development of convolutional neural networks, deep-learning-based methods have achieved good performance on image interpretation. In this article, we report the details and the best-performing methods presented so far within the scope of this challenge.

    Two-layer ensemble of deep learning models for medical image segmentation

    One of the most important areas in medical image analysis is segmentation, in which raw image data is partitioned into structured and meaningful regions to gain further insights. Using Deep Neural Networks (DNN), AI-based automated segmentation algorithms can potentially assist physicians with more effective imaging-based diagnoses. However, since it is difficult to acquire high-quality ground truths for medical images and DNN hyperparameters require significant manual tuning, the results of DNN-based medical models may be limited. A potential solution is to combine multiple DNN models using ensemble learning. We propose a two-layer ensemble of deep learning models in which the prediction for each training-image pixel made by each model in the first layer is used as augmented data of the training image for the second layer of the ensemble. The predictions of the second layer are then combined using a weight-based scheme found by solving linear regression problems. To the best of our knowledge, ours is the first work to propose a two-layer ensemble of deep learning models with an augmented-data technique for medical image segmentation. Experiments conducted on five different medical image datasets for diverse segmentation tasks show that the proposed method achieves better results, in terms of several performance metrics, than some well-known benchmark algorithms. Our proposed two-layer ensemble of deep learning models for segmentation of medical images is effective compared to several benchmark algorithms. The research can be expanded in several directions, such as image classification.
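The weight-based combination found by solving a linear regression problem can be sketched with ordinary least squares (the synthetic per-pixel predictions below are illustrative, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.random(500)                 # ground-truth pixel values in [0, 1]
# hypothetical inputs to the combiner: three models' per-pixel predictions,
# each a noisy version of the truth; shape (n_pixels, n_models)
preds = np.stack(
    [truth + 0.1 * rng.standard_normal(500) for _ in range(3)], axis=1)

# combination weights = least-squares solution of  preds @ w ~= truth
w, *_ = np.linalg.lstsq(preds, truth, rcond=None)
combined = preds @ w

def mse(a):
    return float(np.mean((a - truth) ** 2))

best_single = min(mse(preds[:, m]) for m in range(3))
```

Since each individual model is itself a linear combination of the columns (with a one-hot weight vector), the least-squares combination can never have a worse squared error on the fitting data than the best single model.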

    Developing deep learning methods for aquaculture applications

    Alzayat Saleh developed a computer vision framework that can aid aquaculture experts in analyzing fish habitats. In particular, he developed a label-efficient method of training a CNN-based fish detector, and also a model that estimates fish weight directly from an image.