
    LudVision: Remote Detection of Exotic Invasive Aquatic Floral Species using Data from a Drone-Mounted Multispectral Sensor

    Remote sensing is the process of detecting and monitoring the physical characteristics of an area by measuring its reflected and emitted radiation at a distance. It is broadly used to monitor ecosystems, mainly for their preservation. There have been ever-growing reports of invasive species affecting the natural balance of ecosystems. Exotic invasive species have a critical impact when introduced into new ecosystems and may lead to the extinction of native species. In this study, we focus on Ludwigia peploides, considered by the European Union to be an aquatic invasive species. Its presence can have negative impacts on the surrounding ecosystem and on human activities such as agriculture, fishing, and navigation. Our goal was to develop a method to identify the presence of the species. To achieve this, we used images collected by a drone-mounted multispectral sensor. Due to the lack of publicly available data sets containing Ludwigia peploides, we had to create our own data set. We started by carefully studying all the available options. We first experimented with satellite images, but it was impossible to identify the targeted species due to their low resolution. Thus, we decided to use a drone-mounted multispectral sensor. Unfortunately, due to budget limitations, we could not acquire the highly specialized equipment more commonly used in remote sensing. However, we were confident that our setup would be enough to extract the species' spectral signature, and that the higher resolution compared to satellites would allow us to use deep learning models to identify the species. The use of the drone allowed for better operational flexibility and coverage of a large area. The multispectral sensor allowed us to leverage the information of two additional bands outside the visible spectrum.
After visiting the study site multiple times and capturing data at various times of the day, we created a representative data set with different atmospheric conditions. After the data collection, we proceeded to the pre-processing and annotation steps to obtain a usable data set. In later stages, we proved that extracting the species' spectral signature from our data set is possible. This was a significant conclusion, as it proved that it is indeed possible to differentiate the species' spectral signature with equipment that is not as advanced and specialized as that used in other studies. After having a data set, we focused on the next step, which was to develop and validate a method able to identify Ludwigia p. in our data. We decided to use semantic segmentation models to identify the species. Given that we only have two additional bands compared to traditional RGB images, we could not approach the problem as a standard remote sensing spectroscopy problem. By using semantic segmentation models, we can leverage both the capabilities of these models to recognize objects and the multispectral nature of our data. Fundamentally, the model has the same behavior as usual but has access to the information of two additional bands. We started by using an existing state-of-the-art semantic segmentation model adapted to handle our data. After doing some initial tests and establishing a baseline, we proposed and implemented some changes to the existing model. The goal of the modifications was to create a model with lower training times and better performance in detecting Ludwigia p. at high altitudes. The result is a new model better suited to our data and application.
Our model is faster in terms of training time while maintaining similar performance, and shows a slight performance increase in high-altitude images.
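One common way to adapt a pretrained RGB segmentation model to five-band input, as described above, is to expand the first convolution layer's weights. The sketch below initializes the two extra band channels from the mean of the pretrained RGB kernels; this is a standard heuristic, not necessarily the exact strategy used in the thesis:

```python
import numpy as np

def expand_input_channels(rgb_weights, n_extra):
    """Expand pretrained first-layer conv weights from 3 (RGB) input
    channels to 3 + n_extra. Each extra band channel is initialized with
    the mean of the RGB kernels, so pretrained features remain usable.

    rgb_weights: array of shape (out_channels, 3, kH, kW)
    """
    mean_kernel = rgb_weights.mean(axis=1, keepdims=True)  # (out, 1, kH, kW)
    extra = np.repeat(mean_kernel, n_extra, axis=1)        # (out, n_extra, kH, kW)
    return np.concatenate([rgb_weights, extra], axis=1)

# Hypothetical 64-filter 3x3 first layer, extended for 2 extra spectral bands.
w_rgb = np.random.randn(64, 3, 3, 3)
w_5band = expand_input_channels(w_rgb, n_extra=2)
print(w_5band.shape)  # (64, 5, 3, 3)
```

The rest of the network is unchanged; only the input layer sees the additional bands.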

    Multiscale spatial-spectral convolutional network with image-based framework for hyperspectral imagery classification.

    Jointly using spatial and spectral information has been widely applied to hyperspectral image (HSI) classification. In particular, convolutional neural networks (CNNs) have gained attention in recent years due to their detailed representation of features. However, most CNN-based HSI classification methods mainly use image patches as classifier input. This limits the use of spatial neighborhood information and reduces processing efficiency in training and testing. To overcome this problem, we propose an image-based classification framework that is efficient and straightforward. Based on this framework, we propose a multiscale spatial-spectral CNN for HSIs (HyMSCN) to integrate both multiple-receptive-field fused features and multiscale spatial features at different levels. The fused features are exploited using a lightweight block called the multiple receptive field feature block (MRFF), which contains various types of dilated convolution. By fusing multiple receptive field features and multiscale spatial features, the HyMSCN has a comprehensive feature representation for classification. Experimental results on three real hyperspectral images prove the efficiency of the proposed framework. The proposed method also achieves superior performance for HSI classification.
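The dilated convolutions in a block like MRFF enlarge receptive fields without adding parameters. A small sketch of the standard receptive-field arithmetic for stride-1 layers illustrates the effect (the layer configurations are illustrative, not taken from the paper):

```python
def receptive_field(layers):
    """Receptive field (in pixels) of a stack of stride-1 conv layers.
    layers: list of (kernel_size, dilation) pairs."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d  # each layer adds (k-1)*d pixels of context
    return rf

# Three 3x3 convs without dilation vs. with dilation rates 1, 2, 4:
print(receptive_field([(3, 1), (3, 1), (3, 1)]))  # 7
print(receptive_field([(3, 1), (3, 2), (3, 4)]))  # 15
```

Mixing several dilation rates in parallel, as MRFF-style blocks do, yields features at multiple receptive fields at the same spatial resolution.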

    Neural Architecture Search for Image Segmentation and Classification

    Deep learning (DL) is a class of machine learning algorithms that relies on deep neural networks (DNNs) for computations. Unlike traditional machine learning algorithms, DL can learn from raw data directly and effectively. Hence, DL has been successfully applied to tackle many real-world problems. When applying DL to a given problem, the primary task is designing the optimum DNN. This task relies heavily on human expertise, is time-consuming, and requires many trial-and-error experiments. This thesis aims to automate the laborious task of designing the optimum DNN by exploring the neural architecture search (NAS) approach. Here, we propose two new NAS algorithms for two real-world problems: pedestrian lane detection for assistive navigation and hyperspectral image segmentation for biosecurity scanning. Additionally, we also introduce a new dataset-agnostic predictor of neural network performance, which can be used to speed up NAS algorithms that require the evaluation of candidate DNNs.
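The way a performance predictor speeds up NAS can be illustrated with a toy random-search loop: candidates are scored by the cheap predictor instead of being trained. The search space and predictor below are purely hypothetical placeholders, not the ones proposed in the thesis:

```python
import random

def random_search_nas(search_space, predictor, n_trials=50, seed=0):
    """Toy NAS loop: sample candidate architectures and keep the one the
    performance predictor scores highest, avoiding full training runs."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = {name: rng.choice(opts) for name, opts in search_space.items()}
        score = predictor(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

space = {"depth": [2, 4, 8], "width": [16, 32, 64], "dilation": [1, 2]}
# Hypothetical predictor: rewards depth and width, penalizes excessive depth.
predictor = lambda a: a["depth"] * a["width"] - 0.5 * a["depth"] ** 2
best, score = random_search_nas(space, predictor)
print(best, score)
```

Real NAS algorithms replace random sampling with evolutionary or gradient-based search, and the predictor with a learned model, but the evaluate-cheaply-then-select structure is the same.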

    On the Application of Data Clustering Algorithm used in Information Retrieval for Satellite Imagery Segmentation

    This study proposes an automated technique for segmenting satellite imagery using unsupervised learning. Autoencoders, a type of neural network, are employed for dimensionality reduction and feature extraction. The study evaluates different segmentation architectures and encoders and identifies the best-performing combination as the DeepLabv3+ architecture with a ResNet-152 encoder. This approach achieves high performance scores across multiple metrics and can be beneficial in various fields, including agriculture, land use monitoring, and disaster response.
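The dimensionality-reduction role an autoencoder plays here can be approximated with a linear sketch via SVD (a linear autoencoder with tied weights recovers the same subspace as PCA). This is an illustration of the encode/decode idea only, not the network used in the study:

```python
import numpy as np

def linear_autoencode(X, k):
    """Minimal linear 'autoencoder': project data onto the top-k principal
    directions (encode) and map back (decode).

    X: (n_samples, n_features) data matrix; k: code dimension."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    codes = (X - mu) @ Vt[:k].T          # encoder: n_features -> k
    recon = codes @ Vt[:k] + mu          # decoder: k -> n_features
    return codes, recon

# Synthetic data lying in a 2-D subspace of a 6-D feature space:
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 6))
codes, recon = linear_autoencode(X, k=2)
print(codes.shape)  # (100, 2)
```

Nonlinear autoencoders generalize this by replacing the linear maps with neural networks, which is what allows them to extract richer features for segmentation.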

    DPSA: Dense pixelwise spatial attention network for hatching egg fertility detection

    © 2020 SPIE and IS&T. Deep convolutional neural networks show good prospects in the fertility detection and classification of specific pathogen-free hatching egg embryos in the production of avian influenza vaccine, and our previous work has mainly investigated three factors of networks to push performance: depth, width, and cardinality. However, an important problem remains: feeble embryos with weak blood vessels interfere with the classification of resilient fertile ones. Inspired by fine-grained classification, we introduce the attention mechanism into our model by proposing a dense pixelwise spatial attention module combined with the existing channel attention through depthwise separable convolutions to further enhance the network's class-discriminative ability. In our fused attention module, depthwise convolutions are used for channel-specific feature learning, and dilated convolutions with different sampling rates are adopted to capture spatial multiscale context and preserve rich detail, which can maintain high resolution and increase receptive fields simultaneously. The attention mask with strong semantic information, generated by aggregating outputs of the spatial pyramid dilated convolution, is broadcast to low-level features via elementwise multiplications, serving as a feature selector to emphasize informative features and suppress less useful ones. A series of experiments conducted on our hatching egg dataset shows that our attention network achieves a lower misjudgment rate on weak embryos and a more stable accuracy, up to 98.3% and 99.1% on 5-day and 9-day-old eggs, respectively.
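The broadcasting step at the heart of such an attention module, where a single-channel spatial mask is multiplied elementwise over all feature channels, can be sketched as follows (shapes and values are illustrative):

```python
import numpy as np

def apply_spatial_attention(features, mask):
    """Broadcast a single-channel spatial attention mask (H, W) over
    feature maps (C, H, W) via elementwise multiplication, emphasizing
    informative locations and suppressing the rest."""
    assert 0.0 <= mask.min() and mask.max() <= 1.0  # e.g. sigmoid output
    return features * mask[None, :, :]              # broadcast over channels

feats = np.ones((8, 4, 4))   # 8 feature channels on a 4x4 grid
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0         # attend only to the central 2x2 region
out = apply_spatial_attention(feats, mask)
print(out.sum())  # 8 channels * 4 attended pixels = 32.0
```

In the paper's module the mask itself is produced by the spatial pyramid of dilated convolutions; here it is a hand-set placeholder to show the selection effect.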

    Farm Area Segmentation in Satellite Images Using DeepLabv3+ Neural Networks

    Farm detection using low-resolution satellite images is an important part of digital agriculture applications such as crop yield monitoring. However, it has not received enough attention compared to high-resolution images. Although high-resolution images are more efficient for the detection of land cover components, the analysis of low-resolution images is still important due to the low-resolution repositories of past satellite images used for time-series analysis, free availability, and economic concerns. In this paper, semantic segmentation of farm areas is addressed using low-resolution satellite images. The segmentation is performed in two stages. First, local patches or Regions of Interest (ROI) that include farm areas are detected. Next, deep semantic segmentation strategies are employed to detect the farm pixels. For patch classification, two previously developed local patch classification strategies are employed: a two-step semi-supervised methodology using hand-crafted features and Support Vector Machine (SVM) modelling, and transfer learning using pretrained Convolutional Neural Networks (CNNs). For the latter, the high-level features learnt from the massive filter banks of the deep Visual Geometry Group Network (VGG-16) are utilized. After classifying the image patches that contain farm areas, the DeepLabv3+ model is used for semantic segmentation of farm pixels. Four different pretrained networks, resnet18, resnet50, resnet101 and mobilenetv2, are used to transfer their learnt features to the new farm segmentation problem. The first-stage results show the superiority of transfer learning over hand-crafted features for the classification of patches. The second-stage results show that the model trained based on resnet50 achieved the highest semantic segmentation accuracy.
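The two-stage pipeline (patch-level ROI filtering, then per-pixel segmentation of positive patches only) can be sketched with stand-in models; the threshold-based classifier and segmenter below are placeholders for the SVM/CNN patch classifiers and the DeepLabv3+ model:

```python
import numpy as np

def two_stage_segmentation(image, patch_size, patch_classifier, segmenter):
    """Stage 1: classify each patch as containing farm area or not.
    Stage 2: run per-pixel segmentation only on positive patches.
    Returns a full-size binary mask."""
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            patch = image[y:y + patch_size, x:x + patch_size]
            if patch_classifier(patch):  # cheap ROI filter
                mask[y:y + patch_size, x:x + patch_size] = segmenter(patch)
    return mask

# Stand-in models: "farm" = bright pixels above a threshold.
img = np.zeros((8, 8)); img[0:4, 0:4] = 0.9
is_farm_patch = lambda p: p.mean() > 0.5
segment = lambda p: p > 0.5
result = two_stage_segmentation(img, 4, is_farm_patch, segment)
print(result.sum())  # 16
```

The benefit of the first stage is that the expensive segmenter runs only on patches that plausibly contain farm pixels.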

    Land cover and forest segmentation using deep neural networks

    Land Use and Land Cover (LULC) information is important for a variety of applications, notably ones related to forestry. The segmentation of remotely sensed images has attracted considerable research attention. However, this is no easy task, with various challenges to face, including the complexity of satellite images, the difficulty of obtaining them, and the lack of ready datasets. It has become clear that classifying multiple classes requires more elaborate methods such as Deep Learning (DL). Deep Neural Networks (DNNs) have promising potential to be a good candidate for the task. However, DNNs require a huge amount of data to train, including the Ground Truth (GT) data. In this thesis, a DL pixel-based approach backed by state-of-the-art semantic segmentation methods is followed to tackle the problem of LULC mapping. The DNN used is based on the DeepLabv3 network with an encoder-decoder architecture. To tackle the issue of lack of data, the Sentinel-2 satellite, whose data is provided for free by Copernicus, was used, with the GT mapping from Corine Land Cover (CLC) provided by Copernicus and modified by Tyke to a higher resolution. From the Sentinel-2 multispectral images, the Red Green Blue (RGB) and Near Infrared (NIR) channels were extracted, the fourth channel being extremely useful in the detection of vegetation. This achieved quite good accuracy on a DNN based on ResNet-50, reaching 0.53 on the Mean Intersection over Union (MIoU) metric. It was possible to use this data to transfer the learning to data from the Pleiades-1 satellite with much better, in fact Very High Resolution (VHR), imagery. The results were excellent, especially when compared with training directly on that data, reaching an accuracy of 0.98 and 0.85 MIoU.
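The MIoU metric reported above is the per-class Intersection over Union averaged across classes. A minimal sketch of the standard computation (with a tiny hand-made example, not the thesis data):

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean Intersection over Union over classes present in pred or target."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:              # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
# Class 0: IoU = 1/2; class 1: IoU = 2/3; mean = 7/12.
print(mean_iou(pred, target, n_classes=2))
```

Unlike plain pixel accuracy, MIoU is not dominated by large background classes, which is why it is the usual metric for LULC segmentation.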

    Learning Spectral-Spatial-Temporal Features via a Recurrent Convolutional Neural Network for Change Detection in Multispectral Imagery

    Change detection is one of the central problems in earth observation and has been extensively investigated over recent decades. In this paper, we propose a novel recurrent convolutional neural network (ReCNN) architecture, which is trained to learn a joint spectral-spatial-temporal feature representation in a unified framework for change detection in multispectral images. To this end, we bring together a convolutional neural network (CNN) and a recurrent neural network (RNN) into one end-to-end network. The former is able to generate rich spectral-spatial feature representations, while the latter effectively analyzes temporal dependency in bi-temporal images. In comparison with previous approaches to change detection, the proposed network architecture possesses three distinctive properties: 1) it is end-to-end trainable, in contrast to most existing methods whose components are separately trained or computed; 2) it naturally harnesses spatial information, which has been proven to be beneficial to the change detection task; 3) it is capable of adaptively learning the temporal dependency between multitemporal images, unlike most algorithms, which use fairly simple operations like image differencing or stacking. As far as we know, this is the first time that a recurrent convolutional network architecture has been proposed for multitemporal remote sensing image analysis. The proposed network is validated on real multispectral data sets. Both visual and quantitative analysis of the experimental results demonstrates competitive performance for the proposed model.
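The "fairly simple operation like image differencing" that the ReCNN is contrasted with can be sketched as a classic baseline; the threshold value and toy images are illustrative:

```python
import numpy as np

def change_map_by_differencing(img_t1, img_t2, threshold):
    """Classic change-detection baseline: per-pixel magnitude of the
    spectral difference between two dates, thresholded to a binary
    change map. No spatial context or learned temporal modeling."""
    diff = np.linalg.norm(img_t2.astype(float) - img_t1.astype(float), axis=-1)
    return diff > threshold

# Two 4x4 3-band images differing in a single pixel:
t1 = np.zeros((4, 4, 3))
t2 = t1.copy()
t2[0, 0] = [1.0, 1.0, 1.0]
cm = change_map_by_differencing(t1, t2, threshold=0.5)
print(cm.sum())  # 1
```

The ReCNN replaces the fixed differencing rule with a learned spectral-spatial-temporal representation, which is what makes it robust where simple differencing fails (e.g. under illumination change).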