21 research outputs found

    A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery

    Full text link
    Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these ANN families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered. These include methods for image normalization and chipping, as well as strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, including augmentation techniques, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery. Comment: 145 pages with 32 figures
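
    The pre-processing steps named above (per-band normalization and chipping of large rasters into fixed-size tiles) can be sketched in a few lines. This is an illustrative sketch, not code from the review; the function names and the chip size are assumptions:

```python
import numpy as np

def normalize(image):
    """Scale each band of an (H, W, C) raster to zero mean, unit variance."""
    mean = image.mean(axis=(0, 1), keepdims=True)
    std = image.std(axis=(0, 1), keepdims=True)
    return (image - mean) / (std + 1e-8)

def chip(image, size, stride):
    """Cut an (H, W, C) raster into square chips with a sliding window."""
    chips = []
    height, width = image.shape[:2]
    for y in range(0, height - size + 1, stride):
        for x in range(0, width - size + 1, stride):
            chips.append(image[y:y + size, x:x + size])
    return chips
```

Non-overlapping chipping (stride equal to size) tiles the scene exactly once; setting stride below size yields overlapping chips, a common way to stretch limited training data.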

    Deep learning in food category recognition

    Get PDF
    Integrating artificial intelligence with food category recognition has been a field of research interest for the past few decades. It is potentially one of the next steps in revolutionizing human interaction with food. The modern advent of big data and the development of data-oriented fields like deep learning have driven advances in food category recognition, yet even with increasing computational power and ever-larger food datasets, the approach's full potential has yet to be realized. This survey provides an overview of methods that can be applied to various food category recognition tasks, including detecting type, ingredients, quality, and quantity. We survey the core components for constructing a machine learning system for food category recognition, including datasets, data augmentation, hand-crafted feature extraction, and machine learning algorithms. We place a particular focus on the field of deep learning, including the utilization of convolutional neural networks, transfer learning, and semi-supervised learning. We provide an overview of relevant studies to promote further developments in food category recognition for research and industrial applications. Funding: MRC (MC_PC_17171); Royal Society (RP202G0230); BHF (AA/18/3/34220); Hope Foundation for Cancer Research (RM60G0680); GCRF (P202PF11); Sino-UK Industrial Fund (RP202G0289); LIAS (P202ED10); Data Science Enhancement Fund (P202RE237); Fight for Sight (24NN201); Sino-UK Education Fund (OP202006); BBSRC (RM32G0178B8)
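
    The geometric data-augmentation component surveyed above can be illustrated with a minimal sketch; only label-preserving flips and right-angle rotations are shown here, while practical food-recognition pipelines also apply color jitter, random cropping, and scaling. The function name is an assumption, not from the survey:

```python
import numpy as np

def augment(image):
    """Yield simple label-preserving geometric variants of an image:
    the original, horizontal/vertical flips, and 90/180/270-degree rotations."""
    yield image
    yield np.fliplr(image)
    yield np.flipud(image)
    for k in (1, 2, 3):
        yield np.rot90(image, k)
```

For a square input, all six variants keep the original shape, so they can be fed to the same network without resizing.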

    Automated Analysis of Drill-Core Images Using Convolutional Neural Network

    Full text link
    Drill cores provide geological and geotechnical information essential for mineral and hydrocarbon exploration. Modern core scanners can automatically produce a large number of high-resolution core-tray images or unwrapped-core images, which encode important rock properties, such as lithology and geological structures. Current core-image analysis methods, however, are based on outdated algorithms that lack generalization and robustness. In addition, current methods focus on using log data, while core images often provide more reliable information about the subsurface formations. With the new era of technology and the evolution of big data and artificial intelligence, core images will be an important asset for subsurface characterization. Manual core description, with its extensive time and labor requirements, is outdated, so the future of core analysis must be shaped by the digital archiving of cores. This dissertation aims to lay the foundation of a 'Digital Geologist' using advanced machine learning algorithms. It develops and evaluates intelligent workflows using Convolutional Neural Networks (CNNs) to automate core-image analysis, and thus facilitate the evaluation of natural resources. It explores the feasibility of extracting different rock features from core images. First, advanced CNNs are utilized to predict major lithologies of rocks from core-tray images, and an overall workflow is optimized for lithology prediction. Second, a CNN is created to assess the physical condition of cores and determine intact core sections to calculate the rock quality designation (RQD) index, which is essential in many geotechnical applications. Third, an innovative approach is developed to extract fractures from unwrapped-core images and determine fracture depth and orientation. The workflow is based on a state-of-the-art CNN model for instance segmentation, the Mask Region-based Convolutional Neural Network (Mask R-CNN).
Lastly, fracture analysis from unwrapped-core images is further studied to obtain more detailed characteristics represented by fracture apertures. Overall, the thesis proposes a transformed workflow of core-image analysis that can serve as a platform for future studies, with potential applications in the mining and petroleum industries.
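
    The RQD index mentioned above has a standard definition: the percentage of a core run made up of intact pieces at least 10 cm long. Once a CNN has delineated the intact sections, the downstream arithmetic is simple; the function name in this sketch is illustrative, not the dissertation's code:

```python
def rqd(piece_lengths_cm, run_length_cm, threshold_cm=10.0):
    """Rock quality designation: percentage of a core run consisting of
    intact pieces at least `threshold_cm` long (10 cm by convention)."""
    intact = sum(p for p in piece_lengths_cm if p >= threshold_cm)
    return 100.0 * intact / run_length_cm
```

For example, a 100 cm run with intact pieces of 25, 8, 15, 5, 30, and 17 cm counts only the four pieces of 10 cm or more, giving an RQD of 87%.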

    An Evaluation of Deep Learning-Based Object Identification

    Get PDF
    Identification of instances of semantic objects of a particular class, now embedded in everyday life through applications such as autonomous driving and security monitoring, is one of the most crucial and challenging areas of computer vision. Recent developments in deep learning networks for detection have improved object detector accuracy. To provide a detailed review of the current state of object detection pipelines, we begin by analyzing the methodologies employed by classical detection models and presenting the benchmark datasets used in this study. We then examine one- and two-stage detectors in detail before summarizing several object detection approaches. In addition, we survey both established and emerging applications, rather than examining a single branch of object detection. Finally, we consider how various object detection algorithms can be combined into a system that is both efficient and effective, and we identify a number of emerging trends to inform the use of the most recent algorithms and to motivate further study.
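
    Accuracy comparisons across the detectors such a review covers rest on intersection over union (IoU) between predicted and ground-truth boxes; a minimal sketch of the standard computation:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.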

    Advancing Land Cover Mapping in Remote Sensing with Deep Learning

    Get PDF
    Automatic mapping of land cover in remote sensing data plays an increasingly significant role in several earth observation (EO) applications, such as sustainable development, autonomous agriculture, and urban planning. Due to the complexity of the real ground surface and environment, accurate classification of land cover types faces many challenges. This thesis provides novel deep learning-based solutions to land cover mapping challenges, such as how to deal with intricate objects and imbalanced classes in multi-spectral and high-spatial-resolution remote sensing data. The first work presents a novel model to learn richer multi-scale and global contextual representations in very high-resolution remote sensing images, namely the dense dilated convolutions' merging (DDCM) network. The proposed method is lightweight, flexible, and extendable, so that it can be used as a simple yet effective encoder and decoder module to address different classification and semantic mapping challenges. Intensive experiments on different benchmark remote sensing datasets demonstrate that the proposed method achieves better performance while consuming far fewer computational resources than other published methods. Next, a novel graph model is developed for capturing long-range pixel dependencies in remote sensing images to improve land cover mapping. One key component in the method is the self-constructing graph (SCG) module, which can effectively construct global context relations (latent graph structure) without requiring prior knowledge graphs. The proposed SCG-based models achieved competitive performance on different representative remote sensing datasets with faster training and lower computational cost compared to strong baseline models.
The third work introduces a new framework, namely the multi-view self-constructing graph (MSCG) network, extending the vanilla SCG model to capture multi-view context representations with rotation invariance for improved segmentation performance. Meanwhile, a novel adaptive class weighting loss function is developed to alleviate the issue of class imbalance commonly found in EO datasets for semantic segmentation. Experiments on benchmark data demonstrate that the proposed framework is computationally efficient and robust, producing improved segmentation results for imbalanced classes. To address the key challenges in multi-modal land cover mapping of remote sensing data, namely 'what', 'how', and 'where' to effectively fuse multi-source features and to efficiently learn optimal joint representations of different modalities, the last work presents a compact and scalable multi-modal deep learning framework (MultiModNet) based on two novel modules: the pyramid attention fusion module and the gated fusion unit. The proposed MultiModNet outperforms strong baselines on two representative remote sensing datasets with fewer parameters and at a lower computational cost. Extensive ablation studies also validate the effectiveness and flexibility of the framework.
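
    The class imbalance that the adaptive class weighting loss targets is often handled with inverse-frequency weights on the cross-entropy terms. The sketch below shows one common recipe (inverse log frequency, as popularized by ENet), offered as an illustration rather than the thesis's exact loss:

```python
import math

def class_weights(pixel_counts, c=1.02):
    """Inverse log-frequency weights: rare classes receive larger weights.
    `pixel_counts` maps class name -> number of labeled pixels."""
    total = sum(pixel_counts.values())
    return {cls: 1.0 / math.log(c + n / total) for cls, n in pixel_counts.items()}
```

These weights are then multiplied into the per-class cross-entropy terms so that errors on rare classes cost more than errors on dominant ones.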

    Leveraging Supervoxels for Medical Image Volume Segmentation With Limited Supervision

    Get PDF
    The majority of existing methods for machine learning-based medical image segmentation are supervised models that require large amounts of fully annotated images. Such datasets are typically not available in the medical domain and are difficult and expensive to generate. Widespread use of machine learning-based models for medical image segmentation therefore requires the development of data-efficient algorithms that only require limited supervision. To address these challenges, this thesis presents new machine learning methodology for unsupervised lung tumor segmentation and few-shot learning-based organ segmentation. When working in the limited-supervision paradigm, exploiting the available information in the data is key. The methodology developed in this thesis leverages automatically generated supervoxels in various ways to exploit the structural information in the images. The work on unsupervised tumor segmentation explores the opportunity of performing clustering on a population level in order to provide the algorithm with as much information as possible. To facilitate this population-level, across-patient clustering, supervoxel representations are exploited to reduce the number of samples, and thereby the computational cost. In the work on few-shot learning-based organ segmentation, supervoxels are used to generate pseudo-labels for self-supervised training. Further, to obtain a model that is robust to the typically large and inhomogeneous background class, a novel anomaly detection-inspired classifier is proposed to ease the modelling of the background. To encourage the resulting segmentation maps to respect edges defined in the input space, a supervoxel-informed feature refinement module is proposed to refine the embedded feature vectors during inference. Finally, to improve trustworthiness, an architecture-agnostic mechanism to estimate model uncertainty in few-shot segmentation is developed.
Results demonstrate that supervoxels are versatile tools for leveraging structural information in medical data when training segmentation models with limited supervision.
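
    One concrete way supervoxels can inject structural information, in the spirit of the pseudo-labeling described above, is majority-vote pooling: each supervoxel takes the most common label among its member voxels, which regularizes noisy voxel-wise predictions. This sketch is illustrative, not the thesis's implementation:

```python
from collections import Counter

def supervoxel_majority_labels(voxel_labels, supervoxel_ids):
    """Pool labels within each supervoxel by majority vote, then
    propagate the winning label back to every member voxel."""
    groups = {}
    for label, sv in zip(voxel_labels, supervoxel_ids):
        groups.setdefault(sv, []).append(label)
    majority = {sv: Counter(lbls).most_common(1)[0][0] for sv, lbls in groups.items()}
    return [majority[sv] for sv in supervoxel_ids]
```

Because each supervoxel is forced to carry a single label, the output respects the supervoxel boundaries, and hence the image edges the supervoxels were grown from.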

    Improving the accuracy of weed species detection for robotic weed control in complex real-time environments

    Get PDF
    Alex Olsen applied deep learning and machine vision to improve the accuracy of weed species detection in complex real-time environments. His robotic weed control prototype, AutoWeed, presents a new, efficient tool for weed management in crops and pastures and has launched a startup agricultural technology company.

    A PhD Dissertation on Road Topology Classification for Autonomous Driving

    Get PDF
    Road topology classification is a crucial point if we want to develop complete and safe autonomous driving systems. It is logical to think that a thorough understanding of the environment surrounding the ego-vehicle, as happens when a human being is the decision-maker at the wheel, is an indispensable condition if we want to advance toward level 4 or 5 autonomous vehicles. If the driver, whether an autonomous system or a human being, does not have access to information about the environment, the decrease in safety is critical and an accident is almost instantaneous, e.g., when a driver falls asleep at the wheel. Throughout this doctoral thesis, we present two deep learning systems that help an autonomous driving system understand the environment it is in at a given instant. The first, 3D-Deep, and its optimization, 3D-Deepest, is a new network architecture for semantic road segmentation that integrates data sources of different types. Road segmentation is vital in an autonomous vehicle, since the road is the medium on which it should drive in 99.9% of cases. The second is an urban intersection classification system using different approaches drawn from metric learning, temporal integration, and synthetic image generation. Safety is a crucial point in any autonomous system, and even more so in a driving system. Intersections are among the places within cities where safety is most critical: cars follow intersecting trajectories and can therefore collide, and most intersections are used by pedestrians to cross the road regardless of whether there are crosswalks, which alarmingly increases the risk of collisions and of pedestrians being hit.
Combining both systems substantially improves the understanding of the environment and can be considered to increase safety, paving the way in the research toward a fully autonomous vehicle.

    Deep Learning Methods for Remote Sensing

    Get PDF
    Remote sensing is a field in which important physical characteristics of an area are extracted from emitted or reflected radiation, generally captured by satellite cameras, sensors onboard aerial vehicles, and similar platforms. The captured data help researchers develop solutions for sensing and detecting various phenomena such as forest fires, flooding, changes in urban areas, crop diseases, and soil moisture. The recent impressive progress in artificial intelligence (AI) and deep learning has sparked innovations in technologies, algorithms, and approaches and led to results that were unachievable until recently in multiple areas, among them remote sensing. This book consists of sixteen peer-reviewed papers covering new advances in the use of AI for remote sensing.

    A Comprehensive Review on Computer Vision Analysis of Aerial Data

    Full text link
    With the emergence of new technologies in airborne platforms and imaging sensors, aerial data analysis is becoming very popular, capitalizing on its advantages over ground-based data. This paper presents a comprehensive review of computer vision tasks within the domain of aerial data analysis. While addressing fundamental aspects such as object detection and tracking, the primary focus is on pivotal tasks like change detection, object segmentation, and scene-level analysis. The paper compares the various hyperparameters employed across diverse architectures and tasks. A substantial section is dedicated to an in-depth discussion of libraries, their categorization, and their relevance to different domain expertise. The paper encompasses aerial datasets, the architectural nuances adopted, and the evaluation metrics associated with all the tasks in aerial data analysis. Applications of computer vision tasks in aerial data across different domains are explored, with case studies providing further insights. The paper thoroughly examines the challenges inherent in aerial data analysis, offering practical solutions. Additionally, unresolved issues of significance are identified, paving the way for future research directions in the field of aerial data analysis. Comment: 112 pages