8 research outputs found

    Assessing generalisability of deep learning-based polyp detection and segmentation methods through a computer vision challenge

    Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, appearance, and location makes the detection of polyps challenging. Moreover, colonoscopy surveillance and removal of polyps are highly operator-dependent procedures and occur in a highly complex organ topology. Missed detection and incomplete removal of colonic polyps remain common. To assist in clinical procedures and reduce missed detection rates, automated machine learning methods for detecting and segmenting polyps have been developed in recent years. However, the major drawback of most of these methods is their limited ability to generalise to out-of-sample, unseen datasets from different centres, populations, modalities, and acquisition systems. To test this hypothesis rigorously, we, together with expert gastroenterologists, curated a multi-centre, multi-population dataset acquired from six different colonoscopy systems and challenged computational expert teams to develop robust automated detection and segmentation methods in a crowd-sourced Endoscopic computer vision challenge. This work puts forward rigorous generalisability tests and assesses the usability of the devised deep learning methods in dynamic, real clinical colonoscopy procedures. We analyse the results of the four top-performing teams for the detection task and the five top-performing teams for the segmentation task. Our analyses demonstrate that the top-ranking teams concentrated mainly on accuracy over the real-time performance required for clinical applicability. We further dissect the devised methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets and routine clinical procedures.
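
    As a rough illustration of how detection generalisability can be quantified across centres, the sketch below computes a box-level IoU and a recall at IoU >= 0.5 per acquisition centre. The box format, the threshold, and the per-centre data are illustrative assumptions, not the challenge's official evaluation protocol.

```python
# Minimal sketch: per-centre polyp detection evaluation via box IoU.
# Boxes are (x1, y1, x2, y2); the data below are hypothetical.

def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def recall_at_iou(preds, gts, thr=0.5):
    """Fraction of ground-truth polyps matched by some prediction with IoU >= thr."""
    hits = sum(any(box_iou(g, p) >= thr for p in preds) for g in gts)
    return hits / max(len(gts), 1)

# Hypothetical per-centre results: {centre: (predicted boxes, ground-truth boxes)}.
per_centre = {
    "centre_1": ([(10, 10, 60, 60)], [(12, 8, 58, 62)]),
    "centre_2": ([(5, 5, 30, 30)], [(40, 40, 90, 90)]),
}
for centre, (preds, gts) in per_centre.items():
    print(centre, f"recall@0.5 = {recall_at_iou(preds, gts):.2f}")
```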

    Unsupervised Learning-Based Plant Pipeline Leak Detection Using Frequency Spectrum Feature Extraction and Transfer Learning

    The deterioration of power generation facilities built during the early stages of plant operation is becoming increasingly severe, raising concerns about potential socioeconomic harm from pipe leaks. Consequently, there is a pressing need for rapid leak detection and proactive responses. Prior research primarily relied on various signal processing techniques and supervised learning for leak detection. However, these approaches struggle to detect leaks accurately in environments with diverse background noises and weak leak signals, a problem exacerbated by the difficulty of gathering sufficient real-world leakage data, which can lead to overfitting during model training. Therefore, this paper proposes an adaptable leak detection model suitable for various environments to ensure precise leak detection. Frequency spectrum feature extraction and transfer learning are utilized to achieve accurate leak detection even with limited data. In addition, an unsupervised learning-based autoencoder model is employed to identify leaks accurately by learning general patterns, even when leakage data are limited. Experimental results demonstrate that the proposed model, which integrates feature extraction using the Uniform Manifold Approximation and Projection (UMAP) algorithm and transfer learning, achieved an accuracy 6.35 percentage points (%p) higher than the model lacking these techniques. The findings also confirm that accuracy decreases only slightly even when minimal training data are used. Moreover, the leak detection performance was superior to that of the existing models considered in this study, achieving a high accuracy of 99.19%.
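
    The following is a minimal sketch of the pipeline summarised above: FFT-based frequency-spectrum features, UMAP dimensionality reduction, and an autoencoder trained only on normal recordings so that a high reconstruction error flags a possible leak. The library choices (umap-learn, PyTorch), layer sizes, and threshold rule are assumptions for illustration, and the transfer-learning stage is omitted.

```python
# Minimal sketch: spectrum features -> UMAP embedding -> autoencoder anomaly score.
import numpy as np
import torch
import torch.nn as nn
import umap

rng = np.random.default_rng(0)

# Hypothetical acoustic recordings: rows are recordings, columns are time samples.
normal = rng.normal(0.0, 1.0, size=(200, 1024)).astype(np.float32)
test = rng.normal(0.0, 1.5, size=(20, 1024)).astype(np.float32)  # unseen recordings

# 1) Frequency-spectrum features: one-sided FFT magnitude.
def spectrum(x):
    return np.abs(np.fft.rfft(x, axis=1)).astype(np.float32)

# 2) UMAP feature extraction fitted on normal spectra only.
reducer = umap.UMAP(n_components=8, random_state=0)
z_train = reducer.fit_transform(spectrum(normal)).astype(np.float32)
z_test = reducer.transform(spectrum(test)).astype(np.float32)

# 3) Unsupervised autoencoder trained to reconstruct normal embeddings.
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 8))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.from_numpy(z_train)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()

# 4) Flag leaks where reconstruction error exceeds a percentile of training error.
with torch.no_grad():
    err_train = ((model(x) - x) ** 2).mean(dim=1).numpy()
    xt = torch.from_numpy(z_test)
    err_test = ((model(xt) - xt) ** 2).mean(dim=1).numpy()
threshold = np.percentile(err_train, 99)
print("suspected leaks:", np.where(err_test > threshold)[0])
```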

    Adversarial Optimization-Based Knowledge Transfer of Layer-Wise Dense Flow for Image Classification

    A deep-learning technology for knowledge transfer is necessary to advance and optimize efficient knowledge distillation. Here, we aim to develop a new adversarial optimization-based knowledge transfer method involving a layer-wise dense flow distilled from a pre-trained deep neural network (DNN). Knowledge is distilled and transferred to a target DNN using adversarial loss functions together with multiple items of flow-based knowledge, densely extracted with overlap from the pre-trained DNN to enhance the existing knowledge. We propose a semi-supervised learning-based knowledge transfer that uses these multiple items of dense flow-based knowledge extracted from the pre-trained DNN. The proposed loss function comprises a supervised cross-entropy loss for typical classification, an adversarial training loss for the target DNN and discriminators, and a Euclidean distance-based loss defined on the dense flow. For both the pre-trained and target DNNs considered in this study, we adopt a residual network (ResNet) architecture. We propose methods for (1) adversarial-based knowledge optimization, (2) an extended, flow-based knowledge transfer scheme, and (3) combined layer-wise dense flow in an adversarial network. The results show that the improved target ResNet achieves higher accuracy than prior knowledge transfer methods.
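
    A minimal sketch of the loss composition described above follows: a supervised cross-entropy term, an adversarial term from a discriminator that scores the student's flow, and a Euclidean (L2) term between teacher and student flow matrices. The Gram-style flow definition, the toy tensors, and the loss weights are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: cross-entropy + adversarial + L2 flow-matching loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def flow_matrix(feat_a, feat_b):
    """Gram-style 'flow' between two feature maps of shape (N, C, H, W)."""
    n, c1, h, w = feat_a.shape
    c2 = feat_b.shape[1]
    a = feat_a.reshape(n, c1, h * w)
    b = feat_b.reshape(n, c2, h * w)
    return torch.bmm(a, b.transpose(1, 2)) / (h * w)  # (N, C1, C2)

# Hypothetical toy features and logits standing in for ResNet activations.
N, C, H, W, K = 4, 16, 8, 8, 10
teacher_f1, teacher_f2 = torch.randn(N, C, H, W), torch.randn(N, C, H, W)
student_f1 = torch.randn(N, C, H, W, requires_grad=True)
student_f2 = torch.randn(N, C, H, W, requires_grad=True)
student_logits = torch.randn(N, K, requires_grad=True)
labels = torch.randint(0, K, (N,))

discriminator = nn.Sequential(nn.Flatten(), nn.Linear(C * C, 64), nn.ReLU(),
                              nn.Linear(64, 1))

flow_t = flow_matrix(teacher_f1, teacher_f2).detach()
flow_s = flow_matrix(student_f1, student_f2)

# 1) Supervised cross-entropy on the student's predictions.
loss_ce = F.cross_entropy(student_logits, labels)
# 2) Adversarial term: the student tries to make its flow look "real" to D.
loss_adv = F.binary_cross_entropy_with_logits(
    discriminator(flow_s), torch.ones(N, 1))
# 3) Euclidean distance between teacher and student flow matrices.
loss_l2 = F.mse_loss(flow_s, flow_t)

total = loss_ce + 0.1 * loss_adv + 1.0 * loss_l2  # weights are assumptions
total.backward()
print(float(total))
```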

    Low-Power Wireless Sensor Module for Machine Learning-Based Continuous Monitoring of Nuclear Power Plants

    This paper introduces the novel design and implementation of a low-power wireless monitoring system for nuclear power plants, aiming to enhance safety and operational efficiency. By utilizing advanced signal-processing techniques and energy-efficient technologies, the system supports real-time, continuous monitoring without the need for frequent battery replacements. This addresses the high costs and risks associated with traditional wired monitoring methods. The system focuses on acoustic and ultrasonic analysis, capturing sound with microphones and processing these signals through heterodyne frequency conversion, with down-conversion enabling effective signal management at low power consumption. Integrated with edge computing, the system processes data locally at the sensor level, optimizing response times to anomalies and reducing network load. Practical implementation shows significant reductions in maintenance overheads and environmental impact, thereby enhancing the reliability and safety of nuclear power plant operations. The study also lays the groundwork for future integration of sophisticated machine learning algorithms to advance predictive maintenance capabilities in nuclear energy management.
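
    The sketch below illustrates the heterodyne down-conversion step mentioned above: an ultrasonic tone is mixed with a local oscillator and low-pass filtered so that only a low-frequency difference component remains for a low-power front end to digitise. The frequencies, filter order, and cutoff are assumptions for illustration.

```python
# Minimal sketch: heterodyne down-conversion of an ultrasonic tone.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200_000            # sampling rate of the front end (Hz)
t = np.arange(0, 0.05, 1 / fs)

f_signal = 40_000       # hypothetical ultrasonic component of interest (Hz)
f_lo = 38_000           # local oscillator frequency (Hz)
x = np.sin(2 * np.pi * f_signal * t)   # captured ultrasonic signal
lo = np.sin(2 * np.pi * f_lo * t)      # local oscillator

# Mixing produces components at f_signal - f_lo (2 kHz) and f_signal + f_lo (78 kHz).
mixed = x * lo

# A low-pass filter keeps the 2 kHz difference term and rejects the sum term.
b, a = butter(4, 5_000 / (fs / 2))
baseband = filtfilt(b, a, mixed)

# The dominant remaining frequency should be close to |f_signal - f_lo| = 2 kHz.
spec = np.abs(np.fft.rfft(baseband))
freqs = np.fft.rfftfreq(len(baseband), 1 / fs)
print(f"dominant frequency after down-conversion: {freqs[spec.argmax()]:.0f} Hz")
```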

    Two-Stage Classification Method for MSI Status Prediction Based on Deep Learning Approach

    Colorectal cancer is one of the most common cancers and has a high mortality rate. Determining the microsatellite instability (MSI) status of resected cancer tissue is vital because it helps diagnose the related disease and determine the relevant treatment. This paper presents a two-stage classification method for predicting MSI status based on a deep learning approach. The proposed pipeline serially connects a segmentation network and a classification network. In the first stage, the tumor area is segmented from the given pathological image using a Feature Pyramid Network (FPN). In the second stage, the segmented tumor is classified as MSI-L or MSI-H using Inception-ResNet-v2. We examined the performance of the proposed method on pathological images at 10× and 20× magnification, in comparison with a conventional one-stage multiclass classification method in which the tissue type is identified directly. The F1-score of the proposed method was higher than that of the conventional method at both 10× and 20× magnification. Furthermore, we verified that the F1-score at 20× magnification was better than that at 10× magnification.
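
    A minimal sketch of the two-stage wiring described above: a segmentation network produces a tumour mask, and the masked tissue is passed to a second network that outputs MSI-L or MSI-H. The tiny stand-in modules below are placeholders for the paper's FPN and Inception-ResNet-v2, and masking the image is a simplification of cropping the segmented tumour region.

```python
# Minimal sketch: segment the tumour, then classify the masked tissue.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):            # stand-in for an FPN
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 1, kernel_size=3, padding=1)
    def forward(self, x):
        return torch.sigmoid(self.net(x))  # (N, 1, H, W) tumour probability

class TinyClassifier(nn.Module):           # stand-in for Inception-ResNet-v2
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(3, 2))
    def forward(self, x):
        return self.net(x)                 # logits for (MSI-L, MSI-H)

def predict_msi(image, segmenter, classifier, mask_thr=0.5):
    """Stage 1: segment the tumour. Stage 2: classify the masked tissue."""
    with torch.no_grad():
        mask = (segmenter(image) > mask_thr).float()
        tumour_only = image * mask         # zero out non-tumour tissue
        logits = classifier(tumour_only)
    return ["MSI-L", "MSI-H"][int(logits.argmax(dim=1))]

image = torch.rand(1, 3, 256, 256)         # hypothetical pathology patch
print(predict_msi(image, TinySegmenter(), TinyClassifier()))
```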

    Deep transfer learning for the classification of variable sources

    Ongoing or upcoming surveys such as Gaia, ZTF, and LSST will observe the light curves of billions or more astronomical sources. This presents new challenges for identifying interesting and important types of variability. Collecting a sufficient amount of labeled data for training is difficult, especially in the early stages of a new survey. Here we develop a single-band light-curve classifier based on deep neural networks and use transfer learning to address the paucity of training data by conveying knowledge from one data set to another. First we train a neural network on 16 variability features extracted from the light curves of OGLE and EROS-2 variables. We then optimize this model using a small set (e.g., 5%) of periodic variable light curves from the ASAS data set in order to transfer knowledge inferred from OGLE and EROS-2 to a new ASAS classifier. With this we achieve good classification results on ASAS, thereby showing that knowledge can be successfully transferred between data sets. We demonstrate similar transfer learning using the Hipparcos data set.
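
    The sketch below shows the transfer-learning recipe described above under simplifying assumptions: a small classifier is first trained on 16 variability features from an abundant source survey, then fine-tuned with a lower learning rate on a much smaller labelled target set. The random data, layer sizes, class count, and hyperparameters are illustrative, not the paper's configuration.

```python
# Minimal sketch: pre-train on source-survey features, fine-tune on a small target set.
import torch
import torch.nn as nn

n_features, n_classes = 16, 5
model = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, n_classes))

def train(model, x, y, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()

# Stage 1: train on abundant source-survey features (stand-in for OGLE/EROS-2).
x_src = torch.randn(5000, n_features)
y_src = torch.randint(0, n_classes, (5000,))
train(model, x_src, y_src, epochs=50, lr=1e-3)

# Stage 2: fine-tune with a smaller learning rate on a small target-survey set
# (stand-in for ~5% of ASAS), reusing the learned representation instead of
# training from scratch.
x_tgt = torch.randn(250, n_features)
y_tgt = torch.randint(0, n_classes, (250,))
train(model, x_tgt, y_tgt, epochs=50, lr=1e-4)
```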

    A Deep Learning-Based Crop Disease Diagnosis Method Using Multimodal Mixup Augmentation

    With the widespread adoption of smart farms and continuous advancements in IoT (Internet of Things) technology, acquiring diverse additional data has become increasingly convenient. Consequently, studies on deep learning models that leverage multimodal data for crop disease diagnosis, and on the associated data augmentation methods, are growing significantly. We propose a comprehensive deep learning model that simultaneously predicts crop type, detects disease presence, and assesses disease severity. We utilize multimodal data comprising crop images and environmental variables such as temperature, humidity, and dew point. We confirmed that diagnosing crop diseases using multimodal data improved performance by 2.58 percentage points (%p) compared to using crop images only. We also propose a multimodal mixup augmentation method capable of utilizing both image and environmental data. In this study, multimodal data refer to data from multiple sources, and multimodal mixup is a data augmentation technique that combines multimodal data for training, extending the conventional mixup technique that was originally applied solely to image data. Our multimodal mixup augmentation method shows a performance improvement of 1.33%p over the original mixup method.
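
    A minimal sketch of multimodal mixup as described above: the same mixing coefficient lambda is applied to the images, to the environmental variables, and to the one-hot labels, extending standard image-only mixup to multimodal inputs. The batch shapes and the Beta parameter alpha are assumptions for illustration.

```python
# Minimal sketch: mixup applied jointly to images, environmental data, and labels.
import torch

def multimodal_mixup(images, env, labels, alpha=0.4):
    """Mix two shuffled batches of (image, environment, one-hot label) triples."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(images.size(0))
    mixed_images = lam * images + (1 - lam) * images[idx]
    mixed_env = lam * env + (1 - lam) * env[idx]
    mixed_labels = lam * labels + (1 - lam) * labels[idx]
    return mixed_images, mixed_env, mixed_labels

# Hypothetical batch: 8 crop images and 3 environmental variables per sample.
images = torch.rand(8, 3, 224, 224)
env = torch.rand(8, 3)                     # temperature, humidity, dew point
labels = torch.nn.functional.one_hot(torch.randint(0, 4, (8,)), 4).float()
print([t.shape for t in multimodal_mixup(images, env, labels)])
```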

    Assessing generalisability of deep learning-based polyp detection and segmentation methods through a computer vision challenge

    Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, location, and surface largely affects identification, localisation, and characterisation. Moreover, colonoscopic surveillance and removal of polyps (referred to as polypectomy) are highly operator-dependent procedures. There exists a high missed detection rate and incomplete removal of colonic polyps due to their variable nature, the difficulty of delineating the abnormality, high recurrence rates, and the anatomical topography of the colon. There have been several developments in automated methods for both detection and segmentation of these polyps using machine learning. However, the major drawback of most of these methods is their limited ability to generalise to out-of-sample, unseen datasets that come from different centres, modalities, and acquisition systems. To test this hypothesis rigorously, we curated a multi-centre and multi-population dataset acquired from multiple colonoscopy systems and challenged teams of machine learning experts to develop robust automated detection and segmentation methods as part of our crowd-sourced Endoscopic computer vision challenge (EndoCV) 2021. In this paper, we analyse the detection results of the top four (of seven) teams and the segmentation results of the top five (of 16) teams. Our analyses demonstrate that the top-ranking teams concentrated on accuracy (i.e., an overall Dice score above 80% on the different validation sets) over the real-time performance required for clinical applicability. We further dissect the methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets.
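
    As a pointer to how the segmentation results above are scored, the sketch below computes the Dice coefficient between binary masks, grouped by centre to mimic an out-of-sample generalisability check. The random masks and the per-centre grouping are illustrative assumptions, not the challenge's official evaluation scripts.

```python
# Minimal sketch: per-centre Dice score between predicted and reference masks.
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks of identical shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Hypothetical masks from two different centres; generalisability is assessed
# by comparing Dice across such groups rather than on a single pooled set.
rng = np.random.default_rng(0)
for centre in ("centre_A", "centre_B"):
    pred = rng.random((256, 256)) > 0.5
    target = rng.random((256, 256)) > 0.5
    print(centre, f"Dice = {dice_score(pred, target):.3f}")
```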