10 research outputs found
Hydrodynamics of Conical Spouted Beds with High Density Particles
An extensive experimental investigation of conical spouted beds with high-density particles was carried out by measuring bed pressure drop, particle velocity, and solids hold-up in a 15 cm ID conical spouted bed at three different cone angles (30°, 45°, 60°) with yttria-stabilized zirconia (YSZ) particles (ρp = 6050 kg/m³). The results show that the minimum external spouting velocity increases with cone angle, particle diameter, and static bed height. The bed is characterized by two regions: upward-moving particles with high slip in the spout, and slowly downward-moving particles under loosely packed conditions in the annulus.
Assessing generalisability of deep learning-based polyp detection and segmentation methods through a computer vision challenge
Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, appearance, and location makes the detection of polyps challenging. Moreover, colonoscopy surveillance and removal of polyps are highly operator-dependent procedures and occur in a highly complex organ topology. There exists a high missed detection rate and incomplete removal of colonic polyps. To assist in clinical procedures and reduce miss rates, automated methods for detecting and segmenting polyps using machine learning have been developed in recent years. However, the major drawback in most of these methods is their limited ability to generalise to out-of-sample unseen datasets from different centres, populations, modalities, and acquisition systems. To test this hypothesis rigorously, we, together with expert gastroenterologists, curated a multi-centre and multi-population dataset acquired from six different colonoscopy systems and challenged the computational expert teams to develop robust automated detection and segmentation methods in a crowd-sourcing Endoscopic computer vision challenge. This work puts forward rigorous generalisability tests and assesses the usability of the devised deep learning methods in dynamic and actual clinical colonoscopy procedures. We analyse the results of the four top-performing teams for the detection task and the five top-performing teams for the segmentation task. Our analyses demonstrate that the top-ranking teams concentrated mainly on accuracy over the real-time performance required for clinical applicability. We further dissect the devised methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets and routine clinical procedures.
Common Limitations of Image Processing Metrics: A Picture Story
While the importance of automatic image analysis is continuously increasing,
recent meta-research revealed major flaws with respect to algorithm validation.
Performance metrics are particularly key for meaningful, objective, and
transparent performance assessment and validation of the used automatic
algorithms, but relatively little attention has been given to the practical
pitfalls when using specific metrics for a given image analysis task. These are
typically related to (1) the disregard of inherent metric properties, such as
the behaviour in the presence of class imbalance or small target structures,
(2) the disregard of inherent data set properties, such as the non-independence
of the test cases, and (3) the disregard of the actual biomedical domain
interest that the metrics should reflect. This dynamically updated living document
illustrates important limitations of performance metrics commonly
applied in the field of image analysis. In this context, it focuses on
biomedical image analysis problems that can be phrased as image-level
classification, semantic segmentation, instance segmentation, or object
detection tasks. The current version is based on a Delphi process on metrics
conducted by an international consortium of image analysis experts from more
than 60 institutions worldwide. Comment: This is a dynamic paper on limitations of commonly used metrics. The
current version discusses metrics for image-level classification, semantic
segmentation, object detection and instance segmentation. For missing use
cases, comments or questions, please contact [email protected] or
[email protected]. Substantial contributions to this document will be
acknowledged with a co-authorship.
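One of the pitfalls named above, metric behaviour in the presence of class imbalance, can be illustrated with a minimal sketch (our illustration, not taken from the paper): plain accuracy rewards a degenerate classifier on an imbalanced set, while balanced accuracy exposes it.

```python
# Illustrative sketch of a metric pitfall under class imbalance.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true, y_pred):
    # Mean of per-class recalls, so each class contributes equally.
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# 95 negatives, 5 positives; a classifier that always predicts "negative".
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

print(accuracy(y_true, y_pred))           # 0.95 -- looks excellent
print(balanced_accuracy(y_true, y_pred))  # 0.5  -- reveals the failure
```

The same effect appears with small target structures in segmentation, where overlap metrics computed over a mostly-background image can look deceptively high.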
Effect of input size on the classification of lung nodules using convolutional neural networks
Recent studies have shown that lung cancer screening using annual low-dose computed tomography (CT) reduces lung cancer mortality by 20% compared to traditional chest radiography. Therefore, CT lung screening has started to be used widely all across the world. However, analyzing these images is a serious burden for radiologists: the number of slices in a CT scan can be up to 600. Therefore, computer-aided detection (CAD) systems are very important for faster and more accurate assessment of the data. In this study, we proposed a framework that analyzes CT lung screenings using convolutional neural networks (CNNs) to reduce false positives. We trained our model with different volume sizes and showed that volume size plays a critical role in the performance of the system. We also used different fusion strategies to show their effect on the overall accuracy. 3D CNNs were preferred over 2D CNNs because 2D convolutional operations applied to 3D data could result in information loss. The proposed framework has been tested on the dataset provided by the LUNA16 Challenge and resulted in a sensitivity of 0.831 at 1 false positive per scan.
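The argument for 3D over 2D convolutions can be made concrete with a minimal sketch (ours, not the authors' implementation): a 3D kernel aggregates intensities across neighbouring CT slices, information a slice-by-slice 2D convolution never sees.

```python
# A single valid-mode 3D convolution over a nested-list volume (pure Python,
# single channel, no padding) -- a toy stand-in for one 3D CNN layer.

def conv3d(volume, kernel):
    D, H, W = len(volume), len(volume[0]), len(volume[0][0])
    d, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for z in range(D - d + 1):          # slide over slices: this axis is
        plane = []                       # what a 2D convolution would lack
        for y in range(H - h + 1):
            row = []
            for x in range(W - w + 1):
                s = sum(volume[z + i][y + j][x + k] * kernel[i][j][k]
                        for i in range(d) for j in range(h) for k in range(w))
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out

# A 4x4x4 volume of ones and a 2x2x2 averaging kernel (1/8 per voxel):
vol = [[[1.0] * 4 for _ in range(4)] for _ in range(4)]
ker = [[[0.125] * 2 for _ in range(2)] for _ in range(2)]
out = conv3d(vol, ker)
print(len(out), len(out[0]), len(out[0][0]))  # 3 3 3 -- valid-mode output shape
print(out[0][0][0])                           # 1.0  -- 8 voxels x 0.125
```

Changing the input volume size changes the valid-mode output extent, which is one way the input size studied in the paper propagates through a 3D network.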
Endoscopic artefact detection with ensemble of deep neural networks and false positive elimination
Video frames obtained through endoscopic examination can be corrupted by many artefacts. These artefacts adversely affect the diagnosis process and make the examination of the underlying tissue difficult for the professionals. In addition, detection of these artefacts is essential for further automated analysis of the images and high-quality frame restoration. In this study, we propose an endoscopic artefact detection framework based on an ensemble of deep neural networks, class-agnostic non-maximum suppression, and false-positive elimination. We have used different ensemble techniques and combined both one-stage and two-stage networks to obtain a heterogeneous solution exploiting the distinctive properties of different approaches. Faster R-CNN and Cascade R-CNN, which are two-stage detectors, and RetinaNet, which is a single-stage detector, have been used as base models. The best results have been obtained using the consensus of their predictions, which were passed through class-agnostic non-maximum suppression and false-positive elimination.
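Class-agnostic non-maximum suppression, the pooling step named above, can be sketched as follows (a minimal illustration, not the authors' code; the artefact class names are our assumption): boxes from all classes and all ensemble members are suppressed together, so heavily overlapping detections cannot survive side by side just because their labels differ.

```python
# Minimal class-agnostic NMS over (box, score, class_id) detections.

def iou(a, b):
    # Boxes as (x1, y1, x2, y2) corner coordinates.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def class_agnostic_nms(dets, iou_thr=0.5):
    # dets: list of (box, score, class_id); class_id is IGNORED during
    # suppression -- that is what makes the NMS class-agnostic.
    dets = sorted(dets, key=lambda d: d[1], reverse=True)
    keep = []
    for d in dets:
        if all(iou(d[0], k[0]) < iou_thr for k in keep):
            keep.append(d)
    return keep

dets = [((0, 0, 10, 10), 0.9, "bubble"),
        ((1, 1, 11, 11), 0.8, "saturation"),   # overlaps the first heavily
        ((50, 50, 60, 60), 0.7, "blur")]
print([d[2] for d in class_agnostic_nms(dets)])  # ['bubble', 'blur']
```

In an ensemble, `dets` would be the concatenated outputs of Faster R-CNN, Cascade R-CNN, and RetinaNet before the consensus and false-positive elimination stages.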
Class Distance Weighted Cross-Entropy Loss for Ulcerative Colitis Severity Estimation
In scoring systems used to measure the endoscopic activity of ulcerative
colitis, such as the Mayo endoscopic score or the Ulcerative Colitis Endoscopic
Index of Severity, levels increase with the severity of disease activity. Such relative
ranking among the scores makes it an ordinal regression problem. On the other
hand, most studies use categorical cross-entropy loss function to train deep
learning models, which is not optimal for the ordinal regression problem. In
this study, we propose a novel loss function, class distance weighted
cross-entropy (CDW-CE), which respects the order of the classes and takes the
distance between the classes into account in the calculation of the cost. Experimental
evaluations show that models trained with CDW-CE outperform the models trained
with conventional categorical cross-entropy and other commonly used loss
functions designed for ordinal regression problems. In addition,
the class activation maps of models trained with CDW-CE loss are more
class-discriminative and they are found to be more reasonable by the domain
experts. Comment: 26th UK Conference on Medical Image Understanding and Analysis. 15
pages, 5 figures.
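The idea of a class-distance-weighted cross-entropy can be sketched as below. This is our illustration in the spirit of CDW-CE, not the paper's exact formulation; the power term `alpha` and the toy Mayo-score probabilities are our assumptions. Mass placed on classes far from the true ordinal class is penalised more than mass on adjacent classes.

```python
import math

def cdw_ce(probs, true_class, alpha=2.0):
    # probs: softmax output over ordinal classes. Probability mass placed on
    # class i is penalised by |i - true_class| ** alpha, so errors far from
    # the true class cost more than near misses.
    loss = 0.0
    for i, p in enumerate(probs):
        if i != true_class:
            loss += -math.log(1.0 - p) * abs(i - true_class) ** alpha
    return loss

# Two predictions with identical confidence in the true class (score 0),
# differing only in where the leftover mass goes:
near_miss = [0.6, 0.3, 0.07, 0.03]  # residual mass on adjacent class 1
far_miss  = [0.6, 0.03, 0.07, 0.3]  # residual mass on distant class 3

print(cdw_ce(near_miss, 0) < cdw_ce(far_miss, 0))  # True
```

Plain categorical cross-entropy, by contrast, scores both predictions identically (it only looks at the true class's probability, 0.6 in both), which is exactly the insensitivity to class order the paper targets.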
The comparison of exacerbation and pneumonia before and after conjugated pneumococcal vaccination in patients with chronic obstructive pulmonary disease, and the effect of inhaled corticosteroid use on results
Introduction: Pneumococcal infections and exacerbations are important causes of mortality and morbidity in chronic obstructive pulmonary disease (COPD). The use of inhaled corticosteroids and pneumococcal vaccination is suggested for the control of disease progression and exacerbations. The aim of this study is to assess the effect of the pneumococcal conjugate vaccine on pneumonia and exacerbation in COPD patients using inhaled corticosteroids (ICSs). The secondary aim is to analyze the effect of ICS use and of different ICS types, if administered, on exacerbation and pneumonia incidence in the study population. Materials and Methods: Medical records of 108 adult patients with COPD who were vaccinated with the pneumococcal conjugate vaccine (PCV13) were retrospectively evaluated. The numbers of acute exacerbations and pneumonia episodes within one year before and after vaccination were evaluated in all included COPD patients. A comparison analysis was also performed based on the ICS types. Results: There were statistically significant differences between the mean numbers of pneumonia and exacerbation episodes before and after vaccination (p < 0.05). Conclusion: This study revealed that PCV13 provides a significant decrease in both exacerbation and pneumonia episodes in COPD patients. On the other hand, the use of ICSs and the types of ICSs were not found to have adverse effects on pneumonia and acute exacerbations in vaccinated COPD patients.
Assessing generalisability of deep learning-based polyp detection and segmentation methods through a computer vision challenge
Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, location, and surface largely affects identification, localisation, and characterisation. Moreover, colonoscopic surveillance and removal of polyps (referred to as polypectomy) are highly operator-dependent procedures. There exist a high missed detection rate and incomplete removal of colonic polyps due to their variable nature, the difficulty of delineating the abnormality, the high recurrence rates, and the anatomical topography of the colon. There have been several developments in realising automated methods for both detection and segmentation of these polyps using machine learning. However, the major drawback in most of these methods is their limited ability to generalise to out-of-sample unseen datasets that come from different centres, modalities, and acquisition systems. To test this hypothesis rigorously, we curated a multi-centre and multi-population dataset acquired from multiple colonoscopy systems and challenged teams comprising machine learning experts to develop robust automated detection and segmentation methods as part of our crowd-sourcing Endoscopic computer vision challenge (EndoCV) 2021. In this paper, we analyse the detection results of the four top teams (among seven) and the segmentation results of the five top teams (among 16). Our analyses demonstrate that the top-ranking teams concentrated on accuracy (i.e., accuracy > 80% on overall Dice score on different validation sets) over the real-time performance required for clinical applicability. We further dissect the methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets.
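The Dice score used to rank the segmentation entries (the > 80% threshold mentioned above) can be computed per image as in the sketch below; binary masks are represented as flattened 0/1 lists here for simplicity.

```python
# Dice similarity coefficient between two binary masks.

def dice(pred, target):
    inter = sum(p * t for p, t in zip(pred, target))  # overlapping foreground
    total = sum(pred) + sum(target)
    # Convention when both masks are empty: score the pair as a perfect match.
    return 2.0 * inter / total if total else 1.0

pred   = [0, 1, 1, 1, 0, 0]
target = [0, 0, 1, 1, 1, 0]
print(dice(pred, target))  # 2*2 / (3+3) = 0.666...
```

Because Dice is computed per image and then averaged, a method can clear an aggregate threshold while still failing badly on the out-of-distribution centres that the generalisability analysis above is concerned with.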