198 research outputs found
Deep learning-based improvement for the outcomes of glaucoma clinical trials
Glaucoma is the leading cause of irreversible blindness worldwide. It is a progressive optic neuropathy in which retinal ganglion cell (RGC) axon loss, probably as a consequence of damage at the optic disc, causes a loss of vision, predominantly affecting the mid-peripheral visual field (VF). Glaucoma results in a decrease in vision-related quality of life; early detection and evaluation of disease progression rates are therefore crucial in order to assess the risk of functional impairment and to establish sound treatment strategies. The aim of my research is to improve glaucoma diagnosis by enhancing state-of-the-art analyses of glaucoma clinical trial outcomes using advanced analytical methods. This knowledge would also help better design and analyse clinical trials, providing evidence for re-evaluating existing medications, facilitating diagnosis and suggesting novel disease management.
Towards this objective, this thesis provides the following contributions: (i) I developed deep learning-based super-resolution (SR) techniques for optical coherence tomography (OCT) image enhancement and demonstrated that using super-resolved images improves the statistical power of clinical trials; (ii) I developed a deep learning algorithm for segmentation of retinal OCT images, showing that the methodology consistently produces more accurate segmentations than state-of-the-art networks; (iii) I developed a deep learning framework for refining the relationship between structural and functional measurements and demonstrated that the mapping is significantly improved over previous techniques; (iv) I developed a probabilistic method and demonstrated that glaucomatous disc haemorrhages are influenced by a possible systemic factor that makes both eyes bleed simultaneously; (v) I recalculated VF slopes, using the retinal nerve fiber layer thickness (RNFLT) from the super-resolved OCT as a Bayesian prior, and demonstrated that using VF rates with the Bayesian prior as the outcome measure reduces the sample size required to distinguish treatment arms in a clinical trial.
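Contribution (v) rests on a standard conjugate normal update: a structure-derived prior on the VF slope is combined with the least-squares slope fitted to the VF series, each weighted by its precision. A minimal sketch under simplified assumptions — the numbers and the precision-weighted update below are illustrative only, not the thesis's actual prior construction:

```python
def ols_slope(times, values):
    """Ordinary least-squares slope of VF sensitivity over time."""
    n = len(times)
    tm = sum(times) / n
    vm = sum(values) / n
    sxx = sum((t - tm) ** 2 for t in times)
    sxy = sum((t - tm) * (v - vm) for t, v in zip(times, values))
    return sxy / sxx

def posterior_slope(ols, ols_var, prior_mean, prior_var):
    """Precision-weighted combination of the data slope and the prior
    (conjugate normal-normal update on the slope parameter)."""
    w_data = 1.0 / ols_var
    w_prior = 1.0 / prior_var
    return (w_data * ols + w_prior * prior_mean) / (w_data + w_prior)

# Hypothetical example: a short, noisy VF series and a prior slope
# assumed to come from super-resolved RNFLT measurements.
times = [0.0, 0.5, 1.0, 1.5, 2.0]        # years
vf = [30.0, 29.6, 29.5, 28.9, 28.7]      # dB mean deviation
slope = ols_slope(times, vf)             # raw rate of change
shrunk = posterior_slope(slope, ols_var=0.25, prior_mean=-0.5, prior_var=0.1)
```

The shrunk estimate always lies between the data slope and the prior mean, which is what stabilises rate estimates from short, noisy series.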
A methodology for peripheral nerve segmentation using a multiple annotators approach based on Centered Kernel Alignment
Peripheral Nerve Blocking (PNB) is a technique commonly used to perform regional anesthesia and to manage pain. The success of a PNB procedure depends on accurately locating the target nerve. Ultrasound imaging has recently been widely adopted for locating nerve structures during PNB, because it enables non-invasive visualization of the target nerve and the anatomical structures around it. However, ultrasound images are affected by several artifacts that make accurate delimitation of nerves difficult. Several automatic and semi-automatic segmentation approaches have been proposed in the literature. Nevertheless, these methods assume that a gold standard is available, and for this segmentation problem no gold standard can be obtained, because the labels reflect subjective interpretation. When building such segmentation models, we therefore do not have access to the actual labels, but only to a set of subjective annotations provided by multiple experts. To deal with this drawback, we use concepts from a relatively new area of machine learning known as "learning from crowds", which addresses supervised learning problems in which the gold standard is not available.
In this project, we develop a nerve segmentation system comprising a preprocessing stage, a feature-extraction methodology based on adaptive methods, and a Centered Kernel Alignment (CKA)-based representation that measures each annotator's performance, used to build a classifier with multiple annotators in support of peripheral nerve segmentation. Our CKA-based approach to classification with multiple annotators is tested on both simulated and real data; likewise, the proposed automatic segmentation methodology is tested on ultrasound images labeled by a set of specialists, each giving an opinion on the location of the nerve structures. The results show that our methodology can locate nerve structures in ultrasound images even when the gold standard (the actual location of the nerve structures) is not available during training. Moreover, the proposed approach could be implemented as a guiding tool for the anesthesiologist when carrying out ultrasound-assisted PNB procedures.
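The CKA similarity at the heart of the annotator-performance representation can be stated compactly: centre two kernel matrices and take their normalised Frobenius inner product. A minimal numpy sketch under simplified assumptions — a linear kernel on features and a rank-one kernel built from one annotator's labels; how the thesis turns these scores into annotator weights is more elaborate:

```python
import numpy as np

def center(K):
    """Center a kernel matrix: Kc = H K H with H = I - (1/n) 11^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def cka(K, L):
    """Centered Kernel Alignment between two kernel matrices:
    normalised Frobenius inner product of the centered kernels."""
    Kc, Lc = center(K), center(L)
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

# Hypothetical use: compare a linear kernel on pixel features with a
# kernel built from one annotator's labels; a higher score suggests the
# annotator's labelling is better aligned with the feature space.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))            # 20 pixels, 5 features
y = np.array([1.0] * 10 + [0.0] * 10)   # one annotator's binary labels
K = X @ X.T                             # linear kernel on features
L = np.outer(y, y)                      # label kernel
score = cka(K, L)                       # in [0, 1] for PSD kernels
```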
Automatic segmentation of nerve structures in ultrasound images: a comparison between image processing techniques and Bayesian nonparametric models
A large number of chronic-pain cases, arising from accidents, diseases, or surgical interventions, depend on anesthesiology practice. These practices are assisted by ultrasound imaging. Although ultrasound images are a useful tool for guiding the anesthesiology specialist, their lack of intelligibility, caused by a multiplicative acoustic noise known as speckle, makes this kind of surgical intervention a difficult task. Likewise, some artifacts are introduced during the acquisition process, challenging the anesthesiology expert not to confuse them with true nerve structures. Consequently, an assistance methodology based on signal processing could improve the accuracy of anesthesiology practice. This work proposes two methods for segmenting peripheral nerves in medical ultrasound images, the first based on an Active Shape Model and the second on a Bayesian nonparametric hierarchical-clustering model. A comparison of the experimental results shows better segmentation performance for the nonparametric model, with a mean squared error of 1.026 ± 0.379 pixels for the ulnar nerve, 0.704 ± 0.233 pixels for the median nerve, and 1.698 ± 0.564 pixels for the peroneal nerve. This model also makes it possible to emphasize other soft structures such as muscles and aqueous tissues. The Active Shape Model, in turn, segments with a mean squared error of 2.610 ± 0.486 pixels for the ulnar nerve, 2.047 ± 0.399 pixels for the median nerve, and 2.808 ± 0.369 pixels for the peroneal nerve, with a better execution time than the Bayesian nonparametric model. All results were validated against real labels supplied by an anesthesiologist.
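The per-nerve pixel errors above are mean ± standard deviation figures. A small sketch of one plausible way such contour errors could be computed — assuming, hypothetically, that the predicted and reference contours are sampled at corresponding points:

```python
import math

def boundary_error(pred, ref):
    """Mean and standard deviation (in pixels) of the point-to-point
    Euclidean distance between a predicted and a reference contour.
    Assumes the two contours are sampled at corresponding points."""
    d = [math.dist(p, r) for p, r in zip(pred, ref)]
    mean = sum(d) / len(d)
    var = sum((x - mean) ** 2 for x in d) / len(d)
    return mean, math.sqrt(var)

# Hypothetical contours sampled at four corresponding points.
pred = [(10.0, 5.0), (12.0, 6.0), (14.0, 6.5), (16.0, 7.0)]
ref  = [(10.5, 5.0), (12.0, 7.0), (14.0, 6.0), (16.5, 7.0)]
mean_px, std_px = boundary_error(pred, ref)
```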
Advancements and Breakthroughs in Ultrasound Imaging
Ultrasonic imaging is a powerful diagnostic tool available to medical practitioners, engineers and researchers today. Due to its relative safety and non-invasive nature, ultrasonic imaging has become one of the most rapidly advancing technologies. These rapid advances are directly related to parallel advancements in electronics, computing, and transducer technology, together with sophisticated signal processing techniques. This book focuses on state-of-the-art developments in ultrasonic imaging applications and underlying technologies, presented by leading practitioners and researchers from many parts of the world
Level set segmentation of retinal structures
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
Changes in retinal structure are related to different eye diseases. Various retinal imaging techniques, such as fundus imaging and optical coherence tomography (OCT), have been developed for non-intrusive ophthalmology diagnosis based on vasculature changes. However, it is time-consuming, or even impossible, for ophthalmologists to manually label all the retinal structures in fundus and OCT images. Computer-aided diagnosis systems for retinal imaging therefore play an important role in the assessment of ophthalmologic diseases and cardiovascular disorders. The aim of this PhD thesis is to develop segmentation methods that extract clinically useful information from retinal images acquired with different imaging modalities; in other words, to build segmentation methods that extract important structures from both 2D fundus images and 3D OCT images. In the first part of the project, two novel level-set-based methods were proposed for detecting blood vessels and optic discs in fundus images. The first integrates Chan-Vese's energy-minimising active contour method with an edge constraint term and a Gaussian Mixture Model based term for blood vessel segmentation, while the second combines an edge constraint term, a distance regularisation term and a shape-prior term for locating the optic disc. Both methods include a preprocessing stage for removing noise and enhancing the contrast between the object and the background. In the second part of the project, three automated layer segmentation methods were built for segmenting intra-retinal layers in 3D OCT macular and optic nerve head images. The first two combine different methods according to the data characteristics: first, eight boundaries of the intra-retinal layers were detected in the 3D OCT macular images and thickness maps of the seven layers were produced; second, four boundaries of the intra-retinal layers were located in the 3D optic nerve head images and thickness maps of the Retinal Nerve Fiber Layer (RNFL) were plotted. Finally, a choroidal layer segmentation method based on the level-set framework was designed, embedding a distance regularisation term, an edge constraint term and a Markov Random Field modelled region term; the thickness map of the choroidal layer was calculated and shown.
Department of Computer Science, Brunel University London
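The region term of the Chan-Vese energy used in the vessel-segmentation method above can be illustrated on a 1-D toy signal: the energy sums squared deviations of intensity from the mean inside the contour and from the mean outside it, and is minimised by the correct partition. A minimal sketch that omits the length, edge, shape-prior and regularisation terms the thesis adds:

```python
def chan_vese_energy(image, mask):
    """Region term of the Chan-Vese energy for a binary partition:
    squared deviation from the mean intensity inside the contour
    plus the same term outside (length penalty omitted)."""
    inside = [v for v, m in zip(image, mask) if m]
    outside = [v for v, m in zip(image, mask) if not m]
    c1 = sum(inside) / len(inside)
    c2 = sum(outside) / len(outside)
    return (sum((v - c1) ** 2 for v in inside)
            + sum((v - c2) ** 2 for v in outside))

# Toy 1-D "image": a bright vessel on a dark background. The correct
# partition has lower energy than a shifted one.
image = [0.1, 0.1, 0.9, 0.9, 0.9, 0.1]
good = [0, 0, 1, 1, 1, 0]
bad  = [0, 1, 1, 1, 0, 0]
```

Gradient descent on this energy (with the regularisation terms restored) is what moves the evolving contour toward the vessel boundary.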
Machine Learning Approaches for Automated Glaucoma Detection using Clinical Data and Optical Coherence Tomography Images
Glaucoma is a multifactorial, progressive, blinding optic neuropathy involving a variety of factors, including genetics, vasculature, anatomy, and immune factors. Worldwide, more than 80 million people are affected by glaucoma, including around 300,000 in Australia, where 50% remain undiagnosed. Untreated glaucoma can lead to blindness. Early detection supported by artificial intelligence (AI) is crucial to accelerate the diagnosis process and can prevent further vision loss. Many proposed AI systems have shown promising performance for automated glaucoma detection using two-dimensional (2D) data; however, only a few studies have reported encouraging outcomes for both glaucoma detection and staging. Moreover, automated AI systems still face challenges in diagnosing at the clinicians' level, owing to the limited interpretability of ML algorithms and the difficulty of integrating multiple kinds of clinical data. AI technology would be welcomed by doctors and patients if the "black box" notion were overcome by developing an explainable, transparent AI system that uses the same pathological markers clinicians use as signs of early detection and progression of glaucomatous damage.
Therefore, this thesis aimed to develop a comprehensive AI model to detect and stage glaucoma by incorporating a variety of clinical data and utilising advanced data analysis and machine learning (ML) techniques.
The research first focuses on optimising glaucoma diagnostic features by combining structural, functional, demographic, risk-factor, and optical coherence tomography (OCT) features. The significant features were evaluated using statistical analysis and used to train ML algorithms to observe detection performance. Three crucial structural optic nerve head (ONH) OCT features — cross-sectional 2D radial B-scans, 3D vascular angiography and temporal-superior-nasal-inferior-temporal (TSNIT) B-scans — were analysed and used to train explainable deep learning (DL) models for automated glaucoma prediction. The reasoning behind the DL models' decisions was successfully demonstrated using feature visualisation: the structural features, or distinctly affected regions, of TSNIT OCT scans were precisely localised for glaucoma patients. This is consistent with explainable DL, the idea of making the decision-making processes of DL models transparent and interpretable to humans. However, artifacts and speckle noise often lead to misinterpretation of TSNIT OCT scans, so this research also developed an automated DL model to remove artifacts and noise from the OCT scans, facilitating error-free retinal layer segmentation, accurate tissue-thickness estimation and image interpretation.
Moreover, to monitor and grade glaucoma severity, clinicians commonly rely on the visual field (VF) test for treatment and management. This research therefore uses functional features extracted from VF images to train ML algorithms for staging glaucoma from early to advanced/severe stages.
Finally, the selected significant features were used to design and develop a comprehensive AI model to detect and grade glaucoma stages based on data quantity and availability. In the first stage, a DL model was trained with TSNIT OCT scans, and its output was combined with significant structural and functional features and used to train ML models. The best-performing ML model achieved an area under the curve (AUC) of 0.98, an accuracy of 97.2%, a sensitivity of 97.9%, and a specificity of 96.4% for detecting glaucoma. The model achieved an overall accuracy of 90.7% and an F1 score of 84.0% for classifying normal, early, moderate, and advanced-stage glaucoma.
In conclusion, this thesis developed and proposed a comprehensive, evidence-based AI model that will address the screening problem for large populations and relieve experts from manually analysing large volumes of patient data, with its associated risk of misinterpretation. Moreover, this thesis demonstrated three structural OCT features that could serve as excellent diagnostic markers for precise glaucoma diagnosis.
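The detection figures quoted above (accuracy, sensitivity, specificity) all derive from the binary confusion matrix. A small self-contained sketch with hypothetical predictions:

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on the glaucoma class) and
    specificity (recall on the normal class) from binary labels,
    with 1 = glaucoma and 0 = normal."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical predictions on eight eyes: one false negative, one
# false positive.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
acc, sens, spec = confusion_metrics(y_true, y_pred)
```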
Automatic identification of peripheral nerves using machine learning techniques and shape and appearance models
This work presents a tool for the automatic segmentation of nerve structures in ultrasound images, intended to assist anesthesiologists in Peripheral Nerve Blocking (PNB) procedures. The main idea of this work is to automate a shape and appearance model that otherwise requires initialization by an expert. This automation is carried out by a classification model based on support vector machines (SVMs), which automatically defines a region of interest (ROI) containing a nerve structure. This ROI is then used to initialize the aforementioned shape and appearance model. The proposed methodology is tested on a database of ultrasound images of the ulnar and median nerves. The results confirm that the proposed methodology makes it possible to automatically identify nerve structures in ultrasound images
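The automation step described above replaces the expert's initialization with a classifier that scores candidate windows and keeps the best one as the ROI. A toy sketch of that sliding-window scheme, with a hand-picked linear decision function standing in for the trained SVM (all values hypothetical):

```python
def roi_score(patch, weights, bias):
    """Linear decision function f(x) = w.x + b, of the form produced
    by a trained linear SVM, applied to one flattened image patch."""
    return sum(w * x for w, x in zip(weights, patch)) + bias

def best_roi(image, size, weights, bias):
    """Slide a size-by-size window over a 2-D image and return the
    top-left corner of the highest-scoring patch: the candidate ROI."""
    h, w = len(image), len(image[0])
    best, best_pos = float("-inf"), None
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            patch = [image[i + di][j + dj]
                     for di in range(size) for dj in range(size)]
            s = roi_score(patch, weights, bias)
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos

# Toy example: a detector whose weights simply favour bright patches
# finds the bright 2x2 block in a dark image.
image = [[0, 0, 0, 0],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 0, 0]]
pos = best_roi(image, 2, weights=[1, 1, 1, 1], bias=0.0)  # (1, 2)
```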
Improving statistical power of glaucoma clinical trials using an ensemble of cyclical generative adversarial networks
Although spectral-domain OCT (SDOCT) is now in clinical use for glaucoma management, published clinical trials relied on time-domain OCT (TDOCT), which is characterized by a low signal-to-noise ratio, leading to low statistical power. Such trials therefore require large numbers of patients observed over long intervals and become more costly. We propose a probabilistic ensemble model and a cycle-consistent perceptual loss for improving the statistical power of trials utilizing TDOCT. TDOCT images are converted to synthesized SDOCT images and segmented via Bayesian fusion of an ensemble of GANs; the final retinal nerve fibre layer segmentation is obtained automatically on an averaged synthesized image using label fusion. We benchmark different networks: (i) GAN, (ii) Wasserstein GAN (WGAN), (iii) GAN + perceptual loss and (iv) WGAN + perceptual loss. An independent dataset is used for training and validation, while testing is performed on the UK Glaucoma Treatment Study (UKGTS), a TDOCT-based trial. We quantify the statistical power of the measurements obtained with our method and compare it with that derived from the original TDOCT. The results provide new insights into the UKGTS, showing a significantly better separation between treatment arms while improving the statistical power of TDOCT to a level on par with visual field measurements
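The label-fusion step can be illustrated as a pixel-wise vote across the ensemble's binary segmentations: average the votes and threshold the result. A minimal sketch with hypothetical masks, in which a uniform majority vote stands in for the paper's Bayesian fusion:

```python
def fuse_labels(segmentations, threshold=0.5):
    """Pixel-wise label fusion across an ensemble of binary
    segmentations: average the votes at each pixel and threshold
    (a plain majority vote when threshold=0.5)."""
    n = len(segmentations)
    fused = []
    for pixels in zip(*segmentations):
        fused.append(1 if sum(pixels) / n > threshold else 0)
    return fused

# Hypothetical: three ensemble members' RNFL masks over six pixels.
masks = [
    [1, 1, 0, 0, 1, 0],
    [1, 0, 0, 1, 1, 0],
    [1, 1, 0, 0, 1, 1],
]
consensus = fuse_labels(masks)  # pixels kept by at least two members
```

Raising the threshold makes the fused segmentation more conservative, keeping only pixels on which most ensemble members agree.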