3,895 research outputs found

    Functional respiratory imaging : opening the black box

    In respiratory medicine, several quantitative measurement tools exist that assist clinicians in their diagnosis. The main issue with these traditional techniques is that they lack the sensitivity to detect changes and that the variation between measurements is very high. As a result, the development of respiratory drugs is the most expensive of all drug development. This limits innovation, resulting in an unmet need for sensitive, quantifiable outcome parameters in pharmacological development and clinical respiratory practice. In this thesis, functional respiratory imaging (FRI) is proposed as a tool to tackle these issues. FRI is a workflow in which patient-specific medical images are combined with computational fluid dynamics to give patient-specific local information on anatomy and functionality in the respiratory system. A robust, high-throughput automation system is designed in order to obtain a workflow that is high quality, consistent, and fast. This makes it possible to apply the technology to large datasets, as typically seen in clinical trials. FRI is performed on 486 unique geometries of patients with various pathologies, such as asthma, chronic obstructive lung disease, sleep apnea, and cystic fibrosis. This thesis shows that FRI can add value in multiple research domains. The high sensitivity and specificity of FRI make it well suited as a tool for making decisions early in the development process of a device or drug. Furthermore, FRI also appears to be an interesting technology for gaining better insight into rare diseases and may prove useful in personalized medicine.

    IARS SegNet: Interpretable Attention Residual Skip connection SegNet for melanoma segmentation

    Skin lesion segmentation plays a crucial role in the computer-aided diagnosis of melanoma. Deep learning models have shown promise in accurately segmenting skin lesions, but their widespread adoption in real-life clinical settings is hindered by their inherent black-box nature. In domains as critical as healthcare, interpretability is not merely a feature but a fundamental requirement for model adoption. This paper proposes IARS SegNet, an advanced segmentation framework built upon the SegNet baseline model. Our approach incorporates three critical components added to the baseline SegNet architecture: skip connections, residual convolutions, and a global attention mechanism. These elements play a pivotal role in accentuating the significance of clinically relevant regions, particularly the contours of skin lesions. The inclusion of skip connections enhances the model's capacity to learn intricate contour details, while the use of residual convolutions allows a deeper model to be constructed while preserving essential image features. The global attention mechanism further contributes by extracting refined feature maps from each convolutional and deconvolutional block, thereby elevating the model's interpretability. This enhancement highlights critical regions, fosters better understanding, and leads to more accurate skin lesion segmentation for melanoma diagnosis. Comment: Submitted to the journal Computers in Biology and Medicine.
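    The architectural ingredients named above (residual convolutions and a gated skip connection) can be sketched compactly. The following is a minimal, illustrative PyTorch-style sketch, not the authors' IARS SegNet code; the module names, channel counts, and the simple sigmoid attention gate are assumptions made for the example.

    import torch
    import torch.nn as nn

    class ResidualConvBlock(nn.Module):
        """Two 3x3 convolutions with an identity shortcut."""
        def __init__(self, channels: int):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
            )
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.body(x) + x)  # residual shortcut preserves low-level detail

    class AttentionGate(nn.Module):
        """Re-weights encoder (skip) features with a learned spatial mask."""
        def __init__(self, channels: int):
            super().__init__()
            self.mask = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

        def forward(self, skip):
            return skip * self.mask(skip)  # emphasizes clinically relevant regions such as contours

    if __name__ == "__main__":
        x = torch.randn(1, 64, 128, 128)
        gated = AttentionGate(64)(ResidualConvBlock(64)(x))
        print(gated.shape)  # torch.Size([1, 64, 128, 128])

    In a SegNet-like encoder-decoder, the gated encoder features would be passed through the skip connection to the matching decoder stage, and the learned spatial masks can be inspected as interpretability maps.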

    Explainable artificial intelligence toward usable and trustworthy computer-aided diagnosis of multiple sclerosis from Optical Coherence Tomography

    Background: Several studies indicate that the anterior visual pathway provides information about the dynamics of axonal degeneration in Multiple Sclerosis (MS). Current research in the field focuses on the quest for the most discriminative features among patients and controls and on the development of machine learning models that yield computer-aided solutions widely usable in clinical practice. However, most studies are conducted with small samples and the models are used as black boxes. Clinicians should not trust machine learning decisions unless they come with comprehensive and easily understandable explanations. Materials and methods: A total of 216 eyes from 111 healthy controls and 100 eyes from 59 patients with relapsing-remitting MS were enrolled. The feature set was obtained from the thickness of the ganglion cell layer (GCL) and the retinal nerve fiber layer (RNFL). Measurements were acquired with the novel Posterior Pole protocol of the Spectralis Optical Coherence Tomography (OCT) device. We compared two black-box methods (gradient boosting and random forests) with a glass-box method (explainable boosting machine). Explainability was studied using SHAP for the black-box methods and the scores of the glass-box method. Results: The best-performing models were obtained for the GCL. Explainability pointed to the temporal region of the GCL, which is usually disrupted or thinned in MS, and to the relationship between low thickness values and a high probability of MS, which is coherent with clinical knowledge. Conclusions: The insights on how to use explainability shown in this work represent a first important step toward a trustworthy computer-aided solution for the diagnosis of MS with OCT.
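    As a rough illustration of the glass-box versus black-box comparison described above, the sketch below trains an explainable boosting machine next to a gradient boosting model explained post hoc with SHAP. It is a minimal example under stated assumptions: the feature matrix X (standing in for sector-wise GCL/RNFL thicknesses) and the labels y are random placeholders, not the study data, and the package choices (interpret, shap, scikit-learn) are common open-source candidates rather than the authors' exact tooling.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from interpret.glassbox import ExplainableBoostingClassifier
    import shap

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))    # placeholder thickness features
    y = rng.integers(0, 2, size=200)  # placeholder MS / control labels

    # Glass-box model: per-feature shape functions are directly inspectable.
    ebm = ExplainableBoostingClassifier().fit(X, y)
    ebm_global = ebm.explain_global()            # feature-wise contribution curves

    # Black-box model: explained post hoc with SHAP attributions.
    gbt = GradientBoostingClassifier().fit(X, y)
    shap_values = shap.TreeExplainer(gbt).shap_values(X)  # per-sample, per-feature attributions
    print(shap_values.shape)                     # (200, 10)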

    Overview: Computer vision and machine learning for microstructural characterization and analysis

    The characterization and analysis of microstructure is the foundation of microstructural science, connecting a material's structure to its composition, process history, and properties. Microstructural quantification traditionally involves a human deciding a priori what to measure and then devising a purpose-built method for doing so. However, recent advances in data science, including computer vision (CV) and machine learning (ML), offer new approaches to extracting information from microstructural images. This overview surveys CV approaches to numerically encode the visual information contained in a microstructural image, which then provides input to supervised or unsupervised ML algorithms that find associations and trends in the high-dimensional image representation. CV/ML systems for microstructural characterization and analysis span the taxonomy of image analysis tasks, including image classification, semantic segmentation, object detection, and instance segmentation. These tools enable new approaches to microstructural analysis, including the development of new, rich visual metrics and the discovery of processing-microstructure-property relationships. Comment: Submitted to Materials and Metallurgical Transactions.
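    The encode-then-learn pattern described in the overview can be illustrated with a short sketch: each micrograph is mapped to a fixed-length visual descriptor, and the descriptors feed an unsupervised learner. This is only an assumed, minimal example; HOG features stand in for the richer CNN representations discussed in the overview, and the images, cluster count, and dimensionality are placeholders.

    import numpy as np
    from skimage.feature import hog
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def encode(image: np.ndarray) -> np.ndarray:
        """Numerically encode a grayscale micrograph as a feature vector."""
        return hog(image, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

    # Placeholder micrographs; in practice these would be loaded from image files.
    rng = np.random.default_rng(0)
    micrographs = [rng.random((128, 128)) for _ in range(20)]

    features = np.stack([encode(im) for im in micrographs])
    embedded = PCA(n_components=5).fit_transform(features)  # reduce the high-dimensional representation
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(embedded)
    print(labels)  # unsupervised grouping of microstructure types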

    Automated classification of retinopathy of prematurity in newborns

    Retinopathy of Prematurity (ROP) is a disease of preterm babies in which the retinal vessels are underdeveloped. Early diagnosis of the disease is challenging and requires highly trained professionals with very specific knowledge. Currently, only a few hospitals in Spain have departments specialized in this pathology and are therefore able to diagnose and treat it accordingly. This master's project aims to develop a first, preliminary tool for classifying the extent of the disease. The tool has been built to be integrated into a diagnostic support platform that detects the presence of retinopathy and evaluates the disease, providing detailed information about each analyzed image. The project also lays the groundwork for comparing the clinical approach used by physicians with the "black-box" approach of an artificial neural network for predicting the extent of the disease. The developed algorithm is able to: segment the ocular vessels using a U-Net convolutional neural network; extract the features representative of the disease from the segmentation; and classify those features into ROP and ROP Plus cases using a range of classifiers. The main features analyzed, selected because expert specialists use them as indicators of the disease, are vessel tortuosity and thickness. The segmentation network achieved a global accuracy of 96.15%. The results of the different classifiers indicate a trade-off between accuracy and the volume of images analyzed: an accuracy of 100% was achieved with a double-threshold classifier on 12.5% of the images, whereas a decision-tree classifier achieved an accuracy of 70.8% when classifying 100% of the images.
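    The post-segmentation steps described above (feature extraction and a double-threshold rule that trades coverage for accuracy) can be sketched as follows. This is an illustrative sketch only: the thickness and tortuosity computations are simplified proxies, and the threshold values are invented placeholders rather than the project's settings.

    import numpy as np
    from scipy.ndimage import distance_transform_edt
    from skimage.morphology import skeletonize

    def vessel_features(mask: np.ndarray) -> tuple[float, float]:
        """Mean vessel thickness and a crude tortuosity proxy from a binary vessel mask."""
        skeleton = skeletonize(mask)
        # Thickness: twice the distance to background, sampled along the centerline.
        thickness = 2.0 * distance_transform_edt(mask)[skeleton].mean()
        # Tortuosity proxy: skeleton length relative to the bounding-box diagonal.
        ys, xs = np.nonzero(skeleton)
        chord = np.hypot(ys.max() - ys.min(), xs.max() - xs.min())
        tortuosity = skeleton.sum() / max(chord, 1.0)
        return thickness, tortuosity

    def double_threshold(thickness, tortuosity, t_lo=2.0, t_hi=4.0, k_lo=1.1, k_hi=1.5):
        """Return 'ROP Plus', 'ROP', or None (defer) when the features are ambiguous."""
        if thickness >= t_hi and tortuosity >= k_hi:
            return "ROP Plus"
        if thickness <= t_lo and tortuosity <= k_lo:
            return "ROP"
        return None  # left unclassified, hence the accuracy/coverage trade-off

    Deferring ambiguous images is what produces the reported trade-off: very high accuracy on the subset of images the double-threshold rule accepts, versus lower accuracy when every image must receive a label.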

    Visualization system requirements for data processing pipeline design and optimization

    The rising quantity and complexity of data create a need to design and optimize data processing pipelines – the set of data processing steps, parameters and algorithms that perform operations on the data. Visualization can support this process but, although there are many examples of systems for visual parameter analysis, there remains a need to systematically assess users’ requirements and match those requirements to exemplar visualization methods. This article presents a new characterization of the requirements for pipeline design and optimization. This characterization is based on both a review of the literature and first-hand assessment of eight application case studies. We also match these requirements with exemplar functionality provided by existing visualization tools. Thus, we provide end-users and visualization developers with a way of identifying functionality that addresses data processing problems in an application. We also identify seven future challenges for visualization research that are not met by the capabilities of today’s systems.

    CAD-Based Porous Scaffold Design of Intervertebral Discs in Tissue Engineering

    With the development and maturity of three-dimensional (3D) printing technology over the past decade, 3D printing has been widely investigated and applied in the field of tissue engineering to repair damaged tissues or organs, such as muscles, skin, and bones. Although a number of automated fabrication methods have been developed to create superior bio-scaffolds with specific surface properties and porosity, the major challenges still focus on how to fabricate 3D natural biodegradable scaffolds that have tailored properties, such as intricate architecture, porosity, and interconnectivity, in order to provide the needed structural integrity, strength, transport, and an ideal microenvironment for cell and tissue growth. In this dissertation, a robust pipeline for fabricating bio-functional porous scaffolds of intervertebral discs based on different innovative porous design methodologies is illustrated. Firstly, a triply periodic minimal surface (TPMS) based parameterization method, which overcomes the integrity problem of the traditional TPMS method, is presented in Chapter 3. Then, an implicit surface modeling (ISM) approach using tetrahedral implicit surfaces (TIS) is demonstrated and compared with the TPMS method in Chapter 4. In Chapter 5, we present an advanced porous design method with higher flexibility using anisotropic radial basis functions (ARBF) and volumetric meshes. Based on all these advanced porous design methods, the 3D model of a bio-functional porous intervertebral disc scaffold can be easily designed, and its physical model can be manufactured through 3D printing. However, due to the unique shape of each intervertebral disc and the intricate topological relationship between the intervertebral discs and the spine, the accurate localization and segmentation of dysfunctional discs remain another obstacle to fabricating porous 3D disc models. To that end, Chapter 6 discusses a technique for segmenting intervertebral discs from CT-scanned medical images using deep convolutional neural networks. Additionally, examples of applying the different porous designs to the segmented intervertebral disc models are demonstrated in Chapter 6.
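    As a small illustration of the TPMS idea mentioned above, the sketch below samples a gyroid implicit function on a grid, thickens the level set into a porous solid, and extracts the surface with marching cubes. It is a generic, assumed example: the unit-cell count, grid resolution, and wall-thickness level are placeholders, and the dissertation's specific TPMS parameterization, TIS, and ARBF methods are not reproduced here.

    import numpy as np
    from skimage.measure import marching_cubes

    n_cells, resolution = 3, 96               # unit cells per axis, samples per axis
    t = np.linspace(0, 2 * np.pi * n_cells, resolution)
    x, y, z = np.meshgrid(t, t, t, indexing="ij")

    # Gyroid implicit function: f(x, y, z) = 0 defines the minimal surface.
    gyroid = (np.sin(x) * np.cos(y) +
              np.sin(y) * np.cos(z) +
              np.sin(z) * np.cos(x))

    # Offsetting the level set (|f| <= c) thickens the surface into a porous solid;
    # a larger c gives thicker walls and lower porosity.
    c = 0.6
    verts, faces, normals, values = marching_cubes(np.abs(gyroid) - c, level=0.0)
    porosity = float(np.mean(np.abs(gyroid) > c))
    print(f"{len(faces)} triangles, porosity ~ {porosity:.2f}")

    The extracted triangle mesh could then be exported (for example to STL) for 3D printing, which is the role the fabrication pipeline in the dissertation plays at a far more sophisticated level.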

    Opening the black-box of artificial intelligence predictions on clinical decision support systems

    Cardiovascular diseases are the leading cause of death worldwide. Their treatment and prevention rely on electrocardiogram interpretation, which varies with the individual physician: interpretation is intrinsically subjective and hence prone to errors. To assist physicians in making precise and thoughtful decisions, artificial intelligence is being deployed to develop models that can interpret extensive datasets and provide accurate decisions. However, the lack of interpretability of most machine learning models stands as one of the drawbacks of their deployment, particularly in the medical domain. Furthermore, most currently deployed explainable artificial intelligence methods assume independence between features, which means assuming temporal independence when dealing with time series. This inherent characteristic of time series cannot be ignored, as it carries importance for the human decision-making process. This dissertation focuses on the explanation of heartbeat classification using several adaptations of state-of-the-art model-agnostic methods to locally explain time series classification. To address the explanation of time series classifiers, a preliminary conceptual framework is proposed, and the use of the signal's derivative is suggested as a complement that adds temporal dependency between samples. The results were validated on an extensive public dataset in two ways: first, through the 1-D Jaccard index, which compares the subsequences extracted from an interpretable model with those produced by the explanation methods; and second, through the decrease in performance, to evaluate whether the explanation fits the model's behaviour. To assess models with distinct internal logic, the validation was conducted on a more transparent model and a more opaque one, in both binary and multiclass settings. The results show that including the signal's derivative in the explanations to introduce temporal dependency between samples is promising for models with simpler internal logic.
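    Two of the ingredients described above lend themselves to a compact sketch: stacking the signal's derivative onto a heartbeat segment to reintroduce temporal dependency, and a 1-D Jaccard index that compares the time samples highlighted by an explanation with those selected by an interpretable reference. This is a minimal, assumed illustration; the masks and the heartbeat are synthetic placeholders, not the dissertation's data or exact formulation.

    import numpy as np

    def with_derivative(beat: np.ndarray) -> np.ndarray:
        """Stack a heartbeat segment with its first derivative (shape: 2 x n_samples)."""
        return np.stack([beat, np.gradient(beat)])

    def jaccard_1d(explained: np.ndarray, reference: np.ndarray) -> float:
        """Intersection over union of two boolean masks over the time axis."""
        explained, reference = explained.astype(bool), reference.astype(bool)
        union = np.logical_or(explained, reference).sum()
        return float(np.logical_and(explained, reference).sum() / union) if union else 1.0

    beat = np.sin(np.linspace(0, 2 * np.pi, 180))      # synthetic heartbeat segment
    features = with_derivative(beat)                   # shape (2, 180)

    mask_explainer = np.zeros(180, dtype=bool); mask_explainer[60:100] = True
    mask_reference = np.zeros(180, dtype=bool); mask_reference[70:110] = True
    print(jaccard_1d(mask_explainer, mask_reference))  # 0.6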