
    Uncertainty modeling and interpretability in convolutional neural networks for polyp segmentation

    Convolutional Neural Networks (CNNs) are propelling advances in a range of computer vision tasks such as object detection and object segmentation. Their success has motivated research into applications of such models for medical image analysis. If CNN-based models are to be helpful in a medical context, they need to be precise and interpretable, and the uncertainty in their predictions must be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. We evaluate and enhance several architectures of Fully Convolutional Networks (FCNs) for semantic segmentation of colorectal polyps and provide a comparison between these models. Our highest performing model achieves a mean intersection over union (IoU) of 76.06% on the EndoScene dataset, a considerable improvement over the previous state of the art.
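    A common way to obtain such uncertainty estimates is Monte Carlo dropout, where dropout stays active at test time and the spread across stochastic forward passes is read as per-pixel uncertainty. The sketch below illustrates the idea for a generic PyTorch segmentation network; `model` and `image` are placeholders, not the paper's actual architecture or pipeline.

        # Monte Carlo dropout uncertainty for binary segmentation (illustrative sketch).
        import torch

        def mc_dropout_segmentation(model, image, n_samples=20):
            model.eval()
            # Re-enable dropout layers so each forward pass stays stochastic.
            for m in model.modules():
                if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
                    m.train()
            with torch.no_grad():
                # Stack per-pass foreground probabilities: (n_samples, 1, 1, H, W).
                probs = torch.stack([torch.sigmoid(model(image)) for _ in range(n_samples)])
            mean_mask = probs.mean(dim=0)   # averaged prediction
            uncertainty = probs.std(dim=0)  # high std marks uncertain pixels
            return mean_mask, uncertainty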

    Drivers and Barriers for Industry 4.0 Readiness and Practice: A SME Perspective with Empirical Evidence

    Technology is developing rapidly, offering manufacturing companies new possibilities for digital transformation so that they can bring products and services to current and new markets at competitive costs. Such modern technologies are discussed, among others, under the umbrella term Industry 4.0. This paper reports the results of a questionnaire survey of 308 small and medium-sized manufacturers about their readiness for digitalized manufacturing and their actual practice in this area. The paper provides empirical evidence that perceived drivers of Industry 4.0 lead to increased Industry 4.0 readiness, which, in turn, leads to a higher degree of practicing Industry 4.0. The paper also finds that barriers make companies less ready for Industry 4.0, but that this apparently has no significant impact on Industry 4.0 practice. The results are important for companies planning transformations towards digitalized processes.

    Knowledge networks for adoption of additive manufacturing: The role of maturity

    Additive manufacturing (AM) has had a significant impact on manufacturing processes in many industries. The implementation of AM technology, however, involves several knowledge-related challenges, particularly for small and medium-sized enterprises (SMEs). We explore this topic by developing a theoretical model with the hypotheses that acquiring AM knowledge from networks is associated with competitive advantages from AM, and that this relationship can partly be explained by AM maturity. We test our model through a survey of Danish manufacturing SMEs. The findings show that AM knowledge acquisition from networks is positively associated with competitive advantages from AM, with around 40 percent of this relationship explained by higher AM maturity. Furthermore, the findings suggest that different types of knowledge networks have different effects on AM maturity and competitive advantages from AM.
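    The "40 percent explained by maturity" reading is a classic mediation claim: the indirect effect through the mediator accounts for a share of the total effect. A minimal sketch of how such a proportion can be computed, using synthetic data and ordinary least squares rather than the paper's dataset or estimation procedure:

        # Baron & Kenny style mediation sketch: proportion of the knowledge ->
        # advantage relationship carried by the mediator (AM maturity).
        # Synthetic data chosen so the true proportion is roughly 40 percent.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 300
        knowledge = rng.normal(size=n)                    # AM knowledge acquisition
        maturity = 0.6 * knowledge + rng.normal(size=n)   # mediator: AM maturity
        advantage = 0.45 * knowledge + 0.5 * maturity + rng.normal(size=n)

        total = sm.OLS(advantage, sm.add_constant(knowledge)).fit().params[1]
        direct = sm.OLS(advantage,
                        sm.add_constant(np.column_stack([knowledge, maturity]))).fit().params[1]
        print(f"proportion mediated: {(total - direct) / total:.0%}")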

    View it like a radiologist: Shifted windows for deep learning augmentation of CT images

    Deep learning has the potential to revolutionize medical practice by automating and performing important tasks like detecting and delineating the size and locations of cancers in medical images. However, most deep learning models rely on augmentation techniques that treat medical images as natural images. For contrast-enhanced Computed Tomography (CT) images in particular, the signals producing the voxel intensities have physical meaning, which is lost during preprocessing and augmentation when such images are treated as natural images. To address this, we propose a novel preprocessing and intensity augmentation scheme inspired by how radiologists leverage multiple viewing windows when evaluating CT images. Our proposed method, window shifting, randomly places the viewing windows around the region of interest during training. This approach improves liver lesion segmentation performance and robustness on images with a poorly timed contrast agent. Our method outperforms classical intensity augmentations as well as the intensity augmentation pipeline of the popular nnU-Net on multiple datasets.
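    A CT viewing window clips Hounsfield units to a level/width range before rescaling, so shifting that window is a cheap intensity augmentation. The sketch below is a minimal illustration of the idea; the liver window values and shift range are generic radiology conventions chosen for the example, not the paper's exact hyperparameters.

        # Window-shift intensity augmentation for CT volumes (illustrative sketch).
        import numpy as np

        def apply_window(hu, level, width):
            """Clip Hounsfield units to a viewing window and rescale to [0, 1]."""
            lo, hi = level - width / 2, level + width / 2
            return (np.clip(hu, lo, hi) - lo) / (hi - lo)

        def window_shift_augment(hu, base_level=60.0, width=400.0, max_shift=50.0, rng=None):
            """Randomly shift the window level around the region of interest."""
            rng = rng or np.random.default_rng()
            level = base_level + rng.uniform(-max_shift, max_shift)
            return apply_window(hu, level, width)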

    Uncertainty-Aware Deep Ensembles for Reliable and Explainable Predictions of Clinical Time Series

    Deep learning-based support systems have demonstrated encouraging results in numerous clinical applications involving the processing of time series data. While such systems often are very accurate, they have no inherent mechanism for explaining what influenced their predictions, which is critical for clinical tasks. Moreover, existing explainability techniques lack an important component for trustworthy and reliable decision support, namely a notion of uncertainty. In this paper, we address this lack of uncertainty by proposing a deep ensemble approach in which a collection of deep neural networks (DNNs) is trained independently. The class activation mapping method is used to assign a relevance score to each time step in the time series. A measure of uncertainty in the relevance scores is then computed as the standard deviation across the relevance scores produced by the models in the ensemble, which in turn is used to make the explanations more reliable. Results demonstrate that the proposed ensemble is more accurate in locating relevant time steps and is more consistent across random initializations, thus making the model more trustworthy. The proposed methodology paves the way for constructing trustworthy and dependable support systems for processing clinical time series in healthcare-related tasks. Code is available at https://github.com/Wickstrom/TimeSeriesXA
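    To make the mechanism concrete, the toy sketch below computes per-time-step relevance for each ensemble member and uses the across-member standard deviation as an uncertainty estimate. Here `relevance_fn` stands in for a CAM-style attribution method, and the final down-weighting is one simple way to combine the two signals, not the paper's exact procedure.

        # Uncertainty-aware ensemble explanations for time series (toy sketch).
        import numpy as np

        def ensemble_relevance(models, x, relevance_fn):
            # Each row: one member's relevance score per time step, shape (T,).
            scores = np.stack([relevance_fn(m, x) for m in models])
            mean_relevance = scores.mean(axis=0)
            uncertainty = scores.std(axis=0)  # disagreement across the ensemble
            # Illustrative combination: damp time steps the members disagree on.
            reliable = mean_relevance * (1.0 - uncertainty / (uncertainty.max() + 1e-8))
            return mean_relevance, uncertainty, reliable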

    Simulation and Project Management: From Napkin Sketches to SimCity?

    While simulation was initially concentrated on relatively well-delimited phenomena, aimed in particular at the engineering sciences and the technical side of product development, the computer has made it possible to simulate more complex phenomena. As a result, simulation has recently made its entrance as a tool for improving the understanding of complex social systems. One of the latest additions to the family of simulation tools is the project management tool "The Virtual Design Team" (VDT), developed at the Center for Integrated Facility Engineering (CIFE) at Stanford University. The idea behind VDT is that it should be possible to design projects in the same way one designs buildings. VDT simulates the execution of projects based on information about the nature and composition of the task, along with the characteristics of the project participants and the project organization, including the influence of decision-making and coordination processes on the execution of one or more simultaneous projects. This article examines the growing complexity of simulation tools and focuses on the software package "SimVision", the most recently commercialized version of VDT.

    A clinically motivated self-supervised approach for content-based image retrieval of CT liver images

    Deep learning-based approaches for content-based image retrieval (CBIR) of computed tomography (CT) liver images are an active field of research, but they suffer from some critical limitations. First, they are heavily reliant on labeled data, which can be challenging and costly to acquire. Second, they lack transparency and explainability, which limits the trustworthiness of deep CBIR systems. We address these limitations by (1) proposing a self-supervised learning framework that incorporates domain knowledge into the training procedure, and (2) providing the first representation learning explainability analysis in the context of CBIR of CT liver images. Results demonstrate improved performance compared to the standard self-supervised approach across several metrics, as well as improved generalization across datasets, and our explainability analysis reveals new insights into the feature extraction process. Lastly, we perform a case study with cross-examination CBIR that demonstrates the usability of our proposed framework. We believe our proposed framework can play a vital role in creating trustworthy deep CBIR systems that successfully take advantage of unlabeled data.
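    At retrieval time, a CBIR system of this kind reduces to embedding images with the learned encoder and ranking stored items by similarity to a query. A minimal sketch under that assumption, with `encoder` as a placeholder for any trained feature extractor rather than the paper's model:

        # Minimal content-based image retrieval sketch over learned embeddings.
        import numpy as np

        def build_index(encoder, images):
            feats = np.stack([encoder(img) for img in images])
            # L2-normalise so a dot product equals cosine similarity.
            return feats / np.linalg.norm(feats, axis=1, keepdims=True)

        def retrieve(index, encoder, query, k=5):
            q = encoder(query)
            q = q / np.linalg.norm(q)
            sims = index @ q              # cosine similarity to every stored item
            return np.argsort(-sims)[:k]  # indices of the k closest matches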

    The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus

    Explainable AI (XAI) is a rapidly evolving field that aims to improve the transparency and trustworthiness of AI systems to humans. One of the unsolved challenges in XAI is estimating the performance of explanation methods for neural networks, which has resulted in numerous competing metrics with little to no indication of which one is to be preferred. In this paper, to identify the most reliable evaluation method in a given explainability context, we propose MetaQuantus -- a simple yet powerful framework that meta-evaluates two complementary performance characteristics of an evaluation method: its resilience to noise and its reactivity to randomness. We demonstrate the effectiveness of our framework through a series of experiments targeting various open questions in XAI, such as the selection of explanation methods and the optimisation of hyperparameters of a given metric. We release our work under an open-source license to serve as a development tool for XAI researchers and Machine Learning (ML) practitioners to verify and benchmark newly constructed metrics (i.e., "estimators" of explanation quality). With this work, we provide clear and theoretically grounded guidance for building reliable evaluation methods, thus facilitating standardisation and reproducibility in the field of XAI.
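    The two meta-criteria can be pictured with a toy experiment: a trustworthy quality estimator should barely move when the input is nudged by small noise, and should move a lot when the explanation itself is randomised. The sketch below is a toy analogue of that idea, not the MetaQuantus API; `metric`, `model`, and the perturbation choices are all placeholders.

        # Toy meta-evaluation of an explanation-quality metric.
        import numpy as np

        def meta_evaluate(metric, model, x, explanation, noise=1e-3, trials=10, rng=None):
            rng = rng or np.random.default_rng()
            base = metric(model, x, explanation)
            # Resilience: score shift under small input perturbations (want ~0).
            minor = [abs(metric(model, x + rng.normal(0, noise, x.shape), explanation) - base)
                     for _ in range(trials)]
            # Reactivity: score shift for a randomised explanation (want large).
            major = [abs(metric(model, x, rng.permutation(explanation)) - base)
                     for _ in range(trials)]
            return np.mean(minor), np.mean(major)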