64 research outputs found

    Graph-Ensemble Learning Model for Multi-label Skin Lesion Classification using Dermoscopy and Clinical Images

    Full text link
    Many skin lesion analysis (SLA) methods have recently focused on multi-modal, multi-label classification for two reasons. First, multi-modal data, i.e., clinical and dermoscopy images, provide complementary information and thus more accurate results than single-modal data. Second, multi-label classification using the seven-point checklist (SPC) criteria as an auxiliary task not only boosts the diagnostic accuracy of melanoma in a deep learning (DL) pipeline but is also more useful to clinicians, since the checklist is part of routine dermatological diagnosis. However, most methods focus only on designing better modules for multi-modal data fusion; few exploit the label correlation between the SPC and the skin disease for performance improvement. This study fills that gap by introducing a Graph Convolution Network (GCN) that encodes the prior co-occurrence between categories as a correlation matrix in the DL model for multi-label classification. Directly applying the GCN, however, degraded performance in our experiments, which we attribute to the GCN's weak generalization given the limited statistical samples of medical data. We tackle this issue with a Graph-Ensemble Learning Model (GELN) that treats the GCN's prediction as information complementary to the fusion model's predictions and adaptively fuses them with a weighted averaging scheme, exploiting the valuable information from the GCN while avoiding its negative influence as much as possible. Experiments on public datasets show that GELN consistently improves classification performance across datasets and achieves state-of-the-art results in SPC and diagnosis classification.
    Comment: Submitted to TNNLS on 1st July 202
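    The weighted averaging scheme described above can be sketched as follows; the weight `alpha` and the function name are illustrative assumptions, not values taken from the paper:

```python
# Hedged sketch of ensemble-by-weighted-averaging: the GCN prediction is
# treated as complementary information and blended into the fusion
# model's prediction. `alpha` is an illustrative weight, not the paper's.

def ensemble_predict(fusion_probs, gcn_probs, alpha=0.8):
    """Weighted average of fusion-model and GCN label probabilities.

    An alpha close to 1 keeps the fusion model dominant, so the GCN
    contributes only as complementary information.
    """
    assert len(fusion_probs) == len(gcn_probs)
    return [alpha * f + (1.0 - alpha) * g
            for f, g in zip(fusion_probs, gcn_probs)]
```

    In practice the paper fuses the two predictions adaptively; a fixed `alpha` is the simplest instance of such a scheme.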

    Equilibrium: Optimization of Ceph Cluster Storage by Size-Aware Shard Balancing

    Full text link
    Worldwide, storage demands and costs are increasing. Because of fault tolerance, storage device heterogeneity, and data-center-specific constraints, optimal storage capacity utilization cannot be achieved with the integrated balancing algorithm of the distributed storage cluster system Ceph. This work presents Equilibrium, a size-aware shard balancing algorithm driven by device utilization. In extensive experiments we demonstrate that our proposed algorithm balances near-optimally on real-world clusters, substantially improving available storage capacity while reducing the amount of data movement needed.
    Comment: source code: https://github.com/TheJJ/ceph-balance
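    A minimal sketch of the size-aware balancing idea: repeatedly move the largest shard from the fullest device to the emptiest one while that narrows the utilization gap. The device model and the greedy rule are illustrative assumptions, not the actual Equilibrium algorithm:

```python
# Illustrative size-aware shard balancer. devices maps a device name to
# (capacity, list of shard sizes). This greedy sketch is NOT the paper's
# Equilibrium algorithm, only a simple instance of size-aware balancing.

def balance(devices, max_moves=100):
    """Move shards from the fullest to the emptiest device while the
    utilization gap shrinks. Returns the (shard, src, dst) moves made."""
    moves = []
    for _ in range(max_moves):
        util = {d: sum(shards) / cap for d, (cap, shards) in devices.items()}
        src = max(util, key=util.get)
        dst = min(util, key=util.get)
        cap_s, shards_s = devices[src]
        cap_d, shards_d = devices[dst]
        if src == dst or not shards_s:
            break
        shard = max(shards_s)  # size-aware: consider the largest shard
        new_src = (sum(shards_s) - shard) / cap_s
        new_dst = (sum(shards_d) + shard) / cap_d
        # only move if it reduces the worst utilization of the pair
        if max(new_src, new_dst) >= max(util[src], util[dst]):
            break
        shards_s.remove(shard)
        shards_d.append(shard)
        moves.append((shard, src, dst))
    return moves
```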

    Spherical acquisition trajectories for X-ray computed tomography with a robotic sample holder

    Full text link
    This work presents methods for the seamless execution of arbitrary spherical trajectories with a seven-degree-of-freedom robotic arm used as a sample holder. The sample holder is integrated into an existing X-ray computed tomography setup. We optimized the path-planning and robot-control algorithms for the seamless execution of spherical trajectories. A precision-manufactured part attached to the robotic arm is used for the calibration procedure; different designs of this part are tested and compared for optimal trajectory coverage and reconstruction image quality. We present experimental results in which a sample measured on a spherical trajectory achieves better reconstruction quality than on a conventional circular trajectory. Our results demonstrate the superiority of the proposed system, which outperforms single-axis systems by reaching nearly 82% of all possible rotations. The system is a step towards higher image reconstruction quality in flexible X-ray CT and will enable reduced scan times and radiation dose with task-specific trajectories, as it can capture information from various sample angles.
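    One common way to parameterize a near-uniform set of view points on a sphere is the Fibonacci lattice; whether the paper uses this exact parameterization is an assumption, and the sketch serves only to illustrate what a spherical acquisition trajectory looks like:

```python
import math

# Fibonacci-lattice sampling of view directions on the unit sphere, a
# common (assumed, not paper-specific) way to spread acquisition poses
# nearly uniformly over all possible rotations.

def fibonacci_sphere(n):
    """Return n (x, y, z) unit vectors roughly evenly spread on a sphere."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    points = []
    for i in range(n):
        y = 1.0 - 2.0 * i / (n - 1)            # y runs from +1 to -1
        r = math.sqrt(max(0.0, 1.0 - y * y))   # radius of the y-slice
        theta = golden * i
        points.append((math.cos(theta) * r, y, math.sin(theta) * r))
    return points
```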

    Transformer-based interpretable multi-modal data fusion for skin lesion classification

    Full text link
    Much deep learning (DL) research today focuses on improving quantitative metrics regardless of other factors. In human-centered applications such as skin lesion classification in dermatology, DL-driven clinical decision support systems are still in their infancy due to the limited transparency of their decision-making process. Moreover, the lack of procedures that can explain the behavior of trained DL algorithms leads to almost no trust from clinical physicians. To diagnose skin lesions, dermatologists rely both on visual assessment of the disease and on data gathered from the patient's anamnesis. Data-driven algorithms dealing with multi-modal data are limited by the separation of feature-level and decision-level fusion procedures required by convolutional architectures. To address this issue, we enable single-stage multi-modal data fusion via the attention mechanism of transformer-based architectures to aid the diagnosis of skin diseases. Our method beats other state-of-the-art single- and multi-modal DL architectures in both image-rich and patient-data-rich settings. Additionally, the choice of architecture provides native interpretability for the classification task in both the image and metadata domains, with no additional modifications necessary.
    Comment: Submitted to IEEE TMI in March 202
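    The single-stage fusion idea can be illustrated with a toy self-attention pass over a joint token sequence, where image tokens and metadata tokens attend to each other in one step. The projection matrices are omitted (identity) for brevity, so this is a minimal sketch, not the paper's architecture:

```python
import math

# Toy single-head self-attention over a joint sequence of image and
# metadata tokens: every output token attends to all tokens of both
# modalities, i.e., fusion happens in a single stage. Identity
# projections are an illustrative simplification.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(tokens):
    """tokens: list of equal-length feature vectors (image + metadata)."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, tokens))
                    for j in range(d)])
    return out

# joint sequence: two "image" tokens followed by one "metadata" token
fused = self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```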

    A knee cannot have lung disease: out-of-distribution detection with in-distribution voting using the medical example of chest X-ray classification

    Full text link
    Deep learning models are being applied to ever more use cases with astonishing success stories, but how do they perform in the real world? To test a model, a specific cleaned dataset is assembled. However, when deployed in the real world, the model faces unexpected, out-of-distribution (OOD) data. In this work, we show that the so-called "radiologist-level" CheXnet model fails to recognize OOD images and classifies them as having lung disease. To address this issue, we propose in-distribution voting, a novel method to classify out-of-distribution images for multi-label classification. Using independent class-wise in-distribution (ID) predictors trained on ID and OOD data, we achieve, on average, 99% ID classification specificity and 98% sensitivity, significantly improving end-to-end performance compared to previous works on the ChestX-ray14 dataset. Our method surpasses other output-based OOD detectors even when trained solely with ImageNet as OOD data and tested with X-ray OOD images.
    Comment: Code available at https://gitlab.lrz.de/IP/a-knee-cannot-have-lung-diseas
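    The voting idea can be sketched as follows: each class has its own in-distribution predictor, and an input counts as ID only if enough predictors vote for it. The threshold and the majority rule are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch of in-distribution voting: independent class-wise ID predictors
# each emit a probability that the input is in-distribution, and the
# input is accepted only if enough of them agree. Threshold and voting
# rule are illustrative, not taken from the paper.

def in_distribution_vote(id_scores, threshold=0.5, min_votes=None):
    """id_scores: per-class probabilities that the input is ID.
    Returns True if at least min_votes predictors (default: a strict
    majority) vote in-distribution."""
    if min_votes is None:
        min_votes = len(id_scores) // 2 + 1
    votes = sum(1 for s in id_scores if s >= threshold)
    return votes >= min_votes
```

    An image rejected by the vote would be flagged as OOD instead of being assigned pathology labels.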

    Exploring the Impact of Image Resolution on Chest X-ray Classification Performance

    Full text link
    Deep learning models for image classification have often used a resolution of 224×224 pixels for computational reasons. This study investigates the effect of image resolution on chest X-ray classification performance using the ChestX-ray14 dataset. The results show that a higher image resolution, specifically 1024×1024 pixels, yields the best overall classification performance, with a slight decline in performance between 256×256 and 512×512 pixels for most of the pathological classes. A comparison of bounding boxes generated from saliency maps revealed that commonly used resolutions are insufficient for finding most pathologies.

    WindowNet: Learnable Windows for Chest X-ray Classification

    Full text link
    Chest X-ray (CXR) images are commonly compressed to a lower resolution and bit depth to reduce their size, potentially altering subtle diagnostic features. Radiologists use windowing operations to enhance image contrast, but the impact of such operations on CXR classification performance is unclear. In this study, we show that windowing can improve CXR classification performance, and propose WindowNet, a model that learns optimal window settings. We first investigate the impact of bit depth on classification performance and find that a higher bit depth (12-bit) leads to improved performance. We then evaluate different windowing settings and show that training with a distinct window generally improves pathology-wise classification performance. Finally, we propose and evaluate WindowNet, a model that learns optimal window settings, and show that it significantly improves performance compared to a baseline model without windowing.
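    The windowing operation itself is standard in radiology: clamp pixel intensities to [center - width/2, center + width/2] and rescale the window to [0, 1]. This is the operation whose parameters WindowNet learns; the specific center and width values below are illustrative:

```python
# Standard radiology windowing: values outside the window are clipped,
# values inside are rescaled to [0, 1]. WindowNet learns center/width;
# the fixed values used in the test are illustrative only.

def apply_window(pixels, center, width):
    """Clamp pixel intensities to the window and rescale to [0, 1]."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    return [(min(max(p, lo), hi) - lo) / (hi - lo) for p in pixels]
```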

    Object Specific Trajectory Optimization for Industrial X-ray Computed Tomography

    Get PDF
    In industrial settings, X-ray computed tomography scans are a common tool for the inspection of objects. Often the object cannot be imaged using standard circular or helical trajectories because of constraints in space or time, and compared to medical applications the variance in object size and materials is much larger. Adapting the acquisition trajectory to the object is therefore beneficial and sometimes inevitable, yet there are currently no sophisticated methods for this adaptation: typically the operator places the object to the best of their knowledge. We propose a detectability-index-based optimization algorithm that determines the scan trajectory from a CAD model of the object. The detectability index is computed solely from simulated projections for multiple user-defined features; by adapting the features, the algorithm is tailored to different imaging tasks. Performance on simulated and measured data was assessed qualitatively and quantitatively. The results illustrate that our algorithm not only allows more accurate detection of features but also delivers images of high overall quality compared to standard-trajectory reconstructions. By introducing an optimization algorithm that composes an object-specific trajectory, this work makes it possible to reduce the number of projections and, in consequence, the scan time.
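    A hedged sketch of the optimization idea: from a pool of candidate views, keep the views with the highest detectability scores until the projection budget is reached. The scoring function here is a stand-in; in the paper it is computed from simulated projections of the CAD model for the user-defined features:

```python
# Greedy view selection by detectability index: an illustrative sketch
# of trajectory optimization, not the paper's algorithm. `score` stands
# in for the detectability index computed from simulated projections.

def select_trajectory(candidate_views, score, budget):
    """candidate_views: view descriptors (e.g., angles or poses).
    score: view -> detectability index (higher is better).
    Returns the `budget` best-scoring views as the scan trajectory."""
    ranked = sorted(candidate_views, key=score, reverse=True)
    return ranked[:budget]
```

    A real implementation would also need to respect collision and reachability constraints of the scanner when ranking views.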

    What about computational super-resolution in fluorescence Fourier light field microscopy?

    Get PDF
    Recently, Fourier light field microscopy was proposed to overcome the limitations of conventional light field microscopy by placing a micro-lens array at the aperture stop of the microscope objective instead of at the image plane. In this way, a collection of orthographic views from different perspectives is captured directly. When imaging fluorescent samples, sensor sensitivity and noise are a major concern, and large sensor pixels are required to cope with low-light conditions, which implies under-sampling issues. In this context, we analyze the sampling patterns in Fourier light field microscopy to understand to what extent computational super-resolution can be triggered during deconvolution in order to improve the resolution of the 3D reconstruction of the imaged data.