
    MEDFAIR: Benchmarking Fairness for Medical Imaging

    A substantial body of work has shown that machine learning-based medical diagnosis systems can be biased against certain subgroups of people. This has motivated a growing number of bias mitigation algorithms that aim to address fairness issues in machine learning. However, it is difficult to compare their effectiveness in medical imaging for two reasons. First, there is little consensus on the criteria for assessing fairness. Second, existing bias mitigation algorithms are developed under different settings, e.g., datasets, model selection strategies, backbones, and fairness metrics, making a direct comparison and evaluation based on existing results impossible. In this work, we introduce MEDFAIR, a framework to benchmark the fairness of machine learning models for medical imaging. MEDFAIR covers eleven algorithms from various categories, nine datasets from different imaging modalities, and three model selection criteria. Through extensive experiments, we find that the under-studied issue of model selection criterion can have a significant impact on fairness outcomes, whereas state-of-the-art bias mitigation algorithms do not significantly improve fairness outcomes over empirical risk minimization (ERM) in either the in-distribution or the out-of-distribution setting. We evaluate fairness from various perspectives and make recommendations for different medical application scenarios that require different ethical principles. Our framework provides a reproducible and easy-to-use entry point for the development and evaluation of future bias mitigation algorithms in deep learning. Code is available at https://github.com/ys-zong/MEDFAIR
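    MEDFAIR's concrete metric implementations live in the linked repository; as an illustration only, a minimal numpy sketch of one widely used group-fairness criterion (the equal-opportunity gap, i.e. the worst-case spread in subgroup true-positive rates; the function name and data here are this sketch's own assumptions, not MEDFAIR's API) might look like:

```python
import numpy as np

def subgroup_gap(y_true, y_pred, groups):
    """Worst-case gap in true-positive rate between demographic
    subgroups (equal-opportunity style); assumes every subgroup
    contains at least one positive sample."""
    rates = []
    for g in np.unique(groups):
        mask = (groups == g) & (y_true == 1)
        rates.append(y_pred[mask].mean())  # TPR within subgroup g
    return max(rates) - min(rates)

# Toy example: two subgroups with different true-positive rates.
y_true = np.array([1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1])
groups = np.array([0, 0, 1, 1, 0, 1])
gap = subgroup_gap(y_true, y_pred, groups)  # group 0: TPR 0.5, group 1: TPR 1.0
```

A gap of zero would mean both subgroups receive correct positive predictions at the same rate; larger values flag a disparity a mitigation algorithm should reduce.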

    A Deep Segmentation Network of Stent Structs Based on IoT for Interventional Cardiovascular Diagnosis

    The Internet of Things (IoT) has been widely introduced into existing medical systems, and eHealth systems based on IoT devices have gained widespread popularity. In this article, we propose an IoT eHealth framework that provides an autonomous solution for patients with interventional cardiovascular diseases. In this framework, wearable sensors collect a patient's health data, which a remote doctor monitors daily. When the monitored data are abnormal, the remote doctor requests image acquisition of the patient's internal cardiovascular condition. We leverage edge computing to classify these training images with a local base classifier; pseudo-labels are then generated from its output. Moreover, a deep segmentation network is leveraged to segment stent struts in intravascular optical coherence tomography and intravascular ultrasound images of patients. The experimental results demonstrate that remote and local doctors can perform real-time visual communication to complete telesurgery. In the experiments, we adopt a U-Net backbone with a pretrained SeResNet34 encoder to segment the stent struts, and a series of comparative experiments demonstrates the effectiveness of our method in terms of accuracy, sensitivity, Jaccard index, and Dice coefficient.
    This work was supported by the National Key Research and Development Program of China (Grant no. 2020YFB1313703), the National Natural Science Foundation of China (Grant no. 62002304), and the Natural Science Foundation of Fujian Province of China (Grant no. 2020J05002).
    Huang, C.; Zong, Y.; Chen, J.; Liu, W.; Lloret, J.; Mukherjee, M. (2021). A Deep Segmentation Network of Stent Structs Based on IoT for Interventional Cardiovascular Diagnosis. IEEE Wireless Communications. 28(3):36-43. https://doi.org/10.1109/MWC.001.2000407
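    The framework's pseudo-labeling step is described only at a high level; a hedged numpy sketch of one common confidence-based scheme (the 0.9 threshold, the -1 "ignore" marker, and the function name are illustrative assumptions of this sketch, not taken from the paper):

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9):
    """Keep only confident base-classifier predictions as pseudo-labels;
    samples below the confidence threshold are marked -1 and would be
    excluded from segmentation-network training."""
    labels = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold
    labels[~confident] = -1
    return labels

# Softmax outputs of a local base classifier for three images.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.08, 0.92]])
labels = pseudo_labels(probs)  # only rows 0 and 2 are confident enough
```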

    A new deep learning approach for the retinal hard exudates detection based on superpixel multi-feature extraction and patch-based CNN

    Diabetic Retinopathy (DR) is a severe complication of chronic diabetes which causes significant visual deterioration and, when treatment is delayed, may lead to blindness. Exudative diabetic maculopathy, a form of macular edema in which hard exudates (HE) develop, is a frequent cause of visual deterioration in DR, so HE detection plays a significant role in DR diagnosis. In this paper, an automatic exudate detection method based on superpixel multi-feature extraction and a patch-based deep convolutional neural network is proposed. First, candidate superpixels are generated on each resized image using the superpixel segmentation algorithm Simple Linear Iterative Clustering (SLIC). Then, 25 features are extracted from the resized images, and patches are generated for each feature. These patches are used to train a deep convolutional neural network that distinguishes hard exudates from the background. Experiments conducted on three publicly available datasets (DiaretDB1, e-ophtha EX and IDRiD) demonstrate that the proposed methodology achieves superior HE detection compared with current state-of-the-art algorithms
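    SLIC itself is available off the shelf (e.g. `skimage.segmentation.slic`), so this illustration only sketches the subsequent step of cropping a fixed-size patch around a candidate superpixel centroid; the patch size and reflect padding are assumptions of this sketch, not parameters from the paper:

```python
import numpy as np

def extract_patch(image, center, size=5):
    """Crop a size x size patch centred on a candidate superpixel
    centroid, reflect-padding at the borders so every patch has the
    same shape regardless of where the candidate lies."""
    r = size // 2
    padded = np.pad(image, r, mode="reflect")
    y, x = center  # centroid in original-image coordinates
    return padded[y:y + size, x:x + size]

# Toy grayscale image; a corner candidate still yields a full patch.
img = np.arange(64, dtype=float).reshape(8, 8)
patch = extract_patch(img, (0, 0), size=3)
```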

    An Early Diagnosis of Oral Cancer based on Three-Dimensional Convolutional Neural Networks

    Three-dimensional convolutional neural networks (3DCNNs), a rapidly evolving deep learning modality, have gained popularity in many fields. For oral cancers, CT images are traditionally processed as two-dimensional input, without considering information shared between lesion slices. In this paper, we established a 3DCNN-based image processing algorithm for the early diagnosis of oral cancers and compared it with a 2DCNN-based algorithm. The 3D and 2D CNNs were constructed with the same hierarchical structure to classify oral tumors as benign or malignant. Our results showed that 3DCNNs using the dynamic characteristics of the enhancement-rate image performed better than 2DCNNs using a single enhancement sequence for the discrimination of oral cancer lesions. Our data indicate that the spatial features and spatial dynamics extracted by 3DCNNs may inform the future design of CT-assisted diagnosis systems
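    The difference between slice-wise 2D processing and volumetric 3D processing shows up directly in the convolution shape arithmetic: a 3D kernel adds a depth axis so inter-slice context enters the features. A plain-Python illustration (the 512x512 slice size, 32-slice depth, and 3-wide kernel are illustrative assumptions, not the paper's actual configuration):

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Standard convolution output-size formula, applied per axis."""
    return (size + 2 * padding - kernel) // stride + 1

# A 2D CNN sees each CT slice alone: only height and width are convolved.
h = conv_out(512, kernel=3, padding=1)  # 512
w = conv_out(512, kernel=3, padding=1)  # 512

# A 3D CNN convolves across slices too, so features mix information
# from neighbouring slices of the lesion volume.
d = conv_out(32, kernel=3, padding=1)   # 32
```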

    Moving Window Differential Evolution Independent Component Analysis-Based Operational Modal Analysis for Slow Linear Time-Varying Structures

    To identify transient time-varying modal parameters solely from nonstationary vibration response measurements of weakly damped, slow linear time-varying (SLTV) structures, a moving window differential evolution (DE) independent component analysis (ICA) based operational modal analysis (OMA) method is proposed in this paper. First, to overcome the problems of traditional ICA-based OMA, such as a tendency to fall into local optima and difficulty in identifying high-order modal parameters, we combine DE with ICA and propose a differential evolution ICA (DEICA) based OMA method for linear time-invariant (LTI) structures. Second, we combine the moving window technique with DEICA and propose a moving window DEICA (MWDEICA) based OMA method for SLTV structures, which offers strong global search ability and robustness at favorable time and space complexity. Modal identification results on a three-degree-of-freedom structure with slowly time-varying mass show that the MWDEICA-based OMA method can effectively identify transient time-varying modal parameters from nonstationary vibration response measurements alone, with better performance than moving window traditional ICA-based OMA
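    The moving-window step can be illustrated independently of the DE-optimised ICA stage: the response record is cut into overlapping windows, and within each window the slowly time-varying structure is treated as approximately time-invariant. A numpy sketch (window and step sizes are illustrative assumptions):

```python
import numpy as np

def moving_windows(signal, win, step):
    """Split a multichannel response record (channels x samples) into
    overlapping windows; each window would be passed to an LTI
    identification stage such as DEICA."""
    starts = range(0, signal.shape[1] - win + 1, step)
    return [signal[:, s:s + win] for s in starts]

# 3 response channels, 1000 samples of simulated nonstationary response.
x = np.random.randn(3, 1000)
wins = moving_windows(x, win=200, step=100)  # 50% overlap between windows
```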

    Automatic detection approach for bioresorbable vascular scaffolds using a u-shaped convolutional neural network

    Artificial stent implantation is one of the most effective treatments for vascular diseases. However, commonly used metal stents have negative effects, such as being difficult to remove and recover, whereas bioresorbable stents have become a preferred treatment for vascular diseases because they are absorbable and harmless. In vascular medical imaging, such as optical coherence tomography (OCT), it is very important to be able to track the position of stents in blood vessels effectively; relying on experts to identify scaffolds in medical images is labor-intensive and inefficient. In this paper, a novel automatic detection method for bioresorbable vascular scaffolds (BVSs) based on a U-shaped convolutional neural network is developed. The method comprises three steps: data preparation, network training, and network testing. First, in the data preparation step, related samples are labeled based on expert experience, and the labeled OCT images are divided into original and masked OCT images (corresponding to X and Y in supervised learning, respectively). Next, we train a U-shaped convolutional neural network consisting of five downsampling modules and four upsampling modules, obtaining a trained model that can predict related samples. In the testing stage, the trained model predicts the input OCT data, yielding the relevant information about a BVS in an OCT image. This method can thus assist doctors in diagnosing disease and in making important decisions. Finally, experiments are performed to validate the proposed method, using the IoU criterion to measure its performance. The results show that the proposed method is feasible and superior.
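    The abstract names IoU as its evaluation criterion; for reference, a minimal numpy version of IoU on binary scaffold masks (the toy masks below are this sketch's own):

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary masks: the overlap area
    divided by the combined area; 1.0 for a perfect match."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Toy predicted and ground-truth scaffold masks:
pred   = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
# intersection = 2 pixels, union = 4 pixels -> IoU = 0.5
```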

    Magnetic Resonance Image Denoising Algorithm Based on Cartoon, Texture, and Residual Parts

    Magnetic resonance (MR) images are often contaminated by Gaussian noise, an electronic noise caused by the random thermal motion of electronic components, which reduces image quality and reliability. This paper puts forward a hybrid denoising algorithm for MR images based on two sparsely represented morphological components and one residual part. First, a noisy MR image is decomposed into cartoon, texture, and residual parts by morphological component analysis (MCA); each part is then denoised using a Wiener filter, wavelet hard thresholding, and wavelet soft thresholding, respectively. Finally, the denoised subimages are recombined to obtain the denoised MR image. The experimental results show that the proposed method performs significantly better in terms of mean square error and peak signal-to-noise ratio than each of the individual methods alone
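    The two wavelet shrinkage rules named above have standard closed forms; standalone numpy versions are sketched below (the MCA decomposition and the wavelet transform itself, e.g. via PyWavelets, are omitted, and the example coefficients are this sketch's own):

```python
import numpy as np

def hard_threshold(coeffs, t):
    """Hard thresholding: zero coefficients with magnitude <= t,
    keep the rest unchanged."""
    return np.where(np.abs(coeffs) > t, coeffs, 0.0)

def soft_threshold(coeffs, t):
    """Soft thresholding: zero small coefficients and shrink the
    remaining ones toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# Toy wavelet coefficients of a noisy component:
c = np.array([-3.0, -0.5, 0.2, 2.0])
hard = hard_threshold(c, 1.0)  # [-3.0, 0.0, 0.0, 2.0]
soft = soft_threshold(c, 1.0)  # [-2.0, 0.0, 0.0, 1.0]
```

Hard thresholding preserves the amplitude of strong coefficients (suiting the texture part), while soft thresholding yields smoother reconstructions (suiting gentler shrinkage), which is consistent with applying different rules to different morphological components.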