Detecting and Mitigating Adversarial Attack
Automating arrhythmia detection from ECG requires a robust, trusted system that retains high accuracy under electrical disturbances. Deep neural networks have become a popular technique for tracing ECG signals, and many approaches have reached human-level performance in classifying arrhythmia from ECGs. However, even convolutional neural networks are susceptible to adversarial examples, which can cause ECG signals to be misclassified, and they do not generalize well to out-of-distribution data. Adversarial attacks are small, crafted perturbations injected into the original data that manifest as out-of-distribution shifts in the signal and drive the model to predict the wrong class. Recent works have employed GAN architectures to synthesize adversarial ECG signals and augment existing training data, but they rely on a disjoint CNN-based classification architecture to detect arrhythmia. To date, no single architecture has been proposed that can detect adversarial examples and classify arrhythmia simultaneously. In this work, we propose two novel conditional generative adversarial networks (GANs), ECG-Adv-GAN and ECG-ATK-GAN, to simultaneously generate ECG signals for different categories and detect cardiac abnormalities. The model is conditioned on class-specific ECG signals to synthesize realistic adversarial examples. Moreover, ECG-ATK-GAN is robust to adversarially attacked ECG signals and retains high accuracy while classifying arrhythmia under various types of adversarial attacks. We benchmark our architecture on six different white- and black-box attacks and compare it with other recently proposed arrhythmia classification models. From the defense perspective, both targeted and non-targeted variants of these attacks determine their perturbations by calculating the gradient.
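As a concrete illustration of how gradient-based attacks "determine the perturbation by calculating the gradient," the sketch below applies the fast gradient sign method (FGSM) to a toy 1-D signal. The `fgsm_perturb` helper and the stand-in gradient are hypothetical illustrations, not code from the ECG-ATK-GAN paper.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.01):
    """Fast Gradient Sign Method: shift each sample point by
    epsilon in the direction that increases the loss."""
    return x + epsilon * np.sign(grad)

# Toy example: a 1-D "ECG" segment and a made-up loss gradient.
signal = np.linspace(-1.0, 1.0, 8)
grad = np.cos(signal)            # stand-in for dLoss/dx
adversarial = fgsm_perturb(signal, grad, epsilon=0.05)

# The perturbation is bounded by epsilon in the L-infinity norm,
# which is what makes it small and hard to spot in the waveform.
print(np.max(np.abs(adversarial - signal)))  # bounded by 0.05
```

In a real attack, `grad` would come from backpropagating the classifier's loss to the input signal; the sign-and-scale step shown here is the part that keeps the perturbation imperceptibly small.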
Novel defenses are continually introduced to improve on existing techniques and fend off each new attack. This back-and-forth game between attack and defense recurs persistently, so it has become important to understand the attacker's patterns and behavior in order to build a robust defense. One widespread tactic is to apply a mathematically grounded model such as game theory. To analyze this situation, we propose a game-theoretic computational framework that examines a CNN classifier's vulnerabilities, strategies, and outcomes by forming a simultaneous two-player game. We represent the interaction as a Stackelberg game in a Kuhn tree to study the players' possible behaviors and actions, using our classifier's actual predicted values on a CAPTCHA dataset. In this way, we interpret potential attacks on deep learning applications while presenting viable defense strategies from a game-theoretic perspective.
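The attacker–defender interaction can be made concrete with a tiny payoff-matrix sketch of a Stackelberg (leader–follower) game. The matrices and the `stackelberg_leader_choice` helper below are illustrative assumptions, not values from the paper: the leader commits to a strategy, the follower best-responds, and the leader picks the commitment that maximizes its own payoff under that response.

```python
import numpy as np

# Hypothetical 2x2 payoff matrices for attacker (rows) vs. defender
# (columns). Entries are illustrative utilities, not paper values.
attacker_payoff = np.array([[3.0, 1.0],
                            [2.0, 4.0]])
defender_payoff = -attacker_payoff  # zero-sum assumption

def stackelberg_leader_choice(leader_payoff, follower_payoff):
    """Leader commits to a row; the follower best-responds within
    that row; the leader chooses the row whose induced outcome
    maximizes the leader's payoff."""
    outcomes = []
    for row in range(leader_payoff.shape[0]):
        follower_col = int(np.argmax(follower_payoff[row]))
        outcomes.append((leader_payoff[row, follower_col], row, follower_col))
    return max(outcomes)

value, row, col = stackelberg_leader_choice(attacker_payoff, defender_payoff)
print(value, row, col)  # leader's equilibrium payoff and the chosen cell
```

Enumerating the Kuhn tree of a larger game follows the same backward-induction idea, just over deeper sequences of moves.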
Revolutionizing Space Health (Swin-FSR): Advancing Super-Resolution of Fundus Images for SANS Visual Assessment Technology
The rapid accessibility of portable and affordable retinal imaging devices
has made early differential diagnosis easier. For example, color funduscopy
imaging is readily available in remote villages, which can help to identify
diseases like age-related macular degeneration (AMD), glaucoma, or pathological
myopia (PM). On the other hand, astronauts at the International Space Station
utilize this camera for identifying spaceflight-associated neuro-ocular
syndrome (SANS). However, due to the unavailability of experts in these
locations, the data has to be transferred to an urban healthcare facility (AMD
and glaucoma) or a terrestrial station (e.g., SANS) for more precise disease
identification. Moreover, due to low bandwidth limits, the imaging data has to
be compressed for transfer between these two places. Different super-resolution
algorithms have been proposed throughout the years to address this.
Furthermore, with the advent of deep learning, the field has advanced so much
that ×2 and ×4 compressed images can be decompressed to their original form
without losing spatial information. In this paper, we introduce a novel model
called Swin-FSR that utilizes Swin Transformer with spatial and depth-wise
attention for fundus image super-resolution. Our architecture achieves a peak
signal-to-noise ratio (PSNR) of 47.89, 49.00, and 45.32 dB on three public
datasets, namely iChallenge-AMD, iChallenge-PM, and G1020. Additionally, we
tested the model's effectiveness on a privately held dataset for SANS provided
by NASA and achieved comparable results against previous architectures.Comment: Accepted in 26th International Conference on Medical Image Computing
and Computer Assisted Intervention, MICCAI 202
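The PSNR metric reported above follows directly from the mean squared error between a reference image and its reconstruction. The sketch below is a minimal implementation of that standard definition; the toy patch values are made up for illustration.

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and its reconstruction (higher is better)."""
    diff = reference.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 4x4 patch and a reconstruction that is off by 1 everywhere.
ref = np.full((4, 4), 128, dtype=np.uint8)
rec = ref + 1
print(round(psnr(ref, rec), 2))  # MSE = 1 -> PSNR ~= 48.13 dB
```

A PSNR near 48 dB, as reported for Swin-FSR on iChallenge-AMD, therefore corresponds to a per-pixel error on the order of one 8-bit intensity level.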
SANS-CNN: An automated machine learning technique for spaceflight associated neuro-ocular syndrome with astronaut imaging data.
Spaceflight-associated neuro-ocular syndrome (SANS) is one of the largest physiological barriers to spaceflight and requires evaluation and mitigation for future planetary missions. Because the spaceflight environment is clinically limited, the purpose of this research is to provide automated, early detection and prognosis of SANS with a machine learning model trained and validated on astronaut SANS optical coherence tomography (OCT) images. In this study, we present a lightweight convolutional neural network (CNN) incorporating an EfficientNet encoder, titled "SANS-CNN," for detecting SANS from OCT images. We used 6303 OCT B-scan images for training/validation (an 80%/20% split) and 945 for testing, combining terrestrial images and astronaut SANS images in both the validation and test sets. SANS-CNN was validated with SANS images labeled by NASA to evaluate accuracy, specificity, and sensitivity. To evaluate real-world outcomes, two state-of-the-art pre-trained architectures were also employed on this dataset. We use Grad-CAM to visualize activation maps of intermediate layers and test the interpretability of SANS-CNN's predictions. SANS-CNN achieved 84.2% accuracy on the test set, with 85.6% specificity, 82.8% sensitivity, and an 84.1% F1-score. Moreover, SANS-CNN outperforms the two state-of-the-art pre-trained architectures, ResNet50-v2 and MobileNet-v2, in accuracy by 21.4% and 13.1%, respectively. We also apply two class-activation map techniques to visualize the critical SANS features perceived by the model. SANS-CNN represents a CNN model trained and validated with real astronaut OCT images, enabling fast and efficient prediction of SANS-like conditions for spaceflight missions beyond Earth's orbit, in which clinical and computational resources are extremely limited.
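The reported accuracy, specificity, sensitivity, and F1-score all follow from a binary confusion matrix. The sketch below recomputes them from illustrative counts chosen to roughly match the reported figures; the counts themselves are assumptions, not data from the paper.

```python
# Hypothetical confusion-matrix counts for a SANS-vs-normal test set
# (illustrative only; chosen to roughly echo the reported metrics).
tp, fn, tn, fp = 78, 16, 81, 14

sensitivity = tp / (tp + fn)            # recall on SANS-positive scans
specificity = tn / (tn + fp)            # recall on SANS-negative scans
precision = tp / (tp + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
accuracy = (tp + tn) / (tp + fn + tn + fp)

print(round(sensitivity, 3), round(specificity, 3),
      round(f1, 3), round(accuracy, 3))
```

Reporting specificity alongside sensitivity matters here because a screening tool for astronauts must control both missed SANS cases and false alarms that would trigger unnecessary intervention in flight.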