Covid-19 classification with deep neural network and belief functions
Computed tomography (CT) images provide useful information for radiologists
to diagnose Covid-19. However, visual analysis of CT scans is time-consuming.
Thus, it is necessary to develop algorithms for automatic Covid-19 detection
from CT images. In this paper, we propose a belief function-based convolutional
neural network with semi-supervised training to detect Covid-19 cases. Our
method first extracts deep features, maps them into belief degree maps and
makes the final classification decision. Our results are more reliable and
explainable than those of traditional deep learning-based classification
models. Experimental results show that our approach is able to achieve a good
performance with an accuracy of 0.81, an F1 of 0.812 and an AUC of 0.875.Comment: medical image, Covid-19, belief function, BIHI conferenc
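The abstract does not spell out the layer definitions, but the general idea, mapping feature evidence to Dempster-Shafer belief masses and combining them before deciding, can be sketched in numpy. All names, the distance-to-mass form, and the parameters below are illustrative assumptions, not the paper's method:

```python
import numpy as np

# Frame of discernment: {C (Covid), N (non-Covid)}; a mass vector is
# (m({C}), m({N}), m(Omega)) where Omega = {C, N} encodes ignorance.

def distance_to_mass(d, label, alpha=0.9, gamma=1.0):
    """Hypothetical evidence mapping: a feature close to a class prototype
    supports that class; the remaining mass goes to ignorance."""
    s = alpha * np.exp(-gamma * d ** 2)
    m = np.zeros(3)
    m[label] = s       # support for the prototype's class
    m[2] = 1.0 - s     # ignorance
    return m

def dempster_2class(m1, m2):
    """Dempster's rule of combination on the 2-class frame with ignorance."""
    c = m1[0] * m2[0] + m1[0] * m2[2] + m1[2] * m2[0]
    n = m1[1] * m2[1] + m1[1] * m2[2] + m1[2] * m2[1]
    o = m1[2] * m2[2]
    conflict = m1[0] * m2[1] + m1[1] * m2[0]
    return np.array([c, n, o]) / (1.0 - conflict)

m_covid = distance_to_mass(0.3, label=0)   # strong evidence from a Covid prototype
m_other = distance_to_mass(1.5, label=1)   # weak evidence from a non-Covid prototype
m = dempster_2class(m_covid, m_other)
decision = "Covid" if m[0] > m[1] else "non-Covid"
```

The explicit mass on Ω is what makes the output more interpretable than a softmax: it quantifies how much evidence is missing rather than forcing a committed probability.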
An automatic COVID-19 CT segmentation network using spatial and channel attention mechanism
The coronavirus disease (COVID-19) pandemic has had a devastating effect on
global public health. Computed Tomography (CT) is an effective tool in the
screening of COVID-19. It is of great importance to rapidly and accurately
segment COVID-19 lesions from CT to aid diagnosis and patient monitoring. In
this paper, we propose a U-Net-based segmentation network using an attention
mechanism.
As not all features extracted by the encoder are useful for segmentation, we
incorporate an attention mechanism, comprising spatial and channel attention,
into the U-Net architecture to re-weight the feature representation spatially
and channel-wise, capturing rich contextual relationships for a better feature
representation. In addition, the focal Tversky loss is introduced to handle
small lesion segmentation. Experimental results, evaluated on a COVID-19 CT
segmentation dataset of 473 CT slices, demonstrate that the proposed method
achieves accurate and rapid COVID-19 segmentation, taking only 0.29 seconds to
segment a single CT slice. The obtained Dice score, sensitivity, and
specificity are 83.1%, 86.7%, and 99.3%, respectively.
Comment: 14 pages, 6 figures
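The focal Tversky loss mentioned above has a standard closed form; a minimal numpy sketch (the parameter values α, β, γ are common defaults from the literature, not necessarily those used in the paper):

```python
import numpy as np

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for a binary mask.

    alpha weights false negatives and beta false positives, so alpha > beta
    penalizes missed lesion pixels more; gamma < 1 focuses training on hard
    (e.g. small-lesion) examples.
    """
    tp = np.sum(y_true * y_pred)
    fn = np.sum(y_true * (1.0 - y_pred))
    fp = np.sum((1.0 - y_true) * y_pred)
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma

y = np.array([0, 1, 1, 0], dtype=float)
perfect = focal_tversky_loss(y, y)            # near 0: perfect overlap
missed = focal_tversky_loss(y, np.zeros(4))   # near 1: lesion missed entirely
```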
Deep Learning Approaches for Data Augmentation in Medical Imaging: A Review
Deep learning has become a popular tool for medical image analysis, but the
limited availability of training data remains a major challenge, particularly
in the medical field where data acquisition can be costly and subject to
privacy regulations. Data augmentation techniques offer a solution by
artificially increasing the number of training samples, but these techniques
often produce limited and unconvincing results. To address this issue, a
growing number of studies have proposed the use of deep generative models to
generate more realistic and diverse data that conform to the true distribution
of the data. In this review, we focus on three types of deep generative models
for medical image augmentation: variational autoencoders, generative
adversarial networks, and diffusion models. We provide an overview of the
current state of the art in each of these models and discuss their potential
for use in different downstream tasks in medical imaging, including
classification, segmentation, and cross-modal translation. We also evaluate the
strengths and limitations of each model and suggest directions for future
research in this field. Our goal is to provide a comprehensive review of the
use of deep generative models for medical image augmentation and to highlight
the potential of these models for improving the performance of deep learning
algorithms in medical image analysis.
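As a concrete illustration of the generative-augmentation idea (a generic VAE sketch, not code from any reviewed paper): once an encoder has produced a latent mean and log-variance for an image, the reparameterization trick lets one draw several nearby latent codes, each of which a decoder would turn into a realistic variant of the original image.

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def augment_latents(mu, logvar, n_samples, seed=0):
    """Draw n_samples latent codes around one encoded image; each code would
    be passed through a (hypothetical) decoder to yield a new training image."""
    rng = np.random.default_rng(seed)
    return np.stack([reparameterize(mu, logvar, rng) for _ in range(n_samples)])

mu = np.zeros(8)
logvar = np.full(8, -2.0)   # small variance: samples stay close to the original
zs = augment_latents(mu, logvar, n_samples=5)
```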
Evidence fusion with contextual discounting for multi-modality medical image segmentation
As information sources are usually imperfect, it is necessary to take into
account their reliability in multi-source information fusion tasks. In this
paper, we propose a new deep framework allowing us to merge multi-MR image
segmentation results using the formalism of Dempster-Shafer theory while taking
into account the reliability of different modalities relative to different
classes. The framework is composed of an encoder-decoder feature extraction
module, an evidential segmentation module that computes a belief function at
each voxel for each modality, and a multi-modality evidence fusion module,
which assigns a vector of discount rates to each modality evidence and combines
the discounted evidence using Dempster's rule. The whole framework is trained
by minimizing a new loss function based on a discounted Dice index to increase
segmentation accuracy and reliability. The method was evaluated on the BraTS
2021 database of 1251 patients with brain tumors. Quantitative and qualitative
results show that our method outperforms the state of the art and implements
an effective new idea for merging multi-source information within deep neural
networks.
Comment: MICCAI202
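The discount-then-combine step described above has a standard form in Dempster-Shafer theory. A numpy sketch for masses restricted to singletons plus the full frame Ω (this uses classical single-rate discounting for simplicity; the paper's contextual discounting assigns one rate per class):

```python
import numpy as np

def discount(m_singletons, m_omega, beta):
    """Classical discounting with reliability beta in [0, 1]: scale the
    committed mass by beta and move the remainder to ignorance (Omega)."""
    return beta * m_singletons, 1.0 - beta + beta * m_omega

def dempster_combine(m1, o1, m2, o2):
    """Dempster's rule for masses on singletons {1..K} plus Omega."""
    joint = m1 * m2 + m1 * o2 + o1 * m2            # agreement per singleton
    omega = o1 * o2
    conflict = m1.sum() * m2.sum() - (m1 * m2).sum()
    z = 1.0 - conflict                             # normalization constant
    return joint / z, omega / z

# Two modalities voting over 3 classes at one voxel (toy numbers):
mA, oA = discount(np.array([0.7, 0.2, 0.1]), 0.0, beta=0.9)  # reliable modality
mB, oB = discount(np.array([0.1, 0.6, 0.1]), 0.2, beta=0.5)  # unreliable modality
m, o = dempster_combine(mA, oA, mB, oB)
```

Heavier discounting makes a modality's evidence more vacuous, so a reliable modality dominates the fused decision, which is exactly the behavior the learned discount rates control.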
Brain tumor segmentation with missing modalities via latent multi-source correlation representation
Multimodal MR images can provide complementary information for accurate brain
tumor segmentation. However, it is common to have missing imaging modalities
in clinical practice. Since a strong correlation exists between multiple
modalities, a novel correlation representation block is proposed to
specifically discover the latent multi-source correlation. Thanks to the obtained
correlation representation, the segmentation becomes more robust in the case of
missing modalities. The model parameter estimation module first maps the
individual representation produced by each encoder to obtain independent
parameters, then, under these parameters, the correlation expression module
transforms all the individual representations to form a latent multi-source
correlation representation. Finally, the correlation representations across
modalities are fused via the attention mechanism into a shared representation
to emphasize the most important features for segmentation. We evaluate our
model on the BraTS 2018 dataset; it outperforms the current state-of-the-art
method and produces robust results when one or more modalities are missing.
Comment: 9 pages, 6 figures, accepted by MICCAI 202
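The final fusion step, attention weights over whichever modality representations are available, can be sketched as follows. The shapes, the fixed scoring vector, and the modality count are illustrative assumptions, not the paper's learned parameters:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(feats, w):
    """Fuse M modality representations (M, D) into one shared (D,) vector.

    Each modality gets a relevance score via a projection w (a fixed toy
    vector here, a learned one in practice); missing modalities are simply
    absent rows, so fusion degrades gracefully.
    """
    scores = feats @ w            # (M,) one relevance score per modality
    weights = softmax(scores)     # convex combination weights
    return weights @ feats, weights

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 16))   # 4 MR modalities, e.g. T1/T1c/T2/FLAIR
w = rng.standard_normal(16)
fused_all, wt_all = attention_fuse(feats, w)
fused_miss, wt_miss = attention_fuse(feats[:3], w)   # one modality missing
```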
A robust algorithm for eye detection on gray intensity face without spectacles
This paper presents a robust eye detection algorithm for gray intensity images. The idea of our method is to combine the respective advantages of two existing techniques, the feature-based method and the template-based method, and to overcome their shortcomings. First, after the face region is located, a feature-based method is used to detect two rough regions containing the eyes. Then the iris centers are accurately detected by applying a template-based method within these two rough regions. Experimental results on faces without spectacles show that the proposed approach is not only robust but also quite efficient.
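The template-based refinement stage can be illustrated with normalized cross-correlation over a rough eye region. This is a generic sketch of template matching, not the paper's exact template or scoring function:

```python
import numpy as np

def ncc_match(region, template):
    """Exhaustive normalized cross-correlation; returns the best (y, x)
    position of the template's top-left corner inside the search region."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best_score, best_pos = -2.0, (0, 0)
    H, W = region.shape
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            patch = region[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum()) * tnorm
            if denom < 1e-12:
                continue   # flat patch or template: score undefined
            score = (p * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

rng = np.random.default_rng(1)
region = rng.random((20, 20))                # a rough eye region (toy data)
template = region[7:12, 9:14].copy()         # plant the template at (7, 9)
pos, score = ncc_match(region, template)
```

Because the score is normalized, it is invariant to local brightness and contrast changes, which is what makes the combination with the cruder feature-based pre-localization robust.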
Distributional Modeling for Location-Aware Adversarial Patches
Adversarial patches are an important means of performing adversarial attacks
in the physical world. To improve the naturalness and aggressiveness of
existing adversarial patches, location-aware patches have been proposed, in
which the patch's location on the target object is integrated into the
optimization process. Although effective, efficiently finding the
optimal location for placing the patches is challenging, especially under the
black-box attack settings. In this paper, we propose the Distribution-Optimized
Adversarial Patch (DOPatch), a novel method that optimizes a multimodal
distribution of adversarial locations instead of individual ones. DOPatch has
several benefits. First, we find that the location distributions across
different models are highly similar, and thus we can achieve efficient
query-based attacks to unseen models using a distributional prior optimized on
a surrogate model. Second, DOPatch can generate diverse adversarial samples
by characterizing the distribution of adversarial locations. Thus we can
improve the model's robustness to location-aware patches via carefully designed
Distributional-Modeling Adversarial Training (DOP-DMAT). We evaluate DOPatch on
various face recognition and image recognition tasks and demonstrate its
superiority and efficiency over existing methods. We also conduct extensive
ablation studies and analyses to validate the effectiveness of our method and
provide insights into the distribution of adversarial locations.
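The core distributional idea, sampling candidate patch locations from a multimodal distribution rather than optimizing a single point, can be sketched with a Gaussian mixture. The mixture parameters below are illustrative, not taken from the paper:

```python
import numpy as np

def sample_patch_locations(means, covs, weights, n, seed=0):
    """Draw n (x, y) patch locations from a K-component Gaussian mixture."""
    rng = np.random.default_rng(seed)
    comps = rng.choice(len(weights), size=n, p=weights)  # pick a mode per sample
    return np.array([rng.multivariate_normal(means[k], covs[k]) for k in comps])

# Two plausible attack regions on a face (e.g. near each eye), as mixture modes:
means = [np.array([40.0, 60.0]), np.array([90.0, 60.0])]
covs = [np.eye(2) * 9.0, np.eye(2) * 9.0]
locs = sample_patch_locations(means, covs, weights=[0.5, 0.5], n=100)
```

Sampling from such a distribution yields the diverse adversarial placements the abstract describes, which is also what makes distribution-level adversarial training (DOP-DMAT) possible.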
Dirac-Schr\"odinger equation for quark-antiquark bound states and derivation of its interaction kernel
The four-dimensional Dirac-Schr\"odinger equation satisfied by
quark-antiquark bound states is derived from Quantum Chromodynamics. Unlike
the Bethe-Salpeter equation, the derived equation is a first-order
differential equation of Schr\"odinger type in position space. In particular,
the interaction kernel in the equation is given by two different closed
expressions. One expression which contains only a few types of Green's
functions is derived with the aid of the equations of motion satisfied by some
kinds of Green's functions. Another expression which is represented in terms of
the quark, antiquark and gluon propagators and some kinds of proper vertices is
derived by means of the technique of irreducible decomposition of Green's
functions. The kernel derived not only can easily be calculated by the
perturbation method, but also provides a suitable basis for nonperturbative
investigations. Furthermore, it is shown that the four-dimensional
Dirac-Schr\"odinger equation and its kernel can be directly reduced to rigorous
three-dimensional forms in the equal-time Lorentz frame and the
Dirac-Schr\"odinger equation can be reduced to an equivalent
Pauli-Schr\"odinger equation which is represented in the Pauli spinor space. To
show the applicability of the closed expressions derived and to demonstrate the
equivalence between the two different expressions of the kernel, the t-channel
and s-channel one-gluon exchange kernels are chosen as examples to show how
they are derived from the closed expressions. In addition, the connection of
the Dirac-Schr\"odinger equation with the Bethe-Salpeter equation is discussed.