Domain adaptation, Explainability & Fairness in AI for Medical Image Analysis: Diagnosis of COVID-19 based on 3-D Chest CT-scans
The paper presents the DEF-AI-MIA COV19D Competition, which is organized in
the framework of the 'Domain adaptation, Explainability, Fairness in AI for
Medical Image Analysis (DEF-AI-MIA)' Workshop of the 2024 Computer Vision and
Pattern Recognition (CVPR) Conference. The Competition is the 4th in the
series, following the first three Competitions held in the framework of the
ICCV 2021, ECCV 2022 and ICASSP 2023 International Conferences, respectively. It
includes two Challenges on: i) Covid-19 Detection and ii) Covid-19 Domain
Adaptation. The Competition uses data from the COV19-CT-DB database, which is
described in the paper and includes a large number of chest CT scan series.
Each chest CT scan series consists of a sequence of 2-D CT slices, the number
of which is between 50 and 700. Training, validation and test datasets have
been extracted from COV19-CT-DB and provided to the participants in both
Challenges. The paper presents the baseline models used in the Challenges and
the performance obtained by each.
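Because each CT series in COV19-CT-DB contains anywhere from 50 to 700 slices, models in both Challenges must cope with variable-length input. One common preprocessing option (a minimal hypothetical sketch, not the Competition's prescribed pipeline; the function name and target depth of 64 are illustrative assumptions) is to resample every series to a fixed depth:

```python
import numpy as np

def sample_slices(volume: np.ndarray, target: int = 64) -> np.ndarray:
    """Uniformly sample (or repeat) slices so that every CT series fed to a
    model has exactly `target` slices.

    `volume` has shape (num_slices, H, W); num_slices may range from roughly
    50 to 700, as in COV19-CT-DB. np.linspace yields `target` evenly spaced
    positions along the series; rounding repeats slices whenever the series
    is shorter than `target`.
    """
    num_slices = volume.shape[0]
    idx = np.linspace(0, num_slices - 1, target).round().astype(int)
    return volume[idx]

short_series = np.random.rand(50, 16, 16)   # shortest case in the database
long_series = np.random.rand(700, 16, 16)   # longest case in the database
assert sample_slices(short_series).shape == (64, 16, 16)
assert sample_slices(long_series).shape == (64, 16, 16)
```

Resampling to a fixed depth is only one option; architectures such as the baseline CNN-RNN can instead consume the full variable-length sequence directly.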
COVID-19 Computer-aided Diagnosis through AI-assisted CT Imaging Analysis: Deploying a Medical AI System
Computer-aided diagnosis (CAD) systems stand out as potent aids for
physicians in identifying the novel Coronavirus Disease 2019 (COVID-19) through
medical imaging modalities. In this paper, we showcase the integration and the
reliable, fast deployment of a state-of-the-art AI system designed to
automatically analyze CT images, providing an infection probability for the swift
detection of COVID-19. The suggested system, comprising both classification and
segmentation components, is anticipated to reduce physicians' detection time
and enhance the overall efficiency of COVID-19 detection. We successfully
surmounted various challenges, such as data discrepancy and anonymisation,
testing the time-effectiveness of the model, and data security, enabling
reliable and scalable deployment of the system on both cloud and edge
environments. Additionally, our AI system assigns a probability of infection to
each 3D CT scan and enhances explainability through anchor set similarity,
facilitating timely confirmation and segregation of infected patients by
physicians.
Comment: accepted at IEEE ISBI 202
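The abstract does not detail how anchor set similarity is computed; a plausible minimal sketch (all names hypothetical) is cosine similarity between a scan's latent embedding and a fixed set of anchor embeddings, with the most similar anchors shown to the physician as reference cases:

```python
import numpy as np

def anchor_similarities(embedding: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """Cosine similarity between one scan embedding of shape (dim,) and each
    row of `anchors`, shape (num_anchors, dim). High-similarity anchors can
    accompany the predicted infection probability as supporting evidence."""
    e = embedding / np.linalg.norm(embedding)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return a @ e

emb = np.array([1.0, 0.0])
anchors = np.array([[2.0, 0.0],   # same direction as emb -> similarity 1
                    [0.0, 3.0]])  # orthogonal to emb     -> similarity 0
sims = anchor_similarities(emb, anchors)
```

Normalizing both sides makes the score scale-invariant, so anchors of different embedding magnitudes remain comparable.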
A Large Imaging Database and Novel Deep Neural Architecture for Covid-19 Diagnosis
Deep learning methodologies nowadays constitute the main approach for medical image analysis and disease prediction. Large annotated databases are necessary for developing these methodologies; such databases are difficult to obtain and to make publicly available for use by researchers and medical experts. In this paper, we focus on diagnosis of Covid-19 based on chest 3-D CT scans and develop a dual knowledge framework, including a large imaging database and a novel deep neural architecture. We introduce COV19-CT-DB, a very large database annotated for COVID-19 that consists of 7,750 3-D CT scans, 1,650 of which refer to COVID-19 cases and 6,100 to non-COVID-19 cases. We use this database to train and develop the RACNet architecture. This architecture performs 3-D analysis based on a CNN-RNN network and handles input CT scans of different lengths through the introduction of dynamic routing, feature alignment and a mask layer. We conduct a large experimental study which illustrates that the RACNet network achieves the best performance compared to other deep neural networks i) when trained and tested on COV19-CT-DB; ii) when tested on, or applied through transfer learning to, other public databases.
Index Terms— medical imaging, COVID-19 diagnosis, COV19-CT-DB database, 3D chest CT scan analysis, RACNet deep neural network, dynamic routing, mask layer, feature alignment
A Deep Neural Architecture for Harmonizing 3-D Input Data Analysis and Decision Making in Medical Imaging
Harmonizing the analysis of data, especially of 3-D image volumes consisting of different numbers of slices and annotated per volume, is a significant problem in training and using deep neural networks in various applications, including medical imaging. Moreover, unifying the decision making of the networks over different input datasets is crucial for the generation of rich data-driven knowledge and for trusted usage in the applications. This paper presents a new deep neural architecture, named RACNet, which includes routing and feature alignment steps and effectively handles different input lengths and single annotations of the 3-D image inputs, whilst providing highly accurate decisions. In addition, through latent variable extraction from the trained RACNet, a set of anchors is generated, providing further insight into the network's decision making. These can be used to enrich and unify data-driven knowledge extracted from different datasets. An extensive experimental study illustrates the above developments, focusing on COVID-19 diagnosis through analysis of 3-D chest CT scans from databases generated in different countries and medical centers.
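RACNet's mask layer is not specified at code level in the abstract. As an illustrative approximation (a NumPy-only sketch under that assumption, not the published architecture), its effect on volumes of different lengths can be mimicked by masking padded slice positions before aggregating per-slice features:

```python
import numpy as np

def masked_mean(features: np.ndarray, lengths) -> np.ndarray:
    """Aggregate per-slice feature vectors over only the valid slices.

    features: (batch, max_len, dim) zero-padded array of per-slice vectors.
    lengths:  the true slice count of each volume in the batch.
    Padded positions are zeroed out before summation, so they contribute
    nothing to the mean, mimicking the role of a mask layer for inputs of
    different lengths that carry a single annotation per volume.
    """
    _, max_len, _ = features.shape
    lengths = np.asarray(lengths)
    mask = np.arange(max_len)[None, :] < lengths[:, None]   # (batch, max_len)
    return (features * mask[:, :, None]).sum(axis=1) / lengths[:, None]

feats = np.zeros((1, 4, 2))
feats[0, :2] = 1.0      # two valid slices
feats[0, 2:] = 99.0     # garbage values in the padded positions
pooled = masked_mean(feats, [2])   # padded garbage is ignored
```

Here `pooled` equals `[[1.0, 1.0]]`: only the two valid slices enter the average, regardless of what the padding contains.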
AI-MIA: COVID-19 Detection and Severity Analysis through Medical Imaging
This paper presents the baseline approach for the 2nd Covid-19 Competition, organized in the framework of the AIMIA Workshop at the European Conference on Computer Vision (ECCV 2022). It presents the COV19-CT-DB database, which is annotated for COVID-19 detection,
consisting of about 7,700 3-D CT scans. The part of the database consisting of Covid-19 cases is further annotated in terms of four Covid-19 severity conditions. We have split the database, and the latter part of it, into training, validation and test datasets. The former two datasets are used for training and validation of machine learning models, while the latter is used for evaluation of the developed models. The baseline approach is a deep learning model based on a CNN-RNN network; we report its performance on the COV19-CT-DB database. The paper presents the results of both Challenges organised in the framework of the Competition, also compared to the performance of the baseline scheme.
FaceRNET: a Facial Expression Intensity Estimation Network
This paper presents our approach for Facial Expression Intensity Estimation
from videos. It includes two components: i) a representation extractor network
that extracts various emotion descriptors (valence-arousal, action units and
basic expressions) from each video frame; ii) an RNN that captures temporal
information in the data, followed by a mask layer which enables handling
varying input video lengths through dynamic routing. This approach has been
tested on the Hume-Reaction dataset, yielding excellent results.
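As a schematic illustration of the second component (all weights, dimensions and the cell type are made-up assumptions; FaceRNET's actual RNN is not specified in the abstract), a minimal recurrent pass over per-frame descriptor vectors looks like:

```python
import numpy as np

def rnn_over_descriptors(frames: np.ndarray, w_in: np.ndarray,
                         w_h: np.ndarray) -> np.ndarray:
    """Minimal tanh RNN over a sequence of per-frame emotion descriptors
    (e.g. concatenated valence-arousal, action-unit and basic-expression
    scores). Returns the final hidden state as a clip-level representation;
    because the loop runs over however many frames are supplied, videos of
    different lengths are handled naturally."""
    h = np.zeros(w_h.shape[0])
    for x in frames:
        h = np.tanh(w_in @ x + w_h @ h)
    return h

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(10, 5))          # 10 frames, 5-dim descriptors
w_in, w_h = rng.normal(size=(8, 5)), rng.normal(size=(8, 8))
clip_feature = rnn_over_descriptors(descriptors, w_in, w_h)
```

The clip-level hidden state would then feed a small regression head that predicts expression intensity.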
Fast quantum state reconstruction via accelerated non-convex programming
We propose a new quantum state reconstruction method that combines ideas from
compressed sensing, non-convex optimization, and acceleration methods. The
algorithm, called Momentum-Inspired Factored Gradient Descent (\texttt{MiFGD}),
extends the applicability of quantum tomography for larger systems. Despite
being a non-convex method, \texttt{MiFGD} converges \emph{provably} to the true
density matrix at a linear rate, in the absence of experimental and statistical
noise, and under common assumptions. With this manuscript, we present the
method, prove its convergence property and provide Frobenius norm bound
guarantees with respect to the true density matrix. From a practical point of
view, we benchmark the algorithm performance with respect to other existing
methods, in both synthetic and real experiments performed on an IBM's quantum
processing unit. We find that the proposed algorithm performs orders of
magnitude faster than state-of-the-art approaches, with the same or better
accuracy. In both synthetic and real experiments, we observed accurate and
robust reconstruction, despite experimental and statistical noise in the
tomographic data. Finally, we provide a ready-to-use code for state tomography
of multi-qubit systems.
Comment: 46 pages
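Schematically, MiFGD parametrizes the density matrix as rho = U U-dagger and takes momentum-accelerated gradient steps on the low-rank factor U. The toy NumPy sketch below illustrates that idea under stated assumptions (random Hermitian observables, a fixed step size and momentum value chosen for illustration); it is not the authors' released code, and the true method couples the momentum parameter to its convergence analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, m = 4, 1, 40                 # Hilbert-space dim, rank, # measurements

# Ground-truth pure state rho = u u† with unit trace.
u = rng.normal(size=(d, r)) + 1j * rng.normal(size=(d, r))
u /= np.linalg.norm(u)
rho_true = u @ u.conj().T

# Random Hermitian observables and their noiseless expectation values.
A = rng.normal(size=(m, d, d))
A = (A + A.transpose(0, 2, 1)) / 2
y = np.einsum('mij,ji->m', A, rho_true).real

def loss_and_grad(U):
    """Least-squares data-fit loss on rho = U U† and its gradient wrt U."""
    resid = np.einsum('mij,ji->m', A, U @ U.conj().T).real - y
    grad = np.einsum('m,mij->ij', resid, A) @ U   # Wirtinger gradient
    return 0.5 * resid @ resid, grad

eta, mu = 1e-3, 0.3                # step size and momentum (illustrative)
U = 0.1 * (rng.normal(size=(d, r)) + 1j * rng.normal(size=(d, r)))
Z_prev = U.copy()
loss0, _ = loss_and_grad(U)
for _ in range(2000):
    _, g = loss_and_grad(U)
    Z = U - eta * g                # plain gradient step on the factor
    U = Z + mu * (Z - Z_prev)      # momentum extrapolation
    Z_prev = Z
loss_final, _ = loss_and_grad(U)
rho_hat = U @ U.conj().T           # Hermitian and PSD by construction
```

Working on the d-by-r factor rather than the full d-by-d matrix is what extends tomography to larger systems: positive semidefiniteness comes for free and the iterate has only d*r complex parameters.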