Domain Generalization for Medical Image Analysis: A Survey
Medical Image Analysis (MedIA) has become an essential tool in medicine and
healthcare, aiding in disease diagnosis, prognosis, and treatment planning, and
recent successes in deep learning (DL) have made significant contributions to
its advances. However, DL models for MedIA remain challenging to deploy in
real-world situations, failing to generalize under the distributional gap
between training and testing samples, known as the distribution shift problem.
Researchers have dedicated their efforts to developing various DL methods that
adapt to, and perform robustly on, unknown and out-of-distribution data. This
paper comprehensively reviews domain generalization studies
specifically tailored for MedIA. We provide a holistic view of how domain
generalization techniques interact within the broader MedIA system, going
beyond methodologies to consider the operational implications on the entire
MedIA workflow. Specifically, we categorize domain generalization methods into
data-level, feature-level, model-level, and analysis-level methods. We show how
these methods can be used at various stages of the DL-equipped MedIA workflow,
from data acquisition to model prediction and analysis. Furthermore,
we include benchmark datasets and applications used to evaluate these
approaches and analyze the strengths and weaknesses of various methods,
unveiling future research opportunities.
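As an illustrative aside (not part of the survey), the distribution shift problem can be made concrete with a toy sketch: a threshold classifier fit on source-domain intensities degrades when test intensities come from a scanner with a different offset. The distributions, the +1.5 offset, and the class names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source domain: "healthy" tissue intensities ~ N(0, 1), "lesion" ~ N(2, 1)
n = 5000
healthy_src = rng.normal(0.0, 1.0, n)
lesion_src = rng.normal(2.0, 1.0, n)

# A threshold classifier fit on the source domain (midpoint of the class means)
threshold = (healthy_src.mean() + lesion_src.mean()) / 2

def accuracy(healthy, lesion, thr):
    # predict "lesion" whenever the intensity exceeds the threshold
    correct = (healthy <= thr).sum() + (lesion > thr).sum()
    return correct / (len(healthy) + len(lesion))

# Target domain: same anatomy, but a scanner-induced intensity offset (+1.5)
healthy_tgt = healthy_src + 1.5
lesion_tgt = lesion_src + 1.5

acc_src = accuracy(healthy_src, lesion_src, threshold)
acc_tgt = accuracy(healthy_tgt, lesion_tgt, threshold)
print(f"source accuracy: {acc_src:.3f}, shifted-target accuracy: {acc_tgt:.3f}")
```

The classifier itself is unchanged between the two evaluations; only the test distribution moves, which is exactly the failure mode domain generalization methods target.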
Towards autonomous diagnostic systems with medical imaging
The drive to democratize access to high-quality healthcare has highlighted the need for autonomous diagnostic systems that a non-expert can use. Remote communities, first responders and even deep space explorers will come to rely on medical imaging systems that will provide them with Point of Care diagnostic capabilities.
This thesis introduces the building blocks that would enable the creation of such a system. Firstly, we present a case study to further motivate the need for, and requirements of, autonomous diagnostic systems. This case study primarily concerns deep space exploration, where astronauts cannot rely on communication with earth-bound doctors to help them through diagnosis, nor can they make the trip back to earth for treatment. Requirements and possible solutions for the major challenges faced by such an application are discussed.
Moreover, this work describes how a system can explore its perceived environment, developing a Multi Agent Reinforcement Learning method that allows for implicit communication between the agents. Under this regime, agents can share knowledge that benefits them all in achieving their individual tasks. Furthermore, we explore how systems can understand the 3D properties of 2D-depicted objects in a probabilistic way.
In Part II, this work explores how to reason about the extracted information in a causally enabled manner. A critical view of the applications of causality in medical imaging and its potential uses is provided. The focus is then narrowed to estimating possible future outcomes and reasoning about counterfactual outcomes, by embedding data on a pseudo-Riemannian manifold and constraining the latent space using the relativistic concept of light cones.
By formalizing an approach to estimating counterfactuals, a computationally lighter alternative to the abduction-action-prediction paradigm is presented through the introduction of Deep Twin Networks. Appropriate partial identifiability constraints for categorical variables are derived and the method is applied in a series of medical tasks involving structured data, images and videos.
All methods are evaluated in a wide array of synthetic and real-life tasks that showcase their abilities, often achieving state-of-the-art performance or matching the existing best performance while requiring a fraction of the computational cost.
Causality-inspired single-source domain generalization for medical image segmentation
Deep learning models usually suffer from the domain shift issue, where models trained on one source domain do not generalize well to other unseen domains. In this work, we investigate the single-source domain generalization problem: training a deep network that is robust to unseen domains, under the condition that training data are only available from one source domain, which is common in medical imaging applications. We tackle this problem in the context of cross-domain medical image segmentation. In this scenario, domain shifts are mainly caused by different acquisition processes. We propose a simple causality-inspired data augmentation approach to expose a segmentation model to synthesized domain-shifted training examples. Specifically, 1) to make the deep model robust to discrepancies in image intensities and textures, we employ a family of randomly-weighted shallow networks. They augment training images using diverse appearance transformations. 2) Further, we show that spurious correlations among objects in an image are detrimental to domain robustness. These correlations might be taken by the network as domain-specific clues for making predictions, and they may break on unseen domains. We remove these spurious correlations via causal intervention. This is achieved by resampling the appearances of potentially correlated objects independently. The proposed approach is validated on three cross-domain segmentation scenarios: cross-modality (CT-MRI) abdominal image segmentation, cross-sequence (bSSFP-LGE) cardiac MRI segmentation, and cross-site prostate MRI segmentation. The proposed approach yields consistent performance gains compared with competitive methods when tested on unseen domains.
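As a minimal sketch of the first ingredient above (assuming pixel-wise random linear layers as a stand-in for the randomly-weighted shallow convolutional networks; the authors' exact architecture may differ), a fresh appearance transformation can be drawn for every training image:

```python
import numpy as np

def random_appearance_transform(img, hidden=8, alpha=0.2, rng=None):
    """Appearance augmentation with a randomly-weighted shallow network:
    pixel-wise random linear layers (1x1 "convolutions") with a leaky-ReLU
    nonlinearity, followed by rescaling to the input's intensity range."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape
    x = img.reshape(-1, 1)                        # treat pixels as a batch
    w1, b1 = rng.normal(size=(1, hidden)), rng.normal(size=hidden)
    w2 = rng.normal(size=(hidden, 1))
    z = x @ w1 + b1                               # layer 1: 1 -> hidden channels
    z = np.where(z > 0, z, alpha * z)             # leaky ReLU
    out = (z @ w2).reshape(h, w)                  # layer 2: hidden -> 1 channel
    # rescale so the augmented image keeps the original intensity range
    # and the segmentation labels remain valid
    out = (out - out.min()) / (out.max() - out.min() + 1e-8)
    return out * (img.max() - img.min()) + img.min()

img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)   # toy grayscale image
aug1 = random_appearance_transform(img, rng=np.random.default_rng(1))
aug2 = random_appearance_transform(img, rng=np.random.default_rng(2))
```

Because the weights are resampled on every call, each training image is seen under a different intensity/texture mapping, while the underlying spatial structure, and hence the segmentation mask, is untouched.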
Artificial Intelligence for In Silico Clinical Trials: A Review
A clinical trial is an essential step in drug development, which is often
costly and time-consuming. In silico trials are clinical trials conducted
digitally through simulation and modeling as an alternative to traditional
clinical trials. AI-enabled in silico trials can increase the case group size
by creating virtual cohorts as controls. In addition, it enables
automation and optimization of trial design and predicts the trial success
rate. This article systematically reviews papers under three main topics:
clinical simulation, individualized predictive modeling, and computer-aided
trial design. We focus on how machine learning (ML) may be applied in these
applications. In particular, we present the machine learning problem
formulation and available data sources for each task. We conclude by discussing
the challenges and opportunities of AI for in silico trials in real-world
applications.
Deep Semantic Segmentation of Natural and Medical Images: A Review
The semantic image segmentation task consists of classifying each pixel of an
image into one of a set of classes. This task is part of scene understanding,
that is, explaining the global context of an image. In the medical image
analysis domain, image segmentation
can be used for image-guided interventions, radiotherapy, or improved
radiological diagnostics. In this review, we categorize the leading deep
learning-based medical and non-medical image segmentation solutions into six
main groups: deep architectural, data synthesis-based, loss function-based,
sequenced, weakly supervised, and multi-task methods, and provide a
comprehensive review of the contributions in each group. Further, we analyze
the variants within each group, discuss the limitations of current approaches,
and present potential future research directions for semantic image
segmentation.
Comment: 45 pages, 16 figures. Accepted for publication in Springer Artificial
Intelligence Review.
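As an illustrative aside (not from the review), the per-pixel classification that defines semantic segmentation can be sketched as an argmax over per-class score maps, with intersection-over-union (IoU) as a common evaluation metric; the score values below are random stand-ins for a network's output.

```python
import numpy as np

# Toy per-class score maps for a 4x4 image and 3 classes
# (in a real model these come from the network's final layer)
rng = np.random.default_rng(0)
scores = rng.normal(size=(3, 4, 4))            # (num_classes, H, W)

# Semantic segmentation assigns every pixel its highest-scoring class
label_map = scores.argmax(axis=0)              # (H, W), values in {0, 1, 2}

def iou(pred, target, cls):
    """Per-class intersection-over-union between two label maps."""
    inter = np.logical_and(pred == cls, target == cls).sum()
    union = np.logical_or(pred == cls, target == cls).sum()
    return inter / union if union else 1.0

# Perturbing a single pixel of the ground truth lowers some class's IoU
target = label_map.copy()
target[0, 0] = (target[0, 0] + 1) % 3
print([round(iou(label_map, target, c), 3) for c in range(3)])
```

A perfect prediction scores IoU 1.0 for every class; the single flipped pixel drives at least one class below 1.0, which is why mean IoU is sensitive to small boundary errors.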
A Survey on the Current Status and Future Challenges Towards Objective Skills Assessment in Endovascular Surgery
Minimally-invasive endovascular interventions have evolved rapidly over the past decade, facilitated by breakthroughs in medical
imaging and sensing, instrumentation, and most recently robotics. Catheter-based operations are potentially safer and applicable to
a wider patient population due to reduced comorbidity. As a result, endovascular surgery has become the preferred treatment
option for conditions previously treated with open surgery, and the number of patients undergoing endovascular interventions
is increasing every year. This fact, coupled with a trend toward reduced working hours, results in a requirement for efficient training
and assessment of new surgeons that deviates from the “see one, do one, teach one” model introduced by William Halsted, so
that trainees obtain operational expertise in a shorter period. Developing more objective assessment tools based on quantitative
metrics is now a recognised need in interventional training and this manuscript reports the current literature for endovascular skills
assessment and the associated emerging technologies. A systematic search was performed on PubMed (MEDLINE), Google Scholar,
IEEE Xplore and known journals using the keywords “endovascular surgery”, “surgical skills”, “endovascular skills”, “surgical training
endovascular”, and “catheter skills”. Focusing explicitly on endovascular surgical skills, we group related works into three categories
based on the metrics used: structured scales and checklists, simulation-based metrics, and motion-based metrics. This review highlights the
key findings in each category and also provides suggestions for new research opportunities towards fully objective and automated
surgical assessment solutions.
Deep Learning Methods for Multi-Modal Healthcare Data
Abstract:
Today, enormous transformations are happening in health care research and applications. In the past few years, there has been exponential growth in the amount of healthcare data generated from multiple sources. This growth in data has led to many new possibilities and opportunities for researchers to build different models and analytics for improving healthcare for patients. While there has been an increase in research and successful application of prediction and classification tasks, there are many other challenges in improving overall healthcare. Some of these challenges include optimizing physician performance, reducing healthcare costs, and discovering new treatments for diseases.
- Often, doctors have to perform many time-consuming tasks, which leads to fatigue and misdiagnosis. Many of these tasks could be automated to save time and free doctors from menial work, enabling them to spend more time improving the quality of care.
- Healthcare datasets contain multiple modalities such as structured sequences, unstructured text, images, and ECG and EEG signals. Successful application of machine learning requires methods that utilize these diverse data sources.
- Finally, current healthcare is limited by the treatments available on the market. Many treatments do not make it beyond clinical trials, which leads to many lost opportunities. With machine learning models for clinical trial-related tasks, it is possible to improve the outcome of clinical trials and ultimately the quality of treatment for patients.
In this dissertation, we address these challenges by
- Predictive Models: Building deep learning models for sleep clinics to save the time and effort needed by doctors for sleep staging, apnea, and limb movement detection.
- Generative Models: Developing multimodal deep learning systems that can produce text reports and augment doctors in clinical practice.
- Interpretable Representation Models: Applying multimodal models to help in clinical trial recruitment, and counterfactual explanations for clinical trial outcome predictions, to improve clinical trial success.
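As an illustrative aside (not the dissertation's architecture), the multimodal challenge can be sketched as late fusion: each modality is encoded separately and the embeddings are concatenated for a downstream model. The encoders, vocabulary, and sample record below are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-modality feature extractors (stand-ins for learned encoders)
def encode_signal(x):
    # e.g. an ECG/EEG channel: summary statistics as a 4-D feature vector
    return np.array([x.mean(), x.std(), x.min(), x.max()])

def encode_text(tokens, vocab):
    # e.g. a clinical note: bag-of-words counts over a fixed vocabulary
    return np.array([tokens.count(w) for w in vocab], dtype=float)

# One patient record with two modalities
ecg = rng.normal(size=256)
note = "patient reports apnea and fatigue apnea".split()
vocab = ["apnea", "fatigue", "pain"]

# Late fusion: concatenate modality embeddings into one feature vector,
# which a downstream classifier would consume
fused = np.concatenate([encode_signal(ecg), encode_text(note, vocab)])
print(fused.shape)  # (7,)
```

Late fusion is only the simplest option; the point of the sketch is that each modality needs its own encoder before any joint model can be trained.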