Fighting the scanner effect in brain MRI segmentation with a progressive level-of-detail network trained on multi-site data
Many clinical and research studies of the human brain require accurate structural MRI segmentation. While traditional atlas-based methods can be applied to volumes from any acquisition site, recent deep learning algorithms ensure high accuracy only when tested on data from the same sites exploited in training (i.e., internal data). Performance degradation experienced on external data (i.e., unseen volumes from unseen sites) is due to the inter-site variability in intensity distributions, and to unique artefacts caused by different MR scanner models and acquisition parameters. To mitigate this site-dependency, often referred to as the scanner effect, we propose LOD-Brain, a 3D convolutional neural network with progressive levels-of-detail (LOD), able to segment brain data from any site. Coarser network levels are responsible for learning a robust anatomical prior helpful in identifying brain structures and their locations, while finer levels refine the model to handle site-specific intensity distributions and anatomical variations. We ensure robustness across sites by training the model on an unprecedentedly rich dataset aggregating data from open repositories: almost 27,000 T1w volumes from around 160 acquisition sites, acquired at 1.5-3 T, from a population spanning 8 to 90 years of age. Extensive tests demonstrate that LOD-Brain produces state-of-the-art results, with no significant difference in performance between internal and external sites, and is robust to challenging anatomical variations. Its portability paves the way for large-scale applications across different healthcare institutions, patient populations, and imaging technology manufacturers. Code, model, and demo are available on the project website.
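The coarse-to-fine idea behind the progressive levels-of-detail can be sketched in a few lines of numpy. Everything below is a hypothetical stand-in: the real LOD-Brain levels are trained 3D CNNs, whereas here a thresholded downsampled map plays the role of the coarse "anatomical prior" and a simple blend plays the role of the finer refinement level.

```python
import numpy as np

def coarse_level(volume):
    # Stand-in for the coarsest level: produce a low-resolution
    # "anatomical prior" (here a thresholded 2x-downsampled map;
    # in the paper this is a trained 3D CNN).
    down = volume[::2, ::2, ::2]
    return (down > down.mean()).astype(np.float32)

def fine_level(volume, prior):
    # Stand-in for a finer level: upsample the coarse prior and
    # refine it with full-resolution (site-specific) intensities.
    up = prior.repeat(2, 0).repeat(2, 1).repeat(2, 2)  # nearest-neighbour upsample
    return 0.5 * up + 0.5 * (volume > volume.mean())

vol = np.random.rand(8, 8, 8).astype(np.float32)
prior = coarse_level(vol)   # (4, 4, 4) coarse anatomical prior
seg = fine_level(vol, prior)  # (8, 8, 8) refined segmentation scores
```

The key design point this sketch illustrates is that the coarse level sees a site-invariant, low-resolution view, while only the finer level touches the raw intensities where the scanner effect lives.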
Towards Integration of Artificial Intelligence into Medical Devices as a Real-Time Recommender System for Personalised Healthcare: State-of-the-Art and Future Prospects
In the era of big data, artificial intelligence (AI) algorithms have the potential to revolutionize healthcare by improving patient outcomes and reducing healthcare costs. AI algorithms have frequently been used in healthcare for predictive modelling, image analysis and drug discovery. Moreover, as recommender systems, these algorithms have shown promising impacts on personalized healthcare provision. A recommender system learns the behaviour of the user and predicts their current preferences (recommends) based on their previous preferences. Implementing AI as a recommender system improves this prediction accuracy and solves the cold-start and data-sparsity problems. However, most of the methods and algorithms are tested in a simulated setting which cannot recapitulate the influencing factors of the real world. This article systematically reviews prevailing methodologies in recommender systems and discusses AI algorithms as recommender systems specifically in the field of healthcare. It also discusses the most cutting-edge academic and practical contributions present in the literature, identifies performance evaluation metrics and challenges in the implementation of AI as a recommender system, and examines the acceptance of AI-based recommender systems by clinicians. The findings of this article direct researchers and professionals to comprehend currently developed recommender systems and the future of medical devices integrated with real-time recommender systems for personalized healthcare.
Cancer Care in Pandemic Times: Building Inclusive Local Health Security in Africa and India
This is a book about improving cancer care in Africa and India that is a child of its pandemic times. It has been collaboratively researched and written by colleagues in Kenya, Tanzania, India and the UK, working within a cross-country, multidisciplinary research project, Innovation for Cancer Care in Africa (ICCA). Since this was a health-focused research project, ICCA researchers during the pandemic not only continued to work on the cancer research project but were also called upon by their governments to respond to immediate pandemic needs. In combining these two concerns, for improving cancer care and responding to pandemic needs, our original project aims have been challenged, deepened and reworked. ICCA's initial collaborative research focus included, against the grain of most global health literature, the potential role of enhanced local production of essential healthcare supplies for improving cancer care in African countries. The pandemic experience has strikingly validated these earlier findings on the importance of industrial development for health care. The pandemic crystallised for researchers and policymakers an often overlooked phenomenon: global health security is built on the foundations of strong local health security. We argue in this book that new analytical thinking from social scientists and others is required on how to build local health security. We use the "lens" of original research on cancer care in East Africa and India to build up an understanding of the scope for the development of stronger synergies between local health industries and health care, in order to strengthen local health security and develop tools for policy making. The rethinking and reimagining presented here is required for different African countries, for India and the wider world, and this research on cancer care has taught us that this imperative goes much wider than infectious diseases.
Self-supervised multicontrast super-resolution for diffusion-weighted prostate MRI
Purpose: This study addresses the challenge of low resolution and signal-to-noise ratio (SNR) in diffusion-weighted images (DWI), which are pivotal for cancer detection. Traditional methods increase SNR at high b-values through multiple acquisitions, but this results in diminished image resolution due to motion-induced variations. Our research aims to enhance spatial resolution by exploiting the global structure within multicontrast DWI scans and millimetric motion between acquisitions. Methods: We introduce a novel approach employing a "Perturbation Network" to learn subvoxel-size motions between scans, trained jointly with an implicit neural representation (INR) network. INR encodes the DWI as a continuous volumetric function, treating voxel intensities of low-resolution acquisitions as discrete samples. By evaluating this function with a finer grid, our model predicts higher-resolution signal intensities for intermediate voxel locations. The Perturbation Network's motion-correction efficacy was validated through experiments on biological phantoms and in vivo prostate scans. Results: Quantitative analyses revealed significantly higher structural similarity measures of super-resolution images to ground-truth high-resolution images compared to high-order interpolation (p …). Conclusion: High-resolution details in DWI can be obtained without the need for high-resolution training data. One notable advantage of the proposed method is that it does not require a super-resolution training set. This is important in clinical practice because the proposed method can easily be adapted to images with different scanner settings or body parts, whereas supervised methods do not offer such an option.
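The INR mechanism described in Methods, sampling a continuous volumetric function at voxel centres and then querying it on a finer grid, can be sketched as follows. Both functions below are analytic stand-ins: in the paper, `inr` and `perturbation` are jointly trained networks, and the 0.01-per-acquisition shift is an invented placeholder for the learned sub-voxel motion.

```python
import numpy as np

def inr(coords):
    # Implicit neural representation: maps continuous (x, y, z)
    # coordinates to signal intensity. An analytic field stands in
    # for the trained MLP here.
    return np.sin(coords).sum(axis=-1)

def perturbation(coords, acq):
    # "Perturbation Network" stand-in: a learned sub-voxel motion
    # for acquisition `acq` (hypothetical rigid shift).
    return coords + 0.01 * acq

# Low-resolution acquisitions sample the continuous function at
# voxel centres, each with its own sub-voxel motion.
lo = np.stack(np.meshgrid(*[np.arange(4.0)] * 3, indexing="ij"), axis=-1)
samples = [inr(perturbation(lo, acq)) for acq in range(3)]  # training targets

# Super-resolution: evaluate the same function on a 2x finer grid,
# predicting intensities at intermediate voxel locations.
hi = np.stack(np.meshgrid(*[np.arange(0, 4, 0.5)] * 3, indexing="ij"), axis=-1)
sr = inr(hi)  # (8, 8, 8) higher-resolution prediction
```

This is why no high-resolution training set is needed: the supervision comes entirely from the low-resolution samples, and resolution is gained by querying the fitted function more densely.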
A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks
Transformer is a deep neural network that employs a self-attention mechanism
to comprehend the contextual relationships within sequential data. Unlike
conventional neural networks or updated versions of Recurrent Neural Networks
(RNNs) such as Long Short-Term Memory (LSTM), transformer models excel in
handling long dependencies between input sequence elements and enable parallel
processing. As a result, transformer-based models have attracted substantial
interest among researchers in the field of artificial intelligence. This can be
attributed to their immense potential and remarkable achievements, not only in
Natural Language Processing (NLP) tasks but also in a wide range of domains,
including computer vision, audio and speech processing, healthcare, and the
Internet of Things (IoT). Although several survey papers have been published
highlighting the transformer's contributions in specific fields, architectural
differences, or performance evaluations, there is still a significant absence
of a comprehensive survey paper encompassing its major applications across
various domains. Therefore, we undertook the task of filling this gap by
conducting an extensive survey of proposed transformer models from 2017 to
2022. Our survey encompasses the identification of the top five application
domains for transformer-based models, namely: NLP, Computer Vision,
Multi-Modality, Audio and Speech Processing, and Signal Processing. We analyze
the impact of highly influential transformer-based models in these domains and
subsequently classify them based on their respective tasks using a proposed
taxonomy. Our aim is to shed light on the existing potential and future
possibilities of transformers for enthusiastic researchers, thus contributing
to the broader understanding of this groundbreaking technology.
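The self-attention mechanism named at the start of this abstract, each position attending to every other position so that long-range dependencies are handled in a single matrix operation, can be sketched in numpy (illustrative shapes, single head, no masking):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention: every position attends to all
    others, which is how transformers handle long dependencies and why
    the whole sequence can be processed in parallel."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])            # (seq, seq) affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ v, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                  # 5 tokens, 8-dim embeddings
w = [rng.normal(size=(8, 8)) for _ in range(3)]
out, attn = self_attention(x, *w)            # out: (5, 8), attn: (5, 5)
```

Unlike an RNN, nothing here is sequential: all five positions are updated in one pass, which is the parallelism the abstract contrasts with LSTMs.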
A comparison of Generative Adversarial Networks for automated prostate cancer detection on T2-weighted MRI
Generative Adversarial Networks (GANs) have shown potential in medical imaging. In this study, several previously developed GANs were investigated for prostate cancer (PCa) detection on T2-weighted (T2W) magnetic resonance images (MRI). T2W MRI from an in-house collected dataset (N=961) were used to train, validate, and test an automated computer-aided detection (CAD) pipeline. The open-access PROSTATEx training dataset (N=199) was used as an external test set. The CAD pipeline consisted of normalization, prostate segmentation, quality control, prostate gland cropping, and a GAN model. Six GANs (f-AnoGAN, HealthyGAN, StarGAN, StarGAN-v2, Fixed-Point-GAN and DeScarGAN) were evaluated for PCa detection on the patient level using the area under the receiver operating characteristic curve (AUC). The best-performing GAN (validation set) was trained with five different initializations and evaluated on the internal and external test sets to assess its robustness. Fixed-Point-GAN performed best (validation AUC 0.76) and was selected for further assessment. The highest performances on the internal and external test sets were AUCs of 0.73 (95% CI: 0.68-0.79) and 0.77 (95% CI: 0.70-0.83), respectively. The average AUCs ± standard deviation across all runs corresponded to 0.71 ± 0.01 and 0.71 ± 0.04, respectively. Fixed-Point-GAN was identified as a promising GAN for the detection of PCa on T2W MRI. This model needs to be further investigated and trained on a larger dataset of multiparametric or biparametric MR images to assess its full potential as a support tool for radiologists.
Auditable and performant Byzantine consensus for permissioned ledgers
Permissioned ledgers allow users to execute transactions against a data store, and retain proof of their execution in a replicated ledger. Each replica verifies the transactions' execution and ensures that, in perpetuity, a committed transaction cannot be removed from the ledger. Unfortunately, this is not guaranteed by today's permissioned ledgers, which can be re-written if an arbitrary number of replicas collude. In addition, the transaction throughput of permissioned ledgers is low, hampering real-world deployments, because they do not take advantage of multi-core CPUs and hardware accelerators.
This thesis explores how permissioned ledgers and their consensus protocols can be made auditable in perpetuity, even when all replicas collude and re-write the ledger. It also addresses how Byzantine consensus protocols can be changed to increase the execution throughput of complex transactions. This thesis makes the following contributions:
1. Always auditable Byzantine consensus protocols. We present a permissioned ledger system that can assign blame to individual replicas regardless of how many of them misbehave. This is achieved by signing and storing consensus protocol messages in the ledger and providing clients with signed, universally-verifiable receipts.
2. Performant transaction execution with hardware accelerators. Next, we describe a cloud-based machine learning (ML) inference service that provides strong integrity guarantees, while staying compatible with current inference APIs. We change the Byzantine consensus protocol to execute ML inference computation on GPUs, optimizing its throughput and latency.
3. Parallel transaction execution on multi-core CPUs. Finally, we introduce a permissioned ledger that executes transactions, in parallel, on multi-core CPUs. We separate the execution of transactions between the primary and secondary replicas. The primary replica executes transactions on multiple CPU cores and creates a dependency graph of the transactions that the backup replicas utilize to execute transactions in parallel.
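The dependency-graph replay in contribution 3 can be sketched with Python's standard-library `graphlib`. The transactions and their conflict edges below are hypothetical; the thesis does not specify this exact scheme, only that backups use the primary's dependency graph to find transactions safe to run concurrently.

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph built by the primary: a transaction
# depends on every earlier transaction whose write set conflicts with
# its own read/write set.
deps = {"t1": set(), "t2": set(), "t3": {"t1"}, "t4": {"t1", "t2"}}

# A backup replica drains the graph in batches; all transactions in a
# batch are mutually independent, so each batch could be dispatched to
# separate CPU cores.
ts = TopologicalSorter(deps)
ts.prepare()
batches = []
while ts.is_active():
    ready = sorted(ts.get_ready())   # independent, runnable transactions
    batches.append(ready)
    ts.done(*ready)                  # mark batch executed, unblocking successors
```

Here `t1` and `t2` run concurrently in the first batch, and once both complete, `t3` and `t4` form the second batch, yet every replica still reaches the same final state because conflicting transactions stay ordered.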
Automated Distinct Bone Segmentation from Computed Tomography Images using Deep Learning
Large-scale CT scans are frequently performed for forensic and diagnostic purposes, to plan and
direct surgical procedures, and to track the development of bone-related diseases. This often
involves radiologists who have to annotate bones manually or in a semi-automatic way, which is
a time-consuming task. Their annotation workload can be reduced by automated segmentation
and detection of individual bones. This automation of distinct bone segmentation not only has
the potential to accelerate current workflows but also opens up new possibilities for processing
and presenting medical data for planning, navigation, and education.
In this thesis, we explored the use of deep learning for automating the segmentation of all
individual bones within an upper-body CT scan. To do so, we had to find a network architecture
that provides a good trade-off between the problem's high computational demands and the
results' accuracy. After finding a baseline method and enlarging the dataset, we set out
to eliminate the most prevalent types of error. To do so, we introduced a novel method called
binary-prediction-enhanced multi-class (BEM) inference, which separates the task into two:
distinguishing bone from non-bone is conducted separately from identifying the individual bones.
Both predictions are then merged, which leads to superior results. Another type of error is
tackled by our developed architecture, the Sneaky-Net, which receives additional inputs with
larger fields of view but at a smaller resolution. We can thus sneak more extensive areas of
the input into the network while keeping the growth of additional pixels in check.
Overall, we present a deep-learning-based method that reliably segments most of the over
one hundred distinct bones present in upper-body CT scans, in an end-to-end trained manner,
quickly enough to be used in interactive software. Our algorithm has been included in our
group's virtual reality medical image visualisation software, SpectoVR, with the plan to be used
as one of the puzzle pieces in surgical planning and navigation, as well as in the education of
future doctors.
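The BEM merge described above can be sketched in numpy. The abstract says only that the binary and multi-class predictions are merged; the gating rule below (suppress any per-bone label where the binary network sees no bone) is one plausible merge, and the shapes are illustrative.

```python
import numpy as np

def bem_inference(multiclass_probs, binary_bone_prob, thr=0.5):
    """Binary-prediction-enhanced multi-class (BEM) inference sketch:
    a binary bone/non-bone prediction gates a per-bone multi-class
    prediction. multiclass_probs: (C, D, H, W), class 0 = background."""
    labels = multiclass_probs.argmax(axis=0)   # per-voxel bone identity
    bone_mask = binary_bone_prob > thr         # is there bone here at all?
    return np.where(bone_mask, labels, 0)      # non-bone voxels -> background

rng = np.random.default_rng(1)
probs = rng.random((4, 2, 2, 2))   # 3 bones + background, tiny volume
bone = rng.random((2, 2, 2))       # binary bone probability
seg = bem_inference(probs, bone)
```

The design intuition is that the easy binary problem is solved with high recall and precision, so its mask can veto the multi-class network's false positives in soft tissue.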
Exploring contrast generalisation in deep learning-based brain MRI-to-CT synthesis
Background: Synthetic computed tomography (sCT) has been proposed and
increasingly clinically adopted to enable magnetic resonance imaging
(MRI)-based radiotherapy. Deep learning (DL) has recently demonstrated the
ability to generate accurate sCT from fixed MRI acquisitions. However, MRI
protocols may change over time or differ between centres resulting in
low-quality sCT due to poor model generalisation. Purpose: To investigate domain
randomisation (DR) to increase the generalisation of a DL model for brain sCT
generation. Methods: CT and corresponding T1-weighted MRI with/without
contrast, T2-weighted, and FLAIR MRI from 95 patients undergoing RT were
collected, with FLAIR treated as the unseen sequence on which to investigate
generalisation. A "Baseline" generative adversarial network was trained
with/without the FLAIR sequence to test how a model performs without DR. Image
similarity and accuracy of sCT-based dose plans were assessed against CT to
select the best-performing DR approach against the Baseline. Results: The
Baseline model had the poorest performance on FLAIR, with a mean absolute error
(MAE) of 106 ± 20.7 HU (mean). Performance on FLAIR significantly
improved for the DR model, with MAE = 99.0 ± 14.9 HU, but remained inferior to the
performance of the Baseline+FLAIR model (MAE = 72.6 ± 10.1 HU). Similarly, an
improvement in the γ-pass rate was obtained for DR vs Baseline. Conclusions:
DR improved image similarity and dose accuracy on the unseen sequence compared
to training only on acquired MRI. DR makes the model more robust, reducing the
need for re-training when applying the model to sequences unseen and unavailable
for retraining. Comment: Preprint submitted to Physica Medica on 2023-02-16 for review. Also
published in Zenodo at https://doi.org/10.5281/zenodo.774264
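One way to picture domain randomisation for contrast generalisation is as random intensity re-mappings applied to training volumes, so the generator never overfits one MRI contrast. The gamma/linear transform below is an assumption for illustration, not the paper's actual DR strategy.

```python
import numpy as np

def randomise_contrast(mri, rng):
    """Hypothetical domain-randomisation step: a random gamma curve plus a
    random linear intensity transform, producing a synthetic "contrast"
    for each training sample."""
    x = (mri - mri.min()) / (np.ptp(mri) + 1e-8)   # normalise to [0, 1]
    gamma = rng.uniform(0.5, 2.0)                  # random contrast curve
    scale, shift = rng.uniform(0.8, 1.2), rng.uniform(-0.1, 0.1)
    return np.clip(scale * x**gamma + shift, 0.0, 1.0)

rng = np.random.default_rng(0)
vol = rng.normal(size=(8, 8, 8))       # stand-in for an MRI volume
aug = randomise_contrast(vol, rng)     # one randomised-contrast sample
```

Trained on many such randomised contrasts, a model has less opportunity to latch onto the intensity statistics of any single sequence, which is the effect the abstract reports on the unseen FLAIR data.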