20 research outputs found
Modelling and simulation of cardiac shape and function in healthy humans and their industrial, clinical and computational applications
Advancing GABA-edited MRS Research through a Reconstruction Challenge
Purpose
To create a benchmark for comparing machine learning-based gamma-aminobutyric acid (GABA)-edited magnetic resonance spectroscopy (MRS) reconstruction models using one quarter of the transients typically acquired during a complete scan.
Methods
The edited-MRS reconstruction challenge had three tracks evaluating machine learning models trained to reconstruct simulated (Track 1), homogeneous in vivo (Track 2), and heterogeneous in vivo (Track 3) GABA-edited MRS data. Four quantitative metrics were used to evaluate the results: mean squared error (MSE), signal-to-noise ratio (SNR), linewidth, and a proposed shape score metric. Challenge participants were given three months to create, train, and submit their models. Challenge organizers provided open access to a baseline U-Net model for initial comparison, as well as simulated data, in vivo data, and tutorials and guides for adding synthetic noise to the simulations.
Results
The most successful approach for the Track 1 simulated data was a covariance matrix convolutional neural network model, while for the Track 2 and Track 3 in vivo data, a vision transformer model operating on a spectrogram representation of the data was most successful. Deep learning (DL) based reconstructions with reduced transients achieved equivalent or better SNR, linewidth, and fit error than conventional reconstructions using the full number of transients. However, some DL models also optimized the linewidth and SNR values without actually improving overall spectral quality, pointing to the need for more robust metrics.
Conclusion
The edited-MRS reconstruction challenge showed that the top-performing DL-based edited-MRS reconstruction pipelines can, with a reduced number of transients, match the metrics of conventional reconstruction pipelines that use the full number of transients. The proposed shape score metric was positively correlated with challenge track outcomes, indicating that it is well suited to evaluating spectral quality.
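The four evaluation metrics are standard enough to sketch in code. Below is a minimal Python illustration of MSE, SNR, and a correlation-based shape score; the index regions and the exact shape-score formula are assumptions for illustration, not the challenge's official definitions.

```python
import numpy as np

def mse(reconstructed, target):
    """Mean squared error between reconstructed and target spectra."""
    return np.mean(np.abs(reconstructed - target) ** 2)

def snr(spectrum, peak_idx, noise_idx):
    """SNR as peak height over the standard deviation of a signal-free region.

    peak_idx and noise_idx are index ranges (e.g. from a ppm-to-index
    mapping); both are placeholders here."""
    return np.max(np.real(spectrum[peak_idx])) / np.std(np.real(spectrum[noise_idx]))

def shape_score(reconstructed, target, peak_idx):
    """Correlation of the peak-region shapes, mapped to [0, 1]."""
    r = np.corrcoef(np.real(reconstructed[peak_idx]), np.real(target[peak_idx]))[0, 1]
    return (r + 1) / 2
```

A correlation-based score of this kind is insensitive to overall scaling, which is one reason a shape measure can flag spectra that score well on SNR and linewidth yet are distorted.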
Results of the 2023 ISBI challenge to reduce GABA-edited MRS acquisition time
Purpose
Use a conference challenge format to compare machine learning-based gamma-aminobutyric acid (GABA)-edited magnetic resonance spectroscopy (MRS) reconstruction models using one quarter of the transients typically acquired during a complete scan.
Methods
There were three tracks: Track 1, simulated data; Track 2, in vivo data with identical acquisition parameters; and Track 3, in vivo data with different acquisition parameters. The mean squared error, signal-to-noise ratio, linewidth, and a proposed shape score metric were used to quantify model performance. Challenge organizers provided open access to a baseline model, simulated noise-free data, guides for adding synthetic noise, and in vivo data.
Results
Three submissions were compared. A covariance matrix convolutional neural network model was most successful for Track 1. A vision transformer model operating on a spectrogram data representation was most successful for Tracks 2 and 3. Deep learning (DL) reconstructions with 80 transients achieved equivalent or better SNR, linewidth, and fit error compared with conventional 320-transient reconstructions. However, some DL models optimized linewidth and SNR without actually improving overall spectral quality, indicating a need for more robust metrics.
Conclusion
DL-based reconstruction pipelines show promise for reducing the number of transients required for GABA-edited MRS.
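For context, conventional edited-MRS reconstruction averages the edit-ON and edit-OFF transients and subtracts them to isolate the GABA signal; the DL models above aim to match this with a quarter of the data. A simplified sketch, assuming transients are stored as complex arrays of shape (n_transients, n_points) and that the 320 transients split evenly between ON and OFF (a typical but here assumed scheme):

```python
import numpy as np

def difference_spectrum(on_transients, off_transients):
    """Conventional edited-MRS reconstruction: average the edit-ON and
    edit-OFF transients separately, then subtract the means to isolate
    the edited (GABA) signal."""
    return on_transients.mean(axis=0) - off_transients.mean(axis=0)

# Full scan: e.g. 160 ON + 160 OFF transients (320 total); the challenge
# asked models to produce comparable spectra from only 80 transients.
```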
Cardiac Simulations: Computer Games to Mend Broken Hearts
The heart is the organ in charge of pumping blood to the rest of the body. But the heart can get sick, and we want to know how to best mend it. Sometimes, doctors can give medicines or do heart surgery to treat heart problems. But some medicines may not work for everyone, and surgeries can be very difficult. Now we have special computer games to better guide these decisions: cardiac simulations. Cardiac simulations are like having a twin of the heart in the computer. Doctors can try different medicines in the simulation without putting the patient at risk. They can also use the heart simulation to practice for surgery. They can train using the simulation until they are ready, and then do the surgery in the real world. With the help of computer simulations, we can find the best way to help sick hearts.
Generating Synthetic Labeled Data from Existing Anatomical Models: An Example with Echocardiography Segmentation
Deep learning can bring time savings and increased reproducibility to medical image analysis. However, acquiring training data is challenging due to the time-intensive nature of labeling and high inter-observer variability in annotations. Rather than labeling images, in this work we propose an alternative pipeline where images are generated from existing high-quality annotations using generative adversarial networks (GANs). Annotations are derived automatically from previously built anatomical models and are transformed into realistic synthetic ultrasound images with paired labels using a CycleGAN. We demonstrate the pipeline by generating synthetic 2D echocardiography images to compare with existing deep learning ultrasound segmentation datasets. A convolutional neural network is trained to segment the left ventricle and left atrium using only synthetic images. Networks trained with synthetic images were extensively tested on four different unseen datasets of real images, with median Dice scores of 91, 90, 88, and 87 for left ventricle segmentation. These results match or exceed inter-observer results measured on real ultrasound datasets and are comparable to a network trained on a separate set of real images. The results demonstrate that the generated images can effectively be used in place of real data for training. The proposed pipeline opens the door to automatic generation of training data for many tasks in medical imaging, as the same process can be applied to other segmentation or landmark detection tasks in any modality. The source code and anatomical models are available to other researchers at https://adgilbert.github.io/data-generation/.
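The Dice score used for evaluation above is a standard overlap measure between binary segmentation masks. A minimal sketch (mask shapes and names are illustrative):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary segmentation masks:
    2 * |pred AND truth| / (|pred| + |truth|), in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total
```

A median Dice of 91 (on a 0-100 scale) for left ventricle segmentation is therefore a high-overlap result, in line with the inter-observer agreement the abstract cites.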
SciBlend: Advanced data visualization workflows within Blender
Scientific data visualization is essential for analysis, communication, and storytelling in research. While Blender offers a powerful rendering engine and a flexible 3D environment, its steep learning curve and general-purpose interface can hinder scientific workflows. To address this gap, we present SciBlend, a Python-based toolkit that extends Blender for data visualization. It provides specialized add-ons for importing multiple computational data formats, annotation, shading, and scene composition, enabling both photorealistic (Cycles) and real-time (EEVEE) rendering of large-scale and time-varying data. By combining a streamlined workflow with physically based rendering, SciBlend supports advanced visualization tasks while preserving essential scientific attributes. Comparative evaluations across multiple case studies show improvements in rendering performance, clarity, and reproducibility relative to traditional tools. This modular and user-oriented design offers a robust solution for creating publication-ready visuals of complex computational data.
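To illustrate the kind of workflow SciBlend streamlines, here is a minimal plain Blender Python (bpy) script, not SciBlend's own API: import a simulation mesh, pick a render engine, and render a still. File paths are placeholders, and operator names vary with Blender version (bpy.ops.wm.obj_import is the Blender 4.x form).

```python
import bpy

# Import a mesh exported from a computational solver (hypothetical path)
bpy.ops.wm.obj_import(filepath="/data/heart_simulation_frame_0042.obj")

scene = bpy.context.scene

# Photorealistic path tracing; switch to EEVEE for real-time rendering
scene.render.engine = 'CYCLES'

# Render a publication-ready still image to disk
scene.render.filepath = "/tmp/heart_frame_0042.png"
bpy.ops.render.render(write_still=True)
```

Scripting these steps by hand for every frame of a time-varying dataset is exactly the repetitive work that motivates a dedicated toolkit.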
Optimization of anti-tachycardia pacing efficacy through scar-specific delivery and minimization of re-initiation: a virtual study on a cohort of infarcted porcine hearts
Abstract
Aims
Anti-tachycardia pacing (ATP) is a reliable electrotherapy to painlessly terminate ventricular tachycardia (VT). However, ATP is often ineffective, particularly for fast VTs. The efficacy may be enhanced by optimized delivery closer to the re-entrant circuit driving the VT. This study aims to compare ATP efficacy for different delivery locations with respect to the re-entrant circuit, and further optimize ATP by minimizing failure through re-initiation.
Methods and results
Seventy-three sustained VTs were induced in a cohort of seven infarcted porcine ventricular computational models, largely dominated by a single re-entrant pathway. The efficacy of burst ATP delivered from three locations proximal to the re-entrant circuit (septum) and three distal locations (lateral/posterior left ventricle) was compared. Re-initiation episodes were used to develop an algorithm that uses correlations between successive sensed electrogram morphologies to automatically truncate ATP pulse delivery. Anti-tachycardia pacing was more efficacious at terminating slow compared with fast VTs (65 vs. 46%, P = 0.000039). A separate analysis of slow VTs showed that efficacy was significantly higher when ATP was delivered from distal rather than proximal locations (distal 72%, proximal 59%), with the pattern reversed for fast VTs (distal 41%, proximal 51%). Our early termination detection algorithm (ETDA) accurately detected VT termination in 79% of re-initiated cases, improving the overall efficacy of proximal delivery; delivery inside the critical isthmus (CI) itself was the most effective overall.
Conclusion
Anti-tachycardia pacing delivered proximal to the re-entrant circuit is more effective at terminating fast VTs but less effective for slow VTs, owing to frequent re-initiation. Attenuating re-initiation through the ETDA increases the efficacy of delivery within the CI for all VTs.
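The core idea behind the ETDA, as stated in the methods, is that beat-to-beat electrogram morphology is stable during sustained VT and changes when the VT terminates. The sketch below is a schematic illustration of that idea only, not the authors' implementation; the beat segmentation and the correlation threshold are assumptions.

```python
import numpy as np

def atp_should_stop(sensed_beats, corr_threshold=0.9):
    """Schematic early-termination check in the spirit of the ETDA:
    successive sensed electrogram beats during sustained VT correlate
    strongly; a drop in beat-to-beat correlation suggests the VT has
    terminated, so remaining ATP pulses can be withheld to avoid
    re-initiating the arrhythmia. Threshold is illustrative only."""
    for prev_beat, beat in zip(sensed_beats, sensed_beats[1:]):
        r = np.corrcoef(prev_beat, beat)[0, 1]
        if r < corr_threshold:
            return True  # morphology changed: candidate termination
    return False
```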
A systematic review of cardiac in-silico clinical trials
Computational models of the heart are now being used to assess the effectiveness and feasibility of interventions through in-silico clinical trials (ISCTs). As the adoption and acceptance of ISCTs increases, best practices for reporting the methodology and analysing the results will emerge. Focusing on the area of cardiology, we aim to evaluate the types of ISCTs, their analysis methods, and their reporting standards. To this end, we conducted a systematic review of cardiac ISCTs over the period 1 January 2012 to 1 January 2022, following the preferred reporting items for systematic reviews and meta-analyses (PRISMA). We considered cardiac ISCTs of human patient cohorts, and excluded studies of single individuals and those in which models were used to guide a procedure without comparing against a control group. We identified 36 publications that described cardiac ISCTs, with most of the studies coming from the US and the UK. In of the studies, a validation step was performed, although the specific type of validation varied between the studies. ANSYS FLUENT was the most commonly used software in of ISCTs. The specific software used was not reported in of the studies. Unlike clinical trials, we found a lack of consistent reporting of patient demographics, with of the studies not reporting them. Uncertainty quantification was limited, with sensitivity analysis performed in only of the studies. In of the ISCTs, no link was provided to give easy access to the data or models used in the study. There was no consistent naming of study types, with a wide range of studies that could potentially be considered ISCTs. There is a clear need for community agreement on minimal reporting standards for patient demographics, accepted standards for ISCT cohort quality control, uncertainty quantification, and increased model and data sharing.
