Covid-19: predictive mathematical formulae for the number of deaths during lockdown and possible scenarios for the post-lockdown period.
In a recent article, we introduced two novel mathematical expressions and a deep learning algorithm for characterizing the dynamics of the number of reported SARS-CoV-2 infections. Here, we show that such formulae can also be used to determine the time evolution of the associated number of deaths: for the epidemics in Spain, Germany, Italy and the UK, the parameters defining these formulae were computed using data up to 1 May 2020, a period of lockdown for these countries; the predictions of the formulae were then compared with the data for the following 122 days, namely until 1 September. These comparisons, in addition to demonstrating the remarkable predictive capacity of our simple formulae, also show that for a rather long time the easing of lockdown measures did not affect the number of deaths. The importance of these results for predicting the number of Covid-19 deaths during the post-lockdown period is discussed.
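The abstract does not reproduce the formulae themselves, so the sketch below only illustrates the fit-then-extrapolate procedure it describes: a generic logistic curve (a stand-in model, not the authors' expressions) is fitted to synthetic cumulative-death counts over a "lockdown" window, then extrapolated 122 days ahead. All parameter values and the grid-search fit are invented for illustration.

```python
import numpy as np

# Stand-in parametric model (NOT the paper's formulae): a logistic curve.
def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(0)
t = np.arange(0, 60)                        # days of "lockdown" data
data = logistic(t, K=25000, r=0.15, t0=30)  # synthetic cumulative deaths
data = data + rng.normal(0, 100, t.size)    # plus reporting noise

# Coarse grid search for the parameters, standing in for the fitting step.
best = None
for K in np.linspace(20000, 30000, 21):
    for r in np.linspace(0.05, 0.30, 26):
        for t0 in np.linspace(20, 40, 21):
            sse = np.sum((logistic(t, K, r, t0) - data) ** 2)
            if best is None or sse < best[0]:
                best = (sse, K, r, t0)

_, K_fit, r_fit, t0_fit = best
# Extrapolate beyond the fitting window, mimicking the 122-day comparison.
t_future = np.arange(60, 182)
prediction = logistic(t_future, K_fit, r_fit, t0_fit)
```

The key point mirrored here is that the parameters are frozen using only data up to the cutoff; everything after the cutoff is pure prediction.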
Is attention all you need in medical image analysis? A review
Medical imaging is a key component of clinical diagnosis, treatment planning and clinical trial design, accounting for almost 90% of all healthcare data. CNNs have achieved performance gains in medical image analysis (MIA) in recent years. CNNs can efficiently model local pixel interactions and can be trained on small-scale MI data. The main disadvantage of typical CNN models is that they ignore global pixel relationships within images, which limits their ability to generalise to out-of-distribution data with different 'global' information. Recent progress in Artificial Intelligence gave rise to Transformers, which can learn global relationships from data. However, full Transformer models need to be trained on large-scale data and involve tremendous computational complexity. Attention and Transformer compartments (Transf/Attention), which preserve the capacity to model global relationships, have been proposed as lighter alternatives to full Transformers. Recently, there has been an increasing trend to cross-pollinate complementary local-global properties from CNN and Transf/Attention architectures, which has led to a new era of hybrid models. The past years have witnessed substantial growth in hybrid CNN-Transf/Attention models across diverse MIA problems. In this systematic review, we survey existing hybrid CNN-Transf/Attention models, review and unravel key architectural designs, analyse breakthroughs, and evaluate current and future opportunities as well as challenges. We also introduce a comprehensive analysis framework on generalisation opportunities of scientific and clinical impact, based on which new data-driven domain generalisation and adaptation methods can be stimulated.
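As a rough sketch of the local-global complementarity the review discusses, the NumPy-only toy below (not any model from the surveyed literature) passes an image through a local mean filter standing in for a convolutional layer, then applies single-head scaled dot-product attention across the resulting rows treated as tokens. All shapes and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_local(x, k=3):
    """Local mixing: naive 'same' 2-D mean filter, a stand-in for a CNN layer."""
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def self_attention(tokens, d=8):
    """Global mixing: single-head scaled dot-product attention over tokens."""
    n, f = tokens.shape
    Wq, Wk, Wv = (rng.normal(0, 0.1, (f, d)) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ V                              # every token sees all tokens

img = rng.normal(size=(16, 16))
feat = conv2d_local(img)          # CNN stage: local pixel interactions only
tokens = feat.reshape(16, 16)     # 16 tokens of 16 features each
out = self_attention(tokens)      # attention stage: global relationships
```

The convolution's receptive field is a fixed k×k window, while each attention output row is a weighted mix of every token, which is the global property hybrid models borrow.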
Statistical limitations in ion imaging
In this study, we investigated the capacity of various ion beams available for radiotherapy to produce high-quality relative stopping power maps acquired from energy-loss measurements. The image quality metrics chosen to compare the different ions were signal-to-noise ratio (SNR) as a function of dose, and spatial resolution. Geant4 Monte Carlo simulations were performed for hydrogen, helium, lithium, boron and carbon ion beams crossing a 20 cm diameter water phantom to determine SNR and spatial resolution. It was found that protons possess a significantly larger SNR than the other ions at a fixed range (up to 36% higher than helium) due to the proton's nuclear stability and low dose per primary. However, protons also yield the lowest spatial resolution of all the ions, lower by a factor of 4 than that of carbon imaging for a beam with the same initial range. When comparing at a fixed spatial resolution of 10 lp cm⁻¹, carbon ions produce the highest image quality metrics and protons the lowest. In conclusion, no ion maximizes all image quality metrics simultaneously, and a choice must be made between spatial resolution, SNR, and dose.
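A hedged numeric sketch of the SNR-versus-dose trade-off discussed above (not the paper's Geant4 setup): for energy-loss imaging, the SNR of a pixel's water-equivalent path length (WEPL) estimate grows with the square root of the number of detected particles, i.e. with dose. The straggling width and particle counts below are invented illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

true_wepl = 20.0   # cm, matching the 20 cm water phantom above
sigma = 0.5        # assumed per-particle WEPL straggling (illustrative)

def snr_of_mean(n_particles):
    """SNR of the mean WEPL estimate from n independent measurements."""
    m = true_wepl + rng.normal(0.0, sigma, n_particles)
    return m.mean() / (m.std(ddof=1) / np.sqrt(n_particles))

# Quadrupling the number of particles (dose) roughly doubles the SNR.
ratio = snr_of_mean(4000) / snr_of_mean(1000)
```

This sqrt-of-dose scaling is why SNR comparisons between ion species must be made at a fixed dose, as the study does.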
Machine learning for proton path tracking in proton computed tomography
A Machine Learning approach to the problem of calculating proton paths inside a scanned object in proton Computed Tomography is presented. The method is developed to mitigate the loss in both spatial resolution and quantitative integrity of the reconstructed images caused by multiple Coulomb scattering of protons traversing matter. Two Machine Learning models were used: a feed-forward neural network (NN) and the XGBoost method. A heuristic approach based on track averaging was also implemented to evaluate the accuracy limits on track calculation imposed by the statistical nature of the scattering. Synthetic data from anthropomorphic voxelized phantoms, generated by the Monte Carlo (MC) Geant4 code, were utilized to train the models and evaluate their accuracy against a widely used analytical method based on likelihood maximization and the Fermi-Eyges scattering model. Both the NN and the XGBoost model were found to perform very close to or at the accuracy limit, improving on the accuracy of the analytical method (by 12% in the typical case of 200 MeV protons on a 20 cm water object), especially for protons scattered at large angles. Inclusion of material information along the path, in terms of radiation length, did not improve accuracy for the phantoms simulated in the study. A NN was also constructed to predict the error in path calculation, enabling a criterion to filter out proton events that may degrade the quality of the reconstructed image. By parametrizing a large set of synthetic data, the Machine Learning models proved capable of bringing, in an indirect and time-efficient way, the accuracy of the MC method into the problem of proton tracking.
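The NN and XGBoost models themselves are not reproduced here. As a simple classical baseline for the same task, the snippet below estimates a proton's lateral trajectory with a cubic polynomial constrained by measured entry and exit positions and directions, a hedged stand-in for the likelihood-based path estimates the paper compares against; all numbers are illustrative.

```python
import numpy as np

def cubic_path(y_in, th_in, y_out, th_out, depth, z):
    """Lateral position y(z) for 0 <= z <= depth from boundary measurements.
    th_in/th_out are the tangents (dy/dz) of the entry/exit directions."""
    # Solve y(z) = a + b z + c z^2 + d z^3 with four boundary conditions:
    # y(0) = y_in, y'(0) = th_in, y(depth) = y_out, y'(depth) = th_out.
    A = np.array([
        [1.0, 0.0,   0.0,        0.0],
        [0.0, 1.0,   0.0,        0.0],
        [1.0, depth, depth**2,   depth**3],
        [0.0, 1.0,   2 * depth,  3 * depth**2],
    ])
    a, b, c, d = np.linalg.solve(A, np.array([y_in, th_in, y_out, th_out]))
    return a + b * z + c * z**2 + d * z**3

# 20 cm water object, as in the study; boundary values are invented.
z = np.linspace(0.0, 20.0, 5)
y = cubic_path(0.0, 0.01, 0.3, 0.02, 20.0, z)
```

The learned models described above aim to beat this kind of smooth analytical interpolation, particularly for protons scattered at large angles where the true track deviates most from it.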
A likelihood-based particle imaging filter using prior information
Background: Particle imaging can increase precision in proton and ion therapy. Interactions with nuclei in the imaged object increase image noise and reduce image quality, especially for multinucleon ions that can fragment, such as helium. Purpose: This work proposes a particle imaging filter, referred to as the Prior Filter, based on using prior information in the form of an estimated relative stopping power (RSP) map and the principles of electromagnetic interaction, to identify particles that have undergone nuclear interaction. The particles identified as having undergone nuclear interactions are then excluded from the image reconstruction, reducing the image noise. Methods: The Prior Filter uses Fermi-Eyges scattering and Tschalär straggling theories to determine the likelihood that a particle only interacts electromagnetically. A threshold is then set to reject those particles with a low likelihood. The filter was evaluated and compared with a filter that estimates this likelihood based on the measured distribution of energy and scattering angle within pixels, commonly implemented as the 3σ filter. Reconstructed radiographs from simulated data of a 20-cm water cylinder and an anthropomorphic chest phantom were generated with both protons and helium ions to assess the effect of the filters on noise reduction. The simulation also allowed assessment of secondary particle removal through the particle histories. Experimental data were acquired of the Catphan CTP 404 Sensitometry phantom using the U.S. proton CT (pCT) collaboration prototype scanner. The proton and helium images were filtered with both the prior filtering method and a state-of-the-art method including an implementation of the 3σ filter. For both cases, a dE-E telescope filter, designed for this type of detector, was also applied.
Results: The proton radiographs showed a small reduction in noise (1 mm of water-equivalent thickness [WET]) but a larger reduction in helium radiographs (up to 5-6 mm of WET) due to better secondary filtering. The proton and helium CT images reflected this, with similar noise at the center of the phantom (0.02 RSP) for the proton images and an RSP noise of 0.03 for the proposed filter and 0.06 for the 3σ filter in the helium images. Images reconstructed from data with a dose reduction, up to a factor of 9, maintained a lower noise level using the Prior Filter over the state-of-the-art filtering method. Conclusions: The proposed filter results in images with equal or reduced noise compared to those that have undergone a filtering method typical of current particle imaging studies. This work also demonstrates that the proposed filter maintains better performance against the state of the art with up to a nine-fold dose reduction.
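A minimal sketch of the 3σ-style data cut used here as the comparison baseline, assuming roughly Gaussian electromagnetic scattering within a pixel and treating large angular outliers as nuclear events; the angle values below are synthetic and the threshold choice is the conventional one, not taken from this paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def three_sigma_mask(values, n_sigma=3.0):
    """Keep only values within n_sigma standard deviations of the mean."""
    mu, sd = values.mean(), values.std(ddof=1)
    return np.abs(values - mu) <= n_sigma * sd

# Synthetic exit angles (mrad) within one pixel: mostly electromagnetic
# multiple scattering, plus a few large-angle nuclear-scattered outliers.
angles = rng.normal(0.0, 5.0, 1000)
angles[:20] += rng.uniform(40.0, 80.0, 20)

keep = three_sigma_mask(angles)
filtered = angles[keep]           # outliers rejected, noise reduced
```

The Prior Filter described above replaces this purely statistical per-pixel cut with a per-particle likelihood computed from scattering and straggling theory plus the prior RSP map.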
OCTAVA: An open-source toolbox for quantitative analysis of optical coherence tomography angiography images
Optical coherence tomography angiography (OCTA) performs non-invasive visualization and characterization of microvasculature in research and clinical applications, mainly in ophthalmology and dermatology. A wide variety of instruments, imaging protocols, processing methods and metrics have been used to describe the microvasculature, such that comparing different study outcomes is currently not feasible. With the goal of contributing to the standardization of OCTA data analysis, we report a user-friendly, open-source toolbox, OCTAVA (OCTA Vascular Analyzer), that automates the pre-processing, segmentation, and quantitative analysis of en face OCTA maximum intensity projection images in a standardized workflow. We present each analysis step, including optimization of filtering and choice of segmentation algorithm, and the definition of metrics. We perform quantitative analysis of OCTA images from different commercial and non-commercial instruments and samples and show that OCTAVA can accurately and reproducibly determine metrics for the characterization of microvasculature. Wide adoption could enable studies and aggregation of data on a scale sufficient to develop reliable microvascular biomarkers for the early detection, and to guide the treatment, of microvascular disease.
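OCTAVA's own metric definitions live in the toolbox itself; as a hedged illustration of the kind of quantitative output such a workflow produces, the snippet below computes vessel area density, a common microvasculature metric, from a hypothetical binary segmentation mask. The mask geometry is entirely invented.

```python
import numpy as np

def vessel_area_density(binary_mask):
    """Fraction of the en face image area occupied by segmented vessels."""
    return binary_mask.sum() / binary_mask.size

# Hypothetical segmentation result: two crossing "vessels" on a 100x100 image.
mask = np.zeros((100, 100), dtype=bool)
mask[40:45, :] = True        # a horizontal vessel, 5 px wide
mask[:, 70:72] = True        # a vertical vessel, 2 px wide

density = vessel_area_density(mask)
```

Standardizing metrics like this one across instruments and protocols is exactly the comparability problem the toolbox targets.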