
    Auto-Regressive Discrete Acquisition Points Transformation for Diffusion Weighted MRI Data

    Objective: A new method for fitting diffusion-weighted magnetic resonance imaging (DW-MRI) data composed of an unknown number of multi-exponential components is presented and evaluated. Methods: The auto-regressive discrete acquisition points transformation (ADAPT) method is an adaptation of the auto-regressive moving average system, which allows for the modeling of multi-exponential data and enables the estimation of the number of exponential components without prior assumptions. ADAPT was evaluated on simulated DW-MRI data. The optimum ADAPT fit was then applied to human brain DWI data, and the correlations between the ADAPT coefficients and the parameters of the commonly used bi-exponential intravoxel incoherent motion (IVIM) method were investigated. Results: The ADAPT method can correctly identify the number of components and model the exponential data. The ADAPT coefficients were found to have strong correlations with the IVIM parameters: ADAPT(1,1)-β0 correlated with IVIM-D (ρ = 0.708, P < 0.001), ADAPT(1,1)-α1 correlated with IVIM-f (ρ = 0.667, P < 0.001), and ADAPT(1,1)-β1 correlated with IVIM-D* (ρ = 0.741, P < 0.001). Conclusion: ADAPT provides a method that can identify the number of exponential components in DWI data without prior assumptions and determine potential complex diffusion biomarkers. Significance: ADAPT has the potential to provide a generalized fitting method for discrete multi-exponential data and to determine meaningful coefficients without prior information.
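    For reference, the bi-exponential IVIM model against which the ADAPT coefficients are correlated is conventionally written with perfusion fraction f, pseudo-diffusion coefficient D* and tissue diffusion coefficient D (standard notation from the IVIM literature, not specific to this paper):

```latex
% Conventional bi-exponential IVIM signal model
% f: perfusion fraction, D*: pseudo-diffusion, D: tissue diffusion
\frac{S(b)}{S_0} = f \, e^{-b D^{*}} + (1 - f) \, e^{-b D}
```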

    The development of a novel MRI-based method for measuring blood perfusion in neurovascular damage

    Diffusion-weighted magnetic resonance imaging (DWI) is a key neuroimaging technique. Multi-b-value DWI is composed of an unknown number of exponential components which represent water movement in various compartments, notably tissue and blood vessels. The bi-exponential model, intravoxel incoherent motion (IVIM), is commonly used to fit the perfusion component but does not account for the multi-component nature of the data. In this work, a new fitting method, the Auto-Regressive Discrete Acquisition Points Transformation (ADAPT), was developed and evaluated on simulated, phantom, volunteer and clinical DWI data. ADAPT is based on the auto-regressive moving average model and makes no prior assumptions about the data. ADAPT demonstrated that it could correctly identify the number of components within the diffusion signal. The ADAPT coefficients demonstrated a significant correlation with the IVIM parameters, and a significantly stronger correlation with cerebral blood volume derived from dynamic susceptibility contrast MRI. A reformulation of the ADAPT method allowed the IVIM parameters to be derived mathematically from the diffusion signal and demonstrated lower bias and greater accuracy than currently implemented fitting methods, which are inherently biased. ADAPT provides a novel method for the non-invasive determination of diffusion and perfusion biomarkers from complex tissues.
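    As context for the comparison above, the following is a minimal sketch of a conventional non-linear least-squares IVIM fit, the kind of baseline the thesis compares against; it is not the ADAPT reformulation itself, and the b-values, starting values and bounds are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_signal(b, f, d_star, d):
    """Bi-exponential IVIM model: S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

# Illustrative b-values (s/mm^2) and a noisy synthetic signal
b = np.array([0, 10, 20, 40, 80, 150, 300, 600, 1000], dtype=float)
signal = ivim_signal(b, 0.10, 20e-3, 0.8e-3) + np.random.normal(0, 0.01, b.size)

# Non-linear least-squares fit with physically plausible bounds
p0 = [0.1, 10e-3, 1e-3]
bounds = ([0.0, 1e-3, 1e-4], [0.5, 100e-3, 3e-3])
(f_hat, d_star_hat, d_hat), _ = curve_fit(ivim_signal, b, signal, p0=p0, bounds=bounds)
print(f"f={f_hat:.3f}, D*={d_star_hat:.4f} mm^2/s, D={d_hat:.5f} mm^2/s")
```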

    Nephroblastoma in MRI Data

    The main objective of this work is the mathematical analysis of nephroblastoma in MRI sequences. At the beginning we provide two different datasets for segmentation and classification. Based on the first dataset, we analyze the current clinical practice regarding therapy planning on the basis of annotations of a single radiologist. We show with our benchmark that this approach is not optimal and that there may be significant differences between human annotators, and even between radiologists. In addition, we demonstrate that the approximation of the tumor shape currently used is too coarse-grained and thus prone to errors. We address this problem and develop a method for interactive segmentation that allows an intuitive and accurate annotation of the tumor. While the first part of this thesis is mainly concerned with the segmentation of Wilms’ tumors, the second part deals with the reliability of diagnosis and the planning of the course of therapy. The second dataset we compiled allows us to develop a method that dramatically improves the differential diagnosis between nephroblastoma and its precursor lesion, nephroblastomatosis. Finally, we show that even the standard MRI modality for Wilms’ tumors is sufficient to estimate the developmental tendencies of nephroblastoma under chemotherapy.
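    The abstract does not name the benchmark metric used to quantify inter-annotator differences; a Dice overlap, the usual choice for comparing two tumour annotations, would look roughly like the following hypothetical sketch (not the thesis code):

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Two annotators marking the same (toy) slice; disagreement lowers the score
rater1 = np.zeros((64, 64), bool); rater1[20:40, 20:40] = True
rater2 = np.zeros((64, 64), bool); rater2[24:44, 22:42] = True
print(f"Dice = {dice(rater1, rater2):.3f}")
```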

    On noise, uncertainty and inference for computational diffusion MRI

    Diffusion Magnetic Resonance Imaging (dMRI) has revolutionised the way brain microstructure and connectivity can be studied. Despite its unique potential in mapping the whole brain, biophysical properties are inferred from measurements rather than being directly observed. This indirect mapping from noisy data creates challenges and introduces uncertainty in the estimated properties. Hence, dMRI frameworks capable of dealing with noise and uncertainty quantification are of great importance and are the topic of this thesis. First, we look into approaches for reducing uncertainty by de-noising the dMRI signal. Thermal noise can have detrimental effects for modalities where the information resides in the signal attenuation, such as dMRI, which has inherently low-SNR data. We highlight the dual effect of noise, both in increasing variance and in introducing bias. We then design a framework for evaluating denoising approaches in a principled manner. By setting objective criteria based on what a well-behaved denoising algorithm should offer, we provide a bespoke dataset and a set of evaluations. We demonstrate that common magnitude-based denoising approaches usually reduce noise-related variance in the signal but do not address the bias introduced by the noise floor. Our framework also allows us to better characterise scenarios where denoising can be beneficial (e.g. when done in the complex domain) and can open new opportunities, such as pushing spatio-temporal resolution boundaries. Subsequently, we look into approaches for mapping uncertainty and design two inference frameworks for dMRI models, one using classical Bayesian methods and another using more recent data-driven algorithms. In the first approach, we build upon the univariate random-walk Metropolis-Hastings MCMC, an extensively used method for sampling from the posterior distribution of model parameters given the data. We devise an efficient adaptive multivariate MCMC scheme, relying upon the assumption that groups of model parameters can be jointly estimated if a proper covariance matrix is defined. In doing so, our algorithm increases the sampling efficiency while preserving the accuracy and precision of the estimates. We show results using both synthetic and in-vivo dMRI data. In the second approach, we resort to Simulation-Based Inference (SBI), a data-driven approach that avoids the need for iterative model inversions. This is achieved by using neural density estimators to learn the inverse mapping from the forward generative process (simulations) to the parameters of interest that have generated those simulations. Addressing the problem via learning approaches offers the opportunity to achieve inference amortisation, boosting efficiency by avoiding the need to repeat the inference process for each new unseen dataset. It also allows inversion of forward processes (i.e. a series of processing steps) rather than only models. We explore different neural network architectures to perform conditional density estimation of the posterior distribution of parameters. Results and comparisons against MCMC suggest speed-ups of 2-3 orders of magnitude in the inference process while maintaining accuracy in the estimates.
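    A minimal sketch of the multivariate random-walk Metropolis kernel that the adaptive scheme above builds on; the adaptive tuning of the proposal covariance described in the abstract is omitted, and all names and the toy target are illustrative:

```python
import numpy as np

def rw_metropolis(log_post, x0, prop_cov, n_samples, seed=0):
    """Multivariate random-walk Metropolis-Hastings: proposes all model
    parameters jointly from a Gaussian with covariance prop_cov."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(prop_cov)  # factor the proposal covariance once
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = np.empty((n_samples, x.size))
    for i in range(n_samples):
        proposal = x + chol @ rng.standard_normal(x.size)
        lp_prop = log_post(proposal)
        # symmetric proposal, so the acceptance ratio is just the posterior ratio
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = proposal, lp_prop
        samples[i] = x
    return samples

# Toy target: correlated 2-D Gaussian posterior
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(cov)
log_post = lambda x: -0.5 * x @ prec @ x
chain = rw_metropolis(log_post, np.zeros(2), 0.5 * cov, 5000)
print(chain.mean(axis=0), np.cov(chain.T))
```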

    Incorporating radiomics into clinical trials: expert consensus on considerations for data-driven compared to biologically-driven quantitative biomarkers

    Existing quantitative imaging biomarkers (QIBs) are associated with known biological tissue characteristics and follow a well-understood path of technical, biological and clinical validation before incorporation into clinical trials. In radiomics, novel data-driven processes extract numerous visually imperceptible statistical features from the imaging data with no a priori assumptions on their correlation with biological processes. The selection of relevant features (the radiomic signature) and their incorporation into clinical trials therefore require additional considerations to ensure meaningful imaging endpoints. Also, the number of radiomic features tested means that power calculations would result in sample sizes impossible to achieve within clinical trials. This article examines how the process of standardising and validating data-driven imaging biomarkers differs from that for biomarkers based on biological associations. Radiomic signatures are best developed initially on datasets that represent diversity of acquisition protocols as well as diversity of disease and of normal findings, rather than within clinical trials with standardised and optimised protocols, as this would risk the selected radiomic features being linked to the imaging process rather than the pathology. Normalisation through discretisation and feature harmonisation are essential pre-processing steps. Biological correlation may be performed after the technical and clinical validity of a radiomic signature is established, but is not mandatory. Feature selection may be part of discovery within a radiomics-specific trial or represent exploratory endpoints within an established trial; a previously validated radiomic signature may even be used as a primary/secondary endpoint, particularly if associations are demonstrated with specific biological processes and pathways being targeted within clinical trials.
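    As an illustration of the discretisation step recommended above, a fixed-bin-number scheme over a region of interest might be sketched as follows; this is one of several common choices, and the bin count and function names are assumptions, not taken from the consensus paper:

```python
import numpy as np

def discretise_fixed_bins(image: np.ndarray, mask: np.ndarray, n_bins: int = 32):
    """Map ROI intensities to grey levels 1..n_bins before feature extraction."""
    roi = image[mask.astype(bool)]
    edges = np.linspace(roi.min(), roi.max(), n_bins + 1)
    # digitize against the internal edges yields bin indices 0..n_bins-1
    return np.digitize(roi, edges[1:-1]) + 1
```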

    Texture analysis and its applications in biomedical imaging: a survey

    Texture analysis describes a variety of image analysis techniques that quantify the variation in intensity and pattern. This paper provides an overview of several texture analysis approaches, addressing the rationale supporting them, their advantages, drawbacks, and applications. This survey’s emphasis is on collecting and categorising over five decades of active research on texture analysis. Brief descriptions of different approaches are presented along with application examples. From a broad range of texture analysis applications, this survey’s final focus is on biomedical image analysis. An up-to-date list of biological tissues and organs in which disorders produce texture changes that may be used to spot disease onset and progression is provided. Finally, the role of texture analysis methods as biomarkers of disease is summarised.
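    Among the statistical approaches such a survey covers, grey-level co-occurrence matrices are a classic example; a minimal sketch with scikit-image, using toy data and illustrative settings rather than anything from the paper:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Toy 8-grey-level patch; in practice this would be an ROI from an image
rng = np.random.default_rng(0)
patch = rng.integers(0, 8, size=(64, 64), dtype=np.uint8)

# Co-occurrence matrix at distance 1 pixel over four orientations
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=8, symmetric=True, normed=True)

# Haralick-style texture descriptors, averaged over orientations
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
```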

    Estimation of high-dimensional brain connectivity networks using functional magnetic resonance imaging data

    Recent studies in neuroimaging show increasing interest in mapping brain connectivity. It is potentially useful as a biomarker for identifying neuropsychiatric diseases as well as a tool for psychological studies. This study considers the problem of modeling high-dimensional brain connectivity using a statistical approach, estimating the connectivity between functional magnetic resonance imaging (fMRI) time series measured from brain regions. The dimension of fMRI data (N), corresponding to the number of brain regions, is typically much larger than the sample size, i.e. the number of time points (T). In this setting, conventional connectivity estimators such as the sample covariance and the least-squares (LS) estimator are no longer consistent and reliable. In addition, traditional analysis assumes the brain network to be time-invariant, but recent neuroimaging studies show that brain connectivity changes over the experimental time course. This study developed a novel shrinkage approach to characterize directed brain connectivity in high dimensions. The shrinkage method incorporates shrinkage-based estimators (Ledoit-Wolf (LW) and Rao-Blackwell LW (RBLW)) into the covariance matrix and the LS-based linear regression fitting of the vector autoregressive (VAR) model, to reduce the mean squared error of the estimates in both high-dimensional functional and effective connectivity. This yields a better-conditioned, invertible estimated matrix, which is important for a reliable estimator. The shrinkage-based VAR estimator was then extended to estimate time-evolving effective brain connectivity. The shrinkage-based methods are evaluated via simulations and applied to resting-state fMRI data. Simulation results show a reduced mean squared error of the estimated connectivity matrix for the LW- and RBLW-based estimators as compared to the conventional sample covariance and LS estimators in both static and dynamic connectivity analysis. These estimators are robust to increasing dimension. Results on real resting-state fMRI data showed that the proposed methods are able to identify functionally related resting-state brain connectivity networks and the evolution of connectivity states across time, providing additional insights into human whole-brain connectivity at rest as compared to previous findings, particularly regarding the directionality of connectivity in high-dimensional brain networks.
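    A minimal sketch of the Ledoit-Wolf shrinkage idea in the T << N regime described above, using scikit-learn on synthetic stand-in data; this illustrates the estimator family, not the thesis implementation:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# High-dimensional setting: more regions (N) than time points (T)
T, N = 100, 200
rng = np.random.default_rng(0)
ts = rng.standard_normal((T, N))  # stand-in for fMRI time series

# The sample covariance is singular here; the shrinkage estimate is not
lw = LedoitWolf().fit(ts)
print("shrinkage intensity:", lw.shrinkage_)
print("condition number:", np.linalg.cond(lw.covariance_))
```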
