    ExploreASL: An image processing pipeline for multi-center ASL perfusion MRI studies

    Arterial spin labeling (ASL) has undergone significant development since its inception, with a focus on improving the standardization and reproducibility of its acquisition and quantification. In a community-wide effort towards robust and reproducible clinical ASL image processing, we developed the software package ExploreASL, allowing standardized analyses across centers and scanners. The procedures used in ExploreASL capitalize on published image processing advancements and address the challenges of multi-center datasets with scanner-specific processing and artifact reduction to limit patient exclusion. ExploreASL is self-contained, written in MATLAB, based on Statistical Parametric Mapping (SPM), and runs on multiple operating systems. To facilitate collaboration and data exchange, the toolbox follows several standards and recommendations for data structure, provenance, and best analysis practice. ExploreASL was iteratively refined and tested in the analysis of >10,000 ASL scans acquired with different pulse sequences in a variety of clinical populations, resulting in four processing modules: Import, Structural, ASL, and Population, which respectively perform data curation, structural and ASL image processing with quality control, and preparation of the results for statistical analyses at both the single-subject and group level. We illustrate ExploreASL processing results from three cohorts: perinatally HIV-infected children, healthy adults, and elderly adults at risk for neurodegenerative disease. We show the reproducibility for each cohort when processed at different centers with different operating systems and MATLAB versions, and the effects on the quantification of gray matter cerebral blood flow. ExploreASL facilitates the standardization of image processing and quality control, allowing the pooling of cohorts, which may increase statistical power and reveal between-group perfusion differences. Ultimately, this workflow may advance ASL towards wider adoption in clinical studies, trials, and practice.
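
    For context, ASL pipelines such as ExploreASL quantify cerebral blood flow (CBF) from the label-control difference signal. The sketch below is a minimal Python illustration of the widely used single-compartment quantification model from the ASL consensus ("white paper") recommendations; it is not ExploreASL's actual code, and the parameter names and default values are assumptions based on that consensus model.

```python
import numpy as np

def quantify_cbf(delta_m, m0, label_dur=1.8, pld=1.8,
                 t1_blood=1.65, alpha=0.85, lambda_bp=0.9):
    """Single-compartment (P)CASL CBF quantification in mL/100g/min.

    delta_m   : control - label difference signal (array)
    m0        : equilibrium magnetization image (array)
    label_dur : labeling duration [s]
    pld       : post-labeling delay [s]
    t1_blood  : longitudinal relaxation time of arterial blood [s]
    alpha     : labeling efficiency
    lambda_bp : blood-brain partition coefficient [mL/g]
    """
    num = 6000.0 * lambda_bp * delta_m * np.exp(pld / t1_blood)
    den = 2.0 * alpha * t1_blood * m0 * (1.0 - np.exp(-label_dur / t1_blood))
    return num / den
```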

    Development and implementation of efficient noise suppression methods for emission computed tomography

    In PET and SPECT imaging, iterative reconstruction is now widely used due to its ability to incorporate into the reconstruction process a physics model and the Bayesian statistics of photon detection. Iterative reconstruction methods rely on regularization terms to suppress image noise and render the radiotracer distribution with good image quality. The choice of regularization method substantially affects the appearance of reconstructed images and is thus a critical aspect of the reconstruction process. Major contributions of this work include the implementation and evaluation of various new regularization methods. Previously, our group developed a preconditioned alternating projection algorithm (PAPA) to optimize the emission computed tomography (ECT) objective function with the non-differentiable total variation (TV) regularizer. The algorithm was modified to optimize the proposed reconstruction objective functions. First, two novel TV-based regularizers—high-order total variation (HOTV) and infimal convolution total variation (ICTV)—were proposed as alternatives to the customary TV regularizer in SPECT reconstruction, to reduce the “staircase” artifacts produced by TV. We evaluated both proposed reconstruction methods (HOTV-PAPA and ICTV-PAPA) and compared them with TV-regularized reconstruction (TV-PAPA) and the clinical standard, the Gaussian post-filtered expectation-maximization method (GPF-EM), using both Monte Carlo-simulated data and anonymized clinical data. Model-observer studies using Monte Carlo-simulated data indicate that ICTV-PAPA reconstructs images with similar or better lesion detectability than the clinical standard GPF-EM method, but at lower detected count levels. This implies that switching from GPF-EM to ICTV-PAPA can reduce patient dose while maintaining image quality for diagnostic use. Second, the ℓ1 norm of a discrete cosine transform (DCT)-induced framelet regularization was studied. We decomposed the image into high and low spatial-frequency components, and then preferentially penalized the high spatial-frequency components. The DCT-induced framelet transform of the natural radiotracer distribution image is sparse; by exploiting this property, we were able to effectively suppress image noise without overly compromising spatial resolution or image contrast. Finally, fractional norms of the first-order spatial gradient were introduced as regularizers. We implemented the ℓ2/3 and ℓ1/2 norms to suppress image spatial variability. Due to their strong penalization of small differences between neighboring pixels, fractional-norm regularizers suffer from cartoon-like artifacts similar to those of the TV regularizer. However, when penalty weights are properly selected, fractional-norm regularizers outperform TV in terms of noise suppression and contrast recovery.
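
    As a point of reference, the clinical GPF-EM baseline mentioned above combines the classical MLEM update for Poisson count data with a Gaussian post-filter. The sketch below is a generic, minimal illustration of that baseline, not the dissertation's PAPA code; the system matrix A, sinogram y, image size, and smoothing width are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum-likelihood EM for Poisson data: y ~ Poisson(A @ x).

    A : (n_bins, n_voxels) system matrix
    y : (n_bins,) measured counts
    """
    x = np.ones(A.shape[1])            # flat initial estimate
    sens = A.T @ np.ones(A.shape[0])   # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)        # data / forward projection
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

# GPF-EM: run MLEM, then Gaussian post-filter the reconstruction
# (here assuming a 64x64 image grid):
# x = mlem(A, y)
# x_filtered = gaussian_filter(x.reshape(64, 64), sigma=1.0)
```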

    A Deconvolution Framework with Applications in Medical and Biological Imaging

    A deconvolution framework is presented in this thesis and applied to several problems in medical and biological imaging. The framework is designed to contain state-of-the-art deconvolution methods, to be easily expandable, and to allow its components to be combined arbitrarily. Deconvolution is an inverse problem, and to cope with its ill-posed nature, suitable regularization techniques and additional restrictions are required. A main objective of deconvolution methods is to restore degraded images acquired by fluorescence microscopy, which has become an important tool in the biological and medical sciences. Fluorescence microscopy images are degraded by out-of-focus blurring and noise, and the deconvolution algorithms used to restore them are usually called deblurring methods. Many deblurring methods proposed in the last decade to restore such images are part of the deconvolution framework. In addition, existing deblurring techniques are improved and new components for the deconvolution framework are developed. A considerable improvement was obtained by combining a state-of-the-art regularization technique with an additional non-negativity constraint. A real biological screen analysing a specific protein in human cells is presented and shows the need to analyse structural information in fluorescence images. Such an analysis requires good image quality, which the deblurring methods aim to restore when it is not given. For a reliable understanding of cells and cellular processes, high-resolution 3D images of the investigated cells are necessary. However, the ability of fluorescence microscopes to image a cell in 3D is limited, since the resolution along the optical axis is a factor of three worse than the transversal resolution. Standard microscopy image deblurring techniques are able to improve the resolution, but the problem of lower resolution along the optical axis remains. It is, however, possible to overcome this problem using Axial Tomography, which provides tilted views of the object by rotating it under the microscope. The rotated images contain additional information about the object, which can be used to improve the resolution along the optical axis. In this thesis, a sophisticated method to reconstruct a high-resolution Axial Tomography image on the basis of the developed deblurring methods is presented. The deconvolution methods are also used to reconstruct the dose distribution in proton therapy from measured PET images. Positron emitters are activated by proton beams, but a PET image is not directly proportional to the delivered radiation dose distribution. A PET signal can be predicted by a convolution of the planned dose with specific filter functions. In this thesis, a dose reconstruction method based on PET images, which reverses this convolution approach, is presented, and its potential to reconstruct the actually delivered dose distribution from measured PET images is investigated. Finally, a new denoising method using higher-order statistical information of a given Gaussian noise signal is presented and compared to state-of-the-art denoising methods.
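
    To make the combination of regularization with a non-negativity constraint concrete, here is a minimal sketch of projected-gradient deconvolution: a least-squares data term with Tikhonov regularization, with the iterate clipped to x >= 0 after every step. This is a generic illustration under assumed settings, not the specific method of the thesis; the PSF, step size, and regularization weight are placeholders, and the PSF is assumed non-negative and normalized to sum 1.

```python
import numpy as np
from scipy.signal import fftconvolve

def deconvolve_nonneg(blurred, psf, lam=1e-2, step=1.0, n_iter=200):
    """Projected gradient descent for
    min_x 0.5*||psf * x - b||^2 + 0.5*lam*||x||^2  subject to  x >= 0."""
    psf_flip = psf[::-1, ::-1]       # adjoint of convolution = correlation
    x = np.clip(blurred, 0, None)    # non-negative initial guess
    for _ in range(n_iter):
        residual = fftconvolve(x, psf, mode="same") - blurred
        grad = fftconvolve(residual, psf_flip, mode="same") + lam * x
        x = np.clip(x - step * grad, 0, None)  # gradient step + projection
    return x
```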

    Machine Learning And Image Processing For Noise Removal And Robust Edge Detection In The Presence Of Mixed Noise

    The central goal of this dissertation is to design and model a smoothing filter, based on random single and mixed noise distributions, that attenuates the effect of noise while preserving edge details. Only then can robust, integrated, and resilient edge detection methods be deployed to overcome the ubiquitous presence of random noise in images. Random noise effects are modeled as those that could emanate from impulse noise, Gaussian noise, and speckle noise. In the first step, methods are evaluated through an exhaustive review of the different types of denoising methods, focusing on impulse noise, Gaussian noise, and their related denoising filters. These include spatial filters (linear, non-linear, and combinations of them), transform-domain filters, neural network-based filters, numerical-based filters, fuzzy-based filters, morphological filters, statistical filters, and supervised learning-based filters. In the second step, a switching adaptive median and fixed weighted mean filter (SAMFWMF), a combination of linear and non-linear filters, is introduced to detect and remove impulse noise. A robust edge detection method is then applied, relying on an integrated process of non-maximum suppression, maximum sequence, thresholding, and morphological operations. Results are obtained on MRI and natural images. In the third step, a transform-domain filter combining the dual-tree complex wavelet transform (DT-CWT) with total variation is introduced to detect and remove Gaussian noise as well as mixed Gaussian and speckle noise, followed by robust edge detection to track the true edges. Results are obtained on medical ultrasound and natural images. In the fourth step, a smoothing filter based on a deep feed-forward convolutional neural network (CNN) is introduced, supported by a specific learning algorithm, l2 loss-function minimization, a regularization method, and batch normalization, all integrated to detect and remove impulse noise as well as mixed impulse and Gaussian noise; robust edge detection is then applied to track the true edges. Results are obtained on natural images at both specific and non-specific noise levels.
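
    For illustration, the adaptive-median stage of such a switching filter can be sketched as follows: the window around each pixel grows until the window median is itself not an impulse, and the pixel is replaced only when it looks like an impulse. This is the textbook adaptive median filter under assumed parameters, not the dissertation's exact SAMFWMF.

```python
import numpy as np

def adaptive_median(img, max_window=7):
    """Classical adaptive median filter for impulse (salt-and-pepper) noise."""
    pad = max_window // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = img.astype(float).copy()
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            for w in range(3, max_window + 1, 2):   # grow window: 3, 5, 7, ...
                r = w // 2
                win = padded[i + pad - r:i + pad + r + 1,
                             j + pad - r:j + pad + r + 1]
                lo, med, hi = win.min(), np.median(win), win.max()
                if lo < med < hi:                    # median is not an impulse
                    if not (lo < img[i, j] < hi):    # pixel looks like an impulse
                        out[i, j] = med
                    break                            # else keep the original pixel
    return out
```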

    Variable Splitting as a Key to Efficient Image Reconstruction

    The reconstruction of digital images from degraded measurements has always been of central importance in numerous applications of the imaging sciences. In real life, acquired imaging data is typically contaminated by various types of degradation phenomena, usually related to imperfections of the acquisition devices and/or environmental effects. Accordingly, given the degraded measurements of an image of interest, the fundamental goal of image reconstruction is to recover a close approximation of it, thereby "reversing" the effect of image degradation. Moreover, the massive production and proliferation of digital data across different fields of applied science creates the need for image restoration methods that are both accurate and computationally efficient. Developing such methods, however, has never been a trivial task, as improving the accuracy of image reconstruction is generally achieved at the expense of an elevated computational burden. Accordingly, the main goal of this thesis has been to develop an analytical framework that allows one to tackle a wide scope of image reconstruction problems in a computationally efficient manner. To this end, we generalize the concept of variable splitting as a tool for simplifying complex reconstruction problems by replacing them with a sequence of simpler, and therefore easily solvable, ones. Moreover, we consider two different types of variable splitting and demonstrate their connection to a number of existing approaches currently used to solve various inverse problems. In particular, we refer to the first type of variable splitting as Bregman Type Splitting (BTS) and demonstrate its applicability to the solution of complex reconstruction problems with composite, cross-domain constraints. As specific applications of practical importance, we consider the reconstruction of diffusion MRI signals from sub-critically sampled, incomplete data, as well as the blind deconvolution of medical ultrasound images. Further, we refer to the second type of variable splitting as Fuzzy Clustering Splitting (FCS) and show its application to image denoising. Specifically, we demonstrate how this splitting technique allows us to generalize the concept of a neighbourhood operation and to derive a unifying approach to denoising imaging data under a variety of noise scenarios.
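
    To make the idea of variable splitting concrete, the sketch below solves a small 1D total-variation denoising problem by introducing an auxiliary variable z = Dx and alternating between easy sub-steps (a linear solve and a soft-threshold), in the spirit of ADMM/split-Bregman methods. It is a generic illustration of the splitting principle, not the BTS or FCS algorithms of the thesis; problem sizes and parameters are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_admm(y, lam=1.0, rho=2.0, n_iter=100):
    """Solve min_x 0.5*||x - y||^2 + lam*||D x||_1 via the splitting z = D x."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # (n-1, n) finite-difference matrix
    M = np.eye(n) + rho * D.T @ D           # x-update system matrix (fixed)
    z = np.zeros(n - 1)                     # split variable, z ~ D x
    u = np.zeros(n - 1)                     # scaled dual variable
    for _ in range(n_iter):
        x = np.linalg.solve(M, y + rho * D.T @ (z - u))  # quadratic sub-problem
        z = soft_threshold(D @ x + u, lam / rho)         # shrinkage sub-problem
        u += D @ x - z                                   # dual update
    return x
```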

    4D imaging in tomography and optical nanoscopy

    This thesis contributes to the fields of mathematical image processing and inverse problems. An inverse problem is the task of computing the values of model parameters from observed data. Such problems arise in a wide variety of applications in science and engineering, such as medical imaging, biophysics, and astronomy. We mainly consider reconstruction problems with Poisson noise in tomography and optical nanoscopy. In the latter case, the task is to reconstruct images from blurred and noisy measurements, whereas in positron emission tomography the task is to visualize the physiological processes of a patient. Standard methods for static 3D image reconstruction do not incorporate time-dependent information or dynamics, e.g. heartbeat or breathing in tomography, or cell motion in microscopy. This thesis is a treatise on models, analysis, and efficient algorithms for solving 3D and 4D time-dependent inverse problems.
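
    For reconstruction under Poisson noise, a classical baseline against which such methods are commonly compared is Richardson-Lucy deconvolution, the EM algorithm for Poisson-distributed blurred data. A minimal sketch follows; the PSF and iteration count are assumptions, the PSF is assumed to sum to 1, and this static baseline ignores the time-dependent (4D) modeling that the thesis develops.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy deconvolution: EM for observed ~ Poisson(psf * x).

    Assumes a non-negative PSF normalized to sum 1, so the EM
    sensitivity term A^T 1 is ~1 away from the image borders.
    """
    psf_flip = psf[::-1, ::-1]              # mirrored PSF for the adjoint
    x = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(x, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)
        x *= fftconvolve(ratio, psf_flip, mode="same")  # multiplicative update
    return x
```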