
    Using the robust principal component analysis algorithm to remove RF spike artifacts from MR images

    Brief bursts of RF noise during MR data acquisition (“k-space spikes”) cause disruptive image artifacts, manifesting as stripes overlaid on the image. RF noise is often related to hardware problems, including vibrations during gradient-heavy sequences such as diffusion-weighted imaging. In this study, we present an application of the Robust Principal Component Analysis (RPCA) algorithm to remove spike noise from k-space. Methods: Corrupted k-space matrices were decomposed into their low-rank and sparse components using the RPCA algorithm, such that spikes were contained within the sparse component and artifact-free k-space data remained in the low-rank component. Automated center refilling was applied to keep the peaked central cluster of k-space from misclassification in the sparse component. Results: The algorithm was demonstrated to effectively remove k-space spikes from four data types acquired under spike-generating conditions: (i) mouse heart T1 mapping, (ii) mouse heart cine imaging, (iii) human kidney diffusion tensor imaging (DTI) data, and (iv) human brain DTI data. Myocardial T1 values changed by 86.1 ± 171 ms following despiking, and fractional anisotropy values were recovered following despiking of DTI data. Conclusion: The RPCA despiking algorithm will be a valuable postprocessing method for retrospectively removing stripe artifacts without affecting the underlying signal of interest. Magn Reson Med, 2015. © 2015 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine.
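The decomposition at the heart of this abstract can be sketched with a generic principal component pursuit solver (ADMM). This is an illustrative RPCA implementation on a real-valued matrix, not the authors' code: the parameter choices (λ = 1/√max(m, n) and the μ heuristic) are standard defaults from the RPCA literature rather than values from the paper, and real k-space is complex, which would additionally require magnitude-based soft thresholding.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Soft thresholding: proximal operator of the l1 norm (real-valued case)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, mu=None, n_iter=200, tol=1e-7):
    """Decompose M into low-rank L plus sparse S via principal component
    pursuit, solved with a simple fixed-penalty ADMM iteration."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))          # standard PCP weight
    mu = mu or 0.25 * M.size / np.abs(M).sum()     # common penalty heuristic
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                           # scaled dual variable
    norm_M = np.linalg.norm(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)          # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)       # sparse (spike) update
        resid = M - L - S
        Y += mu * resid                            # dual ascent step
        if np.linalg.norm(resid) / norm_M < tol:
            break
    return L, S
```

In the paper's setting, the spikes end up in `S` and the despiked k-space is read off from `L`; the "center refilling" safeguard for the high-amplitude k-space center is not reproduced in this sketch.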

    MRI Artefact Augmentation: Robust Deep Learning Systems and Automated Quality Control

    Quality control (QC) of magnetic resonance imaging (MRI) is essential to establish whether a scan or dataset meets a required set of standards. In MRI, many potential artefacts must be identified so that problematic images can either be excluded or accounted for in further image processing or analysis. To date, the gold standard for the identification of these issues is visual inspection by experts. A primary source of MRI artefacts is patient movement, which can affect clinical diagnosis and impact the accuracy of Deep Learning systems. In this thesis, I present a method to simulate motion artefacts from artefact-free images to augment convolutional neural networks (CNNs), increasing training appearance variability and robustness to motion artefacts. I show that models trained with artefact augmentation generalise better and are more robust to real-world artefacts, with negligible cost to performance on clean data. I argue that it is often better to optimise frameworks end-to-end with artefact augmentation rather than learning to retrospectively remove artefacts, thus enforcing robustness to artefacts at the feature-level representation of the data. The labour-intensive and subjective nature of QC has increased interest in automated methods. To address this, I approach MRI quality estimation as the uncertainty in performing a downstream task, using probabilistic CNNs to predict segmentation uncertainty as a function of the input data. Extending this framework, I introduce a novel decoupled uncertainty model, enabling separate uncertainty predictions for different types of image degradation. Training with an extended k-space artefact augmentation pipeline, the model provides informative measures of uncertainty on problematic real-world scans classified by QC raters and enables sources of segmentation uncertainty to be identified. Suitable quality for algorithmic processing may differ from an image's perceptual quality. Exploring this, I pose MRI visual quality assessment as an image restoration task. Using Bayesian CNNs to recover clean images from noisy data, I show that the uncertainty indicates the possible recoverability of an image. A multi-task network combining uncertainty-aware artefact recovery with tissue segmentation highlights the distinction between visual and algorithmic quality, suggesting that, depending on the downstream task, less data may need to be discarded for purely visual quality reasons.
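Motion-artefact augmentation of the kind this thesis describes is commonly implemented by perturbing k-space directly. Below is a minimal, hypothetical sketch (not the thesis pipeline): it simulates an abrupt in-plane translation by applying the corresponding Fourier-domain phase ramp to a random subset of phase-encode lines, producing the characteristic ghosting along the phase-encode direction.

```python
import numpy as np

def add_motion_artefact(image, shift_px=3.0, corrupt_frac=0.3, seed=0):
    """Simulate translational motion during acquisition: a subset of
    phase-encode lines is acquired 'after' an in-plane shift, modelled by
    the Fourier shift theorem as a linear phase ramp on those k-space lines."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    kx = np.fft.fftshift(np.fft.fftfreq(nx))           # cycles per sample
    ramp = np.exp(-2j * np.pi * kx * shift_px)         # phase of an x-shift
    lines = rng.choice(ny, int(corrupt_frac * ny), replace=False)
    k[lines, :] *= ramp[None, :]                       # corrupt chosen lines
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
```

A training loop would apply this (with randomised shift and corruption fraction) to clean images on the fly, so the CNN sees artefact-corrupted variants of every sample.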

    Intensity Nonuniformity Correction for Brain MR Images with Known Voxel Classes

    Intensity nonuniformity in magnetic resonance (MR) images, represented by a smooth and slowly varying function, is a typical artifact that is a nuisance for many image processing methods. To eliminate the artifact, we have to estimate the nonuniformity as a smooth and slowly varying function and factor it out from the given data. We reformulate the problem as a problem of finding a unique smooth function in a particular set of piecewise smooth functions and propose a variational method for finding it. We deliver the main idea using a simple one-dimensional example first and provide a thorough analysis of the problem in a three-phase scenario in three dimensions, with application to brain MR images. Experiments with synthetic and real MR images and a comparison with a state-of-the-art method, N3, show our algorithm's satisfactory performance in estimating the nonuniformity with and without noise. An automated procedure is also proposed for practical use.
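As a toy illustration of the "known voxel classes" idea (not the paper's variational method), the multiplicative nonuniformity can be estimated by fitting a smooth parametric surface, here a low-order 2D polynomial, to the log-residual of each voxel against its class mean:

```python
import numpy as np

def estimate_bias_field(image, labels, order=2):
    """Estimate a smooth multiplicative intensity nonuniformity (bias) field
    from a 2D image with known voxel classes: the log-image minus its
    per-class mean isolates the bias, which is then fit with a low-order
    polynomial surface. Assumes labels are integers 0..K-1 and image > 0."""
    log_img = np.log(image)
    class_means = np.array([log_img[labels == c].mean()
                            for c in np.unique(labels)])
    residual = log_img - class_means[labels]       # log-bias up to constants
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    y = (y - ny / 2) / ny                          # centre and scale coords
    x = (x - nx / 2) / nx
    # design matrix: all monomials x**i * y**j with total degree <= order
    terms = [x**i * y**j for i in range(order + 1)
             for j in range(order + 1 - i)]
    A = np.stack([t.ravel() for t in terms], axis=1)
    coef, *_ = np.linalg.lstsq(A, residual.ravel(), rcond=None)
    return np.exp((A @ coef).reshape(ny, nx))
```

Dividing the image by the returned field yields the corrected image (up to a global scale); the paper's method instead poses this as a variational problem over piecewise smooth functions, in 3D.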

    On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis

    Magnetic Resonance Imaging (MRI) is considered the gold standard of medical imaging because of the excellent soft-tissue contrast exhibited in the images reconstructed by the MRI pipeline, which in turn enables the human radiologist to discern many pathologies easily. More recently, Deep Learning (DL) models have also achieved state-of-the-art performance in diagnosing multiple diseases using these reconstructed images as input. However, the image reconstruction process within the MRI pipeline, which requires the use of complex hardware and adjustment of a large number of scanner parameters, is highly susceptible to noise of various forms, resulting in arbitrary artifacts within the images. Furthermore, the noise distribution is not stationary and varies within a machine, across machines, and across patients, leading to varying artifacts within the images. Unfortunately, DL models are quite sensitive to these varying artifacts as they lead to changes in the input data distribution between the training and testing phases. The lack of robustness of these models against varying artifacts impedes their use in medical applications where safety is critical. In this work, we focus on improving the generalization performance of these models in the presence of multiple varying artifacts that manifest due to the complexity of the MR data acquisition. In our experiments, we observe that Batch Normalization, a widely used technique during the training of DL models for medical image analysis, is a significant cause of performance degradation in these changing environments. As a solution, we propose to use other normalization techniques, such as Group Normalization (GN) and Layer Normalization (LN), to inject robustness into model performance against varying image artifacts. Through a systematic set of experiments, we show that GN and LN provide better accuracy for various MR artifacts and distribution shifts. Comment: Accepted at MIDL 202
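The core difference the abstract relies on can be demonstrated without any deep-learning framework: BatchNorm's training-time statistics couple samples within a batch, while GroupNorm normalizes each sample independently. A minimal numpy sketch (learnable affine scale/shift parameters omitted for brevity):

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Group Normalization: each sample's channels are split into groups and
    normalized with statistics from that sample only, so one sample's output
    never depends on its batch-mates."""
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)

def batch_norm(x, eps=1e-5):
    """BatchNorm in training mode: statistics are pooled across the batch,
    so an artifact in one sample shifts every other sample's output."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

With these definitions, adding an artifact to one sample leaves the GN output of the others unchanged but perturbs their BN outputs, which is one intuition for why GN and LN behave better under the input distribution shifts the paper studies.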

    Online detection and sorting of extracellularly recorded action potentials in human medial temporal lobe recordings, in vivo

    Understanding the function of complex cortical circuits requires the simultaneous recording of action potentials from many neurons in awake and behaving animals. Practically, this can be achieved by extracellularly recording from multiple brain sites using single-wire electrodes. However, in densely packed neural structures such as the human hippocampus, a single electrode can record the activity of multiple neurons. Thus, analytic techniques that differentiate the action potentials of different neurons are required. Offline spike sorting approaches are currently used to detect and sort action potentials after finishing the experiment. Because the opportunities to record from the human brain are relatively rare, it is desirable to analyze large numbers of simultaneous recordings quickly using online detection and sorting algorithms. In this way, the experiment can be optimized for the particular response properties of the recorded neurons. Here we present and evaluate a method that is capable of detecting and sorting extracellular single-wire recordings in real time. We demonstrate the utility of the method by applying it to an extensive data set we acquired from chronically implanted depth electrodes in the hippocampus of human epilepsy patients. This dataset is particularly challenging because it was recorded in a noisy clinical environment. This method will allow the development of closed-loop experiments, which immediately adapt the experimental stimuli and/or tasks to the neural response observed. Comment: 9 figures, 2 tables. Journal of Neuroscience Methods, 2006 (in press).
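A common building block for this kind of online detection, in the spirit of Quiroga-style robust thresholding though not necessarily the authors' exact method, is amplitude thresholding against a median-based noise estimate, which large spikes do not inflate the way a sample standard deviation would:

```python
import numpy as np

def detect_spikes(signal, k=5.0, refractory=30):
    """Detect action potentials by amplitude thresholding. The noise level
    is estimated robustly as median(|x|)/0.6745 (the MAD-based estimate of
    the noise standard deviation), and nearby crossings are merged via a
    simple refractory window measured in samples."""
    sigma = np.median(np.abs(signal)) / 0.6745   # robust noise std estimate
    thresh = k * sigma
    above = np.flatnonzero(np.abs(signal) > thresh)
    spikes = []
    last = -refractory
    for i in above:
        if i - last >= refractory:               # new event, not a re-crossing
            spikes.append(i)
        last = i
    return np.array(spikes, dtype=int)
```

A full sorter would then extract a waveform window around each detected index and cluster the waveforms to assign them to putative neurons; in an online setting both steps run incrementally as samples arrive.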

    Quality data assessment and improvement in pre-processing pipeline to minimize impact of spurious signals in functional magnetic resonance imaging (fMRI)

    In recent years, the field of quality data assessment and signal denoising in functional magnetic resonance imaging (fMRI) has been rapidly evolving, and the identification and reduction of spurious signals in the pre-processing pipeline is one of the most discussed topics. In particular, subject motion and physiological signals, such as respiratory and/or cardiac pulsatility, have been shown to introduce false-positive activations in subsequent statistical analyses. Different measures for evaluating the impact of motion-related artefacts, such as frame-wise displacement and the root mean square of movement parameters, and different approaches for reducing these artefacts, such as linear regression of nuisance signals and scrubbing or censoring procedures, have been introduced. However, we identify two main drawbacks: i) the different measures used for the evaluation of motion artefacts were based on user-dependent thresholds, and ii) each study described and applied its own pre-processing pipeline. Few studies have analysed the effect of these different pipelines on subsequent analysis methods in task-based fMRI. The first aim of the study is to obtain a tool for fMRI motion data assessment, based on auto-calibrated procedures, to detect outlier subjects and outlier volumes, targeted on each investigated sample to ensure homogeneity of data for motion. The second aim is to compare the impact of different pre-processing pipelines on GLM-based task fMRI analysis, building on recent advances in resting-state fMRI preprocessing pipelines. Different output measures based on signal variability and task strength were used for the assessment.
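The motion summary mentioned above, frame-wise displacement, together with an auto-calibrated (median/MAD-based) outlier rule, can be sketched as follows. The 50 mm head radius and the MAD multiplier are conventional choices from the motion-QC literature (Power-style FD), not values taken from this study:

```python
import numpy as np

def framewise_displacement(params, head_radius=50.0):
    """Frame-wise displacement: sum of absolute backward differences of the
    six rigid-body motion parameters, with rotations (radians) converted to
    arc length on a sphere of the given head radius (mm).
    params: array of shape (T, 6) = 3 translations (mm) + 3 rotations (rad)."""
    d = np.abs(np.diff(params, axis=0))
    fd = d[:, :3].sum(axis=1) + head_radius * d[:, 3:].sum(axis=1)
    return np.concatenate([[0.0], fd])   # FD of the first volume set to 0

def outlier_volumes(fd, n_mad=3.0):
    """Auto-calibrated outlier detection: flag volumes whose FD exceeds
    median + n_mad * MAD of the sample, instead of a fixed user-chosen
    threshold, so the cutoff adapts to each dataset."""
    mad = np.median(np.abs(fd - np.median(fd)))
    return np.flatnonzero(fd > np.median(fd) + n_mad * mad)
```

The same median/MAD rule applied to per-subject summary statistics (e.g. mean FD) gives a corresponding outlier-subject criterion.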

    Hand classification of fMRI ICA noise components

    We present a practical "how-to" guide to help determine whether single-subject fMRI independent components (ICs) characterise structured noise or not. Manual identification of signal and noise after ICA decomposition is required for efficient data denoising: to train supervised algorithms, to check the results of unsupervised ones, or to manually clean the data. In this paper we describe the main spatial and temporal features of ICs and provide general guidelines on how to evaluate these. Examples of signal and noise components are provided from a wide range of datasets (3T data, including examples from the UK Biobank and the Human Connectome Project, and 7T data), together with practical guidelines for their identification. Finally, we discuss how the data quality, data type and preprocessing can influence the characteristics of the ICs and present examples of particularly challenging datasets.
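As one concrete example of the temporal features such a guide evaluates, the fraction of IC time-course spectral power above a cutoff frequency is a simple indicator: BOLD signal is dominated by low frequencies, so a high value points to a noise component. The 0.1 Hz cutoff below is an illustrative convention, not a threshold prescribed by the paper:

```python
import numpy as np

def highfreq_power_fraction(timecourse, tr, cutoff_hz=0.1):
    """Fraction of an IC time-course's spectral power above cutoff_hz.
    tr is the repetition time in seconds (sampling interval). The mean is
    removed so the DC bin does not dominate the total power."""
    power = np.abs(np.fft.rfft(timecourse - timecourse.mean())) ** 2
    freqs = np.fft.rfftfreq(len(timecourse), d=tr)
    return power[freqs > cutoff_hz].sum() / power.sum()
```

In practice this would be one feature among many (spatial map edge overlap, spike-like time points, and so on) combined by a rater or classifier.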

    Multimodal image analysis of the human brain

    Over the last decades, the rapid development of multimodal and non-invasive brain imaging technologies has revolutionised our ability to study the structure and function of the brain. Great progress has been made in assessing brain damage using Magnetic Resonance Imaging (MRI), while electroencephalography (EEG) is considered the gold standard for the diagnosis of neurological disorders. In this thesis we focus on the development of new techniques for multimodal image analysis of the human brain, including MRI segmentation and EEG source localisation. In doing so we bring theory and practice together, focusing on two medical applications: (1) automatic 3D MRI segmentation of the adult brain, and (2) multimodal EEG-MRI data analysis of the brain of a newborn with perinatal brain injury. We devote considerable attention to improving and developing new methods for accurate and noise-robust image segmentation, which are then successfully used for brain segmentation in MRI of both adults and newborns. In addition, we developed an integrated multimodal method for EEG source localisation in the newborn brain. This localisation is used in a comparative study of neonatal EEG seizures and acute perinatal brain lesions visible on MRI.