
    Practical considerations for in vivo MRI with higher dimensional spatial encoding

    Object: This work seeks to examine practical aspects of in vivo imaging when spatial encoding is performed with three or more encoding channels for a 2D image. Materials and methods: The recently developed 4-Dimensional Radial In/Out (4D-RIO) trajectory is compared in simulations to an alternative higher-order encoding scheme referred to as O-space imaging. Direct comparison of local k-space representations leads to the proposal of a modification to the O-space imaging trajectory, based on a prephasing scheme, to improve the reconstructed image quality. Data were collected using a 4D-RIO acquisition in vivo in the human brain, and several image reconstructions were compared, exploiting the property that the dense encoding matrix, after a 1D or 2D Fourier transform, can be approximated by a sparse matrix by discarding entries below a chosen magnitude. Results: The proposed prephasing scheme for the O-space trajectory shows a marked improvement in quality in the simulated image reconstruction. In experiments, 4D-RIO data acquired in vivo in the human brain can be reconstructed to a reasonable quality using only 5% of the encoding matrix, massively reducing computer memory requirements for a practical reconstruction. Conclusion: Trajectory design and reconstruction techniques such as these may prove especially useful when extending generalized higher-order encoding methods to 3D imaging.
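The sparse-approximation step described in this abstract is simple to sketch. The snippet below is an illustrative stand-in, not the authors' reconstruction code: the encoding matrix here is random, and the matrix size, threshold, and variable names are assumptions.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
# Stand-in for a dense complex encoding matrix (rows: k-space samples,
# columns: voxels); in the paper this comes from the 4D-RIO trajectory.
E = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
# In the paper, a 1D or 2D Fourier transform of the encoding matrix
# concentrates its energy into few large entries; applied here only
# to mirror the structure of the method.
Ef = np.fft.fft(E, axis=1)

def sparsify(M, keep=0.05):
    """Zero all entries below the (1 - keep) magnitude quantile."""
    thresh = np.quantile(np.abs(M), 1.0 - keep)
    return sparse.csr_matrix(np.where(np.abs(M) >= thresh, M, 0))

Es = sparsify(Ef, keep=0.05)  # keeps roughly 5% of the entries
```

Keeping 5% of the entries cuts the matrix storage roughly twentyfold, which is the practical point the abstract makes about memory requirements.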

    Numerical reconstruction of brain tumours

    We propose a nonlinear Landweber method for the inverse problem of locating the brain tumour source (the origin where the tumour formed), based on well-established reaction–diffusion models of brain tumour growth. The approach recovers the initial density of the tumour cells from a later state, which can be given by a medical image, by running the model backwards. Moreover, full three-dimensional simulations of tumour source localization are given for two types of data: the three-dimensional Shepp–Logan phantom and a T1-weighted MRI brain scan. These simulations use standard finite difference discretizations of the space and time derivatives, giving a simple approach that performs well.
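A structural sketch of the Landweber idea in one dimension, with linear diffusion standing in for the paper's nonlinear reaction–diffusion model; the grid size, step counts, and the hidden initial density are all invented for illustration.

```python
import numpy as np

n, steps, d = 64, 50, 0.2  # grid points, time steps, diffusion number (d < 0.5 for stability)

def forward(u0):
    """Explicit finite-difference diffusion run forward in time (periodic BC)."""
    u = u0.copy()
    for _ in range(steps):
        u = u + d * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    return u

# Synthetic "later state" y produced from a hidden tumour source u_true.
x = np.linspace(0, 1, n)
u_true = np.exp(-((x - 0.3) / 0.05) ** 2)
y = forward(u_true)

# Landweber iteration: u <- u + w * A^T (y - A u).  For this symmetric
# linear operator, A^T equals the forward map itself; w = 1 here.
u = np.zeros(n)
for _ in range(200):
    u = u + forward(y - forward(u))
```

As in any backward-diffusion problem, the high-frequency detail of the source is only partially recoverable; the iteration fits the observable (smoothed) components and leaves the strongly damped ones near zero.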

    Improved cerebellar tissue classification on magnetic resonance images of brain.

    PURPOSE: To develop and implement a method for improved cerebellar tissue classification on MR images of the brain by automatically isolating the cerebellum prior to segmentation. MATERIALS AND METHODS: Dual fast spin echo (FSE) and fluid attenuation inversion recovery (FLAIR) images were acquired on 18 normal volunteers on a 3 T Philips scanner. The cerebellum was isolated from the rest of the brain using a symmetric inverse consistent nonlinear registration of the individual brain with a parcellated template. The cerebellum was then separated by masking the anatomical image with the individual FLAIR images. Tissues in the cerebellum and in the rest of the brain were classified separately using a hidden Markov random field (HMRF), a parametric method, and then combined to obtain a tissue classification of the whole brain. The proposed method was evaluated subjectively on real MR brain images by two experts. The segmentation results on Brainweb images with varying noise and intensity nonuniformity levels were quantitatively compared with the ground truth by computing Dice similarity indices. RESULTS: The proposed method significantly improved the cerebellar tissue classification on all normal volunteers included in this study without compromising the classification in the remaining part of the brain. The average similarity indices for gray matter (GM) and white matter (WM) in the cerebellum were 89.81 (+/-2.34) and 93.04 (+/-2.41), demonstrating excellent performance of the proposed methodology. CONCLUSION: The proposed method significantly improved tissue classification in the cerebellum. The GM was overestimated when segmentation was performed on the whole brain as a single object.
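The Dice similarity index used for the quantitative comparison is straightforward to compute; the two masks below are toy examples, not Brainweb data.

```python
import numpy as np

def dice(seg, truth):
    """Dice similarity index between two binary masks, in percent:
    200 * |A intersect B| / (|A| + |B|)."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    return 200.0 * np.logical_and(seg, truth).sum() / (seg.sum() + truth.sum())

# Toy 10x10 masks standing in for a GM classification and its ground truth.
a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True   # 25 voxels
b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True   # 25 voxels, 16 shared
print(dice(a, b))  # 64.0
```

A score of 100 means perfect overlap; the ~90 values reported in the abstract indicate close agreement between the automatic and reference segmentations.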

    Prognostic Significance of Growth Kinetics in Newly Diagnosed Glioblastomas Revealed by Combining Serial Imaging with a Novel Biomathematical Model

    Glioblastomas (GBMs) are the most aggressive primary brain tumors, characterized by rapid proliferation and diffuse infiltration of the brain tissue. Survival patterns in patients with GBM have been associated with a number of clinico-pathologic factors, including age and neurological status, yet a significant quantitative link to the in vivo growth kinetics of each glioma has remained elusive. Exploiting a recently developed tool for quantifying glioma net proliferation and invasion rates in individual patients using routinely available magnetic resonance images (MRIs), we propose to link these patient-specific kinetic rates of biological aggressiveness to prognostic significance. Using our biologically based mathematical model for glioma growth and invasion, examination of serial pre-treatment MRIs of 32 GBM patients allowed quantification of these rates for each patient's tumor. Survival analyses revealed that even when controlling for standard clinical parameters (e.g., age, KPS), these model-defined parameters quantifying biological aggressiveness (net proliferation and invasion rates) were significantly associated with prognosis. One hypothesis generated was that the ratio of the actual survival time after whatever therapies were employed to the duration of survival predicted (by the model) without any therapy would provide a "Therapeutic Response Index" (TRI) of the overall effectiveness of the therapies. The TRI may provide important information, not otherwise available, as to the effectiveness of the treatments in individual patients. To our knowledge, this is the first report indicating that dynamic insight from routinely obtained pre-treatment imaging may be quantitatively useful in characterizing the survival of individual patients with GBM. Such a hybrid tool bridging mathematical modeling and clinical imaging may allow for stratifying patients in clinical studies relative to their pre-treatment biological aggressiveness.
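The proposed Therapeutic Response Index is a simple ratio; the survival figures below are invented for illustration and carry no clinical meaning.

```python
def tri(observed_survival_days, predicted_untreated_survival_days):
    """Therapeutic Response Index: observed survival divided by the
    model-predicted survival had no therapy been given. Values above 1
    suggest a net benefit from the therapies actually administered."""
    return observed_survival_days / predicted_untreated_survival_days

print(tri(540, 360))  # 1.5
```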

    Retrieval orientation and the control of recollection: An fMRI study

    The present study used event-related fMRI to examine the impact of the adoption of different retrieval orientations on the neural correlates of recollection. In each of two study-test blocks, subjects encoded a mixed list of words and pictures, and then performed a recognition memory task with words as the test items. In one block, the requirement was to respond positively to test items corresponding to studied words, and to reject both new items and items corresponding to the studied pictures. In the other block, positive responses were made to test items corresponding to pictures, and items corresponding to words were classified along with the new items. Based on previous event-related potential (ERP) findings, we predicted that in the word task, recollection-related effects would be found for target information only. This prediction was fulfilled. In both tasks, targets elicited the characteristic pattern of recollection-related activity. By contrast, non-targets elicited this pattern in the picture task, but not in the word task. Importantly, the left angular gyrus was among the regions demonstrating this dissociation of non-target recollection effects according to retrieval orientation. The findings for the angular gyrus parallel prior findings for the 'left-parietal' ERP old/new effect, and add to the evidence that the effect reflects recollection-related neural activity originating in left ventral parietal cortex. Thus, the results converge with the previous ERP findings to suggest that the processing of retrieval cues can be constrained to prevent the retrieval of goal-irrelevant information.

    Technical note: development of a 3D printed subresolution sandwich phantom for validation of brain SPECT analysis

    Purpose: To make an adaptable, head shaped radionuclide phantom to simulate molecular imaging of the brain using clinical acquisition and reconstruction protocols. This will allow the characterization and correction of scanner characteristics, and improve the accuracy of clinical image analysis, including the application of databases of normal subjects. Methods: A fused deposition modeling 3D printer was used to create a head shaped phantom made up of transaxial slabs, derived from a simulated MRI dataset. The attenuation of the printed polylactide (PLA), measured by means of the Hounsfield unit on CT scanning, was set to match that of the brain by adjusting the proportion of plastic filament and air (fill ratio). Transmission measurements were made to verify the attenuation of the printed slabs. The radionuclide distribution within the phantom was created by adding 99mTc pertechnetate to the ink cartridge of a paper printer and printing images of gray and white matter anatomy, segmented from the same MRI data. The complete subresolution sandwich phantom was assembled from alternate 3D printed slabs and radioactive paper sheets, and then imaged on a dual headed gamma camera to simulate an HMPAO SPECT scan. Results: Reconstructions of phantom scans successfully used automated ellipse fitting to apply attenuation correction. This removed the variability inherent in manual application of attenuation correction and registration inherent in existing cylindrical phantom designs. The resulting images were assessed visually and by count profiles and found to be similar to those from an existing elliptical PMMA phantom. Conclusions: The authors have demonstrated the ability to create physically realistic HMPAO SPECT simulations using a novel head-shaped 3D printed subresolution sandwich method phantom. The phantom can be used to validate all neurological SPECT imaging applications. A simple modification of the phantom design to use thinner slabs would make it suitable for use in PET.

    Automatic segmentation of myocardium from black-blood MR images using entropy and local neighborhood information.

    By using entropy and local neighborhood information, we present in this study a robust adaptive Gaussian regularizing Chan-Vese (CV) model to segment the myocardium from magnetic resonance images with intensity inhomogeneity. By utilizing the circular Hough transformation (CHT), our model automatically detects the epicardial and endocardial contours of the left ventricle (LV) as circles, which are then used as the initialization. In the cost functional of our model, the interior and exterior energies are weighted by the entropy to improve the robustness of the evolving curve. Local neighborhood information is used to evolve the level set function, reducing the impact of heterogeneity inside the regions and improving the segmentation accuracy. An adaptive window is utilized to reduce sensitivity to initialization. A Gaussian kernel is used to regularize the level set function, which not only ensures the smoothness and stability of the level set function but also eliminates the need for the traditional Euclidean length term and for re-initialization. Extensive validation of the proposed method on patient data demonstrates its superior performance over other state-of-the-art methods.
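The circle-detection step used for automatic initialization can be sketched directly in NumPy. The synthetic edge ring, image size, and radius range below are assumptions for illustration, not the paper's data or code.

```python
import numpy as np

# Toy circular Hough transform (CHT): each edge pixel votes for candidate
# circle centers at every tested radius; the accumulator peak gives the
# best (radius, center) for initializing a contour.
H = W = 32
yy, xx = np.mgrid[0:H, 0:W]
r_true, cy, cx = 8, 16, 14
edges = np.abs(np.hypot(yy - cy, xx - cx) - r_true) < 0.6  # synthetic ring

best = (0, None)
ang = np.linspace(0, 2 * np.pi, 60, endpoint=False)
ys, xs = np.nonzero(edges)
for r in range(5, 12):
    acc = np.zeros((H, W))
    for y, x in zip(ys, xs):
        a = np.round(y - r * np.sin(ang)).astype(int)
        b = np.round(x - r * np.cos(ang)).astype(int)
        ok = (a >= 0) & (a < H) & (b >= 0) & (b < W)
        np.add.at(acc, (a[ok], b[ok]), 1)  # unbuffered accumulation
    if acc.max() > best[0]:
        best = (acc.max(), (r, *np.unravel_index(acc.argmax(), acc.shape)))

r_hat, cy_hat, cx_hat = best[1]
```

In the paper's setting the edge map would come from the MR image and the detected circle seeds the level-set evolution, avoiding manual initialization.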

    A Comparative Study of Modern Inference Techniques for Structured Discrete Energy Minimization Problems

    Szeliski et al. published an influential study in 2006 on energy minimization methods for Markov Random Fields (MRFs). This study provided valuable insights in choosing the best optimization technique for certain classes of problems. While these insights remain generally useful today, the phenomenal success of random field models means that the kinds of inference problems that have to be solved have changed significantly. Specifically, the models today often include higher-order interactions, flexible connectivity structures, large label spaces of different cardinalities, or learned energy tables. To reflect these changes, we provide a modernized and enlarged study. We present an empirical comparison of more than 27 state-of-the-art optimization techniques on a corpus of 2,453 energy minimization instances from diverse applications in computer vision. To ensure reproducibility, we evaluate all methods in the OpenGM 2 framework and report extensive results regarding runtime and solution quality. Key insights from our study agree with the results of Szeliski et al. for the types of models they studied. However, on new and challenging types of models our findings disagree, suggesting that polyhedral methods and integer programming solvers are competitive in terms of runtime and solution quality over a large range of model types.
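A minimal example of the kind of problem the study benchmarks: a pairwise Potts MRF on a 4-connected grid, here minimized with plain ICM (one of the classical baselines) rather than through OpenGM 2. The unary costs, grid size, and smoothness weight are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, L = 8, 8, 2
unary = rng.random((H, W, L))  # unary[i, j, l]: cost of assigning label l
lam = 0.5                       # Potts weight penalizing unequal neighbors

def energy(labels):
    """Total MRF energy: unary costs plus Potts penalties on grid edges."""
    e = unary[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    e += lam * (labels[1:, :] != labels[:-1, :]).sum()   # vertical edges
    e += lam * (labels[:, 1:] != labels[:, :-1]).sum()   # horizontal edges
    return e

def icm(labels, sweeps=10):
    """Iterated conditional modes: greedy per-pixel updates, never
    increasing the total energy."""
    labels = labels.copy()
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                costs = unary[i, j].copy()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        costs += lam * (np.arange(L) != labels[ni, nj])
                labels[i, j] = costs.argmin()
    return labels

init = unary.argmin(axis=2)  # per-pixel minimum, ignoring smoothness
result = icm(init)
```

ICM is fast but easily trapped in local minima, which is exactly why the study compares it against move-making, message-passing, and polyhedral/ILP methods on much larger instances.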