
    Quantitative magnetic resonance image analysis via the EM algorithm with stochastic variation

    Quantitative Magnetic Resonance Imaging (qMRI) provides researchers with insight into pathological and physiological alterations of living tissue, with the help of which researchers hope to predict (local) therapeutic efficacy early and determine optimal treatment schedules. However, the analysis of qMRI has been limited to ad-hoc heuristic methods. Our research provides a powerful statistical framework for image analysis and sheds light on future localized adaptive treatment regimes tailored to the individual's response. We assume that, in an imperfect world, we only observe a blurred and noisy version of the underlying pathological/physiological changes via qMRI, due to measurement errors or unpredictable influences. We use a hidden Markov random field to model the spatial dependence in the data and develop a maximum likelihood approach via the Expectation-Maximization algorithm with stochastic variation. An important improvement over previous work is the assessment of variability in parameter estimation, which provides a valid basis for statistical inference. More importantly, we focus on the expected changes rather than image segmentation. Our research has shown that the approach is powerful in both simulation studies and on a real dataset, while remaining quite robust in the presence of some model assumption violations. Comment: Published at http://dx.doi.org/10.1214/07-AOAS157 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
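
    The abstract describes fitting a hidden Markov random field by maximum likelihood with an EM algorithm whose E-step involves stochastic variation. As a rough, hedged illustration of that idea (not the authors' implementation), the sketch below runs EM for a two-class Gaussian model with a Potts-style spatial prior, approximating the E-step with a few Gibbs sampling sweeps. The number of classes, the smoothing weight beta, and the iteration counts are illustrative assumptions.

    import numpy as np

    def em_hmrf(image, n_iter=20, beta=1.0, n_gibbs=5, seed=None):
        """EM for a two-class Gaussian emission model with a hidden Markov random
        field (Potts) prior; the E-step is approximated by Gibbs sampling."""
        rng = np.random.default_rng(seed)
        # crude initialisation: threshold the image at its median
        labels = (image > np.median(image)).astype(int)
        mu = np.array([image[labels == k].mean() for k in (0, 1)])
        sigma = np.array([image[labels == k].std() + 1e-6 for k in (0, 1)])
        for _ in range(n_iter):
            # stochastic E-step: a few Gibbs sweeps over the label field
            for _ in range(n_gibbs):
                score = []
                for k in (0, 1):
                    # count 4-neighbours currently carrying label k
                    same = np.zeros_like(image, dtype=float)
                    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
                        same += (np.roll(labels, shift, axis=axis) == k)
                    score.append(-0.5 * ((image - mu[k]) / sigma[k]) ** 2
                                 - np.log(sigma[k]) + beta * same)
                p1 = 1.0 / (1.0 + np.exp(np.clip(score[0] - score[1], -60, 60)))
                labels = (rng.random(image.shape) < p1).astype(int)
            # M-step: update the Gaussian parameters from the sampled labels
            for k in (0, 1):
                if (labels == k).any():
                    mu[k] = image[labels == k].mean()
                    sigma[k] = image[labels == k].std() + 1e-6
        return labels, mu, sigma

    # toy usage on a synthetic noisy image
    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64))
    truth[16:48, 16:48] = 1.0
    noisy = truth + 0.8 * rng.standard_normal(truth.shape)
    seg, mu, sigma = em_hmrf(noisy, seed=1)
    print("estimated class means:", mu)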

    Dynamic low-level context for the detection of mild traumatic brain injury.

    Mild traumatic brain injury (mTBI) appears as low contrast lesions in magnetic resonance (MR) imaging. Standard automated detection approaches cannot detect the subtle changes caused by the lesions. The use of context has become integral to the detection of low contrast objects in images. Context is any information that can be used for object detection but is not directly due to the physical appearance of an object in an image. In this paper, new low-level static and dynamic context features are proposed and integrated into a discriminative voxel-level classifier to improve the detection of mTBI lesions. Visual features, including multiple texture measures, are used to give an initial estimate of a lesion. From the initial estimate, novel proximity and directional distance contextual features are calculated and used as features for another classifier. These features take advantage of the spatial information given by the initial lesion estimate, which uses only the visual features. Dynamic context is captured by the proposed posterior marginal edge distance context feature, which measures the distance from a hard estimate of the lesion at a previous time point. The approach is validated on a temporal mTBI rat model dataset and shown to have improved Dice score and convergence compared to other state-of-the-art approaches. An analysis of feature importance and of the versatility of the approach on other datasets is also provided.
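
    As a hedged sketch of the kind of dynamic context feature described above, the code below turns a hard lesion estimate from a previous time point into a signed distance map that can be appended to the visual features of a voxel-level classifier. The use of SciPy's Euclidean distance transform and the simple feature stacking are illustrative assumptions, not the paper's exact posterior marginal edge distance formulation.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def edge_distance_feature(prev_mask):
        """Signed distance (in voxels) from each voxel to the boundary of a
        previous hard lesion estimate: negative inside the mask, positive outside."""
        prev_mask = prev_mask.astype(bool)
        dist_outside = distance_transform_edt(~prev_mask)  # distance to the mask
        dist_inside = distance_transform_edt(prev_mask)    # distance to background
        return dist_outside - dist_inside

    def build_feature_matrix(visual_features, prev_mask):
        """Stack the context feature with ordinary visual features.
        visual_features: (X, Y, Z, F) array; returns (n_voxels, F + 1)."""
        context = edge_distance_feature(prev_mask)[..., None]
        stacked = np.concatenate([visual_features, context], axis=-1)
        return stacked.reshape(-1, stacked.shape[-1])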

    3D medical volume segmentation using hybrid multiresolution statistical approaches

    This article is available through the Brunel Open Access Publishing Fund. Copyright © 2010 S. AlZu'bi and A. Amira. 3D volume segmentation is the process of partitioning voxels into 3D regions (subvolumes) that represent meaningful physical entities and are easier to analyze and use in subsequent applications. Multiresolution Analysis (MRA) enables an image to be represented and preserved at several levels of resolution or blurring. Because of this multiresolution property, wavelets have been deployed in image compression, denoising, and classification. This paper focuses on the implementation of efficient medical volume segmentation techniques. Multiresolution analysis, including the 3D wavelet and ridgelet transforms, has been used for feature extraction, and the resulting features can be modeled using Hidden Markov Models (HMMs) to segment the volume slices. A comparison study has been carried out to evaluate 2D and 3D techniques, which reveals that 3D methodologies can accurately detect the Region Of Interest (ROI). Automatic segmentation has been achieved using HMMs, where the ROI is detected accurately but at the cost of long computation times.
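
    A minimal sketch of the wavelet-plus-HMM pipeline outlined above, simplified to a 2D wavelet transform per axial slice: subband energies serve as per-slice features, and the decoded states of a Gaussian HMM serve as segmentation labels. PyWavelets and hmmlearn are assumed to be available purely for illustration, and the wavelet family, decomposition level, and number of hidden states are arbitrary choices rather than the paper's settings.

    import numpy as np
    import pywt                              # PyWavelets, assumed available
    from hmmlearn.hmm import GaussianHMM     # hmmlearn, assumed available

    def wavelet_slice_features(volume, wavelet="haar", level=2):
        """Summarise each axial slice by the energy of its 2D wavelet subbands."""
        feats = []
        for z in range(volume.shape[2]):
            coeffs = pywt.wavedec2(volume[:, :, z], wavelet=wavelet, level=level)
            energies = [np.mean(np.square(coeffs[0]))]
            for detail in coeffs[1:]:
                energies.extend(np.mean(np.square(d)) for d in detail)
            feats.append(energies)
        return np.asarray(feats)             # shape: (n_slices, n_subbands)

    def hmm_segment_slices(features, n_states=3, seed=0):
        """Label each slice with the most likely hidden state of a Gaussian HMM."""
        model = GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=100, random_state=seed)
        model.fit(features)
        return model.predict(features)

    # usage sketch on a synthetic volume
    volume = np.random.default_rng(0).random((64, 64, 30))
    labels = hmm_segment_slices(wavelet_slice_features(volume))
    print(labels)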

    A non-invasive image based system for early diagnosis of prostate cancer.

    Prostate cancer is the second most fatal cancer experienced by American males. The average American male has a 16.15% chance of developing prostate cancer, which is 8.38% higher than lung cancer, the second most likely cancer. The current in-vitro techniques that are based on analyzing a patient's blood and urine have several limitations concerning their accuracy. In addition, the Prostate Specific Antigen (PSA) blood-based test has a high chance of false positive diagnosis, ranging from 28% to 58%. Yet, biopsy remains the gold standard for the assessment of prostate cancer, but only as a last resort because of its invasive nature, high cost, and potential morbidity rates. The major limitation of the relatively small needle biopsy samples is the higher possibility of producing a false positive diagnosis. Moreover, visual inspection systems (e.g., the Gleason grading system) are not quantitative techniques, and different observers may classify a sample differently, leading to discrepancies in the diagnosis. As reported in the literature, the early detection of prostate cancer is a crucial step for decreasing prostate cancer related deaths. Thus, there is an urgent need for developing an objective, non-invasive, image based technology for early detection of prostate cancer. The objective of this dissertation is to develop a computer vision methodology, later translated into a clinically usable software tool, which can improve the sensitivity and specificity of early prostate cancer diagnosis based on the well-known hypothesis that malignant tumors are better connected to the blood vessels than benign tumors. Therefore, using either Diffusion Weighted Magnetic Resonance Imaging (DW-MRI) or Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI), we will be able to relate the amount of blood in the detected prostate tumors to their malignancy by estimating either the Apparent Diffusion Coefficient (ADC) in the prostate or perfusion parameters. We intend to validate this hypothesis by demonstrating that automatic segmentation of the prostate from either DW-MRI or DCE-MRI, after handling its local motion, provides discriminatory features for early prostate cancer diagnosis. The proposed CAD system consists of three major components, the first two of which constitute new research contributions to a challenging computer vision problem. The three main components are: (1) a novel shape-based segmentation approach to segment the prostate from either low contrast DW-MRI or DCE-MRI data; (2) a novel iso-contours-based non-rigid registration approach to ensure voxel-on-voxel matches of all data, which would otherwise be compromised by gross patient motion, transmitted respiratory effects, and intrinsic and transmitted pulsatile effects; and (3) probabilistic models for the estimated diffusion and perfusion features for both malignant and benign tumors. Our results showed a 98% classification accuracy using a Leave-One-Subject-Out (LOSO) approach based on the estimated ADC for 30 patients (12 patients diagnosed as malignant; 18 diagnosed as benign). These results show the promise of the proposed image-based diagnostic technique as a supplement to current technologies for diagnosing prostate cancer.
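
    For concreteness, an ADC map of the kind this work relies on is usually derived from the mono-exponential model S_b = S_0 * exp(-b * ADC), so ADC can be estimated per voxel from two diffusion-weighted acquisitions. The sketch below assumes exactly two b-values and ignores noise-floor and motion corrections; it is an illustrative calculation, not the dissertation's pipeline.

    import numpy as np

    def adc_map(s0, sb, b_value, eps=1e-6):
        """Per-voxel Apparent Diffusion Coefficient from two DW-MRI volumes.

        s0      : volume acquired at b = 0 s/mm^2
        sb      : volume acquired at b = b_value s/mm^2
        returns : ADC in mm^2/s, from S_b = S_0 * exp(-b * ADC)
        """
        s0 = np.asarray(s0, dtype=float)
        sb = np.asarray(sb, dtype=float)
        ratio = np.clip(sb, eps, None) / np.clip(s0, eps, None)
        return -np.log(ratio) / b_value

    # toy usage: a voxel with ADC = 1.5e-3 mm^2/s imaged at b = 800 s/mm^2
    s0, b = 1000.0, 800.0
    sb = s0 * np.exp(-b * 1.5e-3)
    print(adc_map(s0, sb, b))    # ~0.0015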

    A framework for tumor segmentation and interactive immersive visualization of medical image data for surgical planning

    This dissertation presents a framework for analyzing and visualizing digital medical images. Two new segmentation methods have been developed: a probability-based segmentation algorithm, and a segmentation algorithm that uses a fuzzy rule-based system to generate similarity values for segmentation. A visualization software application has also been developed to effectively view and manipulate digital medical images on a desktop computer as well as in an immersive environment. For the probabilistic segmentation algorithm, image data are first enhanced by manually setting the appropriate window center and width, and, if needed, a sharpening or noise removal filter is applied. To initialize the segmentation process, a user places a seed point within the object of interest and defines a search region for segmentation. Based on the pixels' spatial and intensity properties, a probabilistic selection criterion is used to extract pixels with a high probability of belonging to the object. To facilitate the segmentation of multiple slices, an automatic seed selection algorithm was developed to keep the seeds in the object as its shape and/or location changes between consecutive slices. The second method uses a fuzzy rule-based system to segment tumors in three-dimensional CT data. To initialize the segmentation process, the user selects a region of interest (ROI) within the tumor in the first image of the CT study set. Using the ROI's spatial and intensity properties, fuzzy inputs are generated for use in the fuzzy rule inference system. Using a set of predefined fuzzy rules, the system generates a defuzzified output for every pixel in terms of similarity to the object. Pixels with the highest similarity values are selected as tumor. This process is automatically repeated for every subsequent slice in the CT set without further user input, as the segmented region from the previous slice is used as the ROI for the current slice. This creates a propagation of information from the previous slices, which is used to segment the current slice. The membership functions used during the fuzzification and defuzzification processes are adaptive to changes in the size and pixel intensities of the current ROI. The proposed method is highly customizable to suit the different needs of a user, requiring information from only a single two-dimensional image. Segmentation results from both algorithms showed success in segmenting the tumor from seven of the ten CT datasets with less than 10% false positive errors, and five test cases with less than 10% false negative errors. The consistency of the segmentation result statistics also showed a high repeatability factor, with low values of inter- and intra-user variability for both methods. The visualization software developed is designed to load and display any DICOM/PACS compatible three-dimensional image data for visualization and interaction in an immersive virtual environment. The software uses the open-source libraries DCMTK: DICOM Toolkit for parsing of digital medical images, Coin3D and SimVoleon for scenegraph management and volume rendering, and VRJuggler for virtual reality display and interaction. A user can apply pseudo-coloring in real time with multiple interactive clipping planes to slice into the volume for an interior view. A windowing feature controls the tissue density ranges to display.
A wireless gamepad controller, as well as a simple and intuitive menu interface, controls user interactions. The software is highly scalable, as it can be used on anything from a single desktop computer to a cluster of computers driving an immersive multi-projection virtual environment. By wearing a pair of stereo goggles, the surgeon is immersed within the model itself, providing a sense of realism, as if the surgeon were inside the patient. The tools developed in this framework are designed to improve patient care by fostering the widespread use of advanced visualization and computational intelligence in preoperative planning, surgical training, and diagnostic assistance. Future work includes further improvements to both segmentation methods, with plans to incorporate the use of deformable models and level set techniques to include tumor shape features as part of the segmentation criteria. For the surgical planning components, additional controls and interactions with the simulated endoscopic camera, and the ability to segment the colon or a selected region of the airway for fixed-path navigation as a full virtual endoscopy tool, will also be implemented. (Abstract shortened by UMI.)
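
    As a rough sketch of the seed-driven, probability-based selection the first segmentation method describes, the code below grows a region from a user-supplied seed, accepting 4-connected neighbours whose intensity is plausible under a running Gaussian model of the region grown so far. The Gaussian acceptance rule and the z-score threshold are illustrative assumptions; the dissertation's actual probabilistic criterion may differ.

    import numpy as np
    from collections import deque

    def seeded_region_grow(image, seed, z_thresh=2.5, min_std=1.0):
        """Grow a region from `seed` (row, col), accepting 4-connected neighbours
        whose intensity lies within `z_thresh` standard deviations of the region."""
        mask = np.zeros(image.shape, dtype=bool)
        mask[seed] = True
        values = [float(image[seed])]
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            mu, sd = np.mean(values), max(np.std(values), min_std)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                        and not mask[nr, nc]
                        and abs(image[nr, nc] - mu) <= z_thresh * sd):
                    mask[nr, nc] = True
                    values.append(float(image[nr, nc]))
                    queue.append((nr, nc))
        return mask

    # toy usage: a bright square on a dark background
    img = np.zeros((32, 32))
    img[8:24, 8:24] = 100.0
    img += np.random.default_rng(0).normal(0, 1, img.shape)
    print(seeded_region_grow(img, (16, 16)).sum())   # roughly 16 * 16 pixels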

    Feasibility of automated 3-dimensional magnetic resonance imaging pancreas segmentation.

    Purpose: With the advent of MR guided radiotherapy, internal organ motion can be imaged simultaneously during treatment. In this study, we evaluate the feasibility of pancreas MRI segmentation using state-of-the-art segmentation methods. Methods and materials: T2 weighted HASTE and T1 weighted VIBE images were acquired on 3 patients and 2 healthy volunteers for a total of 12 imaging volumes. A novel dictionary learning (DL) method was used to segment the pancreas and compared to mean-shift merging (MSM), distance regularized level set (DRLS), and graph cuts (GC) methods; the segmentation results were compared to manual contours using Dice's index (DI), Hausdorff distance, and shift of the center of the organ (SHIFT). Results: All VIBE images were successfully segmented by at least one of the auto-segmentation methods, with DI > 0.83 and SHIFT ≤ 2 mm using the best automated segmentation method. The automated segmentation error on HASTE images was significantly greater. DL is statistically superior to the other methods in Dice's overlapping index. For the Hausdorff distance and SHIFT measurements, DRLS and DL performed slightly better than the GC method, and substantially better than MSM. DL required the least human supervision and was faster to compute. Conclusion: Our study demonstrated the potential feasibility of automated segmentation of the pancreas on MR images with minimal human supervision at the beginning of image acquisition. The achieved accuracy is promising for organ localization.
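
    A brief sketch of how the three reported evaluation metrics might be computed for a pair of binary masks (automated versus manual). Working in voxel units rather than millimetres and symmetrising SciPy's directed Hausdorff distance are simplifying assumptions; the study's exact definitions, including voxel spacing, may differ.

    import numpy as np
    from scipy.ndimage import center_of_mass
    from scipy.spatial.distance import directed_hausdorff

    def dice_index(a, b):
        """Dice overlap between two boolean masks."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def hausdorff(a, b):
        """Symmetric Hausdorff distance between the two masks' voxel sets."""
        pa, pb = np.argwhere(a), np.argwhere(b)
        return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

    def center_shift(a, b):
        """Euclidean shift of the organ's center of mass, in voxel units."""
        return float(np.linalg.norm(np.subtract(center_of_mass(a), center_of_mass(b))))

    # toy usage on two overlapping 3D boxes
    auto = np.zeros((32, 32, 32), bool)
    auto[8:24, 8:24, 8:24] = True
    manual = np.zeros_like(auto)
    manual[9:25, 8:24, 8:24] = True
    print(dice_index(auto, manual), hausdorff(auto, manual), center_shift(auto, manual))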