Interactive Segmentation in Multimodal Medical Imagery Using a Bayesian Transductive Learning Approach
Labeled training data in the medical domain is rare and expensive to obtain. The lack of labeled multimodal medical image data is a major obstacle to devising learning-based interactive segmentation tools. Transductive learning (TL) or semi-supervised learning (SSL) offers a workaround by leveraging both labeled and unlabeled data to infer labels for the test set from a small amount of label information. In this paper we propose a novel algorithm for interactive segmentation using transductive learning and inference in conditional mixture naïve Bayes models (T-CMNB) with spatial regularization constraints. T-CMNB is an extension of the transductive naïve Bayes algorithm [1, 20]. The multimodal Gaussian mixture assumption on the class-conditional likelihood, together with the spatial regularization constraints, allows us to model the more complex distributions required for spatial classification in multimodal imagery. To simplify the estimation, we reduce the parameter space by assuming naïve conditional independence among the features given the class label. The naïve conditional independence assumption allows efficient inference of marginal and conditional distributions for large-scale learning and inference [19]. We evaluate the proposed algorithm on multimodal MRI brain imagery using ROC statistics and provide preliminary results. The algorithm shows promising segmentation performance, with a sensitivity and specificity of 90.37% and 99.74% respectively, and compares competitively to alternative interactive segmentation schemes.
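To make the approach concrete, below is a minimal sketch of transductive inference with per-channel Gaussian-mixture class-conditionals under the naïve independence assumption. The function names, component count, weighted-sampling step, and the uniform-filter stand-in for spatial regularization are illustrative assumptions, not the paper's T-CMNB estimator.

```python
# Minimal transductive naive-Bayes sketch: per-channel Gaussian-mixture
# class-conditionals (naive independence across modalities), soft posteriors
# on unlabeled pixels, and a crude spatial smoothing step. All hyperparameters
# are assumptions for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.ndimage import uniform_filter

def transductive_nb(image, seed_labels, n_classes=2, n_components=3, n_iters=5):
    """image: (H, W, C) multimodal stack; seed_labels: (H, W), -1 = unlabeled."""
    h, w, c = image.shape
    X = image.reshape(-1, c).astype(float)
    y = seed_labels.ravel()
    seeded = y >= 0
    post = np.full((h * w, n_classes), 1.0 / n_classes)
    post[seeded] = np.eye(n_classes)[y[seeded]]            # clamp labeled seeds

    for _ in range(n_iters):
        log_lik = np.zeros((h * w, n_classes))
        for k in range(n_classes):
            w_k = post[:, k] / post[:, k].sum()
            idx = np.random.choice(h * w, size=min(5000, h * w), p=w_k)
            for ch in range(c):                            # naive independence:
                gmm = GaussianMixture(n_components=n_components)
                gmm.fit(X[idx, ch:ch + 1])                 # one GMM per channel
                log_lik[:, k] += gmm.score_samples(X[:, ch:ch + 1])
        prior = post.mean(axis=0)
        post = np.exp(log_lik - log_lik.max(axis=1, keepdims=True)) * prior
        # crude spatial regularization: locally average the class posteriors
        for k in range(n_classes):
            post[:, k] = uniform_filter(post[:, k].reshape(h, w), size=3).ravel()
        post /= post.sum(axis=1, keepdims=True)
        post[seeded] = np.eye(n_classes)[y[seeded]]        # re-clamp the seeds

    return post.argmax(axis=1).reshape(h, w)
```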
Bayesian transduction and Markov conditional mixtures for spatiotemporal interactive segmentation
In this paper we propose a novel transductive learning machine for spatiotemporal classification, cast as an interactive segmentation problem. We present Markov conditional mixtures of naive Bayes models with spatiotemporal regularization constraints in a transductive learning and inference framework. The proposed model extends previous work to account for sequential data that are not independent and identically distributed (i.i.d.) by structuring the learning and inference problem with respect to time. The multimodal mixture assumption on the class-conditional likelihood for each covariate feature domain, in conjunction with spatiotemporal regularization constraints, allows us to model the more complex distributions required for classification in multimodal longitudinal brain imagery. We evaluate the proposed algorithm on multimodal temporal MRI brain images using ROC statistics and report preliminary results.
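As a complement to the static sketch above, the following shows one way a first-order Markov temporal coupling can be layered on top of per-frame class log-likelihoods via a simple forward filtering pass. The transition matrix, the pixelwise independence, and the filtering-only design (no backward smoothing) are illustrative assumptions, not the paper's Markov conditional mixture.

```python
# Toy forward pass coupling per-frame naive-Bayes log-likelihoods across time
# with a row-stochastic label transition matrix. Shapes and the stabilization
# trick are assumptions for the sketch.
import numpy as np

def markov_filter(frame_log_lik, transition, prior):
    """frame_log_lik: (T, N, K) per-pixel class log-likelihoods per time point;
    transition: (K, K) row-stochastic label transition matrix;
    prior: (K,) initial class prior. Returns filtered posteriors (T, N, K)."""
    T, N, K = frame_log_lik.shape
    post = np.empty((T, N, K))
    belief = np.broadcast_to(prior, (N, K))
    for t in range(T):
        pred = belief @ transition                     # temporal prediction step
        joint = np.log(pred + 1e-12) + frame_log_lik[t]
        joint -= joint.max(axis=1, keepdims=True)      # numerical stabilization
        belief = np.exp(joint)
        belief /= belief.sum(axis=1, keepdims=True)
        post[t] = belief
    return post
```

Choosing a transition matrix close to the identity encodes the expectation that a voxel's tissue label persists across consecutive scans.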
Concept detection in longitudinal brain MR images using multi-modal cues
Advances in medical imaging techniques and devices have resulted in increased use of imaging to monitor disease progression in patients. However, extracting decision-enabling information from the resulting longitudinal multi-modal image sets poses a challenge. Radiologists often have to manually identify and quantify certain regions of interest in the longitudinal image sets that bear upon the patient's condition. As the number of patients increases, the number of longitudinal multi-modal images grows, and the manual annotation and quantification of pathological concepts quickly becomes impractical. In this paper we explore how minimal annotations provided by the user at a few time points can be effectively leveraged to automatically annotate the entire multi-modal longitudinal image set. In particular, we investigate the number of annotated images required per time point and across time to obtain reasonable results for the entire image set, and which multi-modal cues can help boost the overall annotation results.
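The abstract leaves the propagation mechanism open; as one plausible baseline for spreading a handful of user annotations across a longitudinal multimodal set, a graph-based semi-supervised learner over per-pixel features could be used. The feature layout and hyperparameters below are assumptions for illustration, not the paper's method.

```python
# Hypothetical baseline: graph-based label spreading over per-pixel multimodal
# features pooled across all time points. -1 marks unannotated pixels.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

def propagate_annotations(features, sparse_labels):
    """features: (n_pixels, n_modalities) stacked across time points;
    sparse_labels: (n_pixels,) integer labels, -1 where unannotated."""
    model = LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.2)
    model.fit(features, sparse_labels)    # -1 entries are treated as unlabeled
    return model.transduction_            # inferred label for every pixel
```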
Echocardiogram Videos: Summarization, Temporal Segmentation and Browsing
In this paper, we present a system for the temporal segmentation, summarization, and browsing of echocardiogram videos. Echocardiogram videos are video sequences produced by ultrasound scanning of the heart, and are one of the main modalities for imaging the heart structure. Our approach combines domain-specific knowledge with automatic analysis of the spatio-temporal structure of echocardiogram videos. The videos are temporally sampled using the embedded electrocardiogram graph. Consecutive sampled frames are compared based on the shape of the region of interest and the presence or absence of color to detect the boundaries between the different segments. The content of each video segment is summarized into two forms: a static and a dynamic summary. Finally, the summary is displayed in the user interface in an intuitive form for browsing. Applications include digital medical image libraries, medical image management, and telemedicine.
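A compact illustration of the described boundary detection is sketched below: frames sampled at ECG-derived cycle starts are compared by region-of-interest overlap and by the presence or absence of a color (Doppler) overlay. The thresholds and the crude ROI extraction are assumptions of the sketch, not the paper's method.

```python
# Illustrative segment-boundary detector: one frame per cardiac cycle (indices
# assumed to come from the embedded ECG trace), flagged as a boundary when the
# ROI shape or the color presence changes between consecutive samples.
import numpy as np

def roi_mask(frame, thresh=20):
    """Crude ROI: pixels brighter than a threshold in the gray channel."""
    return frame.mean(axis=2) > thresh

def has_color(frame, tol=15):
    """Detect a Doppler color overlay: channels disagree by more than tol."""
    spread = frame.max(axis=2).astype(int) - frame.min(axis=2).astype(int)
    return (spread > tol).mean() > 0.01

def segment_boundaries(frames, cycle_starts, shape_thresh=0.3):
    """frames: list of (H, W, 3) arrays; cycle_starts: ECG-derived indices."""
    samples = [frames[i] for i in cycle_starts]
    boundaries = []
    for a, b, idx in zip(samples, samples[1:], cycle_starts[1:]):
        m_a, m_b = roi_mask(a), roi_mask(b)
        overlap = (m_a & m_b).sum() / max((m_a | m_b).sum(), 1)  # Jaccard
        if overlap < 1 - shape_thresh or has_color(a) != has_color(b):
            boundaries.append(idx)
    return boundaries
```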
Visual Event Detection Using Multi-Dimensional Concept Dynamics
A novel framework is introduced for visual event detection. Visual events are viewed as stochastic temporal processes in the semantic concept space. In this concept-centered approach to visual event modeling, the dynamic pattern of an event is modeled through the collective evolution patterns of the individual semantic concepts over the course of the visual event. Video clips containing different events are classified according to how well their dynamics along each semantic concept dimension match those of a given event. Results indicate that such a data-driven statistical approach is effective at detecting different visual events, such as exiting car, riot, and airplane flying.
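One minimal reading of the concept-dynamics idea is sketched below: each clip is represented as a time-by-concepts matrix of detector scores, events are modeled by mean score trajectories, and a clip is assigned to the nearest trajectory. The fixed resampling length and Euclidean matching are illustrative assumptions, not the paper's classifier.

```python
# Hypothetical concept-dynamics matcher: clips as (time x concepts) score
# matrices, events as mean resampled trajectories, nearest-trajectory decision.
import numpy as np

def resample(traj, length=30):
    """Linearly resample a (T, C) concept-score trajectory to a fixed length."""
    t_old = np.linspace(0, 1, traj.shape[0])
    t_new = np.linspace(0, 1, length)
    return np.stack([np.interp(t_new, t_old, traj[:, c])
                     for c in range(traj.shape[1])], axis=1)

def fit_event_models(clips_by_event, length=30):
    """clips_by_event: {event_name: [(T_i, C) arrays]} -> mean trajectories."""
    return {name: np.mean([resample(c, length) for c in clips], axis=0)
            for name, clips in clips_by_event.items()}

def classify(clip, models, length=30):
    """Pick the event whose mean concept trajectory is closest to the clip's."""
    x = resample(clip, length)
    return min(models, key=lambda name: np.linalg.norm(x - models[name]))
```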