8,651 research outputs found

    Identification of body fat tissues in MRI data

    In recent years, non-invasive diagnostic techniques have been widely used in medical investigations. Among the available imaging modalities, Magnetic Resonance Imaging (MRI) is particularly attractive because it produces multi-slice images in which the contrast between various types of body tissue, such as muscle, ligaments and fat, is well defined. The aim of this paper is to describe the implementation of an unsupervised image analysis algorithm able to identify body fat tissues in a sequence of MR images encoded in DICOM format. The developed algorithm consists of three main steps. The first step pre-processes the MR images to reduce the level of noise. The second step extracts the image areas representing fat tissue using an unsupervised clustering algorithm. Finally, image refinements are applied to reclassify the pixels adjacent to the initial fat estimate and to eliminate outliers. The experimental data indicate that the proposed implementation returns accurate results and is robust to noise and greyscale inhomogeneity.
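
    A minimal sketch of the three-step pipeline described above, assuming K-means as the unsupervised clustering step and a fixed number of tissue classes; the fat class is taken to be the brightest cluster (as on T1-weighted images), and the refinement stage is reduced to simple morphology. Function and parameter names are illustrative, not the paper's.

        import numpy as np
        from scipy import ndimage
        from sklearn.cluster import KMeans

        def segment_fat(slice_2d, n_classes=4, seed=0):
            # Step 1: noise reduction with a small median filter.
            denoised = ndimage.median_filter(slice_2d.astype(float), size=3)
            # Step 2: unsupervised clustering of pixel intensities.
            labels = KMeans(n_clusters=n_classes, n_init=10, random_state=seed) \
                .fit_predict(denoised.reshape(-1, 1)).reshape(denoised.shape)
            # The cluster with the highest mean intensity is the initial fat estimate.
            means = [denoised[labels == c].mean() for c in range(n_classes)]
            fat = labels == int(np.argmax(means))
            # Step 3: refinement - opening removes isolated outliers, closing
            # reclassifies pixels adjacent to the initial fat estimate.
            fat = ndimage.binary_opening(fat)
            fat = ndimage.binary_closing(fat, iterations=2)
            return fat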

    Adaptive Nonparametric Image Parsing

    In this paper, we present an adaptive nonparametric solution to the image parsing task, namely annotating each image pixel with its corresponding category label. For a given test image, a locality-aware retrieval set is first extracted from the training data based on super-pixel matching similarities, which are augmented with feature extraction for better differentiation of local super-pixels. The category of each super-pixel is then initialized by the majority vote of the k-nearest-neighbor super-pixels in the retrieval set. Instead of fixing k as in traditional nonparametric approaches, we propose a novel adaptive nonparametric approach that determines a sample-specific k for each test image. In particular, k is adaptively set to the smallest number of nearest super-pixels with which the images in the retrieval set obtain the best category prediction. Finally, the initial super-pixel labels are further refined by contextual smoothing. Extensive experiments on challenging datasets demonstrate the superiority of the new solution over other state-of-the-art nonparametric solutions.
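
    A rough sketch of the adaptive-k majority vote, assuming integer category labels and pre-computed super-pixel feature vectors stored as NumPy arrays. The selection rule below (the smallest k that maximises leave-one-out accuracy on the retrieval set) is a simplification of the paper's image-specific criterion.

        import numpy as np
        from scipy.spatial.distance import cdist

        def majority_vote(labels):
            # labels: 1-D integer array of neighbour categories
            return int(np.bincount(labels).argmax())

        def adaptive_k(retr_feats, retr_labels, candidate_ks=(1, 3, 5, 9, 15)):
            # Smallest k that best predicts the retrieval set itself (leave-one-out).
            d = cdist(retr_feats, retr_feats)
            np.fill_diagonal(d, np.inf)
            order = np.argsort(d, axis=1)
            best_k, best_acc = candidate_ks[0], -1.0
            for k in candidate_ks:
                preds = [majority_vote(retr_labels[order[i, :k]])
                         for i in range(len(retr_labels))]
                acc = float(np.mean(np.asarray(preds) == retr_labels))
                if acc > best_acc:  # ties keep the smaller k
                    best_k, best_acc = k, acc
            return best_k

        def label_superpixel(test_feat, retr_feats, retr_labels, k):
            nn = np.argsort(np.linalg.norm(retr_feats - test_feat, axis=1))[:k]
            return majority_vote(retr_labels[nn])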

    Automatic segmentation of the left ventricle cavity and myocardium in MRI data

    A novel approach has been developed for the automatic segmentation of the epicardium and endocardium boundaries of the left ventricle (LV) of the heart. The developed segmentation scheme takes multi-slice and multi-phase magnetic resonance (MR) images of the heart, traversing the short-axis length from the base to the apex. Each image is taken at one instance in the heart's phase. The images are segmented using a diffusion-based filter followed by an unsupervised clustering technique, and the resulting labels are checked to locate the LV cavity. From cardiac anatomy, the closest pool of blood to the LV cavity is the right ventricle cavity. The wall between these two blood pools (the interventricular septum) is measured to give an approximate thickness for the myocardium. This value is used when a radial search is performed on a gradient image to find appropriate, robust segments of the epicardium boundary. The robust edge segments are then joined using a normal spline curve. Experimental results are presented with very encouraging qualitative and quantitative results, and a comparison is made against the state-of-the-art level-set method.
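
    A minimal sketch of the radial-search step, assuming the LV-cavity centre, an approximate endocardial radius and the septum-derived myocardial thickness are already known; the acceptance threshold is illustrative and the spline-joining stage is omitted.

        import numpy as np
        from scipy import ndimage

        def radial_epicardium(image, centre, endo_radius, myo_thickness,
                              n_rays=180, tol=5.0):
            grad = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=1.5)
            cy, cx = centre
            expected = endo_radius + myo_thickness   # expected epicardial radius
            points = []
            for theta in np.linspace(0.0, 2 * np.pi, n_rays, endpoint=False):
                radii = np.arange(expected - tol, expected + tol, 0.5)
                ys = cy + radii * np.sin(theta)
                xs = cx + radii * np.cos(theta)
                vals = ndimage.map_coordinates(grad, [ys, xs], order=1)
                r_best = radii[int(np.argmax(vals))]
                # Keep only rays with a clear gradient response ("robust" segments).
                if vals.max() > grad.mean() + grad.std():
                    points.append((cy + r_best * np.sin(theta),
                                   cx + r_best * np.cos(theta)))
            return np.array(points)  # sparse boundary points, later joined by a spline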

    Spatio-Temporal Modelling of Perfusion Cardiovascular MRI

    Myocardial perfusion MRI provides valuable insight into how coronary artery and microvascular diseases affect myocardial tissue. Stenosis in a coronary vessel leads to reduced maximum blood flow (MBF), but collaterals may secure the blood supply of the myocardium, albeit with altered tracer kinetics. To date, quantitative analysis of myocardial perfusion MRI has only been performed at a local level, largely ignoring the contextual information inherent in different myocardial segments. This paper proposes to quantify the spatial dependencies between the local kinetics via a Hierarchical Bayesian Model (HBM). In the proposed framework, all local systems are modelled simultaneously, along with their dependencies, thus allowing more robust, context-driven estimation of local kinetics. Detailed validation on both simulated and patient data is provided.
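
    A toy sketch of the shrinkage effect a hierarchical prior induces: noisy per-segment estimates are pulled toward the population mean in proportion to their uncertainty. This illustrates the idea only; the paper's HBM models the full tracer kinetics and their spatial dependencies jointly, and all names below are hypothetical.

        import numpy as np

        def shrink_mbf(local_mbf, local_var):
            # local_mbf: per-segment MBF estimates; local_var: their noise variances
            local_mbf = np.asarray(local_mbf, dtype=float)
            local_var = np.asarray(local_var, dtype=float)
            mu = local_mbf.mean()                                  # population mean
            tau2 = max(local_mbf.var() - local_var.mean(), 1e-6)   # between-segment variance
            weight = tau2 / (tau2 + local_var)                     # trust per segment
            return weight * local_mbf + (1.0 - weight) * mu

        # Example: shrink_mbf([0.8, 2.4, 2.1, 2.3], [0.30, 0.05, 0.05, 0.05])
        # pulls the noisy low estimate toward the level of its neighbours.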

    MRI image segmentation based on edge detection

    The aim of this thesis is to present the basic segmentation techniques used in the field of medical image processing and, using a 3D viewer able to visualize 3D images, to implement a segmentation module based on edge detection and evaluate the results. The proposed viewer is built using the MATLAB GUI environment and is able to load a volume of images representing the human head. The proposed segmentation module is based on the use of edge detectors, particularly the Canny algorithm.
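
    The thesis implements its module in the MATLAB GUI environment; the sketch below shows an analogous slice-wise Canny step in Python (scikit-image), purely for illustration, with assumed function and parameter names.

        import numpy as np
        from skimage import feature, filters

        def edge_segment_volume(volume, sigma=2.0):
            # volume: 3-D array (slices, rows, cols) of MRI intensities
            edges = np.zeros(volume.shape, dtype=bool)
            for z in range(volume.shape[0]):
                slice_ = filters.gaussian(volume[z].astype(float), sigma=1.0)
                edges[z] = feature.canny(slice_, sigma=sigma)
            return edges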

    Visual and Contextual Modeling for the Detection of Repeated Mild Traumatic Brain Injury.

    Currently, there is a lack of computational methods for the evaluation of mild traumatic brain injury (mTBI) from magnetic resonance imaging (MRI). Further, the development of automated analyses has been hindered by the subtle nature of mTBI abnormalities, which appear as low-contrast MR regions. This paper proposes an approach that detects mTBI lesions by combining high-level contextual and low-level visual information. The contextual model estimates the progression of the disease using subject information, such as the time since injury and knowledge about the location of the mTBI. The visual model uses texture features in MRI along with a probabilistic support vector machine to maximize the discrimination in unimodal MR images. These two models are fused to obtain a final estimate of the locations of the mTBI lesions. The models are tested on a dataset from a novel rodent model of repeated mTBI. The experimental results demonstrate that the fusion of contextual and visual texture features outperforms other state-of-the-art approaches. Clinically, our approach has the potential to benefit clinicians, by speeding diagnosis, and patients, by improving clinical care.
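
    A minimal sketch of fusing the two sources of evidence, assuming per-voxel texture features and a pre-computed contextual prior map; the log-linear fusion rule, the weighting parameter and all names are illustrative choices rather than the paper's exact formulation.

        import numpy as np
        from sklearn.svm import SVC

        def train_visual_model(texture_feats, labels):
            # texture_feats: (n_voxels, n_features); labels: 0 = normal, 1 = lesion
            return SVC(kernel='rbf', probability=True).fit(texture_feats, labels)

        def fuse(visual_model, texture_feats, context_prior, alpha=0.5):
            # context_prior: per-voxel prior probability of lesion, assumed to be
            # derived from the impact site and the time since injury.
            p_vis = visual_model.predict_proba(texture_feats)[:, 1]
            # Log-linear fusion of visual and contextual evidence.
            return (p_vis ** alpha) * (context_prior ** (1.0 - alpha))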

    Semantic-Aware Image Analysis

    Extracting and utilizing high-level semantic information from images is one of the important goals of computer vision. The ultimate objective of image analysis is to understand each pixel of an image with regard to high-level semantics, e.g. the objects, the stuff, and their spatial, functional and semantic relations. In recent years, thanks to large labeled datasets and deep learning, great progress has been made on image analysis problems such as image classification, object detection, and object pose estimation. In this work, we explore several aspects of semantic-aware image analysis. First, we explore semantic segmentation of man-made scenes using fully connected conditional random fields, which can model long-range connections within images of man-made scenes and make use of contextual information about scene structure. Second, we introduce a semantic smoothing method that exploits semantic information to accomplish semantic, structure-preserving image smoothing. Semantic segmentation has achieved significant progress recently and has been widely used in many computer vision tasks; we observe that high-level semantic labeling information can naturally provide a meaningful structural prior for image smoothing. Third, we present a deep object co-segmentation approach for segmenting common objects of the same class within a pair of images. To address this task, we propose a CNN-based Siamese encoder-decoder architecture: the encoder extracts high-level semantic features of the foreground objects, a mutual correlation layer detects the common objects, and finally the decoder generates the output foreground masks for each image. Finally, we propose an approach to localize common objects from novel object categories in a set of images. We solve this problem using a new common component activation map, in which we treat the class-specific activation maps as components to discover the common components in the image set. Our experiments show that this approach generalizes to novel object categories.
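
    A small sketch of the mutual-correlation idea used in the co-segmentation part, assuming feature maps of equal size from a shared (Siamese) encoder; the surrounding encoder and decoder, and the way the correlation maps are consumed, are placeholders rather than the thesis's architecture.

        import torch

        def mutual_correlation(feat_a, feat_b):
            # feat_a, feat_b: (B, C, H, W) feature maps from the shared encoder
            b, c, h, w = feat_a.shape
            fa = torch.nn.functional.normalize(feat_a.flatten(2), dim=1)  # (B, C, HW)
            fb = torch.nn.functional.normalize(feat_b.flatten(2), dim=1)
            corr = torch.bmm(fa.transpose(1, 2), fb)   # (B, HW_a, HW_b)
            # For each location in one image: best match anywhere in the other.
            match_a = corr.max(dim=2).values.view(b, 1, h, w)
            match_b = corr.max(dim=1).values.view(b, 1, h, w)
            return match_a, match_b   # passed on to the two decoder branches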

    Robust semi-automated path extraction for visualising stenosis of the coronary arteries

    Computed tomography angiography (CTA) is useful for diagnosing and planning treatment of heart disease. However, contrast agent in surrounding structures (such as the aorta and left ventricle) makes 3-D visualisation of the coronary arteries difficult. This paper presents a composite method employing segmentation and volume rendering to overcome this issue. A key contribution is a novel Fast Marching minimal-path cost function for vessel centreline extraction. The resulting centreline is used to compute a measure of vessel lumen, which indicates the degree of stenosis (narrowing of a vessel). Two volume visualisation techniques are presented which utilise the segmented arteries and the lumen measure. The system is evaluated and demonstrated using synthetic and clinically obtained datasets.
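
    A rough stand-in for the centreline step, assuming seed points at both ends of the vessel are given: a generic minimal-cost path (scikit-image) over an intensity-derived cost map replaces the paper's novel Fast Marching cost function, and all names are illustrative.

        import numpy as np
        from skimage import graph

        def extract_centreline(cta_volume, start, end, eps=1e-3):
            # Bright (contrast-filled) voxels become cheap to traverse.
            v = cta_volume.astype(float)
            v = (v - v.min()) / (np.ptp(v) + eps)
            cost = 1.0 / (v + eps)
            path, total_cost = graph.route_through_array(
                cost, start, end, fully_connected=True, geometric=True)
            return np.array(path)   # ordered voxel coordinates of the centreline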

    Semantic Object Parsing with Graph LSTM

    By taking the semantic object parsing task as an exemplar application scenario, we propose the Graph Long Short-Term Memory (Graph LSTM) network, which generalizes LSTM from sequential and multi-dimensional data to general graph-structured data. In particular, instead of evenly and fixedly dividing an image into pixels or patches, as in existing multi-dimensional LSTM structures (e.g., Row, Grid and Diagonal LSTMs), we take each arbitrarily shaped superpixel as a semantically consistent node and adaptively construct an undirected graph for each image, where the spatial relations of the superpixels are naturally used as edges. Constructed on such an adaptive graph topology, the Graph LSTM is more naturally aligned with the visual patterns in the image (e.g., object boundaries or appearance similarities) and provides a more economical information-propagation route. Furthermore, for each optimization step over the Graph LSTM, we propose a confidence-driven scheme to update the hidden and memory states of nodes progressively until all nodes are updated. In addition, for each node, the forget gates are adaptively learned to capture different degrees of semantic correlation with neighboring nodes. Comprehensive evaluations on four diverse semantic object parsing datasets demonstrate the significant superiority of our Graph LSTM over other state-of-the-art solutions.
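
    A compact sketch of one Graph LSTM pass over a superpixel graph, with nodes visited in confidence order and each update mixing the node's input feature with the average hidden state of its (possibly already-updated) neighbours. The paper's per-neighbour adaptive forget gates are collapsed into a single gate here, and the dimensions and class name are placeholders.

        import torch
        import torch.nn as nn

        class GraphLSTMCell(nn.Module):
            def __init__(self, in_dim, hid_dim):
                super().__init__()
                self.gates = nn.Linear(in_dim + 2 * hid_dim, 4 * hid_dim)
                self.hid_dim = hid_dim

            def forward(self, x, h, c, adjacency, confidence):
                # x: (N, in_dim) node features; h, c: (N, hid_dim) states;
                # adjacency: list of neighbour-index lists; confidence: (N,) scores.
                for i in torch.argsort(confidence, descending=True).tolist():
                    nbr = adjacency[i]
                    h_nbr = h[nbr].mean(dim=0) if nbr else torch.zeros(self.hid_dim)
                    z = self.gates(torch.cat([x[i], h[i], h_nbr]))
                    i_g, f_g, o_g, g = z.chunk(4)
                    c = c.clone()
                    h = h.clone()
                    c[i] = torch.sigmoid(f_g) * c[i] + torch.sigmoid(i_g) * torch.tanh(g)
                    h[i] = torch.sigmoid(o_g) * torch.tanh(c[i])
                return h, c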
