4 research outputs found

    Medical images modality classification using multi-scale dictionary learning

    In this paper, we propose a method for classifying medical images captured by different sensors (modalities), based on a multi-scale wavelet representation combined with dictionary learning. Wavelet features extracted from an image provide discrimination useful for classifying medical images, namely diffusion tensor imaging (DTI), magnetic resonance imaging (MRI), magnetic resonance angiography (MRA) and functional magnetic resonance imaging (fMRI). The ability of online dictionary learning (ODL) to achieve a sparse representation of an image is exploited to build a dictionary for each class from the multi-scale wavelet features. An experimental analysis performed on a set of images from the ICBM medical database demonstrates the efficacy of the proposed method.
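    A minimal sketch of the general idea follows: per-class dictionaries learned on multi-scale wavelet features, with a test image assigned to the class whose dictionary reconstructs its features best. It assumes PyWavelets for the decomposition and scikit-learn's MiniBatchDictionaryLearning as a stand-in for online dictionary learning; all parameter values are placeholders, not the authors' exact configuration.

```python
import numpy as np
import pywt
from sklearn.decomposition import MiniBatchDictionaryLearning

def wavelet_features(image, wavelet="db2", level=2):
    """Flatten a multi-scale 2D wavelet decomposition into one feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    parts = [coeffs[0].ravel()]
    for detail in coeffs[1:]:                 # (horizontal, vertical, diagonal) per level
        parts.extend(c.ravel() for c in detail)
    return np.concatenate(parts)

def learn_class_dictionaries(images_by_class, n_atoms=64):
    """Fit one dictionary per modality (e.g. DTI, MRI, MRA, fMRI)."""
    dictionaries = {}
    for label, images in images_by_class.items():
        X = np.stack([wavelet_features(img) for img in images])
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, transform_algorithm="omp")
        dictionaries[label] = dl.fit(X)
    return dictionaries

def classify(image, dictionaries):
    """Assign the modality whose dictionary gives the smallest reconstruction error."""
    x = wavelet_features(image)[None, :]
    best_label, best_err = None, np.inf
    for label, dl in dictionaries.items():
        code = dl.transform(x)
        err = np.linalg.norm(x - code @ dl.components_)
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```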

    Towards the improvement of textual anatomy image classification using image local features

    Fusion Techniques in Biomedical Information Retrieval

    For difficult cases, clinicians usually rely on their experience and on information found in textbooks to determine a diagnosis. Computer tools can help supply the relevant information, now that much medical knowledge is available in digital form. A biomedical search system such as the one developed in the Khresmoi project (which this chapter partially reuses) aims to fulfil the information needs of physicians. This chapter concentrates on information needs for medical cases that contain a large variety of data, from free text and structured data to images. Fusion techniques are compared for combining the various information sources in order to retrieve cases similar to a given example case. This can supply physicians with answers to problems similar to the one they are analyzing and can help in diagnosis and treatment planning.
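    As one illustration of such fusion, the sketch below shows a late (score-level) fusion of text-based and image-based retrieval scores with a weighted CombSUM-style rule; the min-max normalization and the weights are illustrative assumptions, not the chapter's evaluated configuration.

```python
import numpy as np

def minmax(scores):
    """Normalize a score vector to [0, 1] so heterogeneous sources are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_scores(text_scores, image_scores, w_text=0.6, w_image=0.4):
    """Weighted linear fusion of per-case retrieval scores (late fusion)."""
    return w_text * minmax(text_scores) + w_image * minmax(image_scores)

# Usage: rank candidate cases against an example case.
text_scores = np.array([0.9, 0.2, 0.6])   # text-based similarity per candidate case
image_scores = np.array([0.3, 0.8, 0.5])  # image-based similarity per candidate case
ranking = np.argsort(-fuse_scores(text_scores, image_scores))
```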

    Classification of Medical Data Based On Sparse Representation Using Dictionary Learning

    Due to the growth in image acquisition sources and in storage capacity, the search for relevant information in large medical image databases has become more challenging. Classification of medical data into different categories is an important task that enables efficient cataloguing and retrieval in large image collections. The medical image classification systems available today classify medical images by modality, body part, disease or orientation. Recent work in this direction seeks to use the semantics of medical data to achieve better classification. However, representing semantics is a challenging task, and sparse representation is explored in this thesis for that purpose.
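    A minimal sketch of sparse-representation-based classification in this spirit: a sample is coded over a dictionary whose columns are training samples, and assigned to the class whose atoms give the smallest reconstruction residual. The use of scikit-learn's OrthogonalMatchingPursuit and the residual rule are illustrative assumptions, not the thesis's specific method.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(x, D, labels, n_nonzero=10):
    """Classify feature vector x over dictionary D (n_features x n_train),
    where labels[i] is the class of training column D[:, i]."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, x)                 # solve x ~ D @ alpha with a sparse alpha
    alpha = omp.coef_
    best_label, best_err = None, np.inf
    for label in np.unique(labels):
        mask = (labels == label)  # keep only this class's coefficients
        residual = np.linalg.norm(x - D[:, mask] @ alpha[mask])
        if residual < best_err:
            best_label, best_err = label, residual
    return best_label
```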