117 research outputs found

    Brain tumor classification using the diffusion tensor image segmentation (D-SEG) technique.

    BACKGROUND: There is an increasing demand for noninvasive brain tumor biomarkers to guide surgery and subsequent oncotherapy. We present a novel whole-brain diffusion tensor imaging (DTI) segmentation (D-SEG) to delineate tumor volumes of interest (VOIs) for subsequent classification of tumor type. D-SEG uses isotropic (p) and anisotropic (q) components of the diffusion tensor to segment regions with similar diffusion characteristics. METHODS: DTI scans were acquired from 95 patients with low- and high-grade glioma, metastases, and meningioma and from 29 healthy subjects. D-SEG uses k-means clustering of the 2D (p,q) space to generate segments with different isotropic and anisotropic diffusion characteristics. RESULTS: Our results are visualized using a novel RGB color scheme incorporating p, q and T2-weighted information within each segment. The volumetric contribution of each segment to gray matter, white matter, and cerebrospinal fluid spaces was used to generate healthy tissue D-SEG spectra. Tumor VOIs were extracted using a semiautomated flood-filling technique and D-SEG spectra were computed within the VOI. Classification of tumor type using D-SEG spectra was performed using support vector machines. D-SEG was computationally fast and stable and delineated regions of healthy tissue from tumor and edema. D-SEG spectra were consistent for each tumor type, with constituent diffusion characteristics potentially reflecting regional differences in tissue microstructure. Support vector machines classified tumor type with an overall accuracy of 94.7%, providing better classification than previously reported. CONCLUSIONS: D-SEG presents a user-friendly, semiautomated biomarker that may provide a valuable adjunct in noninvasive brain tumor diagnosis and treatment planning.
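
The core segmentation step described above, k-means clustering of voxels in the 2D (p, q) diffusion space, can be sketched as follows. This is a minimal illustration on synthetic data: the segment count, the deterministic initialization, and the two synthetic diffusion regimes are assumptions for the example, not values from the paper.

```python
import numpy as np

def kmeans_pq(features, k, iters=50):
    """Cluster voxels in the 2D (p, q) diffusion space (Lloyd's algorithm)."""
    # deterministic init: k points spread evenly through the sample order
    centroids = features[np.linspace(0, len(features) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each voxel to its nearest centroid in (p, q) space
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned voxels
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return labels, centroids

# synthetic (p, q) samples mimicking two well-separated diffusion regimes
rng = np.random.default_rng(1)
pq = np.vstack([rng.normal([0.70, 0.10], 0.05, (100, 2)),   # CSF-like: high p, low q
                rng.normal([0.20, 0.50], 0.05, (100, 2))])  # tissue-like: lower p, higher q
labels, centroids = kmeans_pq(pq, k=2)
```

In the paper, the resulting per-segment volumes feed the D-SEG spectra that the support vector machine classifies; here the two synthetic clusters simply recover their generating regimes.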

    Automatic Interpretation of Melanocytic Images in Confocal Laser Scanning Microscopy

    The frequency of melanoma doubles every 20 years. The early detection of malignant changes augments the therapy success. Confocal laser scanning microscopy (CLSM) enables the noninvasive examination of skin tissue. To diminish the need for training and to improve diagnostic accuracy, computer-aided diagnostic systems are required. Two approaches are presented: a multiresolution analysis and an approach based on deep layer convolutional neural networks. For the diagnosis of the CLSM views, architectural structures such as micro-anatomic structures and cell nests are used as guidelines by the dermatologists. Features based on the wavelet transform enable an exploration of architectural structures at different spatial scales. The subjective diagnostic criteria are objectively reproduced. A tree-based machine-learning algorithm captures the decision structure explicitly and the decision steps are used as diagnostic rules. Deep layer neural networks require no a priori domain knowledge. They are capable of learning their own discriminatory features through the direct analysis of image data. However, deep layer neural networks require large amounts of processing power to learn. Therefore, modern neural network training is performed using graphics cards, which typically possess many hundreds of small, modestly powerful cores that calculate massively in parallel. Readers will learn how to apply multiresolution analysis and modern deep learning neural network techniques to medical image analysis problems.
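
The multiresolution idea can be illustrated with a single-level 2D Haar decomposition whose subband energies act as texture features. This pure-NumPy sketch stands in for the wavelet transform the abstract mentions; the specific energy features are an assumption for illustration, not the paper's exact feature set.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation (coarse scale)
    lh = (a + b - c - d) / 4.0   # row-direction detail (horizontal edges)
    hl = (a - b + c - d) / 4.0   # column-direction detail (vertical edges)
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def subband_energies(img, levels=1):
    """Texture features: mean energy of each detail subband per scale."""
    feats, cur = [], img.astype(float)
    for _ in range(levels):
        ll, lh, hl, hh = haar2d(cur)
        feats += [np.mean(lh ** 2), np.mean(hl ** 2), np.mean(hh ** 2)]
        cur = ll  # recurse on the approximation for the next coarser scale
    return np.array(feats)

# columns alternating 0/1 form vertical stripes: the energy should
# concentrate in the column-direction (vertical-edge) band
stripes = np.tile([0.0, 1.0], (8, 4))
f = subband_energies(stripes, levels=1)
```

Repeating the decomposition on the LL band yields features at successively coarser scales, which is what lets such features track architectural structures of different sizes.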

    A review of the quantification and classification of pigmented skin lesions: from dedicated to hand-held devices

    In recent years, the incidence of skin cancer cases has risen worldwide, mainly due to prolonged exposure to harmful ultraviolet radiation. Concurrently, the computer-assisted medical diagnosis of skin cancer has undergone major advances, through an improvement in the instrument and detection technology, and the development of algorithms to process the information. Moreover, because there has been an increased need to store medical data, for monitoring, comparative and assisted-learning purposes, algorithms for data processing and storage have also become more efficient in handling the increase of data. In addition, the potential use of common mobile devices to register high-resolution images of skin lesions has also fueled the need to create real-time processing algorithms that may provide a likelihood for the development of malignancy. This last possibility allows even non-specialists to monitor and follow up suspected skin cancer cases. In this review, we present the major steps in the preprocessing, processing and post-processing of skin lesion images, with a particular emphasis on the quantification and classification of pigmented skin lesions. We further review and outline the future challenges for the creation of minimum-feature, automated and real-time algorithms for the detection of skin cancer from images acquired via common mobile devices.
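
The three stages surveyed (preprocessing, processing, post-processing) can be sketched end to end on a synthetic image. The grayscale conversion, mean threshold, and left-right asymmetry score below are illustrative choices, not methods endorsed by the review.

```python
import numpy as np

def preprocess(rgb):
    """Preprocessing: collapse to grayscale and normalize to [0, 1]."""
    gray = rgb.mean(axis=2)
    return (gray - gray.min()) / (np.ptp(gray) + 1e-9)

def segment(gray):
    """Processing: global threshold; the lesion is darker than surrounding skin."""
    return gray < gray.mean()

def features(mask):
    """Post-processing: area fraction and a crude asymmetry score."""
    area = mask.mean()
    asymmetry = np.mean(mask ^ mask[:, ::-1])  # mismatch under horizontal flip
    return area, asymmetry

# synthetic test image: dark circular "lesion" on bright skin
yy, xx = np.mgrid[0:64, 0:64]
img = np.ones((64, 64, 3))
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 0.2
area, asym = features(segment(preprocess(img)))
```

A real-time mobile pipeline would follow the same shape, with each stage replaced by the more robust operators the review catalogs (hair removal, adaptive segmentation, ABCD-style descriptors).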

    Magnetic Resonance Imaging of Gliomas

    Open Access. This work was supported in part by grants CTQ2010-20960-C02-02 to P.L.L. and grant SAF2008-01327 to S.C. A.M.M. held an Erasmus Fellowship from Coimbra University and E.C.C. a predoctoral CSIC contract. Peer Reviewed.

    A Review on Data Fusion of Multidimensional Medical and Biomedical Data

    Data fusion aims to provide a more accurate description of a sample than any one source of data alone. At the same time, data fusion minimizes the uncertainty of the results by combining data from multiple sources. Both aim to improve the characterization of samples and might improve clinical diagnosis and prognosis. In this paper, we present an overview of the advances achieved over the last decades in data fusion approaches in the context of the medical and biomedical fields. We collected approaches for interpreting multiple sources of data in different combinations: image to image, image to biomarker, spectra to image, spectra to spectra, spectra to biomarker, and others. We found that the most prevalent combination is the image-to-image fusion and that most data fusion approaches were applied together with deep learning or machine learning methods.
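
Feature-level (early) fusion, one simple form of the image-to-biomarker combination listed above, can be sketched by standardizing each source and concatenating before classification. The synthetic data and the nearest-centroid classifier are illustrative assumptions, not methods from any specific reviewed study.

```python
import numpy as np

def early_fusion(image_feats, biomarker_feats):
    """Feature-level fusion: z-score each source, then concatenate."""
    def z(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-9)
    return np.hstack([z(image_feats), z(biomarker_feats)])

def nearest_centroid(train_x, train_y, test_x):
    """A minimal classifier operating on the fused representation."""
    classes = np.unique(train_y)
    cents = np.array([train_x[train_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_x[:, None, :] - cents[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# two synthetic patient groups described by 4 image features + 2 biomarkers
rng = np.random.default_rng(0)
img = np.vstack([rng.normal(0.0, 1, (50, 4)), rng.normal(1.0, 1, (50, 4))])
bio = np.vstack([rng.normal(0.0, 1, (50, 2)), rng.normal(1.0, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
fused = early_fusion(img, bio)
pred = nearest_centroid(fused, y, fused)
```

Standardizing per source before concatenation keeps one modality from dominating purely through its numeric scale, which is the usual motivation for this design choice.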

    Ultrasound tissue perfusion imaging

    Enhanced blood perfusion in a tissue mass is an indication of neo-vascularity and potential malignancy. Ultrasonic pulsed Doppler imaging is a safe and economical modality for noninvasive monitoring of blood flow. However, weak blood echoes make it difficult to detect perfusion using standard methods without the expense of contrast enhancement. Additionally, imaging requires high sensitivity to slow, disorganized blood-flow patterns while simultaneously rejecting clutter and noise. An approach to address these challenges involves arranging acquisition data in a multi-dimensional structure to facilitate the characterization and separation of independent scattering sources. The resulting data array involves a linear combination of spatial, slow-time (kHz-order sampling), and frame-time (Hz-order sampling) coordinates. Applying an eigenfilter that exploits higher-order singular value decomposition (HOSVD) can technically transform the array and reduce the dimensions to yield power estimates for blood flow and perfusion that are well isolated from tissue clutter. Studies using microcirculation-mimicking simulations and phantoms enable the optimization of the filtering algorithm to maximize estimation efficiency. These techniques are applied to murine models of ischemia and melanoma at 24 MHz to form perfusion images. The results show enhancements of tissue perfusion maps, which help researchers access lesions without contrast enhancement. In a study aimed at peripheral artery disease (PAD), the enhanced sensitivity and specificity of ultrasonic-pulsed-Doppler imaging enable differentiation of perfusion between healthy and ischemic states. In addition, the use of the new ultrasound imaging coupled with other imaging modalities helps to illuminate the complex mechanism that mediates neovascularization in response to vascular occlusion. 
Consequently, these techniques have the potential to increase the effectiveness of existing medical imaging technologies in safe, cost-effective ways that promote sustainable medicine.
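
The clutter-rejection step can be illustrated, in simplified form, with an ordinary SVD on a Casorati (space × slow-time) matrix: tissue clutter is strong and spatially coherent, so it concentrates in the leading singular components, and discarding them leaves the weak blood signal. The full method uses a higher-order SVD across spatial, slow-time, and frame-time modes; the rank cutoff and the synthetic signals below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_space, n_time = 64, 50
t = np.arange(n_time)

# strong, spatially coherent "tissue" clutter (slow, common motion)
tissue = 10.0 * np.outer(rng.normal(size=n_space), np.cos(0.05 * t))
# weak, spatially incoherent "blood" signal with faster fluctuation
blood = rng.normal(size=(n_space, n_time)) * np.cos(0.8 * t)

casorati = tissue + blood

# eigen-based clutter filter: zero the leading singular components
u, s, vt = np.linalg.svd(casorati, full_matrices=False)
rank_cut = 2                      # assumed clutter rank
s_f = s.copy(); s_f[:rank_cut] = 0.0
filtered = (u * s_f) @ vt

power_before = np.mean(casorati ** 2)
power_after = np.mean(filtered ** 2)   # dominated by the blood signal
```

Extending this to a third (frame-time) mode, as the dissertation does with HOSVD, lets the filter additionally separate slow perfusion dynamics from frame-to-frame tissue motion.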

    Multi Modality Brain Mapping System (MBMS) Using Artificial Intelligence and Pattern Recognition

    A Multimodality Brain Mapping System (MBMS), comprising one or more scopes (e.g., microscopes or endoscopes) coupled to one or more processors, wherein the one or more processors obtain training data from one or more first images and/or first data, wherein one or more abnormal regions and one or more normal regions are identified; receive a second image captured by one or more of the scopes at a later time than the one or more first images and/or first data and/or captured using a different imaging technique; and generate, using machine learning trained using the training data, one or more viewable indicators identifying one or more abnormalities in the second image, wherein the one or more viewable indicators are generated in real time as the second image is formed. One or more of the scopes display the one or more viewable indicators on the second image.
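
The claimed workflow (learn from labeled regions of a first image, then flag abnormalities on a later second image as a viewable overlay) can be caricatured with per-patch intensity statistics. The patch size, the mean-intensity feature, and the nearest-class decision rule are assumptions for illustration only; the patent leaves the machine-learning model unspecified.

```python
import numpy as np

def patch_stats(img, size=8):
    """Split an image into non-overlapping patches and take each patch's mean."""
    h, w = img.shape
    p = img[: h - h % size, : w - w % size].reshape(h // size, size, w // size, size)
    return p.mean(axis=(1, 3))

def train(first_img, abnormal_mask, size=8):
    """Training data: per-patch statistics of the labeled normal/abnormal regions."""
    feats = patch_stats(first_img, size)
    labels = patch_stats(abnormal_mask.astype(float), size) > 0.5
    return feats[labels].mean(), feats[~labels].mean()

def overlay(second_img, model, size=8):
    """Viewable indicator: boolean map flagging patches nearer the abnormal class."""
    ab_mean, no_mean = model
    feats = patch_stats(second_img, size)
    return np.abs(feats - ab_mean) < np.abs(feats - no_mean)

# first image: bright abnormal region labeled in the top-left corner
first = np.full((64, 64), 0.2); first[:16, :16] = 0.9
mask = np.zeros((64, 64), bool); mask[:16, :16] = True
model = train(first, mask)
# later second image: the bright region has moved; the overlay should follow it
second = np.full((64, 64), 0.2); second[32:48, 32:48] = 0.9
ind = overlay(second, model)
```

Because each patch is scored independently, the indicator map can be produced incrementally as the second image is scanned in, which is the sense in which such an overlay can run in real time.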

    Advanced Computational Methods for Oncological Image Analysis

    Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. To this end, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence) together with clinicians’ unique knowledge can be used to properly handle typical issues in evaluation/quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve result repeatability in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased to effectively perform image processing operations (such as segmentation, co-registration, classification, and dimensionality reduction) and multi-omics data integration.
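
Of the operations listed, dimensionality reduction is the easiest to make concrete: a principal component analysis via SVD projects high-dimensional imaging or omics features onto the few directions carrying most of the variance. This is a generic sketch on synthetic data, not a method from any specific study in the collection.

```python
import numpy as np

def pca(X, n_components):
    """Project centered data onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    u, s, vt = np.linalg.svd(Xc, full_matrices=False)
    # per-component variance, in decreasing order
    component_var = s ** 2 / (len(X) - 1)
    return Xc @ vt[:n_components].T, component_var

# 100 synthetic samples in 10-D whose variance lives mostly in 2 directions
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 10)) * 3.0
X = latent + rng.normal(size=(100, 10)) * 0.1
Z, var = pca(X, 2)
```

Two components suffice here because the data were generated from a 2-D latent structure; in practice the cutoff is chosen from the explained-variance profile.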