
    Volumetric analysis of plexiform neurofibroma for patients treated with trametinib

    Recent research suggests that trametinib, a MEK inhibitor, can treat plexiform neurofibroma (PN) lesions associated with neurofibromatosis type 1 (NF1). PNs can appear anywhere in the body near nerves. These tumors are distinguished by their unusual shape and irregular morphology, which make them difficult to measure. To evaluate trametinib's effectiveness in treating PNs, we propose a volumetric (3D) analysis based on magnetic resonance imaging (MRI) rather than the usual 1D and 2D measures. For this study, MRI scans were performed at roughly three-month intervals for thirty-four patients with PNs. I developed a semi-automatic method to segment PNs on MRI images, tested and validated the new approach, and submitted a manuscript describing the methodology and segmentation results for publication in the American Journal of Neuroradiology (AJNR). I also implemented a practical tool for accurately estimating tumor volume with this segmentation method, making it possible to track lesion changes reliably throughout the course of therapy. The volumetric analysis performed on the 34 participants enrolled in the clinical trial shows that trametinib reduced the median baseline lesion volume by around 20% over the 18-month treatment period.
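
    The abstract does not spell out how the volume estimate is derived from the segmentation; below is a minimal sketch in Python of the standard calculation, assuming a binary lesion mask and the voxel spacing from the MRI header (the function names are illustrative, not the author's tool).

    import numpy as np

    def lesion_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
        """Volume of a binary lesion mask in millilitres.

        mask       -- 3D boolean/0-1 array from the semi-automatic segmentation
        spacing_mm -- voxel spacing (dx, dy, dz) in millimetres from the MRI header
        """
        voxel_volume_mm3 = float(np.prod(spacing_mm))
        return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

    def relative_change(baseline_ml: float, followup_ml: float) -> float:
        """Signed fractional change from baseline (e.g. -0.20 = 20% shrinkage)."""
        return (followup_ml - baseline_ml) / baseline_ml

    # Illustrative use: compare a follow-up scan against the baseline volume.
    # baseline = lesion_volume_ml(mask_t0, spacing)
    # followup = lesion_volume_ml(mask_t18, spacing)
    # print(f"volume change: {relative_change(baseline, followup):+.1%}")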

    Mobile Wound Assessment and 3D Modeling from a Single Image

    The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously hard-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges of chronic wound care. Chronic wounds and infections are the most severe, costly, and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of the wound on the body, for example, when the image is taken at close range. In our solution, end-users capture an image of the wound with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks and is stored securely in the cloud for remote tracking. We use an interactive semi-automated approach to allow users to specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To interactively view wounds in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In doing so, we demonstrate an approach to 3D wound reconstruction that works even from a single wound image.
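
    The projective texture mapping step referenced above is not detailed in the abstract; the sketch below shows the generic idea of obtaining per-vertex texture coordinates by projecting the body-model vertices into the wound photo, assuming a pinhole camera with known intrinsics and pose (the names are assumptions, and visibility/occlusion handling is omitted).

    import numpy as np

    def projective_uv(vertices: np.ndarray, K: np.ndarray, R: np.ndarray,
                      t: np.ndarray, image_size: tuple) -> np.ndarray:
        """Per-vertex texture coordinates from projecting mesh vertices into the photo.

        vertices   -- (N, 3) body-model vertices in world coordinates
        K          -- (3, 3) camera intrinsic matrix
        R, t       -- camera rotation (3, 3) and translation (3,)
        image_size -- (width, height) of the captured wound image
        Returns (N, 2) UV coordinates normalised to [0, 1].
        """
        cam = vertices @ R.T + t                   # world -> camera coordinates
        pix = cam @ K.T                            # camera -> homogeneous pixel coords
        uv = pix[:, :2] / pix[:, 2:3]              # perspective divide
        w, h = image_size
        return uv / np.array([w, h], dtype=float)  # normalise to texture space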

    Cloud-Based Benchmarking of Medical Image Analysis

    Medical imaging

    The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients, manually annotated by up to four raters, and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
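
    As a point of reference for the metric and fusion strategy mentioned above, here is a minimal sketch of a Dice overlap score and a flat majority vote over binary segmentations; the actual BRATS evaluation operates on several tumor sub-regions and uses a hierarchical vote, so this is only the simplest illustration.

    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        """Dice overlap between two binary segmentations (1.0 = perfect agreement)."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    def majority_vote(segmentations: list) -> np.ndarray:
        """Fuse binary segmentations: a voxel is foreground if most algorithms agree."""
        votes = np.sum([s.astype(bool) for s in segmentations], axis=0)
        return votes > len(segmentations) / 2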

    Combining Shape and Learning for Medical Image Analysis

    Automatic methods able to make accurate, fast, and robust assessments of medical images are in high demand in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and an accuracy comparable to that of an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods meet these requirements. The focus of this thesis is two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is an important sub-routine in many image analysis tools as well as in image fusion, disease progress tracking, and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities: pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound, and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.
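
    Of the shape-modelling techniques listed above, multi-atlas segmentation admits a compact illustration; the sketch below assumes the atlas label maps have already been registered to the target image and applies the simplest, unweighted fusion variant, not the specific methods proposed in the thesis.

    import numpy as np

    def multi_atlas_fusion(warped_labels: list, num_classes: int) -> np.ndarray:
        """Majority-vote label fusion over atlas label maps registered to the target.

        warped_labels -- list of integer label volumes, one per atlas, in target space
        num_classes   -- number of anatomical labels (including background = 0)
        Returns the per-voxel label receiving the most atlas votes.
        """
        votes = np.zeros((num_classes,) + warped_labels[0].shape, dtype=np.int32)
        for labels in warped_labels:
            for c in range(num_classes):
                votes[c] += (labels == c)
        return votes.argmax(axis=0)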

    Image Enhancement and Segmentation Techniques for Detection of Knee Joint Diseases: A Survey

    Knee bone diseases are rare but can be highly destructive. Magnetic resonance imaging (MRI) is the main approach for identifying knee cancer and guiding its treatment. Typically, knee cancers are located with the help of different MRI techniques, and image analysis strategies are then used to interpret the resulting images. Computer-based medical image analysis is attracting researchers' interest because of its advantages in speed and accuracy over traditional techniques. The focus of the current research is MRI-based medical image analysis for knee bone disease detection. Accordingly, several feature extraction and segmentation approaches for knee bone cancer are analyzed and compared on a benchmark database. Finally, the current state of the art is reviewed and future directions are proposed.

    Discovering a Domain Knowledge Representation for Image Grouping: Multimodal Data Modeling, Fusion, and Interactive Learning

    In visually-oriented specialized medical domains such as dermatology and radiology, physicians explore interesting image cases from medical image repositories for comparative case studies to aid clinical diagnoses, educate medical trainees, and support medical research. However, general image classification and retrieval approaches fail to group medical images from the physicians' viewpoint. This is because fully-automated learning techniques cannot yet bridge the gap between image features and domain-specific content, owing to the absence of expert knowledge. Understanding how experts get information from medical images is therefore an important research topic. As a prior study, we conducted data elicitation experiments, where physicians were instructed to inspect each medical image towards a diagnosis while describing the image content to a student seated nearby. Experts' eye movements and their verbal descriptions of the image content were recorded to capture various aspects of expert image understanding. This dissertation aims at an intuitive approach to extracting expert knowledge, which is to find patterns in expert data elicited from image-based diagnoses. These patterns are useful for understanding both the characteristics of the medical images and the experts' cognitive reasoning processes. The transformation from the viewed raw image features to interpretation as domain-specific concepts requires experts' domain knowledge and cognitive reasoning. This dissertation also approximates this transformation using a matrix factorization-based framework, which helps project multiple expert-derived data modalities to high-level abstractions. To combine additional expert interventions with computational processing capabilities, an interactive machine learning paradigm is developed to treat experts as an integral part of the learning process. Specifically, experts refine the medical image groups presented by the learned model locally, to incrementally re-learn the model globally. This paradigm avoids onerous expert annotations for model training, while aligning the learned model with experts' sense-making.
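
    The abstract names a matrix factorization-based framework for projecting expert-derived modalities to high-level abstractions but gives no formulation; the sketch below illustrates the general idea with an off-the-shelf non-negative matrix factorization over concatenated gaze and verbal features (the feature names and fusion scheme are assumptions, not the dissertation's specific model).

    import numpy as np
    from sklearn.decomposition import NMF

    def fuse_modalities(gaze_features: np.ndarray, verbal_features: np.ndarray,
                        n_concepts: int = 10) -> np.ndarray:
        """Project two expert-derived modalities into a shared low-rank concept space.

        gaze_features, verbal_features -- (n_images, d1) and (n_images, d2)
        non-negative feature matrices (e.g. fixation histograms and term counts).
        Returns an (n_images, n_concepts) embedding usable for grouping images.
        """
        X = np.hstack([gaze_features, verbal_features])  # simple feature-level fusion
        model = NMF(n_components=n_concepts, init="nndsvda",
                    max_iter=500, random_state=0)
        return model.fit_transform(X)                    # per-image concept weights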