
    Computer-aided detection of metastatic brain tumors using automated three-dimensional template matching

    Purpose: To demonstrate the efficacy of an automated three-dimensional (3D) template matching-based algorithm in detecting brain metastases on conventional MR scans and the potential of our algorithm to be developed into a computer-aided detection tool that will allow radiologists to maintain a high level of detection sensitivity while reducing image reading time. Materials and Methods: Spherical tumor appearance models were created to match the expected geometry of brain metastases while accounting for partial volume effects and offsets due to the cut of MRI sampling planes. A 3D normalized cross-correlation coefficient was calculated between the brain volume and spherical templates of varying radii using a fast frequency-domain algorithm to identify likely positions of brain metastases. Results: Algorithm parameters were optimized on training datasets, and data were then collected on 22 patient datasets containing 79 total brain metastases, producing a sensitivity of 89.9% with a false positive rate of 0.22 per image slice when restricted to the brain mass. Conclusion: Study results demonstrate that the 3D template matching-based method can be an effective, fast, and accurate approach that could serve as a useful tool for assisting radiologists in providing earlier and more definitive diagnoses of metastases within the brain. J. Magn. Reson. Imaging 2010;31:85–93. © 2009 Wiley-Liss, Inc. Peer reviewed.
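As a concrete illustration of the frequency-domain matching step described above, the following Python sketch computes a 3D normalized cross-correlation map between a brain volume and a binary spherical template via FFT-based convolutions (Lewis' fast NCC). Function names, the radii, and the detection threshold are illustrative assumptions, not values from the paper.

```python
# A minimal sketch, assuming numpy/scipy; names such as `spherical_template`
# and `ncc_volume` are illustrative, not taken from the paper.
import numpy as np
from scipy.signal import fftconvolve

def spherical_template(radius, voxel_size=1.0):
    """Binary sphere on a cubic grid; a crude stand-in for the paper's
    partial-volume-aware tumor appearance models."""
    r_vox = int(np.ceil(radius / voxel_size))
    coords = np.arange(-r_vox, r_vox + 1) * voxel_size
    zz, yy, xx = np.meshgrid(coords, coords, coords, indexing="ij")
    return (zz**2 + yy**2 + xx**2 <= radius**2).astype(np.float32)

def ncc_volume(volume, template, eps=1e-6):
    """3D normalized cross-correlation of `template` over `volume`,
    computed with FFT-based convolutions."""
    t = template - template.mean()
    t_norm = np.sqrt((t**2).sum())
    n = template.size
    ones = np.ones_like(template)
    local_sum = fftconvolve(volume, ones, mode="same")        # sum of I under the window
    local_sum_sq = fftconvolve(volume**2, ones, mode="same")  # sum of I^2 under the window
    local_var = np.maximum(local_sum_sq - local_sum**2 / n, 0.0)
    corr = fftconvolve(volume, t[::-1, ::-1, ::-1], mode="same")  # correlation with zero-mean template
    return corr / (np.sqrt(local_var) * t_norm + eps)

# Candidate metastasis locations: high NCC over templates of several radii (radii in mm are assumed).
volume = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in for a post-Gd T1-weighted volume
scores = np.max([ncc_volume(volume, spherical_template(r)) for r in (2.0, 4.0, 6.0)], axis=0)
candidates = np.argwhere(scores > 0.5)  # threshold chosen for illustration only
```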

    Development of a Computer-Aided Diagnosis System for Brain Metastases on Magnetic Resonance Imaging Using Machine-Learning Algorithms

    Doctoral dissertation, Department of Medicine, College of Medicine, Seoul National University Graduate School, February 2018. Advisor: Prof. Chul-Ho Sohn (손철호). Purpose: To assess the effect of computer-aided detection (CAD) of brain metastasis (BM) on radiologists' diagnostic performance in interpreting three-dimensional brain magnetic resonance (MR) imaging, using follow-up imaging and consensus as the reference standard. Materials and Methods: The institutional review board approved this retrospective study. The study cohort consisted of 110 consecutive patients with BM and 30 patients without BM. The training data set included MR images of 80 patients with 450 BM nodules. The test set included MR images of 30 patients with 134 BM nodules and 30 patients without BM. We developed a CAD system for BM detection using template-matching and K-means clustering algorithms for candidate detection and an artificial neural network for false-positive reduction. Four reviewers (two neuroradiologists and two radiology residents) interpreted the test set images before and after the use of CAD in a sequential manner. The sensitivity, false positives (FPs) per case, and reading time were analyzed. A jackknife free-response receiver operating characteristic (JAFROC) method was used to determine the improvement in diagnostic accuracy. Results: The sensitivity of CAD was 87.3% with 302.4 FPs per case. CAD significantly improved the diagnostic performance of the four reviewers, with a figure of merit (FOM) of 0.874 (without CAD) vs. 0.898 (with CAD) according to JAFROC analysis (p < 0.01). Statistically significant improvement was noted only for the less-experienced reviewers (FOM without vs. with CAD, 0.834 vs. 0.877, p < 0.01). The additional time required to review the CAD results was approximately 72 sec (40% of the total review time). Conclusion: CAD as a second reader helps radiologists improve their diagnostic performance in the detection of BM on MR imaging, particularly less-experienced reviewers.
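The false-positive-reduction stage described above (an artificial neural network applied to candidates produced by template matching and K-means clustering) can be sketched as follows with scikit-learn. The feature set, network size, candidate counts, and threshold are assumptions for illustration, not the thesis' actual configuration.

```python
# A minimal sketch of ANN-based false-positive reduction over candidate features;
# the features, network size, and counts below are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Each detected candidate is summarized by features such as template-matching
# score, volume, sphericity, and mean/std intensity; random stand-ins here.
X_train = rng.normal(size=(2450, 5))
y_train = np.r_[np.ones(450), np.zeros(2000)]   # 1 = true BM nodule, 0 = false positive

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
clf.fit(X_train, y_train)

X_candidates = rng.normal(size=(10, 5))               # features of new CAD candidates
keep = clf.predict_proba(X_candidates)[:, 1] > 0.5    # retain candidates scored as metastases
```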

    Advanced Algorithms for 3D Medical Image Data Fusion in Specific Medical Problems

    Image fusion is one of today's most common and still challenging tasks in medical imaging, and it plays a crucial role in all areas of medical care such as diagnosis, treatment, and surgery. Three projects that depend crucially on image fusion are introduced in this thesis. The first project deals with 3D CT subtraction angiography of the lower limbs: it combines pre-contrast and contrast-enhanced data to extract the blood vessel tree. The second project fuses DTI and T1-weighted MRI brain data; its aim is to combine structural and functional information of the brain to improve knowledge about intrinsic brain connectivity. The third project deals with time series of CT spine data in which metastases occur; the progression of metastases within the vertebrae is studied based on fusion of the successive elements of the image series, and the thesis introduces a new methodology for classifying metastatic tissue. All of the projects mentioned in this thesis were solved within the medical image analysis group led by Prof. Jiří Jan. This dissertation concerns primarily the registration part of the first project and the classification part of the third project; the second project is described completely. The remaining parts of the first and third projects, including the specific preprocessing of the data, are described in detail in the dissertation of my colleague Roman Peter, M.Sc.
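The core of the first project (3D CT subtraction angiography) can be sketched as follows with SimpleITK, under the assumption that a rigid intra-patient registration and a simple enhancement threshold are enough for illustration; the actual thesis pipeline is more elaborate, and the parameter values below are not taken from it.

```python
# A minimal sketch, assuming SimpleITK; the threshold and registration settings
# are illustrative, not the thesis' actual parameters.
import SimpleITK as sitk

def subtraction_angiography(pre_path, post_path, enhancement_hu=100.0):
    pre = sitk.ReadImage(pre_path, sitk.sitkFloat32)    # non-contrast CT
    post = sitk.ReadImage(post_path, sitk.sitkFloat32)  # contrast-enhanced CT

    # Rigid intra-patient registration of the pre-contrast scan onto the
    # contrast-enhanced scan (Mattes mutual information, gradient descent).
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(50)
    reg.SetOptimizerAsRegularStepGradientDescent(1.0, 1e-4, 200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(post, pre, sitk.Euler3DTransform()))
    reg.SetInterpolator(sitk.sitkLinear)
    transform = reg.Execute(post, pre)

    pre_aligned = sitk.Resample(pre, post, transform, sitk.sitkLinear, 0.0)
    enhancement = post - pre_aligned              # contrast uptake in HU
    vessel_mask = enhancement > enhancement_hu    # crude blood vessel tree
    return sitk.Cast(vessel_mask, sitk.sitkUInt8)
```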

    Deep learning for brain metastasis detection and segmentation in longitudinal MRI data

    Brain metastases occur frequently in patients with metastatic cancer. Early and accurate detection of brain metastases is essential for treatment planning and prognosis in radiation therapy. To improve brain metastasis detection performance with deep learning, a custom detection loss called volume-level sensitivity-specificity (VSS) is proposed, which rates individual metastasis detection sensitivity and specificity at the (sub-)volume level. As sensitivity and precision are always a trade-off at the metastasis level, either a high sensitivity or a high precision can be achieved by adjusting the weights in the VSS loss, without a decline in the Dice coefficient of the segmented metastases. To reduce the number of metastasis-like structures detected as false positives, a temporal prior volume is proposed as an additional input of DeepMedic. The modified network is called DeepMedic+ for distinction. Our proposed VSS loss improves the sensitivity of brain metastasis detection for DeepMedic, increasing the sensitivity from 85.3% to 97.5%. Alternatively, it improves the precision from 69.1% to 98.7%. Comparing DeepMedic+ with DeepMedic under the same VSS loss, 44.4% of the false positive metastases are eliminated in the high-sensitivity model and the precision reaches 99.6% for the high-specificity model. The mean Dice coefficient over all metastases is about 0.81. With an ensemble of the high-sensitivity and high-specificity models, on average only 1.5 false positive metastases per patient need further review, while the majority of true positive metastases are confirmed. The ensemble is able to distinguish high-confidence true positive metastases from metastasis candidates that require special expert review or further follow-up, which fits the requirements of expert support in real clinical practice particularly well. Implementation is publicly available at https://github.com/YixingHuang/DeepMedicPlu
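For readers unfamiliar with this family of losses, the PyTorch sketch below shows a weighted sensitivity-specificity segmentation loss in the spirit of the VSS idea: a single weight trades the penalty on missed metastasis voxels against the penalty on false-positive voxels. It follows the classic voxel-wise sensitivity-specificity loss rather than the paper's exact volume-level formulation, which is available in the linked repository.

```python
# A minimal sketch of a weighted sensitivity-specificity loss; the weighting
# scheme is illustrative and differs from the paper's volume-level VSS loss.
import torch

def sensitivity_specificity_loss(pred, target, sensitivity_weight=0.95, eps=1e-6):
    """pred, target: float tensors of shape (batch, 1, D, H, W) with values in [0, 1]."""
    sq_err = (target - pred) ** 2
    fg, bg = target, 1.0 - target
    sens_term = (sq_err * fg).sum() / (fg.sum() + eps)   # penalizes missed tumor voxels
    spec_term = (sq_err * bg).sum() / (bg.sum() + eps)   # penalizes false positive voxels
    return sensitivity_weight * sens_term + (1.0 - sensitivity_weight) * spec_term

# Raising `sensitivity_weight` pushes the model toward high detection sensitivity;
# lowering it favors precision, mirroring the trade-off discussed in the abstract.
pred = torch.sigmoid(torch.randn(2, 1, 32, 32, 32))
target = (torch.rand(2, 1, 32, 32, 32) > 0.95).float()
loss = sensitivity_specificity_loss(pred, target)
```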

    Computation Framework for Lesion Detection and Response Assessment Based Upon Physiological Imaging for Supporting Radiation Therapy of Brain Metastases.

    Brain metastases are the most prevalent form of cancer in the central nervous system, and up to 45% of cancer patients eventually develop brain metastases during their illness. Selection of whole brain radiotherapy (WBRT) versus stereotactic radiosurgery, two routine treatments for brain metastases, depends strongly on the number and size of metastatic lesions in a patient. Our clinical investigations reveal that up to 40% of brain metastases with a diameter <5 mm can be missed in a routine clinical diagnosis using contrast-enhanced MRI. Hence, this dissertation initially describes the development of a template-matching based computer-aided detection (CAD) system for automatic detection of small lesions in post-Gd T1-weighted MRI to assist radiological diagnosis. Our results showed a significant improvement in detecting small lesions using the proposed methodology. When a cancer patient is given treatment, it is very important to assess the tumor response to therapy early. This is traditionally done by measuring a change in the gross tumor volume. However, changes in tumor physiology, which occur earlier than volumetric changes, have the potential to provide a better means of predicting tumor response to therapy and could also be used for therapy guidance. Assessment of tumor response to therapy, however, faces several challenges: the heterogeneous distribution of physiological parameters within a tumor, image mis-registration caused by tumor shrinkage or growth across follow-up time points, and the lack of methodologies combining information from different physiological viewpoints. Hence, this dissertation mainly focuses on the development of techniques overcoming these challenges using information from two important aspects of tumor physiology: vascular and cellularity properties derived from dynamic contrast-enhanced and diffusion-weighted MRI. Our proposed techniques were evaluated on lesions treated by either WBRT alone or WBRT combined with Bortezomib as a radiation sensitizer. We found that changes in both vascular and cellularity properties play an important but different role in predicting tumor response to therapy, depending on the tumor type and the underlying treatment. We also found that combining the two parameters provides a better tool for response assessment. PhD, Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
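As a toy illustration of combining the two physiological viewpoints mentioned above, the sketch below computes percent changes of a DCE-derived vascular parameter (here called Ktrans) and a DWI-derived cellularity surrogate (ADC) inside a co-registered tumor ROI and merges them into a single score. The weighting, threshold, and data are assumptions for illustration, not the dissertation's actual response model.

```python
# A minimal, illustrative sketch of a combined vascular/cellularity response
# metric; thresholds and the linear combination are assumptions.
import numpy as np

def percent_change(baseline_map, followup_map, roi_mask):
    base = baseline_map[roi_mask].mean()
    follow = followup_map[roi_mask].mean()
    return 100.0 * (follow - base) / base

def combined_response_score(dktrans_pct, dadc_pct, w_vascular=0.5):
    # Falling Ktrans (less perfusion/permeability) and rising ADC (lower
    # cellularity) are both treated here as signs of response.
    return w_vascular * (-dktrans_pct) + (1.0 - w_vascular) * dadc_pct

rng = np.random.default_rng(1)
roi = np.zeros((64, 64, 64), dtype=bool)
roi[24:40, 24:40, 24:40] = True                       # co-registered tumor ROI (stand-in)
ktrans_pre, ktrans_post = rng.random((64, 64, 64)), rng.random((64, 64, 64)) * 0.8
adc_pre, adc_post = rng.random((64, 64, 64)) + 0.5, rng.random((64, 64, 64)) + 0.7

score = combined_response_score(
    percent_change(ktrans_pre, ktrans_post, roi),
    percent_change(adc_pre, adc_post, roi),
)
responding = score > 10.0   # illustrative decision threshold
```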

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too much to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can serve as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer, which is diagnosed through the detection of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase morbidity and mortality. Finding ways to accurately detect radiation-induced lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases can be used to estimate elasticity, ventilation, and texture features that provide discriminatory descriptors for early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed that requires three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from the radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform a crucial step that ensures the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heart beats, and differences in scanning parameters so that the functionality features of the lung fields can be extracted accurately.
The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose. Finally, detection of radiation-induced lung injury is introduced, which combines the previous two medical image processing and analysis steps with feature estimation and classification. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that is able to accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
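The ventilation and elasticity features described above follow directly from the registration output; a minimal numpy sketch, assuming a dense voxel-wise displacement field, is shown below (function and variable names are illustrative).

```python
# A minimal sketch: Jacobian determinant (ventilation surrogate) and Lagrangian
# strain (elasticity surrogate) from a voxel-wise displacement field.
import numpy as np

def deformation_features(displacement, spacing=(1.0, 1.0, 1.0)):
    """displacement: array of shape (3, D, H, W) with components ordered (z, y, x)."""
    # Displacement gradient du_i/dx_j, shape (3, 3, D, H, W).
    grads = np.stack([np.stack(np.gradient(displacement[c], *spacing), axis=0)
                      for c in range(3)], axis=0)
    identity = np.eye(3).reshape(3, 3, 1, 1, 1)
    F = identity + grads                          # deformation gradient F = I + grad(u)
    F_vox = np.moveaxis(F, (0, 1), (-2, -1))      # shape (D, H, W, 3, 3)
    jacobian = np.linalg.det(F_vox)               # >1: local expansion (air inflow), <1: compression
    strain = 0.5 * (np.einsum("...ji,...jk->...ik", F_vox, F_vox) - np.eye(3))  # E = (F^T F - I) / 2
    return jacobian, strain

# Toy check: a uniform 5% stretch along z gives a Jacobian of about 1.05 everywhere.
disp = np.zeros((3, 32, 32, 32))
disp[0] = 0.05 * np.arange(32, dtype=float).reshape(32, 1, 1)
jacobian, strain = deformation_features(disp)
print(round(float(jacobian.mean()), 3))   # ≈ 1.05
```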

    Deep Learning Techniques for Multi-Dimensional Medical Image Analysis
