14 research outputs found

    Computer-aided detection of polyps in CT colonography

    Master's (Master of Engineering)

    Enhanced computer assisted detection of polyps in CT colonography

    This thesis presents a novel technique for automatically detecting colorectal polyps in computed tomography colonography (CTC). The objective of the documented computer assisted diagnosis (CAD) technique is to deal with the issue of false positive detections without adversely affecting polyp detection sensitivity. The thesis begins with an overview of CTC and a review of the associated research areas, with particular attention given to CAD-CTC. This review identifies excessive false positive detections as a common problem associated with current CAD-CTC techniques. Addressing this problem constitutes the major contribution of this thesis. The documented CAD-CTC technique is trained with, and evaluated using, a series of clinical CTC data sets. These data sets contain polyps with a range of different sizes and morphologies. The results presented in this thesis indicate the validity of the developed CAD-CTC technique and demonstrate its effectiveness in accurately detecting colorectal polyps while significantly reducing the number of false positive detections.

    Multidimensional image analysis of cardiac function in MRI

    Cardiac morphology is a key indicator of cardiac health. Important metrics that are currently in clinical use are left-ventricle cardiac ejection fraction, cardiac muscle (myocardium) mass, myocardium thickness and myocardium thickening over the cardiac cycle. Advances in imaging technologies have led to an increase in temporal and spatial resolution. Such an increase in data presents a laborious task for medical practitioners to analyse. In this thesis, measurement of cardiac left-ventricle function is achieved by developing novel methods for the automatic segmentation of the left-ventricle blood pool and the left-ventricle myocardium boundaries. A preliminary challenge faced in this task is the removal of noise from Magnetic Resonance Imaging (MRI) data, which is addressed by using advanced data filtering procedures. Two mechanisms for left-ventricle segmentation are employed. Firstly, segmentation of the left-ventricle blood pool for the measurement of ejection fraction is undertaken in the signal intensity domain. Utilising the high discrimination between blood and tissue, a novel methodology based on a statistical partitioning method offers success in localising and segmenting the blood pool of the left ventricle. From this initialisation, the estimation of the outer wall (epicardium) of the left ventricle can be achieved using gradient information and prior knowledge. Secondly, a more involved method for extracting the myocardium of the left ventricle is developed that can better perform segmentation in higher dimensions. Spatial information is incorporated in the segmentation by employing a gradient-based boundary evolution. A level-set scheme is implemented and a novel formulation for the extraction of the cardiac muscle is introduced. Two surfaces, representing the inner and the outer boundaries of the left ventricle, are simultaneously evolved using a coupling function and supervised with a probabilistic model of expertly assisted manual segmentations.
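    As a minimal illustration of how a segmented left-ventricle blood pool feeds into the ejection-fraction metric mentioned above, the Python sketch below computes cavity volumes from binary masks at end-diastole and end-systole. The function names and the synthetic spherical masks are illustrative assumptions, not code from the thesis.

        import numpy as np

        def blood_pool_volume(mask, voxel_spacing_mm):
            """Volume (in mL) of a binary left-ventricle blood-pool mask."""
            voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
            return mask.sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

        def ejection_fraction(ed_mask, es_mask, voxel_spacing_mm):
            """Ejection fraction from end-diastolic and end-systolic segmentations."""
            edv = blood_pool_volume(ed_mask, voxel_spacing_mm)
            esv = blood_pool_volume(es_mask, voxel_spacing_mm)
            return (edv - esv) / edv  # fraction of the cavity volume expelled per beat

        # Synthetic example: two spheres standing in for the segmented cavity.
        grid = np.indices((64, 64, 64)) - 32
        ed = np.linalg.norm(grid, axis=0) < 20   # end-diastole: larger cavity
        es = np.linalg.norm(grid, axis=0) < 14   # end-systole: smaller cavity
        print(f"EF = {ejection_fraction(ed, es, (1.5, 1.5, 8.0)):.1%}")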

    Eigenspace Template Matching for Detection of Lacunar Infarcts on MR Images

    Detection of lacunar infarcts is important because their presence indicates an increased risk of severe cerebral infarction. However, accurate identification is often hindered by the difficulty in distinguishing between lacunar infarcts and enlarged Virchow-Robin spaces. Therefore, we developed a computer-aided detection (CAD) scheme for the detection of lacunar infarcts. Although our previous CAD method achieved a sensitivity of 96.8% with 0.71 false positives (FPs) per slice, further reduction of FPs remained an issue for clinical application. Thus, the purpose of this study is to improve our CAD scheme by using template matching in the eigenspace. Conventional template matching is useful for the reduction of FPs, but it has the following two pitfalls: (1) it needs to maintain a large number of templates to improve the detection performance, and (2) calculation of the cross-correlation coefficient with these templates is time consuming. To solve these problems, we used template matching in the lower-dimensional space produced by a principal component analysis. Our database comprised 1,143 T1- and T2-weighted images obtained from 132 patients. The proposed method was evaluated by using twofold cross-validation. By using this method, 34.1% of FPs were eliminated compared with our previous method. The final performance indicated that the sensitivity of the detection of lacunar infarcts was 96.8% with 0.47 FPs per slice. Therefore, the modified CAD scheme could improve the FP rate without a significant reduction in the true positive rate.
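    The Python sketch below illustrates the general idea of template matching in a PCA eigenspace: candidate patches are compared to templates after projection onto a small number of principal components, avoiding a full cross-correlation against every template. The patch size, component count and nearest-template scoring are illustrative assumptions rather than the authors' exact formulation.

        import numpy as np

        def build_eigenspace(templates, n_components=10):
            """Project flattened templates into a low-dimensional eigenspace (PCA)."""
            X = templates.reshape(len(templates), -1).astype(float)
            mean = X.mean(axis=0)
            Xc = X - mean
            # Right singular vectors = principal axes of the template set.
            _, _, vt = np.linalg.svd(Xc, full_matrices=False)
            components = vt[:n_components]
            coeffs = Xc @ components.T          # templates expressed in eigenspace
            return mean, components, coeffs

        def eigenspace_similarity(patch, mean, components, coeffs):
            """Score a candidate patch by its distance to the nearest template in
            eigenspace (cheaper than cross-correlating against every template)."""
            p = patch.reshape(-1).astype(float) - mean
            proj = components @ p               # project the candidate into eigenspace
            dists = np.linalg.norm(coeffs - proj, axis=1)
            return -dists.min()                 # higher score = more template-like

        # Illustrative use: 50 random 16x16 "templates" and one candidate patch.
        rng = np.random.default_rng(0)
        templates = rng.normal(size=(50, 16, 16))
        mean, comps, coeffs = build_eigenspace(templates)
        print(eigenspace_similarity(rng.normal(size=(16, 16)), mean, comps, coeffs))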

    Pattern Recognition

    Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications and others. The signals processed are commonly one, two or three dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented works. The authors of these 25 works present and advocate recent achievements of their research related to the field of pattern recognition.

    A framework for tumor segmentation and interactive immersive visualization of medical image data for surgical planning

    This dissertation presents the framework for analyzing and visualizing digital medical images. Two new segmentation methods have been developed: a probability based segmentation algorithm, and a segmentation algorithm that uses a fuzzy rule based system to generate similarity values for segmentation. A visualization software application has also been developed to effectively view and manipulate digital medical images on a desktop computer as well as in an immersive environment.
    For the probabilistic segmentation algorithm, image data are first enhanced by manually setting the appropriate window center and width, and if needed a sharpening or noise removal filter is applied. To initialize the segmentation process, a user places a seed point within the object of interest and defines a search region for segmentation. Based on the pixels' spatial and intensity properties, a probabilistic selection criterion is used to extract pixels with a high probability of belonging to the object. To facilitate the segmentation of multiple slices, an automatic seed selection algorithm was developed to keep the seeds in the object as its shape and/or location changes between consecutive slices.
    The second segmentation method, a new segmentation method using a fuzzy rule based system to segment tumors in three-dimensional CT data, was also developed. To initialize the segmentation process, the user selects a region of interest (ROI) within the tumor in the first image of the CT study set. Using the ROI's spatial and intensity properties, fuzzy inputs are generated for use in the fuzzy rules inference system. Using a set of predefined fuzzy rules, the system generates a defuzzified output for every pixel in terms of similarity to the object. Pixels with the highest similarity values are selected as tumor. This process is automatically repeated for every subsequent slice in the CT set without further user input, as the segmented region from the previous slice is used as the ROI for the current slice. This creates a propagation of information from the previous slices, used to segment the current slice. The membership functions used during the fuzzification and defuzzification processes are adaptive to the changes in the size and pixel intensities of the current ROI. The proposed method is highly customizable to suit the different needs of a user, requiring information from only a single two-dimensional image.
    Segmentation results from both algorithms showed success in segmenting the tumor from seven of the ten CT datasets with less than 10% false positive errors, and five test cases with less than 10% false negative errors. The consistency of the segmentation result statistics also showed a high repeatability factor, with low values of inter- and intra-user variability for both methods.
    The visualization software developed is designed to load and display any DICOM/PACS compatible three-dimensional image data for visualization and interaction in an immersive virtual environment. The software uses the open-source libraries DCMTK: DICOM Toolkit for parsing of digital medical images, Coin3D and SimVoleon for scenegraph management and volume rendering, and VRJuggler for virtual reality display and interaction. A user can apply pseudo-coloring in real time with multiple interactive clipping planes to slice into the volume for an interior view. A windowing feature controls the tissue density ranges to display. A wireless gamepad controller as well as a simple and intuitive menu interface control user interactions. The software is highly scalable, as it can be used on anything from a single desktop computer to a cluster of computers for an immersive multi-projection virtual environment. By wearing a pair of stereo goggles, the surgeon is immersed within the model itself, thus providing a sense of realism as if the surgeon were inside the patient.
    The tools developed in this framework are designed to improve patient care by fostering the widespread use of advanced visualization and computational intelligence in preoperative planning, surgical training, and diagnostic assistance. Future work includes further improvements to both segmentation methods, with plans to incorporate the use of deformable models and level set techniques to include tumor shape features as part of the segmentation criteria. For the surgical planning components, additional controls and interactions with the simulated endoscopic camera, and the ability to segment the colon or a selected region of the airway for fixed-path navigation as a full virtual endoscopy tool, will also be implemented. (Abstract shortened by UMI.)
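    A minimal sketch of the seed-initialised, probability-driven region growing described above is given below, assuming a 2D slice, a Gaussian intensity model estimated from a small neighbourhood around the seed, and 4-connected growth within a user-defined search region; the acceptance threshold and neighbourhood size are illustrative simplifications of the dissertation's selection criterion.

        import numpy as np
        from collections import deque

        def probabilistic_region_grow(image, seed, search_radius=60, prob_thresh=0.1):
            """Grow a region from a seed point, accepting 4-connected neighbours whose
            intensity is likely under a Gaussian model fitted near the seed."""
            r, c = seed
            patch = image[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3]
            mu, sigma = patch.mean(), patch.std() + 1e-6

            def likelihood(v):
                # Gaussian likelihood, normalised so the mode maps to 1.0.
                return np.exp(-0.5 * ((v - mu) / sigma) ** 2)

            mask = np.zeros(image.shape, dtype=bool)
            queue = deque([seed])
            mask[seed] = True
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                            and not mask[ny, nx]
                            and abs(ny - r) <= search_radius
                            and abs(nx - c) <= search_radius
                            and likelihood(image[ny, nx]) >= prob_thresh):
                        mask[ny, nx] = True
                        queue.append((ny, nx))
            return mask

        # Usage (hypothetical seed placed inside the object of interest):
        # mask = probabilistic_region_grow(ct_slice, seed=(120, 150))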

    Edge cross-section profile for colonoscopic object detection

    Colorectal cancer is the second leading cause of cancer-related deaths, claiming close to 50,000 lives annually in the United States alone. Colonoscopy is an important screening tool that has contributed to a significant decline in colorectal cancer-related deaths. During colonoscopy, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the human colon. The video data is displayed on a monitor for real-time diagnosis by the endoscopist. Despite the success of colonoscopy in lowering cancer-related deaths, there remains a significant miss rate for the detection of both large polyps and cancers, estimated at around 4-12%. As a result, in recent years many computer-aided object detection techniques have been developed with the ultimate goal of assisting the endoscopist in lowering the polyp miss rate. Automatic object detection in video data recorded during colonoscopy is challenging due to the noisy nature of endoscopic images caused by camera motion, strong light reflections, the wide-angle lens that cannot be automatically focused, and the location and appearance variations of objects within the colon. The unique characteristics of colonoscopy video require new image/video analysis techniques. This dissertation presents our investigation of the edge cross-section profile (ECSP), a local appearance model, for colonoscopic object detection. We propose several methods to derive new features on the ECSP from its surrounding region pixels, its first-order derivative profile, and its second-order derivative profile. These ECSP features describe discriminative patterns for different types of objects in colonoscopy. The new algorithms and software using the ECSP features can effectively detect three representative types of objects and extract their corresponding semantic units in terms of both accuracy and analysis time. The main contributions of the dissertation are summarized as follows. The dissertation presents 1) a new ECSP calculation method and a feature-based ECSP method that extracts features on the ECSP for object detection, 2) an edgeless ECSP method that calculates the ECSP without using edges, 3) a part-based multi-derivative ECSP algorithm that segments the ECSP and its first- and second-order derivative functions into parts and models each part using the method that is suitable to that part, 4) ECSP-based algorithms for detecting three representative types of colonoscopic objects, including appendiceal orifices, endoscopes during retroflexion operations, and polyps, and for extracting videos or segmented shots containing these objects as semantic units, and 5) a software package that implements these techniques and provides meaningful visual feedback of the detected results to the endoscopist. Ideally, we would like the software to provide feedback to the endoscopist before the next video frame becomes available and to process video data at the rate at which the data are captured (typically about 30 frames per second (fps)). This real-time requirement is difficult to achieve using today's affordable off-the-shelf workstations. We aim to achieve near real-time performance, where the analysis and feedback complete at a rate of at least 1 fps. The dissertation has the following broad impacts. Firstly, the performance study shows that our proposed ECSP-based techniques are promising both in terms of detection rate and execution time for detecting the appearance of the three aforementioned types of objects in colonoscopy video.
    Our ECSP-based techniques can be extended both to detect other types of colonoscopic objects, such as diverticula, the lumen and vessels, and to analyze other endoscopy procedures, such as laparoscopy, upper gastrointestinal endoscopy, wireless capsule endoscopy and EGD. Secondly, to the best of our knowledge, our polyp detection system is the only computer-aided system that can warn the endoscopist of the appearance of polyps in near real time. Our retroflexion detection system is also the first computer-aided system that can detect retroflexion in near real time. Retroflexion is a maneuver used by the endoscopist to inspect colon areas that are hard to reach. The use of our system in future clinical trials may contribute to a decline in the polyp miss rate during live colonoscopy. Our system may also be used as a training platform for novice endoscopists. Lastly, the automatic documentation of detected semantic units of colonoscopic objects can help discover unknown patterns of colorectal cancers or new diseases and be used as an educational resource for endoscopic research.
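    To make the ECSP idea concrete, the sketch below samples an intensity profile along the normal of an edge point, derives its first- and second-order derivative profiles, and summarises them with a few simple features; the sampling scheme and the feature set are illustrative placeholders, not the dissertation's ECSP features.

        import numpy as np
        from scipy import ndimage

        def edge_cross_section_profile(image, point, normal, half_len=10):
            """Sample the intensity profile along the edge normal at an edge point,
            plus its first- and second-order derivative profiles."""
            ts = np.arange(-half_len, half_len + 1)
            ys = point[0] + ts * normal[0]          # normal is a unit vector (dy, dx)
            xs = point[1] + ts * normal[1]
            profile = ndimage.map_coordinates(image.astype(float), [ys, xs], order=1)
            d1 = np.gradient(profile)               # first-order derivative profile
            d2 = np.gradient(d1)                    # second-order derivative profile
            return profile, d1, d2

        def ecsp_features(profile, d1, d2):
            """Simple summary features of a profile (illustrative, not the ECSP set)."""
            return {
                "contrast": profile.max() - profile.min(),
                "max_slope": np.abs(d1).max(),
                "curvature_peak": np.abs(d2).max(),
            }

        # Usage (hypothetical edge point and unit normal on a grayscale frame):
        # profile, d1, d2 = edge_cross_section_profile(frame, point=(120, 200), normal=(0.0, 1.0))
        # feats = ecsp_features(profile, d1, d2)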

    Characterization and modelling of complex motion patterns

    Motion analysis underlies any interaction with the world, and the survival of living beings depends entirely on the efficiency of such analysis. Visual systems have developed remarkably efficient mechanisms that analyze motion at different levels, allowing objects to be recognized in dynamic and cluttered environments. In artificial vision there is a wide spectrum of applications for which the study of complex movements is crucial for recovering salient information. Although each domain may differ in terms of scenarios, complexity and relationships, a common denominator is that all of them require a dynamic understanding that captures the relevant information. Overall, current strategies are highly dependent on appearance characterization and are usually restricted to controlled scenarios. This thesis proposes a computational framework that is inspired by known motion perception mechanisms and is structured as a set of modules. Each module is in turn composed of a set of computational strategies that provide qualitative and quantitative descriptions of the dynamics associated with a particular movement. Diverse applications were considered herein and an extensive validation was performed for each of them. Each of the proposed strategies has been shown to be reliable at capturing the dynamic patterns of different tasks, identifying, recognizing, tracking and even segmenting objects in video sequences.
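    As one example of the kind of quantitative motion description such a framework builds on, the sketch below summarises frame-to-frame dynamics with dense optical flow (OpenCV's Farneback implementation); this is a standard off-the-shelf technique shown for illustration only, not one of the thesis's perception-inspired modules.

        import cv2
        import numpy as np

        def motion_descriptor(prev_gray, next_gray):
            """Dense optical flow between two 8-bit grayscale frames, summarised as a
            simple quantitative descriptor (mean speed and dominant direction)."""
            # Positional arguments: flow init, pyr_scale, levels, winsize,
            # iterations, poly_n, poly_sigma, flags.
            flow = cv2.calcOpticalFlowFarneback(
                prev_gray, next_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
            magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            return {
                "mean_speed": float(magnitude.mean()),            # pixels per frame
                "dominant_direction_rad": float(np.median(angle)),
            }

        # Usage (hypothetical consecutive 8-bit grayscale frames):
        # feats = motion_descriptor(frame_t, frame_t_plus_1)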

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too vast to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases to estimate elasticity, ventilation, and texture features provide discriminatory descriptors that can be used for early detection of radiation-induced lung injury. The proposed methodologies will lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed; it comprises three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from the radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform a crucial step that ensures the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heart beats, and differences in scanning parameters, so that the functionality features can be accurately extracted for the lung fields.
    The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose. Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with feature estimation and classification. This framework estimates and combines both texture and functional features. The texture features are modeled using the novel 7th-order Markov Gibbs random field (MGRF) model, which can accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
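    A minimal sketch of the functionality features described above follows: given a dense displacement field from the 4D-CT registration, the Jacobian determinant of the deformation serves as a ventilation (local volume change) surrogate, and the symmetric gradient of the displacement gives small-strain components usable as an elasticity descriptor. The array layout and the synthetic example are assumptions for illustration, not the dissertation's implementation.

        import numpy as np

        def ventilation_and_strain(displacement, spacing=(1.0, 1.0, 1.0)):
            """From a dense 3D displacement field u(x) (shape: 3 x Z x Y x X),
            estimate ventilation via the Jacobian determinant of the deformation
            x + u(x), and tissue strain via the symmetric gradient of u."""
            grads = np.stack([np.gradient(displacement[i], *spacing) for i in range(3)])
            # grads[i][j] = d u_i / d x_j, shape (3, 3, Z, Y, X)
            jac = grads + np.eye(3)[:, :, None, None, None]   # Jacobian of x + u(x)
            det = np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))
            ventilation = det - 1.0                           # local volume change
            strain = 0.5 * (grads + grads.transpose(1, 0, 2, 3, 4))  # small-strain tensor
            return ventilation, strain

        # Synthetic example: uniform 2% expansion along z.
        Z, Y, X = 8, 8, 8
        zz = np.indices((Z, Y, X))[0].astype(float)
        u = np.stack([0.02 * zz, np.zeros((Z, Y, X)), np.zeros((Z, Y, X))])
        vent, strain = ventilation_and_strain(u)
        print(vent.mean())   # ~0.02: voxels gain about 2% volume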