9 research outputs found

    Registration of pre-operative lung cancer PET/CT scans with post-operative histopathology images

    Non-invasive imaging modalities used in the diagnosis of lung cancer, such as Positron Emission Tomography (PET) or Computed Tomography (CT), currently provide insufficient information about the cellular make-up of the lesion microenvironment, unless they are compared against the gold standard of histopathology. The aim of this retrospective study was to build a robust imaging framework for registering in vivo and post-operative scans from lung cancer patients, in order to produce a global, pathology-validated multimodality map of the tumour and its surroundings. Initial experiments were performed on tissue-mimicking phantoms to test different shape reconstruction methods. The choice of interpolator and slice thickness was found to affect the algorithm's output in terms of overall volume and local feature recovery. In the second phase of the study, nine lung cancer patients referred for radical lobectomy were recruited. Resected specimens were inflated with agar, sliced at 5 mm intervals, and each cross-section was photographed. The tumour area was delineated on the block-face pathology images and on the preoperative PET/CT scans. Airway segments were also added to the reconstructed models to act as anatomical fiducials. Binary shapes were pre-registered by aligning their minimal bounding box axes and subsequently transformed using rigid registration. In addition, histopathology slides were matched to the block-face photographs using a moving least squares algorithm. A two-step validation process was used to evaluate the performance of the proposed method against manual registration carried out by experienced consultants. In two out of three cases, experts rated the results generated by the algorithm as the best output, suggesting that the developed framework outperforms the current standard practice.
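    As a rough illustration of the pre-registration step described above (binary tumour shapes aligned by their box axes before rigid refinement), the sketch below initialises a rigid transform by aligning the principal axes of two binary masks. The function names are placeholders, and PCA axes are used here as a stand-in for the minimal bounding box axes used in the study.

        # Sketch only: coarse rigid initialisation of a moving binary mask onto a
        # fixed one by aligning principal axes (an approximation of bounding-box
        # alignment). Eigenvector sign/ordering ambiguities are ignored for brevity.
        import numpy as np

        def principal_axes(mask):
            """Return the centroid and principal axes (columns) of a binary mask."""
            coords = np.argwhere(mask > 0).astype(float)
            centroid = coords.mean(axis=0)
            _, axes = np.linalg.eigh(np.cov((coords - centroid).T))
            return centroid, axes

        def pre_register(moving_mask, fixed_mask):
            """Rigid (rotation + translation) initialisation mapping moving -> fixed."""
            c_m, a_m = principal_axes(moving_mask)
            c_f, a_f = principal_axes(fixed_mask)
            rotation = a_f @ a_m.T              # rotate moving axes onto fixed axes
            translation = c_f - rotation @ c_m
            return rotation, translation

    A full rigid registration would then refine this initial estimate, which is the role the second registration step plays in the abstract above.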

    Image Registration Workshop Proceedings

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data that are being, and will continue to be, generated by newly developed sensors, automatic image registration has itself become an important research topic. This workshop presents a collection of very high quality work, grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

    Characterising pattern asymmetry in pigmented skin lesions

    In the clinical diagnosis of pigmented skin lesions, asymmetric pigmentation is often indicative of melanoma. This paper describes a method and measures for characterizing lesion symmetry. The estimate of mirror symmetry is computed first for a number of axes at different degrees of rotation with respect to the lesion centre. The statistics of these estimates are then used to assess the overall symmetry. The method is applied to three different lesion representations showing the overall pigmentation, the pigmentation pattern, and the pattern of dermal melanin. The best measure is a 100% sensitive and 96% specific indicator of melanoma on a test set of 33 lesions, with a separate training set consisting of 66 lesions.
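    A minimal sketch of the symmetry measure described above, assuming the lesion has been segmented and cropped around its centre: mirror-difference scores are computed for reflection axes at several orientations and then summarised. The angle set and the mean-absolute-difference score are illustrative choices, not necessarily those of the paper.

        # Sketch only: mirror-symmetry scores of a 2-D lesion image for axes at
        # several rotation angles, then simple summary statistics over the scores.
        import numpy as np
        from scipy.ndimage import rotate

        def symmetry_scores(lesion, angles_deg=range(0, 180, 10)):
            """Mean absolute difference between the lesion and its mirror image,
            evaluated for reflection axes at several orientations."""
            scores = []
            for angle in angles_deg:
                rotated = rotate(lesion, angle, reshape=False, order=1)
                mirrored = np.flip(rotated, axis=1)   # reflect about the vertical axis
                scores.append(np.abs(rotated - mirrored).mean())
            return np.asarray(scores)

        def symmetry_statistics(lesion):
            """Summary statistics usable as candidate asymmetry features."""
            s = symmetry_scores(lesion)
            return {"best": s.min(), "mean": s.mean(), "spread": s.std()}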

    Combining global and local information for the segmentation of MR images of the brain

    Magnetic resonance imaging can provide high resolution volumetric images of the brain with exceptional soft tissue contrast. These factors allow the complex structure of the brain to be clearly visualised. This has led to the development of quantitative methods to analyse neuroanatomical structures. In turn, this has promoted the use of computational methods to automate and improve these techniques. This thesis investigates methods to accurately segment MR images of the brain. The use of global and local image information is considered, where global information includes image intensity distributions, means and variances, and local information is based on the relationship between spatially neighbouring voxels. Methods are explored that aim to improve the classification and segmentation of MR images of the brain by combining these elements. Some common artefacts exist in MR brain images that can be seriously detrimental to image analysis methods. Methods to correct for these artefacts are assessed by exploring their effect, first with some well established classification methods and then with methods that combine global information with local information in the form of a Markov random field model. Another characteristic of MR images is the partial volume effect, which occurs where signals from different tissues become mixed over the finite volume of a voxel. This effect is demonstrated and quantified using a simulation. Analysis methods that address these issues are tested on simulated and real MR images. They are also applied to study the structure of the temporal lobes in a group of patients with temporal lobe epilepsy. The results emphasise the benefits and limitations of applying these methods to a problem of this nature. The work in this thesis demonstrates the advantages of using global and local information together in the segmentation of MR brain images and proposes a generalised framework that allows this information to be combined in a flexible way.
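    The following sketch shows one common way of combining the two kinds of information discussed in the abstract: a global per-class Gaussian intensity model and a local Potts-style Markov random field prior, optimised with iterated conditional modes. The class parameters, the 4-neighbourhood and the smoothness weight beta are illustrative assumptions rather than the thesis' exact model.

        # Sketch only: ICM segmentation combining a Gaussian intensity likelihood
        # (global) with a Potts smoothness term over 4-neighbours (local).
        # Edge wrap-around from np.roll is ignored for brevity.
        import numpy as np

        def icm_segment(image, means, variances, beta=1.0, iters=5):
            """Label image minimising Gaussian data cost plus Potts smoothness."""
            k = len(means)
            data_cost = np.stack([
                0.5 * np.log(2 * np.pi * variances[c])
                + (image - means[c]) ** 2 / (2 * variances[c])
                for c in range(k)
            ], axis=-1)
            labels = data_cost.argmin(axis=-1)            # initialise from intensities alone
            neighbours = [(1, 0), (-1, 0), (1, 1), (-1, 1)]   # 4-connectivity in 2-D
            for _ in range(iters):
                smooth_cost = np.stack([
                    sum((np.roll(labels, s, axis=a) != c).astype(float)
                        for s, a in neighbours)
                    for c in range(k)
                ], axis=-1)
                labels = (data_cost + beta * smooth_cost).argmin(axis=-1)
            return labels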

    Visual Exploration And Information Analytics Of High-Dimensional Medical Images

    Data visualization has transformed how we analyze increasingly large and complex data sets. Advanced visual tools logically represent data in a way that communicates the most important information inherent within it and culminate the analysis with an insightful conclusion. Automated analysis disciplines - such as data mining, machine learning, and statistics - have traditionally been the dominant fields for data analysis. They have been complemented by a near-ubiquitous adoption of specialized hardware and software environments that handle the storage, retrieval, and pre- and post-processing of digital data. The addition of interactive visualization tools allows an active human participant in the model creation process. The advantage is a data-driven approach where the constraints and assumptions of the model can be explored and chosen based on human insight and confirmed on demand by the analytic system. This translates to a better understanding of data and more effective knowledge discovery. This trend has become very popular across various domains, including machine learning, simulation, computer vision, genetics, the stock market, data mining, and geography. In this dissertation, we highlight the role of visualization within the context of medical image analysis in the field of neuroimaging. The analysis of brain images has uncovered remarkable traits about the brain's underlying dynamics. Multiple image modalities capture qualitatively different internal brain mechanisms and abstract them within the information space of each modality. Computational studies based on these modalities help correlate high-level brain function measurements with abnormal human behavior. These functional maps are easily projected into physical space through accurate 3-D brain reconstructions and visualized in excellent detail from different anatomical vantage points. Statistical models built for comparative analysis across subject groups test for significant variance within the features and localize abnormal behaviors, contextualizing the high-level brain activity. Currently, the task of identifying the features is based on empirical evidence, and preparing data for testing is time-consuming. Correlations among features are usually ignored due to lack of insight. With a multitude of features available and new modalities emerging, the process of identifying the salient features and their interdependencies becomes more difficult to perceive. This limits the analysis to certain discernible features, thus limiting human judgments regarding the most important process that governs the symptom, and hindering prediction. These shortcomings can be addressed using an analytical system that leverages data-driven techniques for guiding the user toward discovering relevant hypotheses. The research contributions within this dissertation encompass multidisciplinary fields of study including geometry processing, computer vision, and 3-D visualization. However, the principal achievement of this research is the design and development of an interactive system for multimodality integration of medical images. The research proceeds in various stages, which are important to reach the desired goal. The different stages are briefly described as follows: First, we develop a rigorous geometry computation framework for brain surface matching. The brain is a highly convoluted structure of closed topology.
    Surface parameterization explicitly captures the non-Euclidean geometry of the cortical surface and helps derive a more accurate registration of brain surfaces. We describe a technique based on conformal parameterization that creates a bijective mapping to the canonical domain, where surface operations can be performed with improved efficiency and feasibility. Subdividing the brain into a finite set of anatomical elements provides the structural basis for a categorical division of anatomical viewpoints and a spatial context for statistical analysis. We present statistically significant results of our analysis of functional and morphological features for a variety of brain disorders. Second, we design and develop an intelligent and interactive system for visual analysis of brain disorders by utilizing the complete feature space across all modalities. Each subdivided anatomical unit is specialized by a vector of features that overlap within that element. The analytical framework provides the necessary interactivity for exploration of salient features and for discovering relevant hypotheses. It provides visualization tools for confirming model results and an easy-to-use interface for manipulating parameters for feature selection and filtering. It provides coordinated display views for visualizing multiple features across multiple subject groups, visual representations for highlighting interdependencies and correlations between features, and an efficient data-management solution for maintaining provenance and issuing formal data queries to the back end.
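    As a small illustration of the data structure such a system can be built on, the sketch below collects a per-region feature vector from several voxel-wise modality maps using an anatomical label image. The atlas, region ids and feature names are placeholders, not the dissertation's actual parcellation or feature set.

        # Sketch only: aggregate voxel-wise modality maps into one feature vector
        # per anatomical element, keyed by region id.
        import numpy as np

        def region_features(label_image, modality_maps):
            """Return {region_id: {feature_name: mean value within that region}}."""
            features = {}
            for region_id in np.unique(label_image):
                if region_id == 0:                    # 0 assumed to mean background
                    continue
                mask = label_image == region_id
                features[int(region_id)] = {
                    name: float(volume[mask].mean())
                    for name, volume in modality_maps.items()
                }
            return features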

    Analysis of contrast-enhanced medical images.

    Early detection of human organ diseases is of great importance for accurate diagnosis and the institution of appropriate therapies. This can potentially prevent progression to end-stage disease by detecting precursors that reflect organ functionality. In addition, it assists clinicians in therapy evaluation, tracking disease progression, and planning surgical operations. Advances in functional and contrast-enhanced (CE) medical imaging have enabled accurate noninvasive evaluation of organ functionality due to their ability to provide superior anatomical and functional information about the tissue-of-interest. The main objective of this dissertation is to develop a computer-aided diagnostic (CAD) system for analyzing complex data from CE magnetic resonance imaging (MRI). The developed CAD system has been tested in three case studies: (i) early detection of acute renal transplant rejection, (ii) evaluation of myocardial perfusion in patients with ischemic heart disease after heart attack, and (iii) early detection of prostate cancer. However, developing a noninvasive CAD system for the analysis of CE medical images is subject to multiple challenges, including, but not limited to: image noise and inhomogeneity; nonlinear signal intensity changes of the images over the time course of data acquisition; appearance and shape changes (deformations) of the organ-of-interest during data acquisition; and determination of the best features (indexes) that describe the perfusion of a contrast agent (CA) into the tissue. To address these challenges, this dissertation focuses on building new mathematical models and learning techniques that facilitate accurate analysis of CA perfusion in living organs, including: (i) accurate mathematical models for the segmentation of the object-of-interest, which integrate object shape and appearance features in terms of pixel/voxel-wise image intensities and their spatial interactions; (ii) motion correction techniques that combine both global and local models, which exploit geometric features rather than image intensities to avoid problems associated with nonlinear intensity variations of the CE images; and (iii) fusion of multiple features using a genetic algorithm. The proposed techniques have been integrated into CAD systems that have been tested in, but not limited to, three clinical studies. First, a noninvasive CAD system is proposed for the early and accurate diagnosis of acute renal transplant rejection using dynamic contrast-enhanced MRI (DCE-MRI). Acute rejection, the immunological response of the human immune system to a foreign kidney, is the most severe cause of renal dysfunction among other diagnostic possibilities, including acute tubular necrosis and immune drug toxicity. In the U.S., approximately 17,736 renal transplants are performed annually, and given the limited number of donors, transplanted kidney salvage is an important medical concern. Thus far, biopsy remains the gold standard for the assessment of renal transplant dysfunction, but only as a last resort because of its invasive nature, high cost, and potential morbidity rates. The diagnostic results of the proposed CAD system, based on the analysis of 50 independent in-vivo cases, were 96% with a 95% confidence interval. These results clearly demonstrate the promise of the proposed image-based diagnostic CAD system as a supplement to current technologies, such as nuclear imaging and ultrasonography, to determine the type of kidney dysfunction.
    Second, a comprehensive CAD system is developed for the characterization of myocardial perfusion and clinical status in heart failure and novel myoregeneration therapy using cardiac first-pass MRI (FP-MRI). Heart failure is considered the most important cause of morbidity and mortality in cardiovascular disease, affecting approximately 6 million U.S. patients annually. Ischemic heart disease is considered the most common underlying cause of heart failure. Therefore, detection of heart failure in its earliest forms is essential to prevent its relentless progression to premature death. While current medical studies focus on detecting pathological tissue and assessing contractile function of the diseased heart, this dissertation addresses the key issue of the effects of myoregeneration therapy on the associated blood nutrient supply. Quantitative and qualitative assessment in a cohort of 24 perfusion data sets demonstrated the ability of the proposed framework to reveal regional perfusion improvements with therapy, and transmural perfusion differences across the myocardial wall; thus, it can aid in follow-up on treatment for patients undergoing myoregeneration therapy. Finally, an image-based CAD system for early detection of prostate cancer using DCE-MRI is introduced. Prostate cancer is the most frequently diagnosed malignancy among men and remains the second leading cause of cancer-related death in the USA, with more than 238,000 new cases and a mortality rate of about 30,000 in 2013. Therefore, early diagnosis of prostate cancer can improve the effectiveness of treatment and increase the patient's chance of survival. Currently, needle biopsy is the gold standard for the diagnosis of prostate cancer. However, it is an invasive procedure with high costs and potential morbidity rates. Additionally, it has a higher possibility of producing false positive diagnoses due to the relatively small needle biopsy samples. Application of the proposed CAD system yielded promising results in a cohort of 30 patients and would, in the near future, represent a supplement to the current technologies used to determine prostate cancer type. The developed techniques have been compared to state-of-the-art methods and demonstrated higher accuracy, as shown in this dissertation. The proposed models (higher-order spatial interaction models, shape models, motion correction models, and perfusion analysis models) can be used in many of today's CAD applications for early detection of a variety of diseases and medical conditions, and are expected to notably amplify the accuracy of CAD decisions based on the automated analysis of CE images.
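    For concreteness, the sketch below computes a few basic perfusion indexes from a single contrast-agent time-intensity curve of the kind such a CAD system might analyse. The particular indexes (peak enhancement, time to peak, initial wash-in slope) and the baseline convention are illustrative assumptions, not the dissertation's feature set.

        # Sketch only: simple perfusion descriptors for one voxel or region from a
        # DCE time-intensity curve; the first few samples are taken as baseline.
        import numpy as np

        def perfusion_indexes(signal, times, baseline_points=3):
            signal = np.asarray(signal, dtype=float)
            times = np.asarray(times, dtype=float)
            baseline = signal[:baseline_points].mean()
            enhancement = signal - baseline
            peak_idx = int(enhancement.argmax())
            # Initial wash-in slope from the end of the baseline to the peak.
            slope = enhancement[peak_idx] / (times[peak_idx] - times[baseline_points - 1] + 1e-9)
            return {
                "peak_enhancement": float(enhancement[peak_idx]),
                "time_to_peak": float(times[peak_idx] - times[0]),
                "wash_in_slope": float(slope),
            }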

    Probabilistic partial volume modelling of biomedical tomographic image data

    EThOS - Electronic Theses Online Service, United Kingdom.

    Irish Machine Vision and Image Processing Conference Proceedings 2017


    Interslice interpolation of anisotropic 3D images using multiresolution contour correlation
