
    Sub-pixel Registration in Computational Imaging and Applications to Enhancement of Maxillofacial CT Data

    In computational imaging, data acquired by sampling the same scene or object at different times or from different orientations result in images in different coordinate systems. Registration is a crucial step for comparing, integrating, and fusing data obtained from different measurements. Tomography is the method of imaging a single plane or slice of an object. A Computed Tomography (CT) scan, also known as a CAT scan (Computed Axial Tomography scan), is a helical tomography that traditionally produces a 2D image of the structures in a thin section of the body. It uses X-rays, a form of ionizing radiation; although the actual dose is typically low, repeated scans should be limited. In dentistry, and implant dentistry in particular, there is a need for 3D visualization of internal anatomy, which is mainly based on CT scanning technologies. The CT scan has been the most important technological advancement in enhancing the clinician's ability to diagnose, treat, and plan dental implants. Advanced 3D modeling and visualization techniques permit highly refined and accurate assessment of the CT scan data. However, in addition to imperfections of the instrument and the imaging process, it is not uncommon to encounter other unwanted artifacts in the form of bright regions, flares, and erroneous pixels caused by dental bridges, metal braces, etc. Currently, removing and cleaning up the data from acquisition backscattering imperfections and unwanted artifacts is performed manually, so the result is only as good as the experience level of the technician. The process is also error-prone, since the editing must be performed image by image. We address some of these issues by proposing novel registration methods and by using stone-cast models of patients' dental imprints as reference ground-truth data. Stone-cast models were originally used by dentists to make complete or partial dentures. The CT scan of such a stone-cast model can be used to automatically guide the cleaning of a patient's CT scan from defects and unwanted artifacts, and also serves as an automatic segmentation system for outliers of the CT scan data without the use of stone-cast models. The segmented data are subsequently used to clean the scans of artifacts using a newly proposed 3D inpainting approach.
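    As a hedged illustration of the registration theme, the sketch below estimates a sub-pixel translation between two images with upsampled phase correlation via scikit-image. This is a generic technique, not the thesis's own method; the images and shift values are invented for the demo.

```python
# A minimal sketch of sub-pixel translational registration via upsampled
# phase correlation; a generic technique, not the thesis's method.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
reference = rng.random((256, 256))

# Simulate a second acquisition displaced by a known sub-pixel offset.
true_shift = (3.25, -1.75)
moving = nd_shift(reference, true_shift, order=3, mode="wrap")

# upsample_factor=100 resolves the shift to 1/100 of a pixel.
estimated, error, _ = phase_cross_correlation(reference, moving,
                                              upsample_factor=100)
# Under skimage's convention the result registers `moving` onto
# `reference`, i.e. approximately the negative of `true_shift`.
print("estimated shift:", estimated)
```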

    Contributions of Continuous Max-Flow Theory to Medical Image Processing

    Discrete graph cuts and continuous max-flow theory have created a paradigm shift in many areas of medical image processing. Where previous methods limited themselves to analytically solvable optimization problems, or guaranteed only local optimality for increasingly complex and non-convex functionals, current methods rely on describing an optimization problem as a series of general yet simple functionals with global, though non-analytic, solution algorithms. This has been increasingly spurred on by the availability of these general-purpose algorithms in an open-source context. Thus, graph cuts and max-flow have changed every aspect of medical image processing, from reconstruction to enhancement to segmentation and registration. To wax philosophical, continuous max-flow theory in particular has the potential to bring a high degree of mathematical elegance to the field, bridging the conceptual gap between the discrete and continuous domains in which we describe different imaging problems, properties, and processes. In Chapter 1, we use the notion of infinitely dense and infinitely densely connected graphs to transfer between the discrete and continuous domains; this has a certain sense of mathematical pedantry to it, but the resulting variational energy equations have a sense of elegance and charm. As with any application of the principle of duality, the variational equations have an enigmatic side that can only be decoded with time and patience. The goal of this thesis is to show the contributions of max-flow theory through image enhancement and segmentation, increasing the incorporation of topological considerations and the role played by user knowledge and interactivity. These methods are rigorously grounded in the calculus of variations, guaranteeing fuzzy optimality and providing multiple solution approaches to each individual problem.
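    To make the continuous min-cut/max-flow idea concrete, here is a minimal sketch of the standard convex relaxation: minimize <u, f> + alpha*TV(u) over u in [0, 1], solved with a generic Chambolle-Pock primal-dual loop and thresholded at 0.5. It stands in for the family of algorithms discussed, not the thesis's specific contributions; the two-region data term and all parameters are assumptions.

```python
# A hedged sketch of convex-relaxation ("continuous min-cut") binary
# segmentation: minimize <u, f> + alpha*TV(u) over u in [0, 1] with a
# generic Chambolle-Pock primal-dual loop, then threshold at 0.5.
import numpy as np

def grad(u):
    # Forward differences, Neumann boundary (zero at the far edge).
    gy = np.zeros_like(u)
    gx = np.zeros_like(u)
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    return gy, gx

def div(py, px):
    # Negative adjoint of grad (backward differences).
    dy = np.zeros_like(py)
    dx = np.zeros_like(px)
    dy[0, :] = py[0, :]
    dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    dx[:, 0] = px[:, 0]
    dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    return dy + dx

def continuous_min_cut(f, alpha, n_iter=300, tau=0.35, sigma=0.35):
    """f = Ct - Cs: pointwise cost of label 1 minus cost of label 0."""
    u = np.zeros_like(f)
    u_bar = u.copy()
    py = np.zeros_like(f)
    px = np.zeros_like(f)
    for _ in range(n_iter):
        gy, gx = grad(u_bar)
        py, px = py + sigma * gy, px + sigma * gx
        scale = np.maximum(1.0, np.hypot(py, px) / alpha)  # |p| <= alpha
        py, px = py / scale, px / scale
        u_old = u
        u = np.clip(u + tau * (div(py, px) - f), 0.0, 1.0)
        u_bar = 2.0 * u - u_old
    return u > 0.5  # thresholding the relaxed solution gives a binary cut

# Toy usage: a noisy bright disc against a dark background.
yy, xx = np.mgrid[:64, :64]
clean = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)
image = clean + 0.4 * np.random.default_rng(0).standard_normal(clean.shape)
f = (image - 1.0) ** 2 - image ** 2  # label-1 cost minus label-0 cost
mask = continuous_min_cut(f, alpha=1.5)
```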

    Generative-Discriminative Low Rank Decomposition for Medical Imaging Applications

    In this thesis, we propose a method that can be used to extract biomarkers from medical images toward early diagnosis of abnormalities. The surge of demand for biomarkers and the availability of medical images in recent years call for accurate, repeatable, and interpretable approaches to extracting meaningful imaging features. However, extracting such information from medical images is challenging because the number of pixels (voxels) in a typical image is on the order of millions, while even a large sample size in a medical image dataset does not usually exceed a few hundred. Nevertheless, depending on the nature of an abnormality, only a parsimonious subset of voxels is typically relevant to the disease; therefore, various notions of sparsity are exploited in this thesis to improve the generalization performance of the prediction task. We propose a novel discriminative dimensionality reduction method that yields good classification performance on various datasets without compromising the clinical interpretability of the results. This is achieved by combining the modelling strength of the generative learning framework with the classification performance of the discriminative learning paradigm. Clinical interpretability can be viewed as an additional measure of evaluation and is also helpful in designing methods that account for clinical priors, such as the association of certain brain areas with a particular cognitive task or the connectivity of some brain regions via neural fibres. We formulate our method as a large-scale optimization problem that solves a constrained matrix factorization. Finding an optimal solution of the large-scale matrix factorization renders off-the-shelf solvers computationally prohibitive; therefore, we designed an efficient algorithm based on the proximal method to address the computational bottleneck of the optimization problem. Our formulation is readily extended to different scenarios, such as cases where a large cohort of subjects has uncertain or no class labels (semi-supervised learning) or where each subject has a battery of imaging channels (multi-channel), etc. We show that by using various notions of sparsity as feasible sets of the optimization problem, we can encode different forms of prior knowledge ranging from brain parcellation to brain connectivity.
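    The following is a minimal sketch of the kind of proximal scheme the abstract alludes to: an alternating proximal-gradient loop for a sparsity-penalized factorization X ≈ DC, with soft-thresholding as the proximal operator of the l1 penalty. The generative-discriminative terms and clinical constraints of the actual method are not reproduced; all names, sizes, and parameters are illustrative.

```python
# A minimal sketch of an alternating proximal-gradient solver for a
# sparsity-penalized factorization X ~ D @ C; names and sizes invented.
import numpy as np

def soft_threshold(A, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def sparse_factorize(X, rank, lam=0.1, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    D = rng.standard_normal((n, rank))   # voxel-pattern "dictionary"
    C = rng.standard_normal((rank, m))   # per-subject coefficients
    for _ in range(n_iter):
        # Proximal gradient step on D; step size from the Lipschitz
        # constant of the smooth term, the spectral norm of C @ C.T.
        L_D = max(np.linalg.norm(C @ C.T, 2), 1e-8)
        D = soft_threshold(D - ((D @ C - X) @ C.T) / L_D, lam / L_D)
        # Plain gradient step on C (no penalty on the coefficients here).
        L_C = max(np.linalg.norm(D.T @ D, 2), 1e-8)
        C = C - (D.T @ (D @ C - X)) / L_C
    return D, C

# Toy usage: 5000 "voxels" x 80 "subjects" of synthetic data.
X = np.random.default_rng(1).standard_normal((5000, 80))
D, C = sparse_factorize(X, rank=5, lam=0.05)
print("nonzero dictionary fraction:", (D != 0).mean())
```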

    Adaptive processing of thin structures to augment segmentation of dual-channel structural MRI of the human brain

    This thesis presents a method for the segmentation of dual-channel structural magnetic resonance imaging (MRI) volumes of the human brain into four tissue classes. The state-of-the-art FSL FAST segmentation software (Zhang et al., 2001) is in widespread clinical use, and so it is considered a benchmark. A significant proportion of FAST's errors has been shown to be localised to cortical sulci and blood vessels; this issue, rather than any particular clinical demand, has driven the developments in this thesis. The original theme lies in preserving and even restoring these thin structures, which are poorly resolved in typical clinical MRI. Bright plate-shaped sulci and dark tubular vessels are best contrasted from the other tissues using the T2- and PD-weighted data, respectively. A contrasting tube detector algorithm (based on Frangi et al., 1998) was adapted to detect both structures, with smoothing (based on Westin and Knutsson, 2006) of an intermediate tensor representation to ensure smoothness and fuller coverage of the maps. The segmentation strategy required the MRI volumes to be upscaled to an artificial high resolution, at which a small partial-volume label set would be valid and the segmentation process would be simplified. A resolution enhancement process (based on Salvado et al., 2006) was significantly modified to smooth homogeneous regions and sharpen their boundaries in dual-channel data; in addition, it was able to preserve the mapped thin structures' intensities or restore them to pure tissue values. Finally, the segmentation phase employed a relaxation-based labelling optimisation process (based on Li et al., 1997) to improve accuracy, rather than the more efficient greedy methods which are typically used. The thin-structure location prior maps and the resolution-enhanced data also helped improve the labelling accuracy, particularly around sulci and vessels. Testing was performed on the aged LBC1936 clinical dataset, on younger brain volumes acquired at the SHEFC Brain Imaging Centre (Western General Hospital, Edinburgh, UK), and on the BrainWeb phantom. Overall, the proposed methods rivalled and often improved on the segmentation accuracy of FAST, with the ground truth produced by a radiologist using software designed for this project. The performance in pathological and atrophied brain volumes, and the differences from the original segmentation algorithm on which this work was based (van Leemput et al., 2003), were also examined. Suggestions for future development include a soft labelling consensus formation framework to mitigate rater bias in the ground truth, and contour-based models of the brain parenchyma to provide additional structural constraints.
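    A hedged sketch of the Hessian-based thin-structure detection step, using scikit-image's stock implementation of the Frangi filter rather than the adapted dual-structure detector and tensor smoothing described above; the toy slice and threshold are invented for the demo.

```python
# Hessian-based thin-structure detection with scikit-image's stock
# Frangi filter; toy data and threshold invented for illustration.
import numpy as np
from skimage.filters import frangi

rng = np.random.default_rng(0)
slice_2d = 0.5 + 0.05 * rng.standard_normal((128, 128))
slice_2d[:, 60:63] *= 0.3  # paint a dark tube (a "vessel" in PD data)

# black_ridges=True enhances dark tubes on a brighter background;
# black_ridges=False would target bright structures such as sulci.
vesselness = frangi(slice_2d, sigmas=(1, 2, 3), black_ridges=True)
vessel_mask = vesselness > vesselness.mean() + 3.0 * vesselness.std()
print("flagged voxels:", int(vessel_mask.sum()))
```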

    Predictive decoding of neural data

    In the last five decades, the number of techniques available for non-invasive functional imaging has increased dramatically. Researchers today can choose from a variety of imaging modalities that include EEG, MEG, PET, SPECT, MRI, and fMRI. This doctoral dissertation offers a methodology for the reliable analysis of neural data at different levels of investigation. By using statistical learning algorithms, the proposed approach allows single-trial analysis of various neural data by decoding them into variables of interest. Unbiased testing of the decoder on new samples of the data provides a generalization assessment of the reliability of decoding performance. Subsequent analysis of the constructed decoder's sensitivity makes it possible to identify neural signal components relevant to the task of interest. The proposed methodology accounts for covariance and causality structures present in the signal, which makes it more powerful than the conventional univariate methods that currently dominate the neuroscience field. Chapter 2 describes the generic approach toward the analysis of neural data using statistical learning algorithms. Chapter 3 presents an analysis of results from four neural data modalities: extracellular recordings, EEG, MEG, and fMRI. These examples demonstrate the ability of the approach to reveal neural data components that cannot be uncovered with conventional methods. Chapter 4 further extends the methodology to analyze data from multiple neural data modalities together: EEG and fMRI. The reliable mapping of data from one modality into the other provides a better understanding of the underlying neural processes. By allowing the spatial-temporal exploration of neural signals under loose modeling assumptions, it removes potential bias in the analysis of neural data due to otherwise possible forward-model misspecification. The proposed methodology has been formalized into a free and open-source Python framework for statistical-learning-based data analysis. This framework, PyMVPA, is described in Chapter 5.
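    As a small illustration of the decode-then-test methodology, the sketch below uses scikit-learn (as a stand-in for PyMVPA itself) to cross-validate a linear decoder on synthetic single-trial data and then inspect its sensitivities; the data, classifier choice, and planted effect are all assumptions for the demo.

```python
# Cross-validated single-trial decoding with a sensitivity analysis,
# using scikit-learn as a stand-in for PyMVPA; data are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_features = 120, 500            # trials x voxels/sensors
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, n_trials)           # two experimental conditions
X[y == 1, :20] += 0.8                      # plant an informative subset

decoder = make_pipeline(StandardScaler(), LinearSVC(C=1.0))

# Unbiased generalization estimate: test only on held-out trials.
scores = cross_val_score(decoder, X, y, cv=5)
print("per-fold decoding accuracy:", scores)

# Sensitivity analysis: weights of the fitted linear decoder point to
# the signal components driving the prediction.
decoder.fit(X, y)
weights = decoder.named_steps["linearsvc"].coef_.ravel()
print("most sensitive features:", np.argsort(np.abs(weights))[-5:])
```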

    Robust density modelling using the student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show, in experiments over two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
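    A small numerical sketch of the robustness argument: under a unit Gaussian, a single extreme observation dominates the log-likelihood, while a Student's t with few degrees of freedom absorbs it. This illustrates the motivation only; the paper's HMM with t-mixture observation probabilities is not reproduced, and the data are invented.

```python
# Why a t-distribution observation model resists outliers: one extreme
# sample wrecks the Gaussian log-likelihood but barely dents the t's.
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(0)
features = np.append(rng.normal(0.0, 1.0, 99), 15.0)  # one gross outlier

ll_gauss = norm.logpdf(features, loc=0.0, scale=1.0).sum()
ll_student = t.logpdf(features, df=3, loc=0.0, scale=1.0).sum()
print(f"Gaussian log-likelihood:  {ll_gauss:.1f}")   # outlier alone ~ -113
print(f"Student-t log-likelihood: {ll_student:.1f}")
```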

    Accurate skull modeling for EEG source imaging


    Image Registration Workshop Proceedings

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data which are being, and will continue to be, generated by newly developed sensors, automatic image registration has become an important research topic in its own right. This workshop presents a collection of very high quality work which has been grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

    Image restoration in anatomical MRI for the preclinical study of markers of brain aging

    Age-related neurovascular and neurodegenerative diseases are increasing significantly. While such pathological changes show effects on the brain before clinical symptoms appear, a better understanding of the normal brain aging process will help distinguish the impact of known pathologies on regional brain structure. Furthermore, knowledge of the patterns of brain shrinkage in normal aging could lead to a better understanding of its causes and perhaps to interventions reducing the loss of brain functions. Therefore, this thesis project aims to detect normal and pathological brain aging biomarkers in a non-human primate model, the marmoset monkey (Callithrix jacchus), which possesses anatomical characteristics more similar to those of humans than those of rodents. However, the structural changes (e.g., in volumes or cortical thickness) that occur during their adult life may be minimal with respect to the scale of observation. In this context, it is essential to have observation techniques that offer sufficiently high contrast and spatial resolution and allow detailed assessments of the morphometric brain changes associated with aging. However, imaging small brains on a 3T MRI platform designed for humans is a challenging task, because the spatial resolution and contrast obtained are insufficient compared to the size of the anatomical structures observed and the scale of the expected changes with age. This thesis aims to develop image restoration methods for preclinical MR images that improve the robustness of segmentation algorithms. Improving the resolution of the images at a constant signal-to-noise ratio will limit partial volume effects in voxels located at the border between two structures, allowing better segmentation while increasing the reproducibility of the results. This computational imaging step is crucial for reliable longitudinal voxel-based morphometric analysis and for the identification of anatomical markers of brain aging by following volume changes in gray matter, white matter, and cerebrospinal fluid.
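    As a hedged illustration of the restoration-before-segmentation idea, the sketch below denoises a toy two-tissue slice with total-variation regularization (scikit-image) and shows that boundary voxels threshold more reliably afterwards; the phantom, noise level, and regularization weight are invented, and the thesis's actual super-resolution pipeline is not reproduced.

```python
# Restoration before segmentation: TV denoising of a toy two-tissue
# slice makes boundary voxels threshold more reliably; values invented.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
yy, xx = np.mgrid[:96, :96]
tissue = ((yy - 48) ** 2 + (xx - 48) ** 2 < 30 ** 2).astype(float)
noisy = tissue + 0.5 * rng.standard_normal(tissue.shape)

restored = denoise_tv_chambolle(noisy, weight=0.2)

truth = tissue > 0.5
errors_before = int(((noisy > 0.5) != truth).sum())
errors_after = int(((restored > 0.5) != truth).sum())
print("misclassified voxels before/after restoration:",
      errors_before, errors_after)
```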