
    Simultaneous MAP estimation of inhomogeneity and segmentation of brain tissues from MR images

    Intrascan and interscan intensity inhomogeneities are a common reason why many advanced segmentation techniques fail to produce satisfactory results when separating brain tissues in multi-spectral magnetic resonance (MR) images. A common solution is to correct the inhomogeneity before applying the segmentation techniques. This paper presents a method that achieves simultaneous semi-supervised MAP (maximum a posteriori probability) estimation of the inhomogeneity field and segmentation of brain tissues, where the inhomogeneity is parameterized. Our method can incorporate any available incomplete training data, and their contribution can be controlled in a flexible manner, so that the segmentation of the brain tissues can be optimised. Experiments on both simulated and real MR images demonstrate that the proposed method estimates the inhomogeneity field accurately and improves the segmentation.
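    Such simultaneous estimators typically alternate between a segmentation step and a bias-field fit. Below is a minimal Python sketch of that alternation under assumed simplifications (additive polynomial bias on log-intensities, Gaussian tissue classes); all names are illustrative and none of this is the authors' code.

        import numpy as np

        def map_bias_and_segment(y, basis, mu, var, prior, n_iter=10):
            """y: (N,) log-intensities; basis: (N, P) polynomial basis that
            parameterizes the bias field; mu, var, prior: (K,) Gaussian
            tissue-class parameters. Semi-supervision could be added by
            clamping the posteriors of labeled voxels."""
            bias = np.zeros_like(y)
            for _ in range(n_iter):
                # E-step: tissue posteriors given bias-corrected intensities
                d = (y - bias)[:, None] - mu[None, :]
                post = prior * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
                post /= post.sum(axis=1, keepdims=True)
                # M-step: weighted least-squares fit of the bias coefficients
                w = np.sqrt((post / var).sum(axis=1))   # sqrt of per-voxel precision
                r = y - (post * mu).sum(axis=1)         # residual vs expected class mean
                c, *_ = np.linalg.lstsq(basis * w[:, None], r * w, rcond=None)
                bias = basis @ c
            return post.argmax(axis=1), bias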

    Probabilistic Atlas and Geometric Variability Estimation to Drive Tissue Segmentation

    Computerized anatomical atlases play an important role in medical image analysis. While an atlas usually refers to a standard or mean image, also called a template, that presumably represents a given population well, the template alone is not enough to characterize the observed population in detail. A template image should be learned jointly with the geometric variability of the shapes represented in the observations. These two quantities together form the atlas of the corresponding population. The geometric variability is modelled as deformations of the template image so that it fits the observations. In this paper, we provide a detailed analysis of a new generative statistical model based on dense deformable templates that represents several tissue types observed in medical images. Our atlas contains both an estimate of the probability map of each tissue (called a class) and the deformation metric. We use a stochastic algorithm to estimate the probabilistic atlas from a dataset. This atlas is then used in an atlas-based segmentation method to segment new images. Experiments are shown on brain T1 MRI datasets.
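    In dense deformable template models of this kind, the generative assumption can be stated compactly. The following is a generic LaTeX sketch of such a model, not the paper's exact notation:

        \[
        \Pr\big(c_i(x) = k\big) = P_k\big(x - v_i(x)\big), \qquad
        y_i(x) \mid c_i(x) = k \;\sim\; \mathcal{N}(\mu_k, \sigma_k^2), \qquad
        v_i \sim \mathcal{N}(0, \Sigma),
        \]

    where the $P_k$ are the template probability maps (one per tissue class), $c_i(x)$ is the latent class of voxel $x$ in image $i$, $v_i$ is the unobserved deformation field mapping that image back to template coordinates, and $\Sigma$ encodes the geometric variability estimated jointly with the $P_k$.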

    Semantic Image Segmentation via a Dense Parallel Network

    Image segmentation has been an important area of study in computer vision. Image segmentation is a challenging task, since it involves pixel-wise annotation, i.e. labeling each pixel according to the class to which it belongs. In the image classification task, the goal is to predict to which class an entire image belongs. Thus, there is more focus on the abstract features extracted by Convolutional Neural Networks (CNNs), with less emphasis on the spatial information. In the image segmentation task, on the other hand, abstract information and spatial information are needed at the same time. One class of work in image segmentation focuses on "recovering" the high-resolution features from the low-resolution ones. This type of network has an encoder-decoder structure, and spatial information is recovered by feeding the decoder part of the model with previous high-resolution features through skip connections. Overall, these strategies involving skip connections try to propagate features to deeper layers. The second class of work, on the other hand, focuses on "maintaining" high-resolution features throughout the process. In this thesis, we first review the related work on image segmentation and then introduce two new models, namely Unet-Laplacian and Dense Parallel Network (DensePN). The Unet-Laplacian is a series CNN model incorporating a Laplacian filter branch. This new branch performs a Laplacian filter operation on the input RGB image and feeds the output to the decoder. Experimental results show that the output of the Unet-Laplacian captures more of the ground truth mask and eliminates some of the false positives. We then describe the proposed DensePN, which was designed to find a good balance between extracting features through multiple layers and keeping spatial information. DensePN allows not only keeping high-resolution feature maps but also feature reuse at deeper layers to solve the image segmentation problem. We designed the Dense Parallel Network based on three main observations gained from our initial trials and preliminary studies. First, maintaining a high-resolution feature map provides good performance. Second, feature reuse is very efficient and allows deeper networks. Third, a parallel structure can provide better information flow. Experimental results on the CamVid dataset show that the proposed DensePN (with 1.1M parameters) provides better performance than FCDense56 (with 1.5M parameters) while at the same time having fewer parameters.
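    The Laplacian branch itself is simple to sketch. The following minimal PyTorch module is illustrative only (the thesis's exact fusion point and layer sizes are not reproduced here): it applies a fixed 3x3 Laplacian kernel depthwise to the RGB input, producing an edge map that can be concatenated with decoder features.

        import torch
        import torch.nn.functional as F

        class LaplacianBranch(torch.nn.Module):
            """Fixed (non-learned) Laplacian filter, one kernel per RGB channel."""
            def __init__(self):
                super().__init__()
                k = torch.tensor([[0., 1., 0.],
                                  [1., -4., 1.],
                                  [0., 1., 0.]])
                # depthwise application: weight shape (3, 1, 3, 3), groups=3
                self.register_buffer("kernel", k.expand(3, 1, 3, 3).clone())

            def forward(self, rgb):                  # rgb: (B, 3, H, W)
                return F.conv2d(rgb, self.kernel, padding=1, groups=3)

    A plausible fusion, matching the description above, would feed this output to the decoder, e.g. torch.cat([decoder_features, LaplacianBranch()(rgb)], dim=1).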

    Enhanced phase congruency feature-based image registration for multimodal remote sensing imagery

    Multimodal image registration is an essential image processing task in remote sensing. Basically, multimodal image registration searches for the optimal alignment between images of the same scene captured by different sensors, to provide better visualization and more informative images. Manual image registration is tedious and labour-intensive, so developing automated image registration is crucial to provide a faster and more reliable solution. However, image registration faces many challenges from the nature of remote sensing images, the environment, and the technical shortcomings of current methods, which cause three issues, namely intensive processing power, local intensity variation, and rotational distortion. Since not all image details are significant, relying on salient features is more efficient in terms of processing power. Thus, a feature-based registration method was adopted as an efficient way to avoid intensive processing. The proposed method resolves the rotational distortion issue using Oriented FAST and Rotated BRIEF (ORB) to produce rotation-invariant features. However, since ORB is not intensity invariant, it cannot support multimodal data on its own. To overcome the intensity variation issue, Phase Congruency (PC) was integrated with ORB to form the ORB-PC feature extractor, which generates features invariant to both rotational distortion and local intensity variation. However, the solution is not complete, since the ORB-PC matching rate is below expectation. An enhanced ORB-PC was proposed to solve the matching issue by modifying the feature descriptor. While better feature matches were achieved, the high number of outliers from multimodal data makes common outlier removal methods unsuccessful. Therefore, Normalized Barycentric Coordinate System (NBCS) outlier removal was utilized to find precise matches even with a high number of outliers. Experiments were conducted to verify the registration qualitatively and quantitatively. The qualitative experiments show that the proposed method has a broader and better feature distribution, while the quantitative evaluation indicates improved registration accuracy by 18% compared to the related works.
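    Phase congruency extraction and NBCS outlier removal are the thesis's contributions and are not available in standard libraries, but the baseline ORB pipeline they extend is. Below is a hedged OpenCV sketch, with RANSAC standing in as the common outlier-removal step that the thesis replaces with NBCS:

        import cv2
        import numpy as np

        def orb_register(img_ref, img_mov, n_features=5000):
            orb = cv2.ORB_create(nfeatures=n_features)
            k1, d1 = orb.detectAndCompute(img_ref, None)
            k2, d2 = orb.detectAndCompute(img_mov, None)
            # Hamming distance suits binary descriptors; cross-check enforces symmetry
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            # common outlier removal; with multimodal data it often fails,
            # which is what motivates the NBCS alternative
            H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            return H, inliers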

    Image processing methods for human brain connectivity analysis from in-vivo diffusion MRI

    In this PhD thesis proposal, the principles of diffusion MRI (dMRI) and its application to mapping human brain connectivity are reviewed. The background section covers the fundamentals of dMRI, with special focus on the distortions caused by susceptibility inhomogeneity across tissues. An in-depth survey of available correction methodologies for this common dMRI artifact is also presented, and two methodological approaches to improved correction are introduced. Finally, the proposal describes its objectives, the research plan, and the necessary resources.

    Mouse cardiac MRI

    Over the last few decades, interest in imaging the heart of the living mouse has grown enormously, driven to a large extent by new developments in gene technology and molecular biology. The goal of this PhD research was to develop a toolbox of different MRI (Magnetic Resonance Imaging) and analysis techniques in support of current and future research on mouse hearts. Imaging the mouse heart well with an MRI scanner is difficult because of the heart's small dimensions and the high rate at which it beats. To obtain a good image of the mouse heart, it is necessary to use a setup that allows the various physiological parameters of the mice to be measured and controlled during acquisition. Furthermore, large motion artifacts are prevented by synchronizing the acquisition of the tomographic image with the cyclic motion of the heart. This synchronization is normally realized by determining the start of the cardiac cycle from an electrocardiogram and by determining when the mouse performs no respiratory motion. This thesis describes a 'wireless' technique in which the phase of the cardiac and respiratory cycles is determined from the magnetic resonance signal itself. Performing the acquisition asynchronously and synchronizing it with the heartbeat after the measurement has the additional advantage that the strength of the magnetic resonance signal remains constant throughout the entire acquisition. This ensures that contrast differences in the tomographic image remain constant and do not depend on the condition of the mouse. The constant signal strength comes at the cost of a lower signal-to-noise ratio, which led to somewhat different assessments of the volumes of the beating mouse heart. MRI is not only capable of producing tomographic images, but can also quantify physiological parameters. The major drawback of a tomographic imaging technique is that it only provides qualitative information, such as morphology. Quantitative information, for example a volume measurement, can only be obtained by segmenting the mouse heart from the acquired tomographic image. Segmentation takes an enormous amount of time when performed by hand, and manual segmentation can become inaccurate when not performed by qualified personnel. We show in this thesis that an automatic segmentation method makes an error as large as the error made when two different qualified observers perform the same manual segmentations within a group of mice. The synchronization technique described above, based on the magnetic resonance signal, was also applied to image the full motion cycle of the aorta in two different mouse genotypes (Smtn-B+/+ and Smtn-B-/-). The Smtn-B-/- genotype has an altered contractile force in the arteries and a higher mean arterial blood pressure. The increase in aortic diameter during the cardiac cycle was twice as large in Smtn-B-/- mice as in Smtn-B+/+ mice. Furthermore, the hearts of Smtn-B-/- mice had a larger left-ventricular mass and a higher ejection fraction.
    This study, in which two different mouse groups were compared, shows that MRI can detect very small differences in the physiological parameters of the mouse heart. Besides phenotyping, MRI is also used to characterize infarcted mouse hearts. Several publications compare physiological parameters of infarcted mouse hearts measured with MRI on the one hand and with computed tomography, echocardiography, or catheter-based pressure-volume measurements on the other. New developments in nuclear imaging, such as positron emission tomography (PET), make it possible for these nuclear techniques to characterize mouse hearts as well. The great advantage of nuclear imaging techniques is their high sensitivity to radioactive contrast agents, which means that hardly any toxicological reactions are to be expected. In a combined experiment, physiological parameters were compared between MRI and PET measurements. A comparison was also made between the infarct sizes determined using an MRI contrast agent and a PET contrast agent. A good correlation was found between the two imaging techniques for the physiological parameters end-diastolic volume, end-systolic volume, and ejection fraction. Considerable differences were measured in the infarct sizes determined from the MRI and PET images. Furthermore, high correlations were found in the MRI data between three different infarct-size measures and the ejection fractions. Imaging disease processes at the cellular and molecular level with MRI is possible using powerful and specific contrast agents. However, MRI has a relatively low detection sensitivity for contrast agents. To increase the sensitivity, a study was performed on a fast MRI sequence known as 'Rephased-FFE'. In a phantom experiment on a conventional human 1.5 Tesla MRI scanner, this sequence showed a 6-fold higher detection sensitivity for the contrast agent Gd-DTPA.
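    The 'wireless' self-gating idea (recovering cardiac and respiratory phase from the MR signal itself) can be sketched in a few lines. The band edges below are rough assumptions for mouse physiology (heart roughly 7-10 Hz, respiration roughly 1-2 Hz), not values from the thesis:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def self_gate(center_signal, tr, cardiac_band=(5.0, 12.0), resp_band=(0.5, 3.0)):
            """center_signal: (N,) k-space centre samples, one per repetition;
            tr: repetition time in seconds. Returns instantaneous cardiac and
            respiratory phases used to retrospectively bin the readouts."""
            s = np.abs(center_signal)
            s = s - s.mean()
            fs = 1.0 / tr

            def bandpass(x, lo, hi):
                b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
                return filtfilt(b, a, x)

            cardiac = bandpass(s, *cardiac_band)
            resp = bandpass(s, *resp_band)
            # analytic-signal phase; frames are formed by binning readouts by
            # cardiac phase, keeping samples acquired during quiet respiration
            return np.angle(hilbert(cardiac)), np.angle(hilbert(resp))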

    Reasoning with Uncertainty in Deep Learning for Safer Medical Image Computing

    Deep learning is now ubiquitous in the research field of medical image computing. As such technologies progress towards clinical translation, the question of safety becomes critical. Once deployed, machine learning systems unavoidably face situations where the correct decision or prediction is ambiguous. However, current methods disproportionately rely on deterministic algorithms, lacking a mechanism to represent and manipulate uncertainty. In safety-critical applications such as medical imaging, reasoning under uncertainty is crucial for developing a reliable decision making system. Probabilistic machine learning provides a natural framework to quantify the degree of uncertainty over different variables of interest, be it the prediction, the model parameters and structures, or the underlying data (images and labels). Probability distributions are used to represent all the uncertain unobserved quantities in a model and how they relate to the data, and probability theory is used as a language to compute and manipulate these distributions. In this thesis, we explore probabilistic modelling as a framework to integrate uncertainty information into deep learning models, and demonstrate its utility in various high-dimensional medical imaging applications. In the process, we make several fundamental enhancements to current methods. We categorise our contributions into three groups according to the types of uncertainties being modelled: (i) predictive; (ii) structural and (iii) human uncertainty. Firstly, we discuss the importance of quantifying predictive uncertainty and understanding its sources for developing a risk-averse and transparent medical image enhancement application. We demonstrate how a measure of predictive uncertainty can be used as a proxy for the predictive accuracy in the absence of ground-truths. Furthermore, assuming the structure of the model is flexible enough for the task, we introduce a way to decompose the predictive uncertainty into its orthogonal sources, i.e. aleatoric and parameter uncertainty. We show the potential utility of such decoupling in providing a quantitative "explanation" of the model performance. Secondly, we introduce our recent attempts at learning model structures directly from data. One work proposes a method based on variational inference to learn a posterior distribution over connectivity structures within a neural network architecture for multi-task learning, and shares some preliminary results in the MR-only radiotherapy planning application. Another work explores how the training algorithm of decision trees could be extended to grow the architecture of a neural network to adapt to the given availability of data and the complexity of the task. Lastly, we develop methods to model the "measurement noise" (e.g., biases and skill levels) of human annotators, and integrate this information into the learning process of the neural network classifier. In particular, we show that explicitly modelling the uncertainty involved in the annotation process not only leads to an improvement in robustness to label noise, but also yields useful insights into the patterns of errors that characterise individual experts.
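    The aleatoric/parameter decomposition mentioned above is commonly computed by Monte Carlo sampling over model parameters. The sketch below is the standard entropy-based recipe for a classifier, not necessarily the thesis's exact estimator:

        import numpy as np

        def decompose_uncertainty(prob_samples, eps=1e-12):
            """prob_samples: (T, K) class probabilities from T stochastic
            forward passes (e.g. MC dropout or an ensemble) for one input.
            total     = H[E_t p_t]        (predictive entropy)
            aleatoric = E_t H[p_t]        (expected entropy)
            parameter = total - aleatoric (mutual information)"""
            mean_p = prob_samples.mean(axis=0)
            total = -np.sum(mean_p * np.log(mean_p + eps))
            aleatoric = -np.mean(np.sum(prob_samples * np.log(prob_samples + eps), axis=1))
            return total, aleatoric, max(total - aleatoric, 0.0)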

    Variational Methods in Shape Space

    This dissertation deals with the application of variational methods in spaces of geometric shapes. In particular, the treated topics include shape averaging, principal component analysis in shape space, computation of geodesic paths in shape space, as well as shape optimisation. Chapter 1 provides a brief overview of the employed models of shape space. Geometric shapes are identified with two- or three-dimensional, deformable objects. Deformations are described via physical models; in particular, the objects are interpreted as consisting of either a hyperelastic solid or a viscous liquid material. Furthermore, the description of shapes via phase fields or level sets is briefly introduced. Chapter 2 reviews different and related approaches to shape space modelling; references to related topics in image segmentation and registration are also provided, and the relevant shape optimisation literature is introduced. Chapter 3 recapitulates the employed concepts from continuum mechanics and phase field modelling and states basic theoretical results needed for the later analysis. Chapter 4 addresses the computation of shape averages, based on a hyperelastic notion of shape dissimilarity: the dissimilarity between two shapes is measured as the minimum deformation energy required to deform the first shape into the second. A corresponding phase-field model is introduced, analysed, and finally implemented numerically via finite elements. A principal component analysis of shapes, consistent with the previously introduced average, is considered in Chapter 5. Elastic boundary stresses on the average shape are used as representatives of the input shapes in a linear vector space. On these linear representatives, a standard principal component analysis can be performed, where the employed covariance metric should be properly chosen to depend on the input shapes. Chapter 6 interprets shapes as belonging to objects made of a viscous liquid and correspondingly defines geodesic paths between shapes. The energy of a path is given as the total physical dissipation during the deformation of an object along the path. A time discretisation invariant to rigid body motions is achieved by approximating the dissipation along a path segment by the deformation energy of a small solid deformation. The numerical implementation is based on level sets. Chapter 7 is concerned with the optimisation of the geometry and topology of solid structures that are subject to a mechanical load. Given the load configuration, the structure's rigidity, its volume, and its surface area shall be optimally balanced. A phase field model is devised and analysed for this purpose. In this context, the use of nonlinear elasticity makes it possible to detect buckling phenomena that would be missed in linearised elasticity.
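    The hyperelastic averaging principle of Chapters 4 and 5 can be stated compactly. The following LaTeX rendering is a generic sketch of that construction, not the dissertation's exact formulation:

        \[
        \mathcal{W}(S_i, S) \;=\; \min_{\phi \,:\, \phi(S_i) = S} \int_{\mathcal{O}_i} W(D\phi)\, dx,
        \qquad
        \bar{S} \;=\; \operatorname*{arg\,min}_{S} \sum_{i=1}^{n} \mathcal{W}(S_i, S),
        \]

    where $W$ is a hyperelastic energy density, $\mathcal{O}_i$ is the object bounded by shape $S_i$, and the minimizing deformations induce the elastic boundary stresses on $\bar{S}$ that serve as the linear representatives in the principal component analysis.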