13 research outputs found

    Self-Supervised CSF Inpainting with Synthetic Atrophy for Improved Accuracy Validation of Cortical Surface Analyses

    Full text link
    Accuracy validation of cortical thickness measurement is a difficult problem due to the lack of ground-truth data. To address this need, many methods have been developed to synthetically induce gray matter (GM) atrophy in an MRI via deformable registration, creating a set of images with known changes in cortical thickness. However, these methods often cause blurring in atrophied regions and cannot simulate realistic atrophy within deep sulci, where cerebrospinal fluid (CSF) is obscured or absent. In this paper, we present a solution using a self-supervised inpainting model to generate CSF in these regions and create images with more plausible GM/CSF boundaries. Specifically, we introduce a novel 3D GAN model that incorporates patch-based dropout training, edge map priors, and sinusoidal positional encoding, all of which are established methods previously limited to 2D domains. We show that our framework significantly improves the quality of the resulting synthetic images and is adaptable to unseen data with fine-tuning. We also demonstrate that our resulting dataset can be employed for accuracy validation of cortical segmentation and thickness measurement.
    Comment: Accepted at Medical Imaging with Deep Learning (MIDL) 202
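    As a concrete aside, the "sinusoidal positional encoding" named in this abstract extends naturally from the usual 1D/2D formulations to 3D by giving each spatial axis its own sine/cosine frequency bands and concatenating them along the channel dimension. The sketch below is an assumption about how such an encoding might look (the function name and channel split are illustrative), not the authors' implementation.

```python
import numpy as np

def sinusoidal_encoding_3d(depth, height, width, channels):
    """Illustrative 3D sinusoidal positional encoding.

    Splits `channels` evenly across the z/y/x axes and fills each third
    with interleaved sin/cos waves of geometrically spaced frequencies,
    as in the original 1D transformer encoding.
    """
    assert channels % 6 == 0, "channels must divide into 3 axes x sin/cos"
    per_axis = channels // 3
    pe = np.zeros((channels, depth, height, width), dtype=np.float32)
    freqs = np.exp(-np.log(10000.0) * np.arange(0, per_axis, 2) / per_axis)

    for axis, size in enumerate((depth, height, width)):
        pos = np.arange(size, dtype=np.float32)   # positions along this axis
        angles = np.outer(freqs, pos)             # (per_axis/2, size)
        # Broadcast the 1D encoding across the other two spatial axes.
        shape = [1, 1, 1]
        shape[axis] = size
        block = np.zeros((per_axis, depth, height, width), dtype=np.float32)
        block[0::2] = np.sin(angles).reshape(per_axis // 2, *shape)
        block[1::2] = np.cos(angles).reshape(per_axis // 2, *shape)
        pe[axis * per_axis:(axis + 1) * per_axis] = block
    return pe

pe = sinusoidal_encoding_3d(32, 64, 64, channels=48)  # e.g. concatenated to model inputs
```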

    Analysis, Segmentation and Prediction of Knee Cartilage using Statistical Shape Models

    Get PDF
    Osteoarthritis (OA) of the knee is, together with OA of the hip, one of the leading causes of chronic disability. Due to the rising healthcare costs associated with OA, it is important to fully understand the disease and how it progresses in the knee. One symptom of knee OA is the degeneration of cartilage in the articulating knee. The cartilage pad plays a major role in the biomechanics of the knee. This work attempts to quantify the cartilage thickness of healthy male and female knees using statistical shape models (SSMs) for a deep knee bend activity. Additionally, novel cartilage segmentation algorithms for magnetic resonance imaging (MRI) and estimation algorithms for computed tomography (CT) or X-rays are proposed to facilitate the efficient development and accurate analysis of future treatments related to the knee. Cartilage morphology results suggest distinct patterns of wear in varus, valgus, and neutral degenerative knees, and examination of contact regions during the deep knee bend activity further emphasizes these patterns. Segmentation results were comparable to, if not of higher quality than, existing state-of-the-art techniques for both femoral and tibial cartilage. Likewise, using the point correspondence properties of SSMs, estimation of articulating cartilage was effective in healthy and degenerative knees. In conclusion, this work provides novel, clinically relevant morphological data, along with segmentation and estimation methods, that can contribute to improving the accuracy and efficiency of evaluating the femorotibial cartilage layer.
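    For readers unfamiliar with SSMs, the core construction is a principal component analysis over corresponding, rigidly pre-aligned landmark sets, so that any shape is expressed as the mean plus a weighted sum of principal modes. The following minimal numpy sketch shows the idea; variable names are illustrative, and Procrustes alignment is assumed to have been done beforehand.

```python
import numpy as np

def build_ssm(shapes):
    """Build a statistical shape model from pre-aligned shapes.

    shapes: (n_subjects, n_points * 3) array of corresponding,
    Procrustes-aligned landmark coordinates.
    Returns the mean shape, principal modes, and per-mode variances.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data matrix gives the PCA modes directly.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = s**2 / (len(shapes) - 1)
    return mean, vt, variances

def synthesize(mean, modes, variances, b):
    """Instance a shape from mode weights b (in standard deviations)."""
    k = len(b)
    return mean + (b * np.sqrt(variances[:k])) @ modes[:k]

rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 300))          # 20 toy knees, 100 3D points each
mean, modes, var = build_ssm(shapes)
new_shape = synthesize(mean, modes, var, b=np.array([1.5, -0.5, 0.2]))
```

    The point-correspondence property mentioned in the abstract falls out of this construction: every shape instance indexes the same anatomical landmark at the same position in the vector, which is what makes estimating cartilage on a new bone surface possible.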

    A Review on Segmentation of Knee Articular Cartilage: from Conventional Methods Towards Deep Learning

    Get PDF
    In this paper, we review the state-of-the-art approaches for knee articular cartilage segmentation, from conventional techniques to deep learning (DL) based techniques. Knee articular cartilage segmentation on magnetic resonance (MR) images is of great importance in the early diagnosis of osteoarthritis (OA). Besides, segmentation allows estimating the articular cartilage loss rate, which is utilised in clinical practice for assessing disease progression and morphological changes. Topics covered include various image processing algorithms, the major features of different segmentation techniques, feature computations, and the performance evaluation metrics. This paper is intended to provide researchers with a broad overview of the currently existing methods in the field, as well as to highlight the shortcomings and potential considerations for their application in clinical practice. The survey showed that the state-of-the-art techniques based on DL outperform the other segmentation methods. The analysis of the existing methods reveals that the integration of DL-based algorithms with traditional model-based approaches has achieved the best results (mean Dice similarity coefficient (DSC) between 85.8% and 90%).
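    Since the review's headline numbers are Dice similarity coefficients, a brief reminder of the metric may help: DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B. The numpy sketch below is a generic implementation, not code from any surveyed method.

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Example: two overlapping toy masks.
a = np.zeros((10, 10), bool); a[2:7, 2:7] = True
b = np.zeros((10, 10), bool); b[4:9, 4:9] = True
print(f"DSC = {dice(a, b):.3f}")  # 0.360 here; 0.858-0.90 reported in the survey
```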

    Automatic Localized Analysis of Longitudinal Cartilage Changes

    Get PDF
    Osteoarthritis (OA) is the most common form of arthritis; it is characterized by the loss of cartilage. Automatic quantitative methods are needed to screen large image databases to assess changes in cartilage morphology. This dissertation presents an automatic analysis method to quantitatively analyze longitudinal cartilage changes from knee magnetic resonance (MR) images. A novel, robust automatic cartilage segmentation method is proposed to overcome the limitations of existing cartilage segmentation methods. The dissertation presents a new and general convex three-label segmentation approach to ensure the separation of touching objects, i.e., femoral and tibial cartilage. Anisotropic spatial regularization is introduced to avoid the over-regularization of thin objects caused by isotropic regularization. Temporal regularization is further incorporated to encourage temporally consistent segmentations across time points for longitudinal data. The state-of-the-art analysis of cartilage changes relies on a subdivision of cartilage that is coarse and purely geometric, whereas cartilage loss is a local thinning process and exhibits spatial non-uniformity. A novel statistical analysis method is proposed to study localized longitudinal cartilage thickness changes by establishing spatial correspondences across time and between subjects. The method is general and can be applied to other nonuniform morphological changes in other diseases.
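    To make the localized-analysis idea concrete: once per-vertex correspondences exist across time points and subjects, thickness change can be tested vertex by vertex, for instance with a paired t-test. The sketch below is a generic illustration under that assumption, not the dissertation's actual statistical machinery; the function name and the Bonferroni correction are my own choices.

```python
import numpy as np
from scipy import stats

def localized_thickness_change(baseline, followup, alpha=0.05):
    """Per-vertex paired t-test of longitudinal thickness change.

    baseline, followup: (n_subjects, n_vertices) thickness maps with
    vertex-wise correspondence across time points and subjects.
    Returns mean change and a significance mask (Bonferroni-corrected).
    """
    diff = followup - baseline
    t, p = stats.ttest_rel(followup, baseline, axis=0)
    significant = p < alpha / baseline.shape[1]  # crude multiple-comparison control
    return diff.mean(axis=0), significant

rng = np.random.default_rng(1)
base = 2.0 + 0.2 * rng.standard_normal((30, 500))   # 30 subjects, 500 vertices
fu = base + 0.05 * rng.standard_normal(base.shape)  # follow-up measurement noise
fu[:, 100:150] -= 0.3                               # simulated focal thinning
mean_change, sig = localized_thickness_change(base, fu)
print(sig[100:150].mean())   # fraction of the thinned patch flagged
```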

    Automated segmentation and quantitative analysis of the hip joint from magnetic resonance images

    Get PDF

    Contributions of Continuous Max-Flow Theory to Medical Image Processing

    Get PDF
    Discrete graph cuts and continuous max-flow theory have created a paradigm shift in many areas of medical image processing. Where previous methods limited themselves to analytically solvable optimization problems, or guaranteed only local optimality for increasingly complex and non-convex functionals, current methods rely on describing an optimization problem as a series of general yet simple functionals with global, but non-analytic, solution algorithms. This has been increasingly spurred on by the availability of these general-purpose algorithms in an open-source context. Thus, graph cuts and max-flow have changed every aspect of medical image processing, from reconstruction to enhancement to segmentation and registration. To wax philosophical, continuous max-flow theory in particular has the potential to bring a high degree of mathematical elegance to the field, bridging the conceptual gap between the discrete and continuous domains in which we describe different imaging problems, properties, and processes. In Chapter 1, we use the notion of infinitely dense and infinitely densely connected graphs to transfer between the discrete and continuous domains, which has a certain sense of mathematical pedantry to it, but the resulting variational energy equations have a sense of elegance and charm. As with any application of the principle of duality, the variational equations have an enigmatic side that can only be decoded with time and patience. The goal of this thesis is to show the contributions of max-flow theory through image enhancement and segmentation, increasing the incorporation of topological considerations and increasing the role played by user knowledge and interactivity. These methods will be rigorously grounded in the calculus of variations, guaranteeing fuzzy optimality and providing multiple approaches to solving each individual problem.
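    As an illustration of what such a formulation looks like in practice, the sketch below solves the convex-relaxed continuous min-cut, min over u in [0,1] of <u, c_fg - c_bg> + alpha * TV(u), which is dual to the continuous max-flow problem, using a textbook Chambolle-Pock primal-dual loop. This is a generic scheme in 2D, not the thesis's algorithms; names and step sizes are illustrative.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u); gx[:-1] = u[1:] - u[:-1]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Backward-difference divergence, the adjoint of -grad."""
    dx = np.zeros_like(px)
    dx[0] = px[0]; dx[1:-1] = px[1:-1] - px[:-2]; dx[-1] = -px[-2]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def continuous_min_cut(cost_fg, cost_bg, alpha=0.5, iters=300, tau=0.25, sigma=0.25):
    """Primal-dual solver for min_{u in [0,1]} <u, cost_fg - cost_bg> + alpha*TV(u)."""
    f = cost_fg - cost_bg
    u = np.zeros_like(f); u_bar = u.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.hypot(px, py) / alpha)  # project onto |p| <= alpha
        px, py = px / norm, py / norm
        u_old = u
        u = np.clip(u + tau * (div(px, py) - f), 0.0, 1.0)  # proximal/primal step
        u_bar = 2 * u - u_old                               # over-relaxation
    return u > 0.5

# Toy example: segment a noisy bright disc via quadratic data costs.
yy, xx = np.mgrid[:64, :64]
img = ((yy - 32)**2 + (xx - 32)**2 < 15**2).astype(float)
img += 0.3 * np.random.default_rng(2).standard_normal((64, 64))
mask = continuous_min_cut((img - 1.0)**2, img**2, alpha=1.0)
```

    Note how the "fuzzy optimality" mentioned above shows up here: the relaxed labeling u is continuous in [0, 1], and a binary segmentation is recovered only by the final threshold.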

    Methods for automated analysis of macular OCT data

    Get PDF
    Optical coherence tomography (OCT) is fast becoming one of the most important modalities for imaging the eye. It provides high-resolution, cross-sectional images of the retina in three dimensions, distinctly showing its many layers. These layers are critical for normal eye function, and vision loss may occur when they are altered by disease. Specifically, the thickness of individual layers can change over time, thereby making the ability to accurately measure these thicknesses an important part of learning about how different diseases affect the eye. Since manual segmentation of the layers in OCT data is time consuming and tedious, automated methods are necessary to extract layer thicknesses. While a standard set of tools exists on the scanners to automatically segment the retina, the output is often limited, providing measurements restricted to only a few layers. Analysis of longitudinal data is also limited, with scans from the same subject often processed independently and registered using only a single landmark at the fovea. Quantification of other changes in the retina, including the accumulation of fluid, is also generally unavailable using the built-in software. In this thesis, we present four contributions for automatically processing OCT data, specifically data acquired from the macular region of the retina. First, we present a layer segmentation algorithm to robustly segment the eight visible layers of the retina. Our approach combines the use of a random forest (RF) classifier, which produces boundary probabilities, with a boundary refinement algorithm to find surfaces maximizing the RF probabilities. Second, we present a pair of methods for processing longitudinal data from individual subjects: one combining registration and motion correction, and one for simultaneously segmenting the layers across all scans. Third, we develop a method for segmentation of microcystic macular edema, which appears as small, fluid-filled cystoid spaces within the retina. Our approach again uses an RF classifier to produce a robust segmentation. Finally, we present the development of macular flatspace (MFS), a computational domain used to put data from different subjects into a common coordinate system in which each layer appears flat, thereby simplifying any automated processing. We present two applications of MFS: inhomogeneity correction to normalize the intensities within each layer, and layer segmentation by adapting and simplifying a graph formulation used previously.
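    The flattening idea behind MFS — resampling each A-scan so that a segmented reference boundary becomes a flat plane — is simple to illustrate. The sketch below flattens a single 2D B-scan given one segmented boundary; it is a schematic of the general flattening concept only, not the thesis's MFS construction (which maps all layers flat, not just one boundary), and all names are illustrative.

```python
import numpy as np

def flatten_bscan(bscan, boundary_rows, target_row=None):
    """Shift each A-scan (column) so a segmented boundary becomes flat.

    bscan: (rows, cols) OCT B-scan intensities.
    boundary_rows: (cols,) row index of one segmented boundary per column.
    Returns the flattened image and the shifts used (to invert later).
    """
    if target_row is None:
        target_row = int(np.round(np.mean(boundary_rows)))
    shifts = target_row - np.asarray(boundary_rows, dtype=int)
    flat = np.zeros_like(bscan)
    for col, s in enumerate(shifts):
        flat[:, col] = np.roll(bscan[:, col], s)  # nearest-pixel shift; a real
                                                  # implementation would interpolate
    return flat, shifts

# Toy example: a tilted bright band becomes horizontal after flattening.
rows, cols = 128, 64
bscan = np.zeros((rows, cols))
boundary = (40 + 0.5 * np.arange(cols)).astype(int)   # tilted boundary
for c in range(cols):
    bscan[boundary[c]:boundary[c] + 10, c] = 1.0      # 10-pixel-thick layer
flat, shifts = flatten_bscan(bscan, boundary)
```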