    Correction, Validation, and Characterization of Motion in Resting-State Functional Magnetic Resonance Images of Pediatric Patients

    There are many scenarios, for both clinical and research applications, in which we would like to examine a patient's neurodevelopmental status. Generally, neurodevelopmental evaluations can be performed through psychological testing or in-person assessment with a psychologist. However, these approaches are not applicable in all cases, particularly for many pediatric populations. Researchers are beginning to turn to medical imaging approaches for objectively quantifying a patient's neurodevelopmental status. Resting-state functional magnetic resonance images (rs-fMRIs) can be used to study neuronal networks that are active even when a person is not performing a specific task or reacting to particular stimuli. These image sequences are highly sensitive to motion. Techniques have been developed to prevent patients from moving, to monitor motion during the scan, and to correct for the patient's movement after the scan. We focus on the first step of retrospective motion correction: volume registration. The purpose of volume registration is to align the contents of all of the image volumes in the image sequence to the contents of a single volume. Traditionally, all image volumes are directly registered to the chosen stationary image volume. However, this approach does not account for significant differences in patient position between the stationary volume and the other volumes in the sequence. We developed a registration framework based on the concept of a directed acyclic graph (DAG). We treat the volumes in the sequence as nodes in a graph where pairs of subsequent volumes are connected via directed edges. This perspective allows us to model the relationships between subsequent volumes and account for them during registration. We applied both registration frameworks to a set of simulated images as well as neurological rs-fMRIs from three clinical populations. The clinical populations were preadolescent, neonatal, and fetal subjects who either were healthy or had congenital heart disease (CHD). The original and registered sequences were compared with respect to their local and global motion. The local motion was measured between every pair of consecutive image volumes i and i+1 in each sequence using the framewise displacement (FD) and the derivative of the root mean square variance of the signal (DVARS). The global motion across each sequence was measured by calculating the similarity between every pair of image volumes in the sequence. The local motion parameters were compared to a pair of gold-standard usability thresholds to determine how each registration framework impacted the usability of every image volume. Both the local and global motion parameters were used to determine how many sequences had statistically significant differences in their motion distributions before and after registration. Additionally, the local and global metrics of the original sequences were clustered to determine whether a computer could identify groups of subjects based on their motion parameters. The registration frameworks had different effects on each age group of subjects. We found that the neonatal subjects contained the least motion, while the fetal subjects contained the most. The DAG-based registration was most effective at reducing motion in the fetal images. Our clustering analysis showed that the different age groups have different global motion parameters, though lifespan-level patterns related to CHD status could not be detected.
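
    The abstract describes the DAG-based idea only in prose, so below is a minimal sketch of one way to read it: consecutive volumes are linked by pairwise rigid transforms (the directed edges of the chain), and each volume is mapped into the reference volume's space by composing transforms along the path. The 4x4 homogeneous-matrix representation and the function name are my own assumptions for illustration, not the dissertation's actual implementation.

```python
import numpy as np

def compose_to_reference(edge_transforms, reference_index=0):
    """Chained (DAG-style) registration sketch.

    edge_transforms[i] is the 4x4 rigid transform mapping volume i+1 into the
    space of volume i (one transform per directed edge of the chain).
    Returns, for every volume, the composed transform into the space of the
    chosen reference volume."""
    n = len(edge_transforms) + 1
    to_reference = [np.eye(4) for _ in range(n)]
    # Volumes after the reference: walk edge by edge back toward the reference.
    for i in range((reference_index + 1), n):
        to_reference[i] = to_reference[i - 1] @ edge_transforms[i - 1]
    # Volumes before the reference: the edges point the other way, so invert them.
    for i in range(reference_index - 1, -1, -1):
        to_reference[i] = to_reference[i + 1] @ np.linalg.inv(edge_transforms[i])
    return to_reference

# Toy example: three volumes, a 2 mm shift between volumes 0 and 1, none after.
T01 = np.eye(4); T01[0, 3] = 2.0   # maps volume 1 into volume 0's space
T12 = np.eye(4)                    # maps volume 2 into volume 1's space
print(compose_to_reference([T01, T12], reference_index=0)[2])  # carries the 2 mm shift
```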

    Retinal Fundus Image Registration via Vascular Structure Graph Matching

    Motivated by the observation that a retinal fundus image may contain unique geometric structures within its vascular tree which can be utilized for feature matching, in this paper we propose a graph-based registration framework called GM-ICP to align pairwise retinal images. First, the retinal vessels are automatically detected and represented as vascular structure graphs. Graph matching is then performed to find global correspondences between vascular bifurcations. Finally, a revised ICP algorithm incorporating a quadratic transformation model is used at the fine level to register vessel shape models. In order to eliminate incorrect matches from the global correspondence set obtained via graph matching, we propose a structure-based sample consensus (STRUCT-SAC) algorithm. The advantages of our approach are threefold: (1) a globally optimal solution can be achieved with graph matching; (2) our method is invariant to linear geometric transformations; and (3) heavy local feature descriptors are not required. The effectiveness of our method is demonstrated by experiments on 48 pairs of retinal images collected from clinical patients.
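
    As an illustration of the fine-level step described above, the sketch below fits a 12-parameter 2-D quadratic transformation to matched bifurcation points by least squares. It assumes the graph-matching and STRUCT-SAC stages have already produced inlier correspondences; the function names are hypothetical and are not part of the authors' GM-ICP code.

```python
import numpy as np

def quadratic_basis(pts):
    """Design matrix [1, x, y, x^2, x*y, y^2] for an array of 2-D points."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def fit_quadratic_transform(src, dst):
    """Least-squares quadratic transform mapping src points onto dst points.
    Needs at least 6 correspondences; returns a (6, 2) coefficient matrix."""
    A = quadratic_basis(src)
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs

def apply_quadratic_transform(coeffs, pts):
    return quadratic_basis(pts) @ coeffs

# Toy usage: six correspondences related by a pure translation.
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [2., 1.], [1., 2.]])
dst = src + 0.5
coeffs = fit_quadratic_transform(src, dst)
print(apply_quadratic_transform(coeffs, src))  # approximately equal to dst
```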

    A Combinatorial Solution to Non-Rigid 3D Shape-to-Image Matching

    We propose a combinatorial solution for the problem of non-rigidly matching a 3D shape to 3D image data. To this end, we model the shape as a triangular mesh and allow each triangle of this mesh to be rigidly transformed to achieve a suitable matching to the image. By penalising the distance and the relative rotation between neighbouring triangles, our matching compromises between image and shape information. In this paper, we resolve two major challenges: firstly, we address the resulting large and NP-hard combinatorial problem with a suitable graph-theoretic approach; secondly, we propose an efficient discretisation of the unbounded 6-dimensional Lie group SE(3). To our knowledge, this is the first combinatorial formulation for non-rigid 3D shape-to-image matching. In contrast to existing local (gradient descent) optimisation methods, we obtain solutions that do not require a good initialisation and that are within a bound of the optimal solution. We evaluate the proposed method on the two problems of non-rigid 3D shape-to-shape and non-rigid 3D shape-to-image registration and demonstrate that it provides promising results.
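
    To make the combinatorial formulation concrete, the sketch below evaluates the kind of labeling energy the abstract describes: each triangle is assigned one rigid transform from a discretised candidate set, with a unary image-fit term plus pairwise penalties on translation disagreement and relative rotation between neighbouring triangles. The data_cost() argument, the candidate set, and the weights are placeholders of my own; the paper's actual SE(3) discretisation and graph-theoretic solver are not reproduced here.

```python
import numpy as np

def relative_rotation_angle(R1, R2):
    """Geodesic angle between two 3x3 rotation matrices."""
    cos = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.arccos(np.clip(cos, -1.0, 1.0))

def labeling_energy(labels, candidates, triangles, edges, data_cost,
                    w_trans=1.0, w_rot=1.0):
    """Energy of assigning candidates[labels[t]] = (R, t) to each triangle t.

    `edges` lists index pairs of neighbouring triangles; `data_cost(tri, R, t)`
    measures how well the rigidly transformed triangle fits the image."""
    energy = 0.0
    for tri_idx, tri in enumerate(triangles):
        R, t = candidates[labels[tri_idx]]
        energy += data_cost(tri, R, t)                      # unary image term
    for a, b in edges:
        Ra, ta = candidates[labels[a]]
        Rb, tb = candidates[labels[b]]
        energy += w_trans * np.linalg.norm(ta - tb)         # translation disagreement
        energy += w_rot * relative_rotation_angle(Ra, Rb)   # relative rotation penalty
    return energy

# Toy usage: two neighbouring triangles, two candidate transforms, zero image cost.
candidates = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([1.0, 0.0, 0.0]))]
triangles = [np.zeros((3, 3)), np.ones((3, 3))]   # dummy vertex coordinates
edges = [(0, 1)]
zero_cost = lambda tri, R, t: 0.0                 # placeholder image term
print(labeling_energy([0, 1], candidates, triangles, edges, zero_cost))  # -> 1.0
```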