541 research outputs found

    Regularized pointwise map recovery from functional correspondence

    The concept of using functional maps to represent dense correspondences between deformable shapes has proven extremely effective in many applications. Despite the impact of this framework, however, the problem of recovering the point-to-point correspondence from a given functional map has received surprisingly little interest. In this paper, we analyse this problem and propose a novel method for reconstructing pointwise correspondences from a given functional map. The proposed algorithm phrases the matching problem as a regularized alignment of the spectral embeddings of the two shapes. Unlike established methods, our approach does not require the input shapes to be nearly isometric, and it extends easily to recovering the point-to-point correspondence in part-to-whole shape matching problems. Our numerical experiments demonstrate that the proposed approach leads to a significant improvement in accuracy in several challenging cases.
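    As a point of reference, the simplest (unregularized) recovery baseline phrases exactly this alignment: given a functional map expressed in the truncated Laplace–Beltrami eigenbases of the two shapes, match their spectral embeddings by nearest neighbour. A minimal numpy sketch follows — random matrices stand in for real eigenfunctions, and the paper's regularized formulation is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 20                      # vertices per shape, spectral dimension

# Stand-ins for the first k Laplace-Beltrami eigenfunctions of each shape
# (rows = vertices, columns = basis functions); random here, for illustration.
phi_src = rng.standard_normal((n, k))
phi_tgt = rng.standard_normal((n, k))

# A given functional map C : span(phi_src) -> span(phi_tgt), size k x k.
C = rng.standard_normal((k, k))

def pointwise_from_functional(C, phi_src, phi_tgt):
    """Recover a point-to-point map by nearest-neighbour alignment of the
    spectral embeddings: match each target vertex's spectral coordinates
    phi_tgt[j] to the mapped source coordinates C @ phi_src[i]."""
    mapped = phi_src @ C.T                       # n x k: source pushed through C
    # Pairwise squared distances between target and mapped-source embeddings.
    d2 = ((phi_tgt[:, None, :] - mapped[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)                     # for each target vertex, a source vertex

pi = pointwise_from_functional(C, phi_src, phi_tgt)
print(pi.shape)        # (200,)
```

    With an identity map and identical eigenbases this baseline returns the identity correspondence; the paper's contribution is to regularize this alignment so it degrades gracefully for non-isometric and partial shapes.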

    Arbitrary Order Total Variation for Deformable Image Registration

    In this work, we investigate image registration in a variational framework, focusing on regularization generality and solver efficiency. We first propose a variational model combining the state-of-the-art sum of absolute differences (SAD) data term with a new arbitrary-order total variation regularization term. The main advantage of this model is that it preserves discontinuities in the resulting deformation while remaining robust to outlier noise. The model is, however, non-trivial to optimize due to its non-convexity, non-differentiability, and the generality of the derivative order. To tackle these difficulties, we first linearize the model to obtain a convex objective function, and then break the resulting convex optimization into several pointwise, closed-form subproblems using a fast, over-relaxed alternating direction method of multipliers (ADMM). With this algorithm, we show that solving higher-order variational formulations is similar to solving their lower-order counterparts. Extensive experiments show that our ADMM is significantly more efficient than both subgradient and primal-dual algorithms, particularly when higher-order derivatives are used, and that our new models outperform state-of-the-art methods based on deep learning and free-form deformation. Our code, implemented in both Matlab and PyTorch, is publicly available at https://github.com/j-duan/AOTV
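    The pointwise, closed-form subproblem that makes this ADMM fast is a shrinkage (soft-thresholding) step. A toy 1-D analogue — using an L2 data term instead of SAD and first-order TV only, with illustrative parameter values — shows the over-relaxed ADMM structure:

```python
import numpy as np

def soft_threshold(x, t):
    """Closed-form prox of t*||.||_1 -- the pointwise subproblem in the split."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv1d_admm(f, lam=0.5, rho=1.0, alpha=1.8, iters=200):
    """Over-relaxed ADMM for 1-D first-order TV denoising,
       min_u 0.5*||u - f||^2 + lam*||D u||_1,
    a toy analogue of the registration model (which uses SAD data terms
    and arbitrary-order derivatives)."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)            # forward differences, (n-1) x n
    A = np.eye(n) + rho * D.T @ D             # normal matrix of the u-subproblem
    z = np.zeros(n - 1)                       # splitting variable z = D u
    w = np.zeros(n - 1)                       # scaled dual variable
    u = f.copy()
    for _ in range(iters):
        u = np.linalg.solve(A, f + rho * D.T @ (z - w))
        Du_hat = alpha * (D @ u) + (1 - alpha) * z   # over-relaxation
        z = soft_threshold(Du_hat + w, lam / rho)    # pointwise, closed form
        w += Du_hat - z                              # dual ascent
    return u

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0, 0.0], 40)               # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(clean.size)
denoised = tv1d_admm(noisy)
```

    The z-update is exactly the kind of per-point closed-form step the abstract describes; for higher-order TV only the difference operator D changes, which is why higher-order formulations solve much like lower-order ones.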

    Multiframe Temporal Estimation of Cardiac Nonrigid Motion

    A robust, flexible system for tracking the point-to-point nonrigid motion of the left ventricular (LV) endocardial wall in image sequences has been developed. This system is unique in its ability to model motion trajectories across multiple frames. Its foundation is an adaptive transversal filter based on the recursive least-squares algorithm. This filter facilitates the integration of models for periodicity and proximal smoothness, as appropriate, using a contour-based description of the object's boundaries. A set of correspondences between contours and an associated set of correspondence quality measures comprise the input to the system. Frame-to-frame relationships from two different frames of reference are derived and analyzed using synthetic and actual images. Two multiframe temporal models, both based on a sum of sinusoids, are derived, and illustrative examples of the system's output are presented for quantitative analysis. The system is validated by comparing computed trajectory estimates with the trajectories of physical markers implanted in the LV wall. Sample case studies of marker trajectory comparisons are presented, and ensemble statistics from comparisons with 15 marker trajectories are acquired and analyzed. A multiframe temporal model without spatial periodicity constraints was found to provide excellent performance at the least computational cost. A multiframe spatiotemporal model provided the best performance in terms of standard deviation, although at significant computational expense.
    Funding: National Heart, Lung, and Blood Institute; Air Force Office of Scientific Research; National Science Foundation; Office of Naval Research. Grant numbers: R01HL44803, F49620-99-1-0481, F49620-99-1-0067, MIP-9615590, N00014-98-1-054.
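    The core of such a system — recursive least-squares estimation of a sum-of-sinusoids temporal model, processed one frame at a time — can be sketched as follows. This is a generic textbook RLS update, not the exact filter from the paper, and all parameter values are illustrative:

```python
import numpy as np

def rls_sinusoid_fit(times, values, n_harmonics=2, period=1.0,
                     lam=1.0, delta=1e6):
    """Recursive least-squares fit of a periodic temporal model
       y(t) = a0 + sum_k [a_k cos(2*pi*k*t/T) + b_k sin(2*pi*k*t/T)],
    updating the coefficients one frame at a time, as an adaptive
    transversal filter would."""
    p = 1 + 2 * n_harmonics
    theta = np.zeros(p)           # filter coefficients
    P = delta * np.eye(p)         # inverse correlation matrix (large init)
    w0 = 2 * np.pi / period
    for t, y in zip(times, values):
        # Regressor of sinusoidal basis functions evaluated at this frame.
        h = np.concatenate([[1.0],
                            *[[np.cos(k * w0 * t), np.sin(k * w0 * t)]
                              for k in range(1, n_harmonics + 1)]])
        k_gain = P @ h / (lam + h @ P @ h)           # RLS gain vector
        theta = theta + k_gain * (y - h @ theta)     # coefficient update
        P = (P - np.outer(k_gain, h @ P)) / lam      # covariance update
    return theta

# One displacement component of a hypothetical tracked wall point,
# sampled over two cardiac cycles.
t = np.linspace(0.0, 2.0, 200)
y = 0.5 + 0.3 * np.cos(2 * np.pi * t) - 0.2 * np.sin(2 * np.pi * t)
coeffs = rls_sinusoid_fit(t, y, n_harmonics=2, period=1.0)
```

    On noise-free periodic data the recursion recovers the generating coefficients; the forgetting factor `lam` (here 1.0) would be lowered to track slowly varying motion.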

    Proceedings of the FEniCS Conference 2017

    Proceedings of the FEniCS Conference 2017, which took place 12-14 June 2017 at the University of Luxembourg, Luxembourg.

    Model-Based Shape and Motion Analysis: Left Ventricle of a Heart

    The accurate and clinically useful estimation of the shape, motion, and deformation of the left ventricle of a heart (LV) is an important yet open research problem. Recently, computer vision techniques for reconstructing the 3-D shape and motion of the LV have been developed. The main drawback of these techniques, however, is that their models are formulated in terms of either too many local parameters, which require non-trivial processing to be useful for near-real-time diagnosis, or too few parameters to offer an adequate approximation of LV motion. To address this problem, we present a new class of volumetric primitives for compact and accurate LV shape representation, in which the model parameters are functions. Lagrangian dynamics are employed to convert the geometric models into dynamic models that deform according to the forces manifested in the data points. It is thus possible to estimate precisely the deformation of the LV shape at the endocardium, the epicardium, and anywhere in between, with a small number of intuitive parameter functions. We believe the proposed technique has a wide range of potential applications. In this thesis, we demonstrate this by applying it to 3-D LV shape and motion characterization from magnetic tagging data (MRI-SPAMM). The results of our experiments with normal and abnormal heart data enable us to quantitatively verify physicians' qualitative conception of left ventricular wall motion.
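    The idea of turning a geometric model into a dynamic one that deforms under data forces can be illustrated with a drastically simplified first-order analogue (the thesis uses full Lagrangian dynamics with parameter functions; the spring-force formulation and all constants below are assumptions for illustration only):

```python
import numpy as np

def fit_dynamic_model(data_pts, model_pts, stiffness=0.5, step=0.1, iters=200):
    """Toy first-order dynamics q_dot = f_data - K q: the model points deform
    under spring-like forces toward the data points, while an internal
    stiffness term regularizes the displacement parameters q."""
    q = np.zeros_like(model_pts)              # displacement parameter per point
    for _ in range(iters):
        f_data = data_pts - (model_pts + q)   # external forces from the data
        q_dot = f_data - stiffness * q        # damped dynamics (unit damping)
        q = q + step * q_dot                  # explicit Euler integration
    return model_pts + q

target = np.array([0.0, 0.4, 1.0, 0.4, 0.0])  # hypothetical sampled data points
rest = np.zeros(5)                            # undeformed model
deformed = fit_dynamic_model(target, rest)
```

    With zero stiffness the model converges onto the data exactly; a positive stiffness trades data fidelity for smoothness of the recovered parameters, which is the balance the dynamic formulation controls.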

    Lp-Norm Regularization in Volumetric Imaging of Cardiac Current Sources

    Advances in computer vision have substantially improved our ability to analyze the structure and mechanics of the heart. In comparison, our ability to observe and analyze cardiac electrical activity is much more limited. Progress in computationally reconstructing cardiac current sources from noninvasive voltage data sensed on the body surface has been hindered by the ill-posedness of the reconstruction problem and its lack of a unique solution. Common L2- and L1-norm regularizations tend to produce a solution that is either too diffuse or too scattered to reflect the complex spatial structure of the current source distribution in the heart. In this work, we propose a general regularization with an Lp-norm constraint to bridge the gap and balance between an overly smeared and an overly focal solution in cardiac source reconstruction. In a set of phantom experiments, we demonstrate the superiority of the proposed Lp-norm method over its L1 and L2 counterparts in imaging cardiac current sources of increasing extent. Through computer-simulated and real-data experiments, we further demonstrate the feasibility of the proposed method in imaging the complex structure of the excitation wavefront, as well as current sources distributed along the postinfarction scar border. This ability to preserve the spatial structure of the source distribution is important for revealing potential disruptions to normal heart excitation.
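    A standard way to handle an Lp penalty between L1 and L2 is iteratively reweighted least squares (IRLS), where each iteration is a closed-form reweighted ridge problem. The sketch below is a generic IRLS scheme, not the authors' algorithm, and the exponent range (lost from the abstract) is assumed to lie between 1 and 2 purely for illustration:

```python
import numpy as np

def lp_irls(A, b, lam=0.1, p=1.5, iters=50, eps=1e-8):
    """Iteratively reweighted least squares for
       min_x ||A x - b||_2^2 + lam * sum_i |x_i|^p.
    Each iteration majorizes |x_i|^p by a quadratic with weight
    (p/2)*|x_i|^(p-2), yielding a reweighted ridge solve in closed form."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # plain L2 initialization
    for _ in range(iters):
        w = 0.5 * p * (np.abs(x) + eps) ** (p - 2.0)  # per-coefficient weights
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x

# Hypothetical underdetermined source-imaging setup: a random matrix stands
# in for the forward (lead-field) operator, and the source has a small
# spatially contiguous extent.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[5, 6, 7]] = 1.0
b = A @ x_true
x_hat = lp_irls(A, b, lam=0.01, p=1.5)
```

    Sweeping p toward 1 drives the weights to favour focal solutions, while p near 2 recovers the diffuse ridge solution — the interpolation the abstract describes.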

    A Decoupled 3D Facial Shape Model by Adversarial Training

    Data-driven generative 3D face models are used to compactly encode facial shape data into meaningful parametric representations. A desirable property of these models is their ability to effectively decouple natural sources of variation, in particular identity and expression. While factorized representations have been proposed for this purpose, they are still limited in the variability they can capture and may introduce modeling artifacts when applied to tasks such as expression transfer. In this work, we explore a new direction with Generative Adversarial Networks and show that they contribute to better face modeling performance, especially in decoupling natural factors, while also producing more diverse samples. To train the model, we introduce a novel architecture that combines a 3D generator with a 2D discriminator leveraging conventional CNNs, where the two components are bridged by a geometry mapping layer. We further present a training scheme, based on auxiliary classifiers, to explicitly disentangle identity and expression attributes. Through quantitative and qualitative results on standard face datasets, we illustrate the benefits of our model and demonstrate that it outperforms competing state-of-the-art methods in terms of decoupling and diversity.
    Comment: camera-ready version for ICCV'1

    Scalable Machine Learning Methods for Massive Biomedical Data Analysis.

    Modern data acquisition techniques have enabled biomedical researchers to collect and analyze datasets of substantial size and complexity. The massive size of these datasets allows us to study the biological system of interest comprehensively and at an unprecedented level of detail, which may lead to the discovery of clinically relevant biomarkers. Nonetheless, the dimensionality of these datasets presents critical computational and statistical challenges, as traditional statistical methods break down when the number of predictors dominates the number of observations — a setting frequently encountered in biomedical data analysis. This difficulty is compounded by the fact that biological data tend to be noisy and often possess complex correlation patterns among the predictors. The central goal of this dissertation is to develop a computationally tractable machine learning framework that allows us to extract scientifically meaningful information from these massive and highly complex biomedical datasets. We motivate the scope of our study by considering two important problems with clinical relevance: (1) uncertainty analysis for biomedical image registration, and (2) psychiatric disease prediction based on functional connectomes, which are high-dimensional correlation maps generated from resting-state functional MRI.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111354/1/takanori_1.pd