3,186 research outputs found

    Modelling mitral valvular dynamics–current trend and future directions

    Dysfunction of the mitral valve causes morbidity and premature mortality and remains a leading medical problem worldwide. Computational modelling aims to understand the biomechanics of the human mitral valve and could lead to new treatments and improved prevention and diagnosis of mitral valve diseases. Compared with the aortic valve, the mitral valve has been much less studied owing to its highly complex structure and its strong interaction with the blood flow and the ventricles. However, interest in mitral valve modelling is growing, and the sophistication of the models is increasing alongside advances in computational technology and imaging tools. This review summarises the state of the art in mitral valve modelling, including static and dynamic models, models with fluid-structure interaction, and models coupled to the left ventricle. Challenges and future directions are also discussed.

    Max-Margin Works while Large Margin Fails: Generalization without Uniform Convergence

    A major challenge in modern machine learning is theoretically understanding the generalization properties of overparameterized models. Many existing tools rely on uniform convergence (UC), a property that, when it holds, guarantees that the test loss will be close to the training loss, uniformly over a class of candidate models. Nagarajan and Kolter (2019) show that in certain simple linear and neural-network settings, any uniform convergence bound will be vacuous, leaving open the question of how to prove generalization in settings where UC fails. Our main contribution is proving novel generalization bounds in two such settings, one linear and one non-linear. We study the linear classification setting of Nagarajan and Kolter, and a quadratic ground-truth function learned via a two-layer neural network in the non-linear regime. We prove a new type of margin bound showing that, above a certain signal-to-noise threshold, any near-max-margin classifier will achieve almost no test loss in these two settings. Our results show that being near-max-margin is important: while any model that achieves at least a $(1 - \epsilon)$-fraction of the max margin generalizes well, a classifier achieving half of the max margin may fail terribly. Building on the impossibility results of Nagarajan and Kolter, under slightly stronger assumptions, we show that one-sided UC bounds and classical margin bounds will fail on near-max-margin classifiers. Our analysis provides insight into why memorization can coexist with generalization: we show that in this challenging regime where generalization occurs but UC fails, near-max-margin classifiers simultaneously contain some generalizable components and some overfitting components that memorize the data. The presence of the overfitting components is enough to preclude UC, but the near-extremal margin guarantees that enough generalizable components are present.
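    The margin quantities in this abstract are concrete to compute. Below is a minimal sketch (my own illustration, not the paper's construction): it fits an approximate max-margin linear separator on toy separable data with scikit-learn's LinearSVC and measures what fraction of the max margin a perturbed classifier attains. The data-generation scheme (one signal coordinate plus Gaussian noise) and the perturbation are assumptions made for the example.

    # Hedged sketch: measure geometric margins of linear classifiers on toy
    # separable data. The setup is an illustrative assumption, not the
    # paper's exact construction.
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n, d = 200, 20
    y = rng.choice([-1.0, 1.0], size=n)
    X = np.outer(y, np.eye(d)[0]) + 0.2 * rng.standard_normal((n, d))  # signal + noise

    def margin(w, X, y):
        # Geometric margin: smallest signed distance from any point to the hyperplane.
        return np.min(y * (X @ w)) / np.linalg.norm(w)

    # A large C makes the soft-margin SVM approximate the hard-margin
    # (max-margin) separator when the data are linearly separable.
    svm = LinearSVC(C=1e6, loss="hinge", fit_intercept=False, max_iter=200_000)
    svm.fit(X, y)
    w_max = svm.coef_.ravel()
    gamma_max = margin(w_max, X, y)

    # A perturbed direction typically attains only a fraction of the max margin,
    # mimicking the "(1 - eps)-fraction of the max margin" condition above.
    w_pert = w_max / np.linalg.norm(w_max) + 0.5 * rng.standard_normal(d) / np.sqrt(d)
    print(f"max margin ~ {gamma_max:.3f}; "
          f"perturbed classifier attains {margin(w_pert, X, y) / gamma_max:.2f} of it")

    On this toy data the perturbed direction usually still separates the training set, just with a strictly smaller margin; the paper's point is that in the right signal-to-noise regime, only classifiers near the max-margin end of that spectrum are guaranteed to generalize.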