52,932 research outputs found

    Numerical simulation of the stress-strain state of the dental system

    We present mathematical models, computational algorithms, and software that can be used to predict the results of prosthetic treatment. An issue of particular interest is the biomechanics of the periodontal complex, because any prosthesis carries a risk of overloading the supporting elements. This risk can be avoided by proper load distribution and by predicting the stresses that occur during the use of dentures. We developed a mathematical model of the periodontal complex and its software implementation. The model is based on linear elasticity theory and allows the stress and strain fields in the periodontal ligament and jawbone to be calculated. The input parameters for the developed model can be divided into two groups. The first group describes the mechanical properties of the periodontal ligament, teeth, and jawbone (for example, the elasticity of the periodontal ligament). The second group characterizes the geometric properties of these objects: the size of the teeth, their spatial coordinates, the size of the periodontal ligament, etc. The mechanical properties are nearly the same for all patients, but entering the geometric data is complicated by its individual variability. For this reason, we also develop algorithms and software for processing images obtained with a computed tomography (CT) scanner and for constructing an individual digital model of the patient's tooth-periodontal ligament-jawbone system. Integrating the described models and algorithms makes it possible to carry out biomechanical analysis on a three-dimensional digital model and to select a prosthesis design. Comment: 19 pages, 9 figures
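    The stress-field calculation rests on linear elasticity theory. As a minimal sketch of the underlying constitutive relation (not the paper's patient-specific 3D solver), the snippet below evaluates Hooke's law for an isotropic material; the Young's modulus, Poisson's ratio, and strain values are illustrative placeholders only.

```python
import numpy as np

def isotropic_stress(strain, E, nu):
    """Cauchy stress from small strain via Hooke's law for an isotropic material.

    strain : (3, 3) symmetric small-strain tensor
    E      : Young's modulus (Pa)
    nu     : Poisson's ratio
    """
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))  # first Lame parameter
    mu = E / (2 * (1 + nu))                   # shear modulus
    return lam * np.trace(strain) * np.eye(3) + 2 * mu * strain

# Illustrative numbers only: a soft, ligament-like material under a
# small uniaxial compression.
eps = np.diag([-0.01, 0.0, 0.0])
sigma = isotropic_stress(eps, E=50e6, nu=0.45)
print(sigma)
```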

    Articulation-aware Canonical Surface Mapping

    We tackle the tasks of: 1) predicting a Canonical Surface Mapping (CSM) that indicates the mapping from 2D pixels to corresponding points on a canonical template shape, and 2) inferring the articulation and pose of the template corresponding to the input image. While previous approaches rely on keypoint supervision for learning, we present an approach that can learn without such annotations. Our key insight is that these tasks are geometrically related, and we can obtain supervisory signal via enforcing consistency among the predictions. We present results across a diverse set of animal object categories, showing that our method can learn articulation and CSM prediction from image collections using only foreground mask labels for training. We empirically show that allowing articulation helps learn more accurate CSM prediction, and that enforcing the consistency with predicted CSM is similarly critical for learning meaningful articulation. Comment: To appear at CVPR 2020, project page https://nileshkulkarni.github.io/acsm
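    The supervisory signal comes from geometric consistency between the pixel-to-surface mapping and the predicted articulation and pose. The sketch below is a schematic of that idea, not the authors' exact loss: canonical surface points predicted for a set of pixels are projected back into the image under an assumed camera, and the reprojection error measures consistency. The toy_camera function and all numbers are illustrative assumptions.

```python
import numpy as np

def reprojection_consistency(pixels, csm_points, camera_project):
    """Cycle-consistency check: pixels -> canonical surface points -> back to pixels.

    pixels         : (N, 2) image coordinates mapped by the CSM
    csm_points     : (N, 3) predicted corresponding points on the (articulated) template
    camera_project : function mapping (N, 3) template points to (N, 2) image coordinates
                     under the predicted pose/articulation
    Returns the mean squared reprojection error, which a consistency loss would penalize.
    """
    reprojected = camera_project(csm_points)
    return np.mean(np.sum((reprojected - pixels) ** 2, axis=1))

# Toy weak-perspective camera (assumed for illustration): scale plus 2D translation.
def toy_camera(points_3d, scale=100.0, t=np.array([64.0, 64.0])):
    return scale * points_3d[:, :2] + t

px = np.array([[70.0, 60.0], [50.0, 80.0]])
surf = np.array([[0.05, -0.04, 0.1], [-0.14, 0.16, -0.2]])
print(reprojection_consistency(px, surf, toy_camera))
```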

    Geometric deep learning: going beyond Euclidean data

    Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into networks used to model them. Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to overview different examples of geometric deep learning problems and present available solutions, key difficulties, applications, and future research directions in this nascent field.
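    As a concrete instance of generalizing neural networks to non-Euclidean domains, the sketch below implements one symmetric-normalized graph convolution layer in the spirit of common GCN formulations; the graph, feature dimensions, and weights are illustrative and not tied to any particular method surveyed in the paper.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    A : (n, n) adjacency matrix of the graph
    H : (n, d_in) node features
    W : (d_in, d_out) weight matrix
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Illustrative 4-node graph with 3-dimensional features mapped to 2 dimensions.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
print(gcn_layer(A, H, W))
```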

    A Rigorous Free-form Lens Model of Abell 2744 to Meet the Hubble Frontier Fields Challenge

    Hubble Frontier Fields (HFF) imaging of the most powerful lensing clusters provides access to the most magnified distant galaxies. The challenge is to construct lens models capable of describing these complex massive, merging clusters so that individual lensed systems can be reliably identified and their intrinsic properties accurately derived. We apply the free-form lensing method (WSLAP+) to A2744, providing a model-independent map of the cluster mass, magnification, and geometric distance estimates to multiply-lensed sources. We solve simultaneously for a smooth cluster component on a pixel grid, together with local deflections by the cluster member galaxies. Combining model predictions with photometric redshift measurements, we correct and complete several recently claimed systems and identify 4 new systems, totalling 65 images of 21 systems spanning a redshift range of 1.4 < z < 9.8. The reconstructed mass shows small enhancements in the directions where significant amounts of hot plasma can be seen in X-ray. We compare photometric redshifts with "geometric redshifts", finding a high level of self-consistency. We find excellent agreement between predicted and observed fluxes, with a best-fit slope of 0.999±0.013 and an RMS of ~0.25 mag, demonstrating that our magnification correction of the lensed background galaxies is very reliable. Intriguingly, few multiply-lensed galaxies are detected beyond z~7.0, despite the high magnification and the limiting redshift of z~11.5 permitted by the HFF filters. With the additional HFF clusters we can better examine the plausibility of any pronounced high-z deficit, with potentially important implications for the reionization epoch and the nature of dark matter. Comment: Accepted for publication in ApJ with newly identified lensed images in complete HFF data
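    The flux comparison relies on the standard magnification correction: lensing preserves surface brightness, so the observed flux is the intrinsic flux times the model magnification mu, and the delensed magnitude is the observed magnitude plus 2.5 log10(mu). A minimal illustration with made-up numbers (not the paper's measurements) follows.

```python
import numpy as np

def delensed_magnitude(m_obs, mu):
    """Intrinsic magnitude of a lensed source given its observed magnitude
    and the model magnification mu (flux_obs = mu * flux_intrinsic)."""
    return m_obs + 2.5 * np.log10(mu)

# Illustrative values only: a source observed at 26.0 mag behind a
# magnification of 5 is intrinsically ~27.7 mag.
print(delensed_magnitude(26.0, 5.0))
```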