
    Optimization of mesh hierarchies in Multilevel Monte Carlo samplers

    We perform a general optimization of the parameters in the Multilevel Monte Carlo (MLMC) discretization hierarchy based on uniform discretization methods with general approximation orders and computational costs. We optimize hierarchies with geometric and non-geometric sequences of mesh sizes and show that geometric hierarchies, when optimized, are nearly optimal and have the same asymptotic computational complexity as non-geometric optimal hierarchies. We discuss how enforcing constraints on the parameters of MLMC hierarchies affects their optimality. These constraints include an upper and a lower bound on the mesh size, as well as requiring that the number of samples and the number of discretization elements be integers. We also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. To provide numerical grounds for our theoretical results, we apply these optimized hierarchies together with the Continuation MLMC Algorithm. The first example considers a three-dimensional elliptic partial differential equation with random inputs. Its space discretization is based on continuous piecewise trilinear finite elements, and the corresponding linear system is solved by either a direct or an iterative solver. The second example considers a one-dimensional Itô stochastic differential equation discretized by a Milstein scheme.
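To make the hierarchy-optimization idea concrete, the following is a minimal sketch of the classical per-level sample allocation for a geometric MLMC hierarchy. It assumes a simple power-law model with variance V_l ~ h_l^(2s) and cost C_l ~ h_l^(-gamma) (model constants set to 1; the exponents, the geometric factor `beta`, and the function name are illustrative assumptions, not the paper's actual parameterization), and applies the standard N_l ∝ sqrt(V_l / C_l) rule scaled so the statistical error meets the tolerance `eps`.

```python
import math


def mlmc_allocation(eps, L, h0=0.5, beta=2.0, s=1.0, gamma=3.0):
    """Optimal per-level sample counts for a geometric MLMC hierarchy.

    Hypothetical model: mesh sizes h_l = h0 * beta**(-l), level variances
    V_l = h_l**(2*s), and per-sample costs C_l = h_l**(-gamma).  The
    Lagrange-multiplier solution of minimizing total cost subject to
    sum(V_l / N_l) <= eps**2 gives N_l proportional to sqrt(V_l / C_l).
    """
    h = [h0 * beta ** (-l) for l in range(L + 1)]
    V = [hl ** (2 * s) for hl in h]            # modeled level variances
    C = [hl ** (-gamma) for hl in h]           # modeled per-sample costs
    lam = sum(math.sqrt(v * c) for v, c in zip(V, C)) / eps ** 2
    return [math.ceil(lam * math.sqrt(v / c)) for v, c in zip(V, C)]
```

Because the ceiling only rounds sample counts up, the resulting allocation always satisfies the statistical-error budget sum(V_l / N_l) <= eps², and the counts decrease geometrically with the level, which is the qualitative behavior the optimized hierarchies in the abstract exhibit.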

    Curvature-enhanced Neural Subdivision

    Subdivision is an important and widely used technique for obtaining dense meshes from coarse control (triangular) meshes for modelling and animation purposes. Most subdivision algorithms use engineered features (subdivision rules). Recently, neural subdivision successfully applied machine learning to the subdivision of a triangular mesh. It uses a simple neural network to learn an optimal vertex positioning during a subdivision step. We propose an extension to the neural subdivision algorithm that introduces explicit curvature information into the network. This makes a larger amount of relevant information accessible, which allows the network to yield better results. We demonstrate that this modification yields significant improvement over the original algorithm, in terms of both Hausdorff distance and mean squared error.
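The core idea, learning where to place each new vertex during a subdivision step from local features that include curvature, can be sketched as follows. This is a toy stand-in, not the paper's trained architecture: a hypothetical two-layer MLP maps the edge midpoint plus one explicit curvature value to a displacement of the new vertex, and all weight shapes and the function name are assumptions for illustration.

```python
import numpy as np


def subdivide_edge(v0, v1, curvature, weights):
    """Place one new subdivision vertex on the edge (v0, v1).

    feats = [midpoint xyz, curvature estimate]; a small ReLU MLP
    (weights W1, b1, W2, b2) predicts an offset from the plain midpoint,
    mirroring the idea of feeding explicit curvature into the network.
    """
    mid = 0.5 * (v0 + v1)
    feats = np.concatenate([mid, [curvature]])   # 3 coords + 1 curvature
    W1, b1, W2, b2 = weights
    h = np.maximum(feats @ W1 + b1, 0.0)         # ReLU hidden layer
    return mid + h @ W2 + b2                     # midpoint + learned offset
```

With all-zero weights the network degenerates to plain midpoint subdivision, which makes the role of the learned displacement easy to see.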

    Multiscale computational homogenization: review and proposal of a new enhanced-first-order method

    This is a copy of the author's final draft of an article published in the Archives of Computational Methods in Engineering; the final publication is available at Springer via http://dx.doi.org/10.1007/s11831-016-9205-0
    The continuous increase of computational capacity has encouraged the extensive use of multiscale techniques to simulate material behaviour in several fields of knowledge. In solid mechanics, the multiscale approaches which consider the macro-scale deformation gradient to obtain the homogenized material behaviour from the micro-scale are called first-order computational homogenization. Following this idea, second-order FE2 methods incorporate higher-order gradients to improve the simulation accuracy. However, to capture the full advantages of this higher-order framework, the classical boundary value problem (BVP) at the macro-scale must be upgraded to a higher-order formulation, which complicates its numerical solution. With the purpose of obtaining the best of both methods, i.e. first-order and second-order, an enhanced-first-order computational homogenization is presented in this work. The proposed approach preserves a classical BVP at the macro-scale level while taking the higher-order gradient of the macro-scale into account in the micro-scale solution. The developed numerical examples show how the proposed method obtains the expected stress distribution at the micro-scale for states of structural bending loads. Nevertheless, the macro-scale results achieved are the same as those obtained with a first-order framework, because both approaches share the same macro-scale BVP.
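The first-order notion of passing a macro-scale quantity down to the micro-scale and averaging the response back up can be illustrated with a deliberately crude sketch. This is not the paper's enhanced method: it uses the Taylor (uniform-strain) assumption, in which every micro cell is subjected to the macro strain directly, and all names, the Voigt-vector strain representation, and the per-cell stiffness data are illustrative assumptions.

```python
import numpy as np


def taylor_homogenized_stress(eps_macro, C_cells, vol_cells):
    """Volume-averaged macro stress under the Taylor assumption.

    eps_macro : (n,) macro strain in Voigt notation, imposed uniformly
                on every micro cell (the crudest first-order transfer).
    C_cells   : (k, n, n) per-cell stiffness matrices of the micro-structure.
    vol_cells : (k,) cell volumes used as averaging weights.
    Returns the volume average of the per-cell stress responses.
    """
    sig = np.einsum('kij,j->ki', C_cells, eps_macro)     # per-cell stress
    return np.einsum('k,ki->i', vol_cells, sig) / vol_cells.sum()
```

A proper first-order scheme would instead solve a micro-scale BVP with the macro deformation gradient entering through the boundary conditions, and the enhanced method in the article additionally injects the higher-order macro gradient into that micro-scale solution; the averaging step back to the macro-scale, however, has the same volume-average structure as above.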