11 research outputs found

    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    LIPIcs, Volume 274, ESA 2023, Complete Volume


    Error Estimation and Adaptive Refinement of Finite Element Thin Plate Spline

    The thin plate spline smoother is a data fitting and smoothing technique that captures important patterns of potentially noisy data. However, it is computationally expensive for large data sets. The finite element thin plate spline smoother (TPSFEM) combines the thin plate spline smoother and finite element surface fitting to efficiently interpolate large data sets. When the TPSFEM uses uniform finite element grids, it may require a fine grid to achieve the desired accuracy. Adaptive refinement uses error indicators to identify sensitive regions and adapts the precision of the solution dynamically, which reduces the computational cost needed to achieve the required accuracy. Traditional error indicators were developed for the finite element method to approximate partial differential equations and may not be applicable to the TPSFEM. We examined techniques that may indicate errors for the TPSFEM and adapted four traditional error indicators that use different information to produce efficient adaptive grids. The iterative adaptive refinement process has also been adjusted to handle additional complexities caused by the TPSFEM. The four error indicators presented in this thesis are the auxiliary problem error indicator, recovery-based error indicator, norm-based error indicator and residual-based error indicator. The auxiliary problem error indicator approximates the error by solving auxiliary problems to evaluate approximation quality. The recovery-based error indicator calculates the error by post-processing discontinuous gradients of the TPSFEM. The norm-based error indicator uses an error bound on the interpolation error to indicate large errors. The residual-based error indicator computes interior element residuals and jumps of gradients across elements to estimate the energy norm of the error.
Numerical experiments were conducted to evaluate the error indicators' performance in producing efficient adaptive grids, measured by the error versus the number of nodes in the grid. A set of one- and two-dimensional model problems with various features was chosen to examine the effectiveness of the error indicators. Unlike those of the finite element method, error indicators for the TPSFEM may also be affected by noise, data distribution patterns, data sizes and boundary conditions, which are assessed in the experiments. It is found that adaptive grids are significantly more efficient than uniform grids for two-dimensional model problems with difficulties like peaks and singularities. While the TPSFEM may not recover the original solution in the presence of noise or scarce data, error indicators still produce more efficient grids. We also learned that the difference is less obvious when the data has mostly smooth or oscillatory surfaces. Some error indicators that use data may be affected by data distribution patterns and boundary conditions, but the others are robust and produce stable results. Our error indicators also successfully identify sensitive regions for one-dimensional data sets. Lastly, when errors of the TPSFEM cannot be further reduced due to factors like noise, new stopping criteria terminate the iterative process aptly.
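The jump-in-gradient idea behind the recovery- and residual-based indicators can be sketched in a few lines. The following one-dimensional toy (our own illustration, not the thesis's TPSFEM code; the test function and all parameter values are invented) treats the jump in the piecewise-linear gradient at each interior node as the error indicator and bisects the worst elements, so nodes cluster around sharp features such as a peak:

```python
import numpy as np

def refine_adaptively(f, a, b, n0=5, max_nodes=40, frac=0.3):
    """Illustrative 1-D adaptive refinement driven by a jump-in-gradient
    error indicator (a stand-in for the indicators in the abstract)."""
    x = np.linspace(a, b, n0)
    while len(x) < max_nodes:
        y = f(x)
        slopes = np.diff(y) / np.diff(x)     # gradient on each element
        jumps = np.abs(np.diff(slopes))      # gradient jump at interior nodes
        # indicator per element: jumps at its endpoints, scaled by element size
        ind = np.zeros(len(x) - 1)
        ind[:-1] += jumps
        ind[1:] += jumps
        ind *= np.diff(x)
        # refine the worst fraction of elements by bisection
        k = max(1, int(frac * len(ind)))
        worst = np.argsort(ind)[-k:]
        mids = 0.5 * (x[worst] + x[worst + 1])
        x = np.sort(np.concatenate([x, mids]))
    return x

# data with a sharp peak at 0: refinement should concentrate there
nodes = refine_adaptively(lambda t: 1.0 / (1.0 + 50.0 * t**2), -1.0, 1.0)
```

In keeping with the abstract's findings, most of the budget of nodes ends up near the peak, whereas a uniform grid would spend nodes evenly across the smooth regions as well.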

    36th International Symposium on Theoretical Aspects of Computer Science: STACS 2019, March 13-16, 2019, Berlin, Germany


    Studying the rate of convergence of gradient optimisation algorithms via the theory of optimal experimental design

    The most common class of methods for solving quadratic optimisation problems is the class of gradient algorithms, the most famous of which is the Steepest Descent algorithm. The development of a particular gradient algorithm, the Barzilai-Borwein algorithm, has sparked a great deal of research in the area in recent years, and many algorithms now exist with faster rates of convergence than that of the Steepest Descent algorithm. The technology to effectively analyse and compare the asymptotic rates of convergence of gradient algorithms is, however, limited, and so it is somewhat unclear from the literature which algorithms possess the faster rates of convergence. In this thesis, methodology is developed to enable better analysis of the asymptotic rates of convergence of gradient algorithms applied to quadratic optimisation problems. This methodology stems from a link with the theory of optimal experimental design. It is established that gradient algorithms can be related to algorithms for constructing optimal experimental designs for linear regression models. Furthermore, the asymptotic rates of convergence of these gradient algorithms can be expressed through the asymptotic behaviour of multiplicative algorithms for constructing optimal experimental designs. The described connection to optimal experimental design has also been used to inform the creation of several new gradient algorithms which would not otherwise have been thought of intuitively. The asymptotic rates of convergence of these algorithms are studied extensively and insight is given as to how some gradient algorithms are able to converge faster than others. It is demonstrated that the worst rates are obtained when the corresponding multiplicative procedure for updating the designs converges to the optimal design.
Simulations reveal that the asymptotic rates of convergence of some of these new algorithms compare favourably with those of existing gradient-type algorithms such as the Barzilai-Borwein algorithm.
EThOS - Electronic Theses Online Service (United Kingdom)
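For reference, the Barzilai-Borwein method mentioned above can be sketched for a quadratic objective as follows. This is a textbook formulation, not the thesis's experimental code; the test matrix, tolerances and iteration budget are our own invented choices:

```python
import numpy as np

def barzilai_borwein(A, b, x0, iters=100, tol=1e-10):
    """Minimise f(x) = 0.5 x'Ax - b'x for symmetric positive definite A,
    using the Barzilai-Borwein step size in place of the exact
    steepest-descent line search."""
    x = x0.astype(float)
    g = A @ x - b                        # gradient of the quadratic
    step = 1.0 / np.linalg.norm(A, 2)    # safe fixed size for the first step
    for _ in range(iters):
        x_new = x - step * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        if np.linalg.norm(g_new) < tol:
            return x_new
        # BB1 step size: (s's)/(s'y) approximates the inverse curvature
        step = (s @ s) / (s @ y)
        x, g = x_new, g_new
    return x

A = np.diag([1.0, 10.0, 100.0])          # ill-conditioned quadratic
b = np.array([1.0, 1.0, 1.0])
x = barzilai_borwein(A, b, np.zeros(3))  # minimiser solves Ax = b
```

The single scalar step recycled from the previous iterate is what gives the method its nonmonotone but typically much faster behaviour than Steepest Descent on ill-conditioned quadratics, which is the phenomenon the thesis analyses via optimal experimental design.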