
    Accelerating ADMM for efficient simulation and optimization

    The alternating direction method of multipliers (ADMM) is a popular approach for solving optimization problems that are potentially non-smooth and subject to hard constraints. It has been applied to various computer graphics applications, including physical simulation, geometry processing, and image processing. However, ADMM can take a long time to converge to a solution of high accuracy. Moreover, many computer graphics tasks involve non-convex optimization, and there is often no convergence guarantee for ADMM on such problems, since it was originally designed for convex optimization. In this paper, we propose a method to speed up ADMM using Anderson acceleration, an established technique for accelerating fixed-point iterations. We show that in the general case, ADMM is a fixed-point iteration of the second primal variable and the dual variable, so Anderson acceleration can be applied directly. Additionally, when the problem has a separable target function and satisfies certain conditions, ADMM becomes a fixed-point iteration of only one variable, which further reduces the computational overhead of Anderson acceleration. Moreover, we analyze a particular non-convex problem structure that is common in computer graphics, and prove the convergence of ADMM on such problems under mild assumptions. We apply our acceleration technique to a variety of optimization problems in computer graphics, with notable improvements in convergence speed.
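
    To make the mechanism concrete, here is a minimal, self-contained sketch of type-II Anderson acceleration applied to a generic fixed-point map g. In the paper's setting, g would wrap one ADMM sweep over the stacked second primal and dual variables (or a single variable in the separable case); the window size m and the plain least-squares solve are illustrative simplifications, not the authors' implementation.

        import numpy as np

        def anderson_accelerate(g, u0, m=5, tol=1e-10, max_iter=1000):
            """Accelerate the fixed-point iteration u <- g(u) with window size m."""
            u = u0.copy()
            G, F = [], []                      # histories of g-values and residuals
            for _ in range(max_iter):
                gu = g(u)
                f = gu - u                     # fixed-point residual g(u) - u
                if np.linalg.norm(f) < tol:
                    return gu
                G.append(gu); F.append(f)
                G, F = G[-(m + 1):], F[-(m + 1):]
                if len(F) == 1:
                    u = gu                     # plain Picard step on the first pass
                    continue
                # Type-II AA: minimize ||f - dF @ gamma|| over the residual
                # differences dF[:, i] = F[i+1] - F[i].
                dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
                gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                # Convert gamma to barycentric weights alpha (summing to 1)
                # and mix the stored g-values accordingly.
                alpha = np.zeros(len(F))
                alpha[0] = gamma[0]
                for i in range(1, len(F) - 1):
                    alpha[i] = gamma[i] - gamma[i - 1]
                alpha[-1] = 1.0 - gamma[-1]
                u = sum(a * gi for a, gi in zip(alpha, G))
            return u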

    A short report on preconditioned Anderson acceleration method

    In this report, we present a versatile and efficient preconditioned Anderson acceleration (PAA) method for fixed-point iterations. The proposed framework offers flexibility in balancing convergence rates (linear, super-linear, or quadratic) against the computational costs associated with the Jacobian matrix. Our approach recovers various fixed-point iteration techniques, including Picard, Newton, and quasi-Newton iterations. The PAA method can be interpreted as employing Anderson acceleration (AA) as its own preconditioner, or as an accelerator for quasi-Newton methods when their convergence is insufficient. Adaptable to a wide range of problems with differing degrees of nonlinearity and complexity, the method achieves improved convergence rates and robustness by incorporating suitable preconditioners. We test multiple preconditioning strategies on various problems and investigate a delayed update strategy for preconditioners to further reduce computational cost.
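
    As a hedged illustration of the idea, the sketch below runs Anderson acceleration not on the raw map g but on a preconditioned map; it reuses the anderson_accelerate helper sketched above. The damped Picard preconditioner and all parameter values are illustrative stand-ins; the paper's framework also admits Newton and quasi-Newton steps in this slot.

        import numpy as np

        def paa(g, u0, precondition, **aa_kwargs):
            """Run AA on the preconditioned map u <- P(g, u) instead of g itself."""
            return anderson_accelerate(lambda u: precondition(g, u), u0, **aa_kwargs)

        # Illustrative preconditioner: a damped (relaxed) Picard step. Its fixed
        # points coincide with those of g, so the accelerated iteration solves
        # the same problem.
        def damped_picard(g, u, omega=0.5):
            return u + omega * (g(u) - u)

        # Toy usage on the componentwise contraction g(u) = cos(u):
        u_star = paa(np.cos, np.full(3, 0.5), damped_picard, m=5, tol=1e-12)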

    Solving variational inequalities and cone complementarity problems in nonsmooth dynamics using the alternating direction method of multipliers

    This work presents a numerical method for the solution of variational inequalities arising in nonsmooth flexible multibody problems that involve set-valued forces. For the special case of hard frictional contacts, the method solves a second-order cone complementarity problem. We ground our algorithm on the Alternating Direction Method of Multipliers (ADMM), an efficient and robust optimization method that draws on few computational primitives. To improve computational performance, we reformulate the original ADMM scheme to exploit the sparsity of the constraint Jacobians, and we add optimizations such as warm starting and adaptive step scaling. The proposed method can be used in scenarios that pose major difficulties for other methods available in the literature for complementarity in contact dynamics, namely very stiff finite elements and articulated mechanisms with odd mass ratios. The method is applicable in the fields of robotics, vehicle dynamics, virtual reality, and multiphysics simulation in general.
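
    For orientation, here is a generic ADMM loop showing where warm starting and adaptive step scaling enter. It is a sketch, not the paper's solver, which additionally reformulates the scheme around sparse constraint Jacobians and cone projections; prox_f and prox_g (the two sub-problem solvers) are assumed given, and the penalty update is the standard residual-balancing rule of Boyd et al.

        import numpy as np

        def admm(prox_f, prox_g, A, B, c, x0, z0, y0,
                 rho=1.0, mu=10.0, tau=2.0, tol=1e-8, max_iter=500):
            # Warm starting: pass the previous time step's (x, z, y) as the
            # initial guess instead of zeros to cut the iteration count.
            x, z, y = x0.copy(), z0.copy(), y0.copy()
            for _ in range(max_iter):
                x = prox_f(z, y, rho)   # argmin_x f(x) + (rho/2)||Ax + Bz - c + y/rho||^2
                z_prev = z
                z = prox_g(x, y, rho)   # argmin_z g(z) + (rho/2)||Ax + Bz - c + y/rho||^2
                r = A @ x + B @ z - c   # primal residual
                y = y + rho * r         # (unscaled) dual update
                s = rho * (A.T @ (B @ (z - z_prev)))   # dual residual
                if np.linalg.norm(r) < tol and np.linalg.norm(s) < tol:
                    break
                # Adaptive step scaling: rescale rho to keep the primal and dual
                # residuals balanced (no y-rescaling needed in unscaled form).
                if np.linalg.norm(r) > mu * np.linalg.norm(s):
                    rho *= tau
                elif np.linalg.norm(s) > mu * np.linalg.norm(r):
                    rho /= tau
            return x, z, y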

    DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting

    Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem, for which several past works have proposed convex relaxations as a promising alternative. However, even for reasonably sized neural networks, these relaxations are not tractable, and so must be replaced by even weaker relaxations in practice. In this work, we propose a novel operator splitting method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller sub-problems that often have analytical solutions. The method is modular and scales to problem instances that were previously impossible to solve exactly due to their size. Furthermore, the solver operations are amenable to fast parallelization with GPU acceleration. We demonstrate our method by obtaining tighter bounds on the worst-case performance of large convolutional networks in image classification and reinforcement learning settings.
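
    To give a flavor of the analytical sub-problems the abstract refers to, the sketch below computes the elementwise Euclidean projection onto the graph of ReLU in closed form, vectorized over all neurons. This is an illustration rather than the paper's algorithm: the paper solves a convex relaxation, where the analogous per-neuron projection targets a convex outer approximation of this set, but the closed-form, constant-cost-per-neuron character is the same.

        import numpy as np

        def project_relu_graph(a, b):
            """Elementwise projection of (a, b) onto {(x, y) : y = max(x, 0)}."""
            # Candidate on the inactive branch {(t, 0) : t <= 0}.
            x_off = np.minimum(a, 0.0)
            # Candidate on the active branch {(t, t) : t >= 0}.
            t_on = np.maximum((a + b) / 2.0, 0.0)
            # Keep the closer of the two candidates per coordinate.
            d_off = (a - x_off) ** 2 + b ** 2
            d_on = (a - t_on) ** 2 + (b - t_on) ** 2
            active = d_on < d_off
            return np.where(active, t_on, x_off), np.where(active, t_on, 0.0)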

    Anderson‐accelerated polarization schemes for fast Fourier transform‐based computational homogenization

    Classical solution methods in fast Fourier transform-based computational micromechanics operate on either compatible strain fields or equilibrated stress fields. By contrast, polarization schemes are primal-dual methods whose iterates are neither compatible nor equilibrated. Recently, it was demonstrated that polarization schemes may outperform the classical methods. Unfortunately, their computational power depends critically on a judicious choice of numerical parameters. In this work, we investigate the extension of polarization methods by Anderson acceleration and demonstrate that this combination leads to robust and fast general-purpose solvers for computational micromechanics. We discuss the (theoretically) optimal parameter choice for polarization methods, describe how Anderson acceleration fits into the picture, and exhibit the characteristics of the newly designed methods for problems of industrial scale and interest.
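
    Schematically, and reusing the anderson_accelerate helper sketched earlier, wiring Anderson acceleration into a polarization scheme only requires viewing one polarization update as a black-box fixed-point map. The polarization_step below (one FFT-based sweep: Green operator of the reference medium plus local material law) is a hypothetical stand-in; the only real work shown is flattening the per-voxel tensor field so the AA least-squares problem acts on plain vectors.

        import numpy as np

        def accelerate_polarization(polarization_step, p0, m=5, tol=1e-8, max_iter=500):
            """Anderson-accelerate the polarization fixed-point map p <- S(p)."""
            shape = p0.shape
            # Flatten the (possibly tensor-valued, per-voxel) polarization field
            # for the AA least-squares solve, then restore the original shape.
            step_flat = lambda v: polarization_step(v.reshape(shape)).ravel()
            p = anderson_accelerate(step_flat, p0.ravel(), m=m, tol=tol,
                                    max_iter=max_iter)
            return p.reshape(shape)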