
    On stability of discretizations of the Helmholtz equation (extended version)

    We review the stability properties of several discretizations of the Helmholtz equation at large wavenumbers. For a model problem in a polygon, a complete k-explicit stability (including k-explicit stability of the continuous problem) and convergence theory for high-order finite element methods is developed. In particular, quasi-optimality is shown for a fixed number of degrees of freedom per wavelength if the mesh size h and the approximation order p are selected such that kh/p is sufficiently small and p = O(log k), and, additionally, appropriate mesh refinement is used near the vertices. We also review the stability properties of two classes of numerical schemes that use piecewise solutions of the homogeneous Helmholtz equation, namely, Least Squares methods and Discontinuous Galerkin (DG) methods. The latter includes the Ultra Weak Variational Formulation.
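    To make the hp-scaling concrete, here is a minimal sketch (in Python) of choosing the order p and mesh size h for a given wavenumber k; the constants c1 and c2 are illustrative assumptions, not values from the paper.

    ```python
    import math

    def select_hp(k, c1=0.5, c2=1.0):
        """Pick order p and mesh size h for wavenumber k following
        the scaling p = O(log k) with kh/p kept small.
        c1, c2 are illustrative placeholders, not paper values."""
        p = max(1, math.ceil(c2 * math.log(k)))  # p grows like log k
        h = c1 * p / k                           # keeps kh/p <= c1
        return h, p

    for k in (10, 100, 1000):
        h, p = select_hp(k)
        print(f"k={k:5d} -> p={p}, h={h:.4f}, kh/p={k * h / p:.2f}")
    ```

    Note how the number of degrees of freedom per wavelength, proportional to p/(kh), stays fixed as k grows while p increases only logarithmically.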

    Tsallis-INF: An Optimal Algorithm for Stochastic and Adversarial Bandits

    We derive an algorithm that achieves the optimal (within constants) pseudo-regret in both adversarial and stochastic multi-armed bandits without prior knowledge of the regime and time horizon. The algorithm is based on online mirror descent (OMD) with Tsallis entropy regularization with power α = 1/2 and reduced-variance loss estimators. More generally, we define an adversarial regime with a self-bounding constraint, which includes the stochastic regime, the stochastically constrained adversarial regime (Wei and Luo), and the stochastic regime with adversarial corruptions (Lykouris et al.) as special cases, and show that the algorithm achieves a logarithmic regret guarantee in this regime and all of its special cases simultaneously with the adversarial regret guarantee. The algorithm also achieves adversarial and stochastic optimality in the utility-based dueling bandit setting. We provide an empirical evaluation of the algorithm demonstrating that it significantly outperforms UCB1 and EXP3 in stochastic environments. We also provide examples of adversarial environments where UCB1 and Thompson Sampling exhibit almost linear regret, whereas our algorithm suffers only logarithmic regret. To the best of our knowledge, this is the first example demonstrating the vulnerability of Thompson Sampling in adversarial environments. Last, but not least, we present a general stochastic analysis and a general adversarial analysis of OMD algorithms with Tsallis entropy regularization for α ∈ [0, 1] and explain why α = 1/2 works best.
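    As a rough sketch of the OMD step with the 1/2-Tsallis regularizer: the weights have the closed form w_i = 4/(η_t(L̂_i − x))², with the normalizer x found by binary search. The Python sketch below uses a plain importance-weighted loss estimator and an assumed schedule η_t = 2/√t; the paper's reduced-variance estimator (recentering losses at 1/2) and exact constants are simplified away.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def tsallis_weights(L, eta):
        """Solve the 1/2-Tsallis OMD step: w_i = 4/(eta*(L_i - x))^2
        with x < min(L) chosen by binary search so sum(w) = 1."""
        K = len(L)
        lo = L.min() - 2.0 * np.sqrt(K) / eta   # here sum(w) <= 1
        hi = L.min() - 2.0 / eta                # here sum(w) >= 1
        for _ in range(100):
            x = 0.5 * (lo + hi)
            s = np.sum(4.0 / (eta * (L - x)) ** 2)
            lo, hi = (x, hi) if s < 1.0 else (lo, x)
        w = 4.0 / (eta * (L - x)) ** 2
        return w / w.sum()

    def tsallis_inf(losses):
        """Play a (T, K) loss matrix with Tsallis-INF-style updates,
        using plain importance-weighted estimates (a simplification)."""
        T, K = losses.shape
        L_hat = np.zeros(K)
        total = 0.0
        for t in range(1, T + 1):
            eta = 2.0 / np.sqrt(t)      # assumed learning-rate schedule
            w = tsallis_weights(L_hat, eta)
            arm = rng.choice(K, p=w)
            total += losses[t - 1, arm]
            L_hat[arm] += losses[t - 1, arm] / w[arm]  # IW estimate
        return total

    # Bernoulli stochastic bandit with mean losses (0.4, 0.5, 0.6)
    losses = (rng.random((5000, 3)) < np.array([0.4, 0.5, 0.6])).astype(float)
    print("cumulative loss:", tsallis_inf(losses))
    ```

    The binary search exploits that the weight sum is monotone in x and that the root is bracketed between min(L̂) − 2√K/η and min(L̂) − 2/η.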

    Linearized Alternating Direction Method with Parallel Splitting and Adaptive Penalty for Separable Convex Programs in Machine Learning

    Many problems in machine learning and other fields can be (re)formulated as linearly constrained separable convex programs. In most cases there are multiple blocks of variables, but the traditional alternating direction method (ADM) and its linearized version (LADM, obtained by linearizing the quadratic penalty term) are designed for the two-block case and cannot be naively generalized to the multi-block case. There is therefore great demand for extending ADM-based methods to the multi-block case. In this paper, we propose LADM with parallel splitting and adaptive penalty (LADMPSAP) to solve multi-block separable convex programs efficiently. When all the component objective functions have bounded subgradients, we obtain convergence results that are stronger than those of ADM and LADM, e.g., allowing the penalty parameter to be unbounded and proving sufficient and necessary conditions for global convergence. We further propose a simple optimality measure and reveal the convergence rate of LADMPSAP in an ergodic sense. For programs with extra convex set constraints, with refined parameter estimation we devise a practical version of LADMPSAP for faster convergence. Finally, we generalize LADMPSAP to handle programs with more difficult objective functions by linearizing part of the objective function as well. LADMPSAP is particularly suitable for sparse representation and low-rank recovery problems because its subproblems have closed-form solutions and the sparsity and low-rankness of the iterates can be preserved during the iterations. It is also highly parallelizable and hence well suited to parallel or distributed computing. Numerical experiments testify to the advantages of LADMPSAP in speed and numerical accuracy. Comment: Preliminary version published at the Asian Conference on Machine Learning 201
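    A minimal Python sketch of the parallel-splitting idea for the special case min Σ_i ‖x_i‖₁ s.t. Σ_i A_i x_i = b: all blocks take a linearized proximal step from the same multiplier in parallel, then the multiplier and penalty are updated. The parameter choices here (η_i slightly above n‖A_i‖², a capped geometric β schedule) follow the general recipe, not the paper's exact adaptive rule.

    ```python
    import numpy as np

    def soft(v, tau):
        """Soft-thresholding: prox of tau * ||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def ladmpsap_l1(A_list, b, iters=500, beta=0.1, beta_max=1e4, rho=1.5):
        """Sketch of LADMPSAP for  min sum_i ||x_i||_1  s.t.  sum_i A_i x_i = b.
        Parallel linearized proximal updates, then multiplier and
        (simplified) adaptive penalty updates."""
        n = len(A_list)
        xs = [np.zeros(A.shape[1]) for A in A_list]
        lam = np.zeros(b.shape[0])
        # eta_i > n * ||A_i||^2 is the standard condition for the
        # linearized parallel step to converge
        etas = [1.01 * n * np.linalg.norm(A, 2) ** 2 for A in A_list]
        for _ in range(iters):
            resid = sum(A @ x for A, x in zip(A_list, xs)) - b
            # all blocks update in parallel from the same multiplier
            xs = [soft(x - A.T @ (lam + beta * resid) / (eta * beta),
                       1.0 / (eta * beta))
                  for A, x, eta in zip(A_list, xs, etas)]
            resid = sum(A @ x for A, x in zip(A_list, xs)) - b
            lam = lam + beta * resid          # multiplier update
            beta = min(beta_max, rho * beta)  # simplified penalty growth
        return xs

    rng = np.random.default_rng(1)
    A1, A2 = rng.standard_normal((20, 50)), rng.standard_normal((20, 50))
    x_true = np.zeros(50); x_true[:3] = 1.0
    b = A1 @ x_true
    x1, x2 = ladmpsap_l1([A1, A2], b)
    print("constraint violation:", np.linalg.norm(A1 @ x1 + A2 @ x2 - b))
    ```

    Because the penalty term is linearized, each subproblem reduces to a soft-thresholding step with a closed-form solution, which is what makes the method attractive for sparse recovery.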