1,009 research outputs found

    Efficient computation of partition of unity interpolants through a block-based searching technique

    Full text link
    In this paper we propose a new efficient interpolation tool, extremely suitable for large scattered data sets. The partition of unity method is used and performed by blending Radial Basis Functions (RBFs) as local approximants and using locally supported weight functions. In particular we present a new space-partitioning data structure based on a partition of the underlying generic domain in blocks. This approach allows us to examine only a reduced number of blocks in the search process of the nearest neighbour points, leading to an optimized searching routine. Complexity analysis and numerical experiments in two- and three-dimensional interpolation support our findings. Some applications to geometric modelling are also considered. Moreover, the associated software package, written in Matlab, is discussed here and made available to the scientific community.
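
    The block-based search described in the abstract lends itself to a short illustration. The following is a minimal Python sketch, not the paper's Matlab package: it hashes 2D points into square blocks and restricts a radius search to a block and its eight neighbours. The function names, the choice of block size, and the unit-square data are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's Matlab package): block-based neighbour search
# for partition of unity interpolation. Block size, names and the 2D setting
# are illustrative assumptions.
import numpy as np

def build_blocks(points, block_size):
    """Hash each 2D point into an integer block index -> list of point indices."""
    blocks = {}
    for i, p in enumerate(points):
        key = tuple((p // block_size).astype(int))
        blocks.setdefault(key, []).append(i)
    return blocks

def neighbours_in_radius(points, blocks, block_size, centre, radius):
    """Collect candidates from the centre block and its 8 neighbours only,
    then keep those within `radius` -- the reduced search the abstract refers
    to, instead of scanning all points."""
    cx, cy = (np.asarray(centre) // block_size).astype(int)
    candidates = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            candidates.extend(blocks.get((cx + dx, cy + dy), []))
    candidates = np.array(candidates, dtype=int)
    if candidates.size == 0:
        return candidates
    d = np.linalg.norm(points[candidates] - centre, axis=1)
    return candidates[d <= radius]

# Usage: with block_size equal to the subdomain radius, each local RBF fit
# only sees the points returned for its subdomain centre.
rng = np.random.default_rng(0)
pts = rng.random((10000, 2))
radius = 0.05
blocks = build_blocks(pts, radius)
idx = neighbours_in_radius(pts, blocks, radius, np.array([0.5, 0.5]), radius)
```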

    A sequential linear programming (SLP) approach for uncertainty analysis-based data-driven computational mechanics

    Full text link
    In this article, an efficient sequential linear programming (SLP) algorithm for uncertainty analysis-based data-driven computational mechanics (UA-DDCM) is presented. By assuming that the uncertain constitutive relationship embedded behind the prescribed data set can be characterized through a convex combination of the local data points, the upper and lower bounds of structural responses pertaining to the given data set, which are more valuable for making decisions in engineering design, can be found very efficiently by solving a sequence of linear programming problems. Numerical examples demonstrate the effectiveness of the proposed approach on sparse data sets and its robustness with respect to the existence of noise and outliers in the data set.
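
    As a rough illustration of the convex-combination idea behind UA-DDCM, the sketch below bounds a linear response functional over the convex hull of a small data set with a single linear program. It is not the paper's algorithm, which solves a sequence of such LPs inside a mechanics solver; the toy data, the response vector `c`, and the use of `scipy.optimize.linprog` are illustrative assumptions.

```python
# Minimal sketch (not the paper's UA-DDCM code): one linear-programming step
# bounding a linear response over the convex hull of the data points.
import numpy as np
from scipy.optimize import linprog

data = np.array([[0.0, 1.0],
                 [1.0, 2.1],
                 [2.0, 2.9],
                 [3.0, 4.2]])          # local data points (e.g. strain, stress)
c = np.array([0.0, 1.0])               # linear response: here, the stress value

# Decision variables: convex-combination weights lambda_i >= 0, sum(lambda) = 1.
# The realized state is data.T @ lambda, so the response equals (data @ c) . lambda.
obj = data @ c
A_eq = np.ones((1, len(data)))
b_eq = np.array([1.0])
bounds = [(0.0, None)] * len(data)

lower = linprog(obj,  A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
upper = -linprog(-obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
print(f"response bounds over the data hull: [{lower:.3f}, {upper:.3f}]")
```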

    Partition of Unity Interpolation on Multivariate Convex Domains

    Full text link
    In this paper we present a new algorithm for multivariate interpolation of scattered data sets lying in convex domains $\Omega \subseteq \mathbb{R}^N$, for any $N \geq 2$. To organize the points in a multidimensional space, we build a $kd$-tree space-partitioning data structure, which is used to efficiently apply a partition of unity interpolant. This global scheme is combined with local radial basis function approximants and compactly supported weight functions. A detailed description of the algorithm for convex domains and a complexity analysis of the computational procedures are also considered. Several numerical experiments show the performance of the interpolation algorithm on various sets of Halton data points contained in $\Omega$, where $\Omega$ can be any convex domain, like a 2D polygon or a 3D polyhedron.
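
    To make the ingredients concrete, here is a minimal Python sketch of a kd-tree driven partition of unity interpolant with Gaussian local RBFs and Wendland C2 weights. It is not the authors' algorithm or code; the subdomain radius, the shape parameter, and the set of subdomain centres are assumptions, and a covering of the domain by the subdomain balls is taken for granted, as any partition of unity scheme requires.

```python
# Minimal sketch (not the paper's algorithm): kd-tree driven partition of unity
# interpolation with Gaussian local RBFs and Wendland C2 weight functions.
import numpy as np
from scipy.spatial import cKDTree

def wendland_c2(r):
    r = np.clip(r, 0.0, 1.0)
    return (1.0 - r) ** 4 * (4.0 * r + 1.0)

def pu_interpolate(x, f, centres, radius, x_eval, eps=3.0):
    tree = cKDTree(x)                     # kd-tree over the data sites
    ev_tree = cKDTree(x_eval)             # kd-tree over the evaluation points
    num = np.zeros(len(x_eval))
    den = np.zeros(len(x_eval))
    for c in centres:
        idx = tree.query_ball_point(c, radius)        # data in this subdomain
        if len(idx) == 0:
            continue
        xi, fi = x[idx], f[idx]
        A = np.exp(-(eps * np.linalg.norm(xi[:, None] - xi[None, :], axis=-1)) ** 2)
        coef = np.linalg.solve(A + 1e-10 * np.eye(len(idx)), fi)   # local RBF fit
        eidx = ev_tree.query_ball_point(c, radius)    # evaluation points covered
        if len(eidx) == 0:
            continue
        xe = x_eval[eidx]
        B = np.exp(-(eps * np.linalg.norm(xe[:, None] - xi[None, :], axis=-1)) ** 2)
        w = wendland_c2(np.linalg.norm(xe - c, axis=1) / radius)
        num[eidx] += w * (B @ coef)       # blend local fits with PU weights
        den[eidx] += w
    return num / np.maximum(den, 1e-14)
```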

    SVM via Saddle Point Optimization: New Bounds and Distributed Algorithms

    Get PDF
    We study two important SVM variants: hard-margin SVM (for linearly separable cases) and $\nu$-SVM (for linearly non-separable cases). We propose new algorithms from the perspective of saddle point optimization. Our algorithms achieve $(1-\epsilon)$-approximations with running time $\tilde{O}(nd + n\sqrt{d/\epsilon})$ for both variants, where $n$ is the number of points and $d$ is the dimensionality. To the best of our knowledge, the current best algorithm for $\nu$-SVM is based on a quadratic programming approach, which requires $\Omega(n^2 d)$ time in the worst case~\cite{joachims1998making,platt199912}. In this paper, we provide the first nearly linear time algorithm for $\nu$-SVM. The current best algorithm for hard-margin SVM, the Gilbert algorithm~\cite{gartner2009coresets}, requires $O(nd/\epsilon)$ time. Our algorithm improves the running time by a factor of $\sqrt{d}/\sqrt{\epsilon}$. Moreover, our algorithms can be implemented naturally in distributed settings. We prove that our algorithms require $\tilde{O}(k(d + \sqrt{d/\epsilon}))$ communication cost, where $k$ is the number of clients, which almost matches the theoretical lower bound. Numerical experiments support our theory and show that our algorithms converge faster on high dimensional, large and dense data sets, as compared to previous methods.
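
    The saddle-point view can be illustrated on the hard-margin case: the maximum-margin direction solves $\max_{\|w\|\le 1}\min_{p\in\Delta_n}\sum_i p_i\, y_i\langle w, x_i\rangle$. The sketch below runs a naive projected gradient-ascent / multiplicative-weights loop on this formulation; it is not the paper's algorithm and carries none of its guarantees, and the step size, iteration count, and toy data are assumptions.

```python
# Minimal sketch (not the paper's algorithm): hard-margin SVM as a saddle-point
# problem, solved with naive gradient ascent on w and multiplicative weights on p.
import numpy as np

def saddle_point_svm(X, y, iters=2000, eta=0.1):
    n, d = X.shape
    w = np.zeros(d)
    p = np.full(n, 1.0 / n)
    w_avg = np.zeros(d)
    for _ in range(iters):
        margins = y * (X @ w)             # per-point signed margins
        # multiplicative-weights step on p: shift mass to worst-margin points
        p *= np.exp(-eta * margins)
        p /= p.sum()
        # gradient ascent step on w, projected back onto the unit ball
        w += eta * (X * (p * y)[:, None]).sum(axis=0)
        norm = np.linalg.norm(w)
        if norm > 1.0:
            w /= norm
        w_avg += w
    return w_avg / iters                  # averaged iterate ~ max-margin direction

# Toy separable data
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(2, 0.5, (50, 2)), rng.normal(-2, 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])
w = saddle_point_svm(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```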

    A Convex Approach to Consensus on SO(n)

    Full text link
    This paper introduces several new algorithms for consensus over the special orthogonal group. By relying on a convex relaxation of the space of rotation matrices, consensus over rotation elements is reduced to solving a convex problem with a unique global solution. The consensus protocol is then implemented as a distributed optimization using (i) dual decomposition, and (ii) both semi and fully distributed variants of the alternating direction method of multipliers technique -- all with strong convergence guarantees. The convex relaxation is shown to be exact at all iterations of the dual decomposition based method, and exact once consensus is reached in the case of the alternating direction method of multipliers. Further, analytic and/or efficient solutions are provided for each iteration of these distributed computation schemes, allowing consensus to be reached without any online optimization. Examples in satellite attitude alignment with up to 100 agents, an estimation problem from computer vision, and a rotation averaging problem on $SO(6)$ validate the approach.
    Comment: Accepted to the 52nd Annual Allerton Conference on Communication, Control, and Computing.
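
    The convex-relaxation idea can be sketched in its simplest centralized form: average the agents' rotation matrices, which lands in the convex hull of SO(n), and project the average back onto SO(n) with an SVD. The Python sketch below does exactly that; it is not the paper's distributed dual-decomposition/ADMM scheme, and the agent count and noise model are assumptions for illustration.

```python
# Minimal sketch (not the paper's distributed ADMM / dual-decomposition scheme):
# centralized "average and project" consensus on SO(n).
import numpy as np

def project_to_SOn(M):
    """Nearest rotation to M in Frobenius norm: SVD with a determinant fix."""
    U, _, Vt = np.linalg.svd(M)
    D = np.eye(M.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(U @ Vt))
    return U @ D @ Vt

def consensus_rotation(rotations):
    """Centralized chordal mean: average in the convex hull, project to SO(n)."""
    return project_to_SOn(np.mean(rotations, axis=0))

# Toy example on SO(3): agents hold noisy copies of a common attitude.
rng = np.random.default_rng(3)
R_true = project_to_SOn(rng.normal(size=(3, 3)))
agents = [project_to_SOn(R_true + 0.1 * rng.normal(size=(3, 3))) for _ in range(20)]
R_hat = consensus_rotation(agents)
print("consensus error (Frobenius):", np.linalg.norm(R_hat - R_true))
```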