
    B\"acklund-Darboux Transformations and Discretizations of Super KdV Equation

    For a generalized super KdV equation, three Darboux transformations and the corresponding Bäcklund transformations are constructed. The compatibility of these Darboux transformations leads to three discrete systems and their Lax representations. The reduction of one of the Bäcklund-Darboux transformations and the corresponding discrete system are considered for Kupershmidt's super KdV equation. When all the odd variables vanish, a nonlinear superposition formula is obtained for Levi's Bäcklund transformation for the KdV equation.
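    As context for the last sentence: a nonlinear superposition formula lets one compose two Bäcklund transforms purely algebraically, with no further integration. The LaTeX sketch below shows the classical Wahlquist-Estabrook permutability formula for the potential KdV equation, included only to illustrate the typical shape of such formulas; the constants depend on the chosen normalization, and this is not the formula derived in the paper above.

        \documentclass{article}
        \begin{document}
        % Potential KdV, with u = w_x. From a seed solution w_0, let
        % w_1 and w_2 be its Backlund transforms with parameters k_1, k_2.
        % Permutability gives the double transform w_{12} = w_{21}
        % algebraically (one common normalization convention):
        \[
          w_{12} \;=\; w_0 \;+\; \frac{2\,(k_1 - k_2)}{w_1 - w_2}.
        \]
        \end{document}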

    On the Convergence of Decentralized Gradient Descent

    Consider the consensus problem of minimizing $f(x)=\sum_{i=1}^n f_i(x)$, where each $f_i$ is known only to one individual agent $i$ out of a connected network of $n$ agents. All the agents shall collaboratively solve this problem and obtain the solution, subject to data exchanges restricted to between neighboring agents. Such algorithms avoid the need for a fusion center, offer better network load balance, and improve data privacy. We study the decentralized gradient descent method, in which each agent $i$ updates its variable $x_{(i)}$, a local approximation to the unknown variable $x$, by combining the average of its neighbors' variables with the negative gradient step $-\alpha \nabla f_i(x_{(i)})$. The iteration is $$x_{(i)}(k+1) \gets \sum_{\text{neighbor } j \text{ of } i} w_{ij}\, x_{(j)}(k) - \alpha \nabla f_i(x_{(i)}(k)), \quad \text{for each agent } i,$$ where the averaging coefficients form a symmetric doubly stochastic matrix $W=[w_{ij}] \in \mathbb{R}^{n \times n}$. We analyze the convergence of this iteration and derive its convergence rate, assuming that each $f_i$ is proper, closed, convex, and lower bounded, $\nabla f_i$ is Lipschitz continuous with constant $L_{f_i}$, and the stepsize $\alpha$ is fixed. Provided that $\alpha < O(1/L_h)$, where $L_h=\max_i\{L_{f_i}\}$, the objective error at the averaged solution, $f(\frac{1}{n}\sum_i x_{(i)}(k))-f^*$, reduces at a speed of $O(1/k)$ until it reaches $O(\alpha)$. If the $f_i$ are further (restricted) strongly convex, then both $\frac{1}{n}\sum_i x_{(i)}(k)$ and each $x_{(i)}(k)$ converge to the global minimizer $x^*$ at a linear rate until reaching an $O(\alpha)$-neighborhood of $x^*$. We also develop an iteration for decentralized basis pursuit and establish its linear convergence to an $O(\alpha)$-neighborhood of the true unknown sparse signal.
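    As a concrete illustration of the iteration above, here is a minimal Python sketch of decentralized gradient descent, assuming a toy least-squares objective $f_i(x) = \frac{1}{2}\|A_i x - b_i\|^2$ for each agent and a ring network with Metropolis-style weights; the problem data, network topology, and stepsize choice are illustrative assumptions, not taken from the paper.

        import numpy as np

        n, d = 5, 3                    # agents, dimension of shared variable x
        rng = np.random.default_rng(0)

        # Agent i privately holds f_i(x) = 0.5 * ||A_i x - b_i||^2 (assumed toy objective)
        A = [rng.standard_normal((10, d)) for _ in range(n)]
        b = [rng.standard_normal(10) for _ in range(n)]

        def grad(i, x):
            return A[i].T @ (A[i] @ x - b[i])   # gradient of f_i at x

        # Symmetric doubly stochastic mixing matrix W on a ring network:
        # each agent averages itself and its two neighbors with weight 1/3
        W = np.zeros((n, n))
        for i in range(n):
            for j in ((i - 1) % n, (i + 1) % n):
                W[i, j] = 1.0 / 3.0
            W[i, i] = 1.0 - W[i].sum()

        # Fixed stepsize below 1/L_h, where L_h = max_i L_{f_i}
        # (here each L_{f_i} is the spectral norm of A_i^T A_i)
        L_h = max(np.linalg.norm(Ai.T @ Ai, 2) for Ai in A)
        alpha = 0.5 / L_h

        X = np.zeros((n, d))           # row i is agent i's local copy x_(i)
        for k in range(500):
            # x_(i)(k+1) = sum_{neighbor j of i} w_ij x_(j)(k) - alpha * grad f_i(x_(i)(k))
            grads = np.vstack([grad(i, X[i]) for i in range(n)])
            X = W @ X - alpha * grads

        x_bar = X.mean(axis=0)         # averaged solution (1/n) sum_i x_(i)(k)
        print("consensus error:", np.linalg.norm(X - x_bar))

    Consistent with the analysis in the abstract, with a fixed stepsize the local copies approach the minimizer only up to an $O(\alpha)$-sized residual, so the printed consensus error shrinks as $\alpha$ is reduced.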