
    Nonconvex Distributed Optimization via LaSalle and Singular Perturbations

    In this letter we address nonconvex distributed consensus optimization, a popular framework for distributed big-data analytics and learning. We consider the Gradient Tracking algorithm and, by resorting to an elegant system-theoretic analysis, we show that agent estimates asymptotically reach consensus on a stationary point. We take advantage of suitable coordinates to write the Gradient Tracking as the interconnection of fast and slow dynamics. To use a singular perturbation analysis, we separately study two auxiliary subsystems, called the boundary layer and reduced systems, respectively. We provide a Lyapunov function for the boundary layer system and use LaSalle-based arguments to show that trajectories of the reduced system converge to the set of stationary points. Finally, a customized version of LaSalle's Invariance Principle for singularly perturbed systems is proved to show the convergence properties of the Gradient Tracking algorithm.
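The standard gradient tracking iteration analyzed in the letter can be sketched numerically. In this minimal example the quadratic costs, the ring mixing weights, and the step size alpha are all illustrative choices, not values from the paper:

```python
import numpy as np

# Gradient tracking on a 4-agent ring minimizing sum_i (x - a_i)^2.
# The costs, weights W, and step size alpha are toy assumptions.
a = np.array([1.0, 2.0, 3.0, 4.0])            # local minimizers
grad = lambda i, x: 2.0 * (x - a[i])          # gradient of f_i(x) = (x - a_i)^2
n = 4
W = np.zeros((n, n))
for i in range(n):                            # doubly stochastic ring weights
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25
alpha = 0.05
x = np.zeros(n)                               # agent estimates
s = np.array([grad(i, x[i]) for i in range(n)])  # trackers init at local gradients
for _ in range(2000):
    x_new = W @ x - alpha * s                 # consensus mixing + descent step
    s = W @ s + np.array([grad(i, x_new[i]) for i in range(n)]) \
              - np.array([grad(i, x[i]) for i in range(n)])  # track avg gradient
    x = x_new
```

Here all estimates reach consensus at the global minimizer, the mean of the local minimizers, while the trackers s converge to the (vanishing) average gradient.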

    Constraint-Coupled Distributed Optimization: A Relaxation and Duality Approach

    In this paper, we consider a general, challenging distributed optimization setup arising in several important network control applications. Agents of a network want to minimize the sum of local cost functions, each one depending on a local variable, subject to local and coupling constraints, with the latter involving all the decision variables. We propose a novel fully distributed algorithm based on a relaxation of the primal problem and an elegant exploration of duality theory. Despite its complex derivation, based on several duality steps, the distributed algorithm has a very simple and intuitive structure. That is, each node finds a primal-dual optimal solution pair of a local relaxed version of the original problem and then updates suitable auxiliary local variables. We prove that agents asymptotically compute their portion of an optimal (feasible) solution of the original problem. This primal recovery property is obtained without any averaging mechanism typically used in dual decomposition methods. To corroborate the theoretical results, we show how the methodology applies to an instance of a distributed model-predictive control scheme in a microgrid control scenario.

    A Deep Learning Approach for Distributed Aggregative Optimization with Users’ Feedback

    We propose a novel distributed data-driven scheme for online aggregative optimization, i.e., the framework in which agents in a network aim to cooperatively minimize the sum of local time-varying costs, each depending on a local decision variable and an aggregation of all of them. We consider a “personalized” setup in which each cost exhibits a term capturing the user’s dissatisfaction and, thus, is unknown. We enhance an existing distributed optimization scheme by endowing it with a learning mechanism based on neural networks that estimate the missing part of the gradient via users’ feedback about the cost. Our algorithm combines two loops with different timescales devoted to performing optimization and learning steps. In turn, the proposed scheme also embeds a distributed consensus mechanism aimed at locally reconstructing the global information that is unavailable due to the presence of the aggregative variable. We prove an upper bound for the dynamic regret related to (i) the initial conditions, (ii) the temporal variations of the functions, and (iii) the learning errors about the unknown cost. Finally, we test our method via numerical simulations.

    Distributed Mixed-Integer Linear Programming via Cut Generation and Constraint Exchange

    Many problems of interest for cyber-physical network systems can be formulated as mixed-integer linear programs in which the constraints are distributed among the agents. In this paper, we propose a distributed algorithmic framework to solve this class of optimization problems in a peer-to-peer network with no coordinator and with limited computation and communication capabilities. At each communication round, agents locally solve a small linear program, generate suitable cutting planes, and communicate a fixed number of active constraints. Within the distributed framework, we first propose an algorithm that, under the assumption of integer-valued optimal cost, guarantees finite-time convergence to an optimal solution. Second, we propose an algorithm for general problems that provides a suboptimal solution up to a given tolerance in a finite number of communication rounds. Both algorithms work under asynchronous, directed, unreliable networks. Finally, through numerical computations, we analyze the algorithm's scalability in terms of the network size. Moreover, for a multi-agent multi-task assignment problem, we show, consistently with the theory, its robustness to packet loss.

    Distributed Primal Decomposition for Large-Scale MILPs

    This paper deals with a distributed Mixed-Integer Linear Programming (MILP) set-up arising in several control applications. Agents of a network aim to minimize the sum of local linear cost functions subject to both individual constraints and a linear coupling constraint involving all the decision variables. A key, challenging feature of the considered set-up is that some components of the decision variables must assume integer values. The addressed MILPs are NP-hard, nonconvex and large-scale. Moreover, several additional challenges arise in a distributed framework due to the coupling constraint, so that feasible solutions with guaranteed suboptimality bounds are of interest. We propose a fully distributed algorithm based on a primal decomposition approach and an appropriate tightening of the coupling constraint. The algorithm is guaranteed to provide feasible solutions in finite time. Moreover, asymptotic and finite-time suboptimality bounds are established for the computed solution. Monte Carlo simulations highlight the extremely low suboptimality bounds achieved by the algorithm.

    Uniform non-convex optimisation via Extremum Seeking

    The paper deals with a well-known extremum seeking scheme by proving uniformity properties with respect to the amplitudes of the dither signal and of the cost function. These properties are then used to show that the scheme guarantees the global minimiser to be semi-globally practically stable despite the presence of local minima. Under the assumption of a globally Lipschitz cost function, it is shown that the scheme, improved through a high-pass filter, makes the global minimiser practically stable with a global domain of attraction.
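The perturb-and-demodulate mechanism behind such schemes can be illustrated with a minimal discrete-time simulation of a classical extremum seeking loop. This is a generic sketch, not the paper's scheme: the cost J, dither amplitude, frequency, gain, and filter constant are all illustrative tuning choices:

```python
import numpy as np

# Classical extremum seeking on J(theta) = (theta - 2)^2.
# A sinusoidal dither perturbs the estimate, a washout (high-pass) filter
# removes the DC component of the measurement, and demodulation by the
# same sinusoid recovers a gradient estimate on average.
J = lambda th: (th - 2.0) ** 2
dt, a, omega, k, h = 0.01, 0.2, 10.0, 0.5, 1.0   # illustrative parameters
theta = 0.0                                       # initial estimate
eta = J(theta)                                    # low-pass (washout) state
for step in range(100000):
    t = step * dt
    y = J(theta + a * np.sin(omega * t))          # perturbed measurement
    eta += dt * h * (y - eta)                     # eta tracks DC; y - eta is high-passed
    grad_est = (y - eta) * np.sin(omega * t)      # demodulated gradient estimate
    theta -= dt * k * grad_est                    # descend along the estimate
```

On average the update behaves like gradient descent with gain k*a/2, so theta settles into a small neighbourhood of the minimiser 2.0, consistent with the practical-stability notion discussed above.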

    ChoiRbot: A ROS 2 Toolbox for Cooperative Robotics

    In this letter, we introduce ChoiRbot, a toolbox for distributed cooperative robotics based on the novel Robot Operating System (ROS) 2. ChoiRbot provides a fully-functional toolset to execute complex distributed multi-robot tasks, either in simulation or experimentally, with a particular focus on networks of heterogeneous robots without a central coordinator. Thanks to its modular structure, ChoiRbot allows for a straightforward implementation of optimization-based distributed control schemes, such as distributed optimal control, model predictive control, and task assignment, in which local computation and communication with neighboring robots are alternated. To this end, the toolbox provides functionalities for the solution of distributed optimization problems. The package can also be used to implement distributed feedback laws that do not need optimization features but do require the exchange of information among robots. The potential of the toolbox is illustrated with simulations and experiments on distributed robotics scenarios with mobile ground robots. The ChoiRbot toolbox is available at https://github.com/OPT4SMART/choirbot

    Frequency-modulated electromagnetic neural stimulation (FREMS) as a treatment for symptomatic diabetic neuropathy: results from a double-blind, randomised, multicentre, long-term, placebo-controlled clinical trial

    AIMS/HYPOTHESIS: The aim was to evaluate the efficacy and safety of transcutaneous frequency-modulated electromagnetic neural stimulation (frequency rhythmic electrical modulation system, FREMS) as a treatment for symptomatic peripheral neuropathy in patients with diabetes mellitus. METHODS: This was a double-blind, randomised, multicentre, parallel-group study of three series, each of ten treatment sessions of FREMS or placebo administered within 3 weeks, 3 months apart, with an overall follow-up of about 51 weeks. The primary endpoint was the change in nerve conduction velocity (NCV) of deep peroneal, tibial and sural nerves. Secondary endpoints included the effects of treatment on pain, tactile, thermal and vibration sensations. Patients eligible to participate were aged 18-75 years with diabetes for ≥ 1 year, HbA(1c) <11.0% (97 mmol/mol), with symptomatic diabetic polyneuropathy at the lower extremities (i.e. abnormal amplitude, latency or NCV of either tibial, deep peroneal or sural nerve, but with an evocable potential and measurable NCV of the sural nerve), a Michigan Diabetes Neuropathy Score ≥ 7 and on a stable dose of medications for diabetic neuropathy in the month prior to enrolment. Data were collected in an outpatient setting. Participants were allocated to the FREMS or placebo arm (1:1 ratio) according to a sequence generated by a computer random number generator, without block or stratification factors. Investigators digitised patients' date of birth and site number into an interactive voice recording system to obtain the assigned treatment. Participants, investigators conducting the trial, and people assessing the outcomes were blinded to group assignment. RESULTS: Patients (n = 110) with symptomatic neuropathy were randomised to FREMS (n = 54) or placebo (n = 56). 
In the intention-to-treat population (50 FREMS, 51 placebo), changes in NCV of the three examined nerves were not different between FREMS and placebo (deep peroneal [means ± SE]: 0.74 ± 0.71 vs 0.06 ± 1.38 m/s; tibial: 2.08 ± 0.84 vs 0.61 ± 0.43 m/s; and sural: 0.80 ± 1.08 vs -0.91 ± 1.13 m/s; FREMS vs placebo, respectively). FREMS induced a significant reduction in day and night pain as measured by a visual analogue scale immediately after each treatment session, although this beneficial effect was no longer measurable 3 months after treatment. Compared with the placebo group, the FREMS group showed a significant improvement in the cold sensation threshold, while differences in the vibration and warm sensation thresholds were non-significant. No relevant side effects were recorded during the study. CONCLUSIONS/INTERPRETATION: FREMS proved to be a safe treatment for symptomatic diabetic neuropathy, with an immediate, although transient, reduction in pain, and no effect on NCV. TRIAL REGISTRATION: ClinicalTrials.gov NCT01628627. FUNDING: The clinical trial was sponsored by Lorenz Biotech (Medolla, Italy), later Lorenz Lifetech (Ozzano dell'Emilia, Italy).

    Convergence rate analysis of a subgradient averaging algorithm for distributed optimisation with different constraint sets

    We consider a multi-agent setting with agents exchanging information over a network to solve a convex constrained optimisation problem in a distributed manner. We analyse a new algorithm based on local subgradient exchange under undirected time-varying communication. First, we prove asymptotic convergence of the iterates to a minimum of the given optimisation problem for time-varying step-sizes of the form c(k) = η/(k + 1), for some η > 0. We then restrict attention to step-size choices c(k) = η/√(k + 1), η > 0, and establish a convergence rate of O(ln(k)/√k) in objective value. Our algorithm extends currently available distributed subgradient/proximal methods by: (i) accounting for different constraint sets at each node, and (ii) enhancing the convergence speed thanks to a subgradient averaging step performed by the agents. A numerical example demonstrates the efficacy of the proposed algorithm.
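A scheme of this flavour, mixing neighbours' estimates and averaged subgradients with a diminishing step size c(k) = η/√(k + 1), then projecting onto each node's own constraint set, can be sketched as follows. The absolute-value costs, interval constraint sets, mixing weights, and η are all illustrative assumptions, not the paper's exact algorithm or setup:

```python
import numpy as np

# Toy distributed subgradient averaging: 3 agents minimize
# sum_i |x - a_i| over the intersection of different local interval sets.
a = np.array([0.0, 1.0, 2.0])                     # f_i(x) = |x - a_i|
boxes = [(-5.0, 5.0), (0.5, 5.0), (-5.0, 1.5)]    # local sets X_i; intersection [0.5, 1.5]
subgrad = lambda i, x: np.sign(x - a[i])          # subgradient of f_i
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])                # doubly stochastic mixing matrix
eta, n = 0.5, 3
x = np.zeros(n)                                   # local estimates
for k in range(20000):
    c = eta / np.sqrt(k + 1)                      # diminishing step c(k) = eta/sqrt(k+1)
    g = np.array([subgrad(i, x[i]) for i in range(n)])
    d = W @ g                                     # subgradient averaging step
    x = W @ x - c * d                             # mix estimates, then descend
    x = np.array([np.clip(x[i], *boxes[i]) for i in range(n)])  # project onto X_i
```

All estimates settle near x* = 1, the minimiser of the sum of the local costs over the intersection of the constraint sets, illustrating how the averaging step lets each node work with only its own set.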

    Enhanced gradient tracking algorithms for distributed quadratic optimization via sparse gain design

    In this paper we propose a new control-oriented design technique to enhance the algorithmic performance of the distributed gradient tracking algorithm. We focus on a scenario in which agents in a network aim to cooperatively minimize the sum of convex, quadratic cost functions depending on a common decision variable. By leveraging a recent system-theoretic reinterpretation of the considered algorithmic framework as a closed-loop linear dynamical system, the proposed approach generalizes the diagonal gain structure associated with existing gradient tracking algorithms. Specifically, we look for closed-loop gain matrices that satisfy the sparsity constraints imposed by the network topology without necessarily being diagonal, as in existing gradient tracking schemes. We propose a novel procedure to compute stabilizing sparse gain matrices by solving a set of nonlinear matrix inequalities, based on the solution of a sequence of approximate linear versions of such inequalities. Numerical simulations are presented showing the enhanced performance of the proposed design compared to existing gradient tracking algorithms.