
    Distributed stochastic optimization via matrix exponential learning

    In this paper, we investigate a distributed learning scheme for a broad class of stochastic optimization problems and games that arise in signal processing and wireless communications. The proposed algorithm relies on the method of matrix exponential learning (MXL) and requires only locally computable gradient observations that may be imperfect and/or obsolete. To analyze it, we introduce the notion of a stable Nash equilibrium and show that the algorithm converges globally to such equilibria, or locally when an equilibrium is only locally stable. We also derive an explicit linear bound on the algorithm's convergence speed, which remains valid under measurement errors and uncertainty of arbitrarily high variance. To validate our theoretical analysis, we test the algorithm in realistic multi-carrier/multiple-antenna wireless scenarios where several users seek to maximize their energy efficiency. Our results show that learning allows users to attain a net increase of 100% to 500% in energy efficiency, even under very high uncertainty.
    Comment: 31 pages, 3 figures
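    The MXL update itself is compact: each user aggregates its (possibly noisy) gradient matrices into a score matrix and maps that score back to the feasible set through a trace-normalized matrix exponential. Below is a minimal single-user sketch assuming the feasible set is the unit-trace positive-semidefinite spectrahedron; the gradient oracle `grad`, the step-size schedule, and the problem dimension are illustrative placeholders, not the paper's setup.

```python
# Minimal sketch of matrix exponential learning (MXL), assuming the feasible
# set is {X : X >= 0, tr(X) = 1}; `grad` stands in for a noisy gradient oracle.
import numpy as np
from scipy.linalg import expm

def mxl(grad, dim, steps=100, gamma0=1.0):
    """Run MXL with a (possibly imperfect/obsolete) gradient oracle `grad`."""
    Y = np.zeros((dim, dim))          # aggregated gradient scores
    X = np.eye(dim) / dim             # uniform (maximum-entropy) starting point
    for n in range(1, steps + 1):
        V = grad(X)                   # imperfect gradient observation
        Y += (gamma0 / n) * V         # diminishing step size
        E = expm(Y)                   # exponential map back toward the set
        X = E / np.trace(E)           # normalize to unit trace
    return X
```

    For instance, `mxl(lambda X: np.eye(4) / 4 - X, dim=4)` drives the iterate toward the uniform matrix; in the paper's setting the oracle would instead be each user's local energy-efficiency gradient.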

    Parameters identification of unknown delayed genetic regulatory networks by a switching particle swarm optimization algorithm

    This paper presents a novel particle swarm optimization (PSO) algorithm based on Markov chains and a competitive penalized method, developed to solve global optimization problems with application to identifying the unknown parameters of a class of genetic regulatory networks (GRNs). Using an evolutionary factor, a new switching PSO (SPSO) algorithm is first proposed and analyzed, in which the velocity updating equation jumps from one mode to another according to a Markov chain, with acceleration coefficients that depend on the current mode. Furthermore, a leader competitive penalized multi-learning approach (LCPMLA) is introduced to improve the global search ability and refine the convergent solutions; the LCPMLA automatically chooses a search strategy via a learning and penalizing mechanism. The presented SPSO algorithm is compared with several well-known PSO algorithms in the experiments and is shown to achieve faster local convergence, higher accuracy, and greater reliability, striking a better balance between global and local search. Finally, we use the SPSO algorithm to identify not only the unknown parameters but also the coupling topology and time delays of a class of GRNs.

    This research was partially supported by the National Natural Science Foundation of PR China (Grant No. 60874113), the Research Fund for the Doctoral Program of Higher Education (Grant No. 200802550007), the Key Creative Project of Shanghai Education Community (Grant No. 09ZZ66), the Key Foundation Project of Shanghai (Grant No. 09JC1400700), the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant No. GR/S27658/01, the International Science and Technology Cooperation Project of China under Grant No. 2009DFA32050, an International Joint Project sponsored by the Royal Society of the UK, and the Alexander von Humboldt Foundation of Germany.
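    As a rough illustration of the mode-switching idea described in the abstract above, the sketch below draws the velocity-update mode from a Markov chain and uses mode-dependent acceleration coefficients. The transition matrix, coefficient pairs, and inertia weight are hypothetical stand-ins, not the paper's tuned values, and the LCPMLA refinement is omitted.

```python
# Hypothetical sketch of one switching-PSO velocity/position step: the update
# mode follows a two-state Markov chain, and (c1, c2) depend on the mode.
import numpy as np

P = np.array([[0.8, 0.2],                  # mode transition probabilities
              [0.3, 0.7]])
coeffs = [(2.0, 1.0), (1.0, 2.0)]          # (c1, c2) per mode: explore vs. exploit

def step(pos, vel, pbest, gbest, mode, w=0.7, rng=np.random.default_rng()):
    mode = rng.choice(len(P), p=P[mode])   # jump modes via the Markov chain
    c1, c2 = coeffs[mode]                  # mode-dependent acceleration
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel, mode
```

    In the paper the switching is additionally driven by an evolutionary factor computed from the swarm's state; here the chain is fixed purely for illustration.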

    On the Global Linear Convergence of the ADMM with Multi-Block Variables

    The alternating direction method of multipliers (ADMM) has been widely used for solving structured convex optimization problems. In particular, the ADMM can solve convex programs that minimize the sum of N convex functions with N-block variables linked by linear constraints. While the convergence of the ADMM for N = 2 was well established in the literature, it remained an open problem for a long time whether the ADMM for N ≥ 3 is still convergent. Recently, it was shown in [3] that without further conditions the ADMM for N ≥ 3 may actually fail to converge. In this paper, we show that under some easily verifiable and reasonable conditions the global linear convergence of the ADMM for N ≥ 3 can still be assured. This is important because the ADMM is a popular method for solving large-scale multi-block optimization models and is known to perform very well in practice even when N ≥ 3; our study aims to offer an explanation for this phenomenon.
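    For concreteness, here is a minimal sketch of the Gauss-Seidel sweep in multi-block ADMM on a toy problem chosen so each block update has a closed form: minimize sum_i 0.5 ||x_i - c_i||^2 subject to sum_i A_i x_i = b. The quadratic objectives, penalty rho, and iteration count are our assumptions for illustration; the paper's conditions concern when such sweeps converge linearly for N ≥ 3.

```python
# Sketch of N-block ADMM for: min sum_i 0.5*||x_i - c_i||^2  s.t. sum_i A_i x_i = b.
# Each block update is a small linear solve; A, b, c are synthetic inputs.
import numpy as np

def multiblock_admm(A, c, b, rho=1.0, iters=200):
    N = len(A)
    x = [np.zeros(Ai.shape[1]) for Ai in A]
    u = np.zeros(b.shape)                   # scaled dual variable
    for _ in range(iters):
        for i in range(N):                  # Gauss-Seidel: update blocks in turn
            r = sum(A[j] @ x[j] for j in range(N) if j != i) - b
            lhs = np.eye(A[i].shape[1]) + rho * A[i].T @ A[i]
            x[i] = np.linalg.solve(lhs, c[i] - rho * A[i].T @ (r + u))
        u += sum(A[i] @ x[i] for i in range(N)) - b   # dual update
    return x
```

    Each inner solve minimizes 0.5 ||x_i - c_i||^2 + (rho/2) ||A_i x_i + r + u||^2 with the other blocks held at their most recent values, which is exactly the sequential-update pattern whose convergence for N ≥ 3 the paper analyzes.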

    Chaotic Quantum Double Delta Swarm Algorithm using Chebyshev Maps: Theoretical Foundations, Performance Analyses and Convergence Issues

    The Quantum Double Delta Swarm (QDDS) algorithm is a new metaheuristic inspired by the convergence mechanism toward the center of the potential generated within a single well of a spatially co-located double-delta well setup. It mimics the wave nature of candidate positions in solution spaces and draws on quantum-mechanical interpretations, much like other quantum-inspired computational intelligence paradigms. In this work, we introduce a Chebyshev-map-driven chaotic perturbation in the optimization phase of the algorithm to diversify the weights placed on contemporary and historical socially optimal agents' solutions. We then characterize solution quality on a suite of 23 single-objective functions and carry out a comparative analysis with eight other related nature-inspired approaches. By comparing solution quality and successful runs over dynamic solution ranges, insights about the nature of convergence are obtained. A two-tailed t-test establishes the statistical significance of the solution data, while Cohen's d and Hedges' g provide measures of effect size. We trace the trajectory of the fittest pseudo-agent over all function evaluations to comment on the dynamics of the system, and we prove that the proposed algorithm is theoretically globally convergent under the assumptions adopted in proofs for other closely related random search algorithms.
    Comment: 27 pages, 4 figures, 19 tables
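    The chaotic driver here is the Chebyshev map x_{k+1} = cos(n * arccos(x_k)) on [-1, 1]. The sketch below generates such a sequence and converts it into complementary weights on the current and historical best solutions; how the weights enter the QDDS update is our reading of the abstract, and the map degree and seed are arbitrary choices.

```python
# Sketch of a Chebyshev chaotic sequence used to perturb the weights on
# current vs. historical best solutions (the weighting scheme is assumed).
import numpy as np

def chebyshev_map(x, degree=4):
    """One step of the chaotic Chebyshev map on [-1, 1]."""
    return np.cos(degree * np.arccos(x))

x = 0.31                                    # seed, must lie in (-1, 1)
for _ in range(5):
    x = chebyshev_map(x)
    w_now = 0.5 * (1 + x)                   # chaotic weight on current best
    w_hist = 1.0 - w_now                    # complementary weight on historical best
    print(f"w_now={w_now:.3f}  w_hist={w_hist:.3f}")
```

    Because the map is chaotic for degree ≥ 2, the weights sweep the interval unpredictably, which is what diversifies the pull between contemporary and historical socially optimal solutions.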