
    Towards Fast-Convergence, Low-Delay and Low-Complexity Network Optimization

    Distributed network optimization has been studied for well over a decade. However, we still do not have a good understanding of how to design schemes that simultaneously perform well across the dimensions of utility optimality, convergence speed, and delay. To address these challenges, in this paper we propose a new algorithmic framework in which all of these metrics approach optimality. The salient features of our new algorithm are three-fold: (i) fast convergence: it converges in only O(log(1/ε)) iterations, the fastest rate among all existing algorithms; (ii) low delay: it guarantees optimal utility with finite queue length; (iii) simple implementation: the control variables of the algorithm are based on virtual queues that do not require maintaining per-flow information. The new technique builds on a kind of inexact Uzawa method in the Alternating Direction Method of Multipliers (ADMM), and provides a new theoretical path to proving a global and linear convergence rate for such a method without requiring the constraint matrix to have full rank.
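
    As a point of reference for the "virtual queue" idea mentioned above, the following is a minimal sketch (not the paper's inexact-Uzawa/ADMM algorithm) of the classic virtual-queue update for single-link network utility maximization; the parameters capacity, V, and x_max are illustrative.

    # Toy virtual-queue update for single-link network utility maximization.
    # Illustrative only: the paper's algorithm is an inexact-Uzawa/ADMM scheme with
    # linear convergence; this is the classic first-order virtual-queue method,
    # shown to make the "no per-flow state" control structure concrete.
    import numpy as np

    def num_virtual_queue(capacity=10.0, n_flows=5, x_max=10.0, V=50.0, iters=2000):
        """Maximize sum_i V*log(x_i) s.t. sum_i x_i <= capacity, using one virtual queue Q."""
        Q = 0.0                                   # virtual queue: a scaled dual variable
        x = np.zeros(n_flows)
        for _ in range(iters):
            # Rate update: each flow solves max_x V*log(x) - Q*x, i.e. x = V/Q (capped).
            x = np.full(n_flows, x_max if Q == 0.0 else min(V / Q, x_max))
            # Queue update: "arrivals" are the chosen rates, "service" is the capacity.
            Q = max(Q + x.sum() - capacity, 0.0)
        return x, Q

    rates, queue = num_virtual_queue()
    print("per-flow rate ≈", rates[0], "total ≈", rates.sum(), "queue ≈", queue)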

    Emergence and Growth of Complex Networks in Adaptive Systems

    We consider the population dynamics of a set of species whose network of catalytic interactions is described by a directed graph. The relationship between the attractors of this dynamics and the underlying graph-theoretic structures, such as cycles and autocatalytic sets, is discussed. It is shown that when the population dynamics is suitably coupled to a slow dynamics of the graph itself, the network evolves towards increasing complexity driven by autocatalytic sets. Some quantitative measures of network complexity are described. Comment: 10 pages (including figures), 3 PostScript figures.
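
    To make the fast (population) part of the dynamics concrete, here is a small sketch of catalytic population dynamics on a fixed directed graph; it does not include the slow rewiring of the graph described above, and the example graph, time step, and iteration count are illustrative.

    # Toy population dynamics on a fixed catalytic graph C (C[i][j] = 1 means
    # species j catalyzes species i).  Relative populations grow with their
    # catalytic input; the phi term keeps the total population normalized.
    import numpy as np

    def simulate(C, x0, dt=0.01, steps=20000):
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            growth = C @ x                    # catalytic production of each species
            phi = growth.sum()                # total production (normalization flux)
            x += dt * (growth - phi * x)
            x = np.clip(x, 0.0, None)
            x /= x.sum()
        return x

    # Species 0 and 1 catalyze each other (a 2-cycle, the smallest autocatalytic set);
    # species 2 and 3 receive no catalysis.
    C = np.array([[0, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]], dtype=float)

    print(simulate(C, [0.25, 0.25, 0.25, 0.25]))  # population concentrates on the cycle {0, 1}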

    Insights into the relation between noise and biological complexity

    Understanding under which conditions an increase in system complexity is evolutionarily advantageous, and how this trend is related to the modulation of intrinsic noise, are fascinating issues of utmost importance for synthetic and systems biology. To gain insight into these matters, we analyzed chemical reaction networks with different topologies and degrees of complexity, interacting or not with the environment. We showed that the global level of fluctuations at the steady state, as measured by the sum of the Fano factors of the numbers of molecules of all species, is directly related to the topology of the network. For systems with zero deficiency, this sum is constant and equal to the rank of the network. For higher deficiencies, we observed an increase or decrease of the fluctuation levels according to the values of the reaction fluxes that link internal species, multiplied by the associated stoichiometry. We showed that the noise is reduced when the fluxes all flow towards the species of higher complexity, whereas it is amplified when the fluxes are directed towards lower-complexity species. Comment: 5 pages, 3 figures.
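
    As a sanity check on the zero-deficiency case described above, the following stochastic-simulation sketch estimates the Fano factor of a single-species birth-death network (0 -> A -> 0, deficiency zero, rank 1), for which the sum of Fano factors should be approximately 1; the rates and run lengths are illustrative.

    # Gillespie-style estimate of the Fano factor (variance/mean) of species A at
    # steady state for the birth-death network 0 -> A -> 0 (deficiency zero, rank 1).
    import random

    def gillespie_birth_death(k=20.0, gamma=1.0, t_end=5000.0, burn_in=500.0):
        t, a = 0.0, 0
        w_sum = m1 = m2 = 0.0                     # time-weighted moments of a
        while t < t_end:
            birth, death = k, gamma * a
            total = birth + death
            dt = random.expovariate(total)        # time to the next reaction
            if t > burn_in:
                w_sum += dt
                m1 += a * dt
                m2 += a * a * dt
            t += dt
            if random.random() < birth / total:   # pick which reaction fires
                a += 1
            else:
                a -= 1
        mean = m1 / w_sum
        var = m2 / w_sum - mean ** 2
        return mean, var / mean

    mean_a, fano = gillespie_birth_death()
    print(f"mean ≈ {mean_a:.1f}, Fano factor ≈ {fano:.2f}")   # expect ≈ 20 and ≈ 1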

    A Faster Counting Protocol for Anonymous Dynamic Networks

    We study the problem of counting the number of nodes in a slotted-time communication network, under the challenging assumption that nodes do not have identifiers and the network topology changes frequently. That is, the links among nodes can change arbitrarily from one time slot to the next, provided that the network always remains connected. Tolerating dynamic topologies is crucial in the face of mobility and unreliable communication, and even if identifiers are available, it might be convenient to ignore them in massive networks with changing topology. Counting is a fundamental task in distributed computing, since knowing the size of the system often facilitates the design of solutions for more complex problems. Currently, the best proven upper bound on the running time to compute the exact network size is double-exponential. However, only linear lower bounds are known, leaving open the question of whether efficient Counting protocols for Anonymous Dynamic Networks exist. In this paper we make a significant step towards answering this question by presenting a distributed Counting protocol for Anonymous Dynamic Networks with exponential time complexity. Our algorithm ensures that eventually every node knows the exact size of the system and stops executing the algorithm. Previous Counting protocols either have double-exponential time complexity, or are exponential but do not terminate, or terminate but do not provide running-time guarantees, or guarantee only an exponential upper bound on the network size. Other protocols are heuristic and do not guarantee the correct count.
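
    For readers unfamiliar with the model, the toy simulation below illustrates only the setting (anonymous nodes on a connected graph that is redrawn every slot), not the protocol from the paper: it assumes a single distinguished node and uses plain average consensus with Metropolis weights so that every node's value converges to 1/n, from which the size can be read off.

    # Toy illustration of counting over an anonymous dynamic network (NOT the paper's
    # protocol).  One distinguished node starts with value 1, all others with 0; the
    # graph is redrawn every slot but kept connected; Metropolis-weighted averaging
    # preserves the sum, so every value tends to 1/n and 1/value estimates n.
    import random

    def random_connected_graph(n):
        nodes = list(range(n))
        random.shuffle(nodes)
        edges = {(min(a, b), max(a, b)) for a, b in zip(nodes, nodes[1:])}  # random path
        for _ in range(n):                                                  # a few extra links
            a, b = random.sample(range(n), 2)
            edges.add((min(a, b), max(a, b)))
        return edges

    def estimate_size(n=20, rounds=400):
        x = [0.0] * n
        x[0] = 1.0                                # the single distinguished node
        for _ in range(rounds):
            edges = random_connected_graph(n)
            deg = [0] * n
            for a, b in edges:
                deg[a] += 1
                deg[b] += 1
            new_x = x[:]
            for a, b in edges:                    # Metropolis weights keep the sum constant
                w = 1.0 / (1 + max(deg[a], deg[b]))
                new_x[a] += w * (x[b] - x[a])
                new_x[b] += w * (x[a] - x[b])
            x = new_x
        return [round(1.0 / v) for v in x]        # each node's estimate of n

    print(estimate_size())                        # expect a list of 20s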

    Emergence of Invariance and Disentanglement in Deep Representations

    Using established principles from Statistics and Information Theory, we show that invariance to nuisance factors in a deep neural network is equivalent to information minimality of the learned representation, and that stacking layers and injecting noise during training naturally bias the network towards learning invariant representations. We then decompose the cross-entropy loss used during training and highlight the presence of an inherent overfitting term. We propose regularizing the loss by bounding such a term in two equivalent ways: one with a Kullback-Leibler term, which relates to a PAC-Bayes perspective; the other using the information in the weights as a measure of complexity of the learned model, yielding a novel Information Bottleneck for the weights. Finally, we show that invariance and independence of the components of the representation learned by the network are bounded above and below by the information in the weights, and therefore are implicitly optimized during training. The theory enables us to quantify and predict sharp phase transitions between underfitting and overfitting of random labels when using our regularized loss, which we verify in experiments, and sheds light on the relation between the geometry of the loss function, invariance properties of the learned representation, and generalization error. Comment: Keywords: deep learning, neural network, representation, flat minima, information bottleneck, overfitting, generalization, sufficiency, minimality, sensitivity, information complexity, stochastic gradient descent, regularization, total correlation, PAC-Bayes.
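
    The sketch below shows one common reading of the KL-regularized loss mentioned above: cross-entropy plus beta times the KL divergence between a diagonal-Gaussian posterior over the weights and a standard Gaussian prior. The closed-form KL is standard, but the choice of beta, the prior, and the toy inputs are illustrative, not the authors' implementation.

    # Cross-entropy plus beta * KL(q(w) || p(w)) with q a diagonal Gaussian over the
    # weights and p a standard Gaussian prior, in the spirit of the abstract above.
    import numpy as np

    def kl_diag_gaussians(mu_q, logvar_q, mu_p=0.0, var_p=1.0):
        """KL(N(mu_q, exp(logvar_q)) || N(mu_p, var_p)), summed over weight dimensions."""
        var_q = np.exp(logvar_q)
        return 0.5 * np.sum(var_q / var_p + (mu_q - mu_p) ** 2 / var_p - 1.0
                            + np.log(var_p) - logvar_q)

    def cross_entropy(logits, labels):
        logits = logits - logits.max(axis=1, keepdims=True)            # stable softmax
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    def regularized_loss(logits, labels, mu_q, logvar_q, beta=1e-3):
        return cross_entropy(logits, labels) + beta * kl_diag_gaussians(mu_q, logvar_q)

    # Tiny usage example with random numbers standing in for a network's outputs/weights.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(8, 10))
    labels = rng.integers(0, 10, size=8)
    mu, logvar = rng.normal(scale=0.1, size=100), np.full(100, -4.0)
    print(regularized_loss(logits, labels, mu, logvar))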