
    Living at the Edge: A Large Deviations Approach to the Outage MIMO Capacity

    Full text link
    Using a large deviations approach we calculate the probability distribution of the mutual information of MIMO channels in the limit of large antenna numbers. In contrast to previous methods that only focused on the distribution close to its mean (thus obtaining an asymptotically Gaussian distribution), we calculate the full distribution, including its tails, which strongly deviate from the Gaussian behavior near the mean. The resulting distribution interpolates seamlessly between the Gaussian approximation for rates R close to the ergodic value of the mutual information and the approach of Zheng and Tse for large signal-to-noise ratios ρ. This calculation provides us with a tool to obtain outage probabilities analytically at any point in the (R, ρ, N) parameter space, as long as the number of antennas N is not too small. In addition, this method also yields the probability distribution of eigenvalues constrained to the subspace where the mutual information per antenna is fixed to R for a given ρ. Quite remarkably, this eigenvalue density is of the form of the Marčenko-Pastur distribution with square-root singularities, and it depends on the values of R and ρ. Comment: Accepted for publication, IEEE Transactions on Information Theory (2010). Part of this work appears in the Proc. IEEE Information Theory Workshop, June 2009, Volos, Greece.
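    For a quick numerical reference point, the short Monte Carlo sketch below estimates the outage probability P[I(H) < R] of an i.i.d. Rayleigh MIMO channel by direct simulation. The antenna number, SNR, and rate grid are illustrative choices (not values from the paper), and the total rather than per-antenna mutual information is used.

```python
import numpy as np

def outage_probability_mc(N, snr, rates, trials=20000, rng=None):
    """Monte Carlo estimate of P[I(H) < R] for an N x N i.i.d. Rayleigh channel,
    where I(H) = log2 det(I + (snr/N) H H^*) is the total mutual information."""
    rng = np.random.default_rng(rng)
    mi = np.empty(trials)
    for t in range(trials):
        H = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
        eig = np.linalg.eigvalsh(H @ H.conj().T)
        mi[t] = np.sum(np.log2(1.0 + snr / N * eig))
    return np.array([(mi < R).mean() for R in rates])

if __name__ == "__main__":
    N, snr = 4, 10.0                      # illustrative antenna number and SNR
    rates = np.linspace(5.0, 15.0, 6)     # rates in bits/s/Hz
    for R, p in zip(rates, outage_probability_mc(N, snr, rates)):
        print(f"R = {R:5.2f}  ->  estimated outage {p:.4f}")
```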

    Correlated anarchy in overlapping wireless networks

    No full text
    We investigate the behavior of a large number of selfish users that are able to switch dynamically between multiple wireless access-points (possibly belonging to different standards) by introducing an iterated non-cooperative game. Users start out completely uneducated and naïve but, by using a fixed set of strategies to process a broadcasted training signal, they quickly evolve and converge to an evolutionarily stable equilibrium. Then, in order to measure efficiency in this steady state, we adapt the notion of the price of anarchy to our setting and we obtain an explicit analytic estimate for it by using methods from statistical physics (namely the theory of replicas). Surprisingly, we find that the price of anarchy does not depend on the specifics of the wireless nodes (e.g. spectral efficiency) but only on the number of strategies per user and a particular combination of the number of nodes, the number of users and the size of the training signal. Finally, we map this game to the well-studied minority game, generalizing its analysis to an arbitrary number of choices. © 2008 IEEE
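    As a rough companion to the minority-game mapping mentioned above, here is a minimal simulation of the classic two-choice minority game (not the multi-choice generalization studied in the paper); the number of agents, memory length, and strategies per agent are arbitrary illustrative values.

```python
import numpy as np

def minority_game(n_agents=101, memory=3, n_strategies=2, steps=2000, rng=None):
    """Minimal two-choice minority game: agents pick +1/-1, the minority side wins."""
    rng = np.random.default_rng(rng)
    n_hist = 2 ** memory
    # Each agent holds n_strategies random lookup tables: history index -> {-1, +1}.
    strategies = rng.choice([-1, 1], size=(n_agents, n_strategies, n_hist))
    scores = np.zeros((n_agents, n_strategies))
    history = rng.integers(n_hist)              # encoded last `memory` outcomes
    attendance = []
    for _ in range(steps):
        best = scores.argmax(axis=1)            # each agent plays its best-scoring strategy
        actions = strategies[np.arange(n_agents), best, history]
        A = actions.sum()                       # aggregate attendance
        attendance.append(A)
        winner = -np.sign(A) if A != 0 else rng.choice([-1, 1])
        # Virtual scoring: reward every strategy that would have joined the minority.
        scores += (strategies[:, :, history] == winner)
        history = ((history << 1) | (1 if winner > 0 else 0)) % n_hist
    return np.array(attendance)

if __name__ == "__main__":
    n = 101
    A = minority_game(n_agents=n)
    print("volatility sigma^2 / N =", A.var() / n)   # standard efficiency measure
```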

    The emergence of rational behavior in the presence of stochastic perturbations

    No full text
    We study repeated games where players use an exponential learning scheme in order to adapt to an ever-changing environment. If the game's payoffs are subject to random perturbations, this scheme leads to a new stochastic version of the replicator dynamics that is quite different from the "aggregate shocks" approach of evolutionary game theory. Irrespective of the perturbations' magnitude, we find that strategies which are dominated (even iteratively) eventually become extinct and that the game's strict Nash equilibria are stochastically asymptotically stable. We complement our analysis by illustrating these results in the case of congestion games. © Institute of Mathematical Statistics, 2010
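    The toy sketch below, loosely inspired by the setting above, runs a discrete exponential-learning (logit) scheme with zero-mean payoff noise in a small game containing a strictly dominated strategy; the payoff matrix, step size, and noise level are made-up illustrative values.

```python
import numpy as np

def exponential_learning(payoff, steps=5000, eta=0.1, noise=0.5, rng=None):
    """Single-population exponential learning: cumulative (noisy) payoff scores
    are mapped to a mixed strategy through a softmax / logit choice map."""
    rng = np.random.default_rng(rng)
    n = payoff.shape[0]
    scores = np.zeros(n)
    x = np.full(n, 1.0 / n)
    for _ in range(steps):
        u = payoff @ x + noise * rng.standard_normal(n)   # perturbed payoffs
        scores += eta * u
        w = np.exp(scores - scores.max())
        x = w / w.sum()
    return x

if __name__ == "__main__":
    # Symmetric 3-strategy game; the third strategy is strictly dominated by the first.
    A = np.array([[ 2.0,  0.0, 3.0],
                  [ 0.0,  2.0, 3.0],
                  [-1.0, -1.0, 0.0]])
    x = exponential_learning(A)
    # The dominated strategy goes extinct; play settles near a strict equilibrium.
    print("long-run mixed strategy:", np.round(x, 3))
```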

    Power Optimization in Random Wireless Networks

    No full text
    In this paper, we analyze the problem of power control in large, random wireless networks that are obtained by "erasing" a finite fraction of nodes from a regular d-dimensional lattice of N transmit-receive pairs. In this model, which has the important feature of a minimum distance between transmitter nodes, we find that when the network is infinite, power control is always feasible below a positive critical value of the users' signal-to-interference-plus-noise ratio (SINR) target. Drawing on tools and ideas from statistical physics, we show how this problem can be mapped to the Anderson impurity model for diffusion in random media. In this way, by employing the so-called coherent potential approximation method, we calculate the average power in the system (and its variance) for 1-D and 2-D networks. This approach is equivalent to traditional techniques from random matrix theory and is in excellent agreement with the numerical simulations; however, it fails to predict when power control becomes infeasible. In this regard, even though infinitely large systems are always unstable beyond a critical value of the users' SINR target, finite systems remain stable with high probability even beyond this critical SINR threshold. We calculate this probability by analyzing the density of low-lying eigenvalues of an associated random Schrödinger operator, and we show that the network can exceed this critical SINR threshold by at least O((log N)^(-2/d)) before undergoing a phase transition to the unstable regime. Finally, using the same techniques, we also calculate the tails of the distribution of transmit power in the system and the rate of convergence of the Foschini-Miljanic power control algorithm in the presence of random erasures. © 2016 IEEE
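    To make the power-control setup concrete, here is a small, self-contained sketch of the Foschini-Miljanic iteration on a one-dimensional lattice with randomly erased nodes. The path-loss exponent, erasure probability, noise power, and SINR target are illustrative values chosen so that the target is comfortably feasible, and the generic power-law gain model is an assumption, not the exact model used in the paper.

```python
import numpy as np

def foschini_miljanic(G, gamma, sigma2=1e-3, iters=200, p0=None):
    """Foschini-Miljanic iteration p_i <- gamma * (interference_i + noise) / G_ii,
    which converges to the minimal feasible power vector when the target is feasible."""
    n = G.shape[0]
    p = np.full(n, 1e-2) if p0 is None else p0.copy()
    for _ in range(iters):
        interference = G @ p - np.diag(G) * p
        p = gamma * (interference + sigma2) / np.diag(G)
    return p

def erased_lattice_gains(n=200, erase_prob=0.3, alpha=3.0, rng=None):
    """1-D lattice of transmit-receive pairs with a fraction of nodes erased;
    cross gains decay as distance^(-alpha), direct gains are set to one."""
    rng = np.random.default_rng(rng)
    keep = np.nonzero(rng.random(n) > erase_prob)[0]
    d = np.abs(keep[:, None] - keep[None, :]).astype(float)
    np.fill_diagonal(d, 1.0)          # placeholder to avoid 0^(-alpha)
    G = d ** (-alpha)
    np.fill_diagonal(G, 1.0)          # unit direct gains
    return G

if __name__ == "__main__":
    G = erased_lattice_gains()
    gamma = 0.3                       # illustrative, feasible SINR target
    p = foschini_miljanic(G, gamma)
    sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + 1e-3)
    print("max |SINR - target| =", np.abs(sinr - gamma).max())
```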

    Living at the edge: A large deviations approach to the outage MIMO capacity

    No full text
    A large deviations approach is introduced, which calculates the probability density and outage probability of the multiple-input multiple-output (MIMO) mutual information, and is valid for large antenna numbers N. In contrast to previous asymptotic methods that only focused on the distribution close to its most probable value, this methodology obtains the full distribution, including its non-Gaussian tails. The resulting distribution interpolates between the Gaussian approximation for rates R close to its mean and the asymptotic distribution for large signal-to-noise ratios (SNRs) ρ [1]. For large enough N, this method provides the outage probability over the whole (R, ρ) parameter space. The presented analytic results agree very well with numerical simulations over a wide range of outage probabilities, even for small N. In addition, the outage probability thus obtained is more robust over a wide range of ρ and R than either the Gaussian or the large-ρ approximations, providing an attractive alternative in calculating the probability density of the MIMO mutual information. Interestingly, this method also yields the eigenvalue density constrained to the subset where the mutual information is fixed to R for given ρ. Quite remarkably, this eigenvalue density has the form of the Marčenko-Pastur distribution with square-root singularities. © 2011 IEEE
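    Since the constrained eigenvalue density discussed above generalizes the Marčenko-Pastur law, the sketch below simply compares the empirical spectrum of an unconstrained Wishart matrix H H*/N with the aspect-ratio-one Marčenko-Pastur density; the matrix size and histogram binning are arbitrary illustrative choices.

```python
import numpy as np

def empirical_spectrum(N=400, rng=None):
    """Eigenvalues of W = H H^* / N for an N x N i.i.d. complex Gaussian H."""
    rng = np.random.default_rng(rng)
    H = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    return np.linalg.eigvalsh(H @ H.conj().T / N)

def marchenko_pastur_pdf(x):
    """Marcenko-Pastur density for aspect ratio 1, supported on [0, 4]."""
    x = np.asarray(x, dtype=float)
    inside = (x > 0) & (x < 4)
    pdf = np.zeros_like(x)
    pdf[inside] = np.sqrt(x[inside] * (4.0 - x[inside])) / (2.0 * np.pi * x[inside])
    return pdf

if __name__ == "__main__":
    eig = empirical_spectrum()
    hist, edges = np.histogram(eig, bins=30, range=(0.0, 4.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    for c, h, p in zip(centers, hist, marchenko_pastur_pdf(centers)):
        print(f"x = {c:4.2f}  empirical {h:.3f}  MP {p:.3f}")
```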

    Robust power management via learning and game design

    No full text
    We consider the target-rate power management problem for wireless networks, and we propose two simple, distributed power management schemes that regulate power in a provably robust manner by efficiently leveraging past information. Both schemes are obtained via a combined approach of learning and “game design” where we (1) design a game with suitable payoff functions such that the optimal joint power profile in the original power management problem is the unique Nash equilibrium of the designed game; (2) derive distributed power management algorithms by directing the networks’ users to employ a no-regret learning algorithm to maximize their individual utility over time. To establish convergence, we focus on the well-known online eager gradient descent learning algorithm in the class of weighted strongly monotone games. In this class of games, we show that when players only have access to imperfect stochastic feedback, multiagent online eager gradient descent converges to the unique Nash equilibrium in mean square at an O(1/T) rate. In the context of power management in static networks, we show that the designed games are weighted strongly monotone if the network is feasible (i.e., when all users can concurrently attain their target rates). This allows us to derive a geometric convergence rate to the joint optimal transmission power. More importantly, in stochastic networks where channel quality fluctuates over time, the designed games are also weighted strongly monotone and the proposed algorithms converge in mean square to the joint optimal transmission power at an O(1/T) rate, even when the network is only feasible on average (i.e., users may be unable to meet their requirements with positive probability). This comes in stark contrast to existing algorithms (like the seminal Foschini–Miljanic algorithm and its variants) that may fail to converge altogether. Copyright: © 2020 INFORMS
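    The convergence claim can be illustrated on a toy strongly monotone game rather than the designed power-management game itself: the sketch below runs multiagent online projected gradient ascent with noisy gradient feedback and a vanishing 1/t step size in a simple quadratic game. The payoff form, targets a_i, coupling b, action cap, and noise level are all hypothetical choices.

```python
import numpy as np

def noisy_gradient_play(a, b=0.5, cap=10.0, steps=5000, noise=0.5, rng=None):
    """Multiagent online projected gradient ascent with noisy payoff gradients in the
    quadratic game u_i(x) = -(x_i - a_i)^2 - b * x_i * sum_{j != i} x_j,
    which is strongly monotone whenever b < 2 / (n - 1)."""
    rng = np.random.default_rng(rng)
    n = len(a)
    x = np.zeros(n)
    for t in range(1, steps + 1):
        grad = -2.0 * (x - a) - b * (x.sum() - x)        # individual payoff gradients
        grad += noise * rng.standard_normal(n)           # imperfect stochastic feedback
        x = np.clip(x + grad / t, 0.0, cap)              # vanishing step size ~ 1/t
    return x

if __name__ == "__main__":
    a = np.array([4.0, 3.0, 5.0])                        # illustrative targets
    x = noisy_gradient_play(a)
    # The interior Nash equilibrium solves (2I + b(J - I)) x* = 2a, J the all-ones matrix.
    n, b = len(a), 0.5
    x_star = np.linalg.solve(2 * np.eye(n) + b * (np.ones((n, n)) - np.eye(n)), 2 * a)
    print("learned:", np.round(x, 3), " Nash:", np.round(x_star, 3))
```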

    Distributed learning policies for power allocation in multiple access channels

    No full text
    We analyze the power allocation problem for orthogonal multiple access channels by means of a non-cooperative potential game in which each user distributes his power over the channels available to him. When the channels are static, we show that this game possesses a unique equilibrium; moreover, if the network’s users follow a distributed learning scheme based on the replicator dynamics of evolutionary game theory, then they converge to equilibrium exponentially fast. On the other hand, if the channels fluctuate stochastically over time, the associated game still admits a unique equilibrium, but the learning process is not deterministic; just the same, by employing the theory of stochastic approximation, we find that users still converge to equilibrium. Our theoretical analysis hinges on a novel result which is of independent interest: in finite-player games which admit a (possibly nonlinear) convex potential, the replicator dynamics converge to an ε-neighborhood of an equilibrium in time O(log(1/ε)). © 2012 IEEE
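    As a rough illustration of this learning scheme (though not of the exact game analyzed in the paper), the sketch below runs an exponential-weights discretization of the replicator dynamics for users spreading a power budget over orthogonal channels, using the marginals of an aggregate sum-rate potential as payoffs; the gains, budgets, step size, and noise floor are illustrative.

```python
import numpy as np

def replicator_power_allocation(gains, budgets, sigma2=1.0, eta=0.05, steps=5000):
    """Exponential-weights (replicator-type) update for users spreading their power
    budget over orthogonal channels; each user's per-channel payoff is the marginal
    of the potential sum_i log(sigma2 + sum_k gains[k, i] * p[k, i])."""
    n_users, n_channels = gains.shape
    shares = np.full((n_users, n_channels), 1.0 / n_channels)   # points on the simplex
    for _ in range(steps):
        power = shares * budgets[:, None]
        load = sigma2 + (gains * power).sum(axis=0)             # aggregate load per channel
        marginal = gains / load                                 # d(potential) / d p[k, i]
        shares *= np.exp(eta * marginal)
        shares /= shares.sum(axis=1, keepdims=True)             # stay on the simplex
    return shares * budgets[:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gains = rng.uniform(0.2, 2.0, size=(3, 4))    # 3 users, 4 channels (illustrative)
    budgets = np.array([1.0, 2.0, 1.5])
    p = replicator_power_allocation(gains, budgets)
    print(np.round(p, 3))   # at a fixed point, marginal rates equalize on each user's support
```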

    Multiagent Online Learning in Time-Varying Games

    No full text
    We examine the long-run behavior of multiagent online learning in games that evolve over time. Specifically, we focus on a wide class of policies based on mirror descent, and we show that the induced sequence of play (a) converges to a Nash equilibrium in time-varying games that stabilize in the long run to a strictly monotone limit, and (b) stays asymptotically close to the evolving equilibrium of the sequence of stage games (assuming they are strongly monotone). Our results apply to both gradient- and payoff-based feedback, that is, when players only get to observe the payoffs of their chosen actions.
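    To give a concrete, highly simplified picture of equilibrium tracking, the sketch below runs entropic mirror descent (exponential weights on exact payoff gradients) in a time-varying game whose stage games are strongly monotone by construction; the payoff form, drift schedule, and step size are all hypothetical, and gradient rather than payoff-based feedback is assumed.

```python
import numpy as np

def softmax(y):
    z = np.exp(y - y.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def mirror_descent_tracking(steps=3000, eta=0.05, n_players=2):
    """Entropic mirror descent in a time-varying game: player i's stage payoff is
    -||x_i - q_i(t)||^2 / 2, so each stage game is strongly monotone and its
    equilibrium is x_i*(t) = q_i(t), which drifts slowly over time."""
    n_actions = 3
    scores = np.zeros((n_players, n_actions))
    x = softmax(scores)
    gaps = []
    for t in range(steps):
        phase = 2.0 * np.pi * t / steps            # slowly drifting stage games
        q = softmax(np.stack([np.array([np.sin(phase + k), np.cos(phase + k), 0.0])
                              for k in range(n_players)]))
        grad = q - x                               # gradient of -||x_i - q_i||^2 / 2
        scores += eta * grad
        x = softmax(scores)
        gaps.append(np.linalg.norm(x - q))
    return np.array(gaps)

if __name__ == "__main__":
    gaps = mirror_descent_tracking()
    print("distance to the evolving equilibrium, early vs late:",
          round(float(gaps[:300].mean()), 3), round(float(gaps[-300:].mean()), 3))
```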