
    Price and Capacity Competition

    We study the efficiency of oligopoly equilibria in a model where firms compete over capacities and prices. The motivating example is a communication network where service providers invest in capacities and then compete in prices. Our model economy corresponds to a two-stage game. First, firms (service providers) independently choose their capacity levels. Second, after the capacity levels are observed, they set prices. Given the capacities and prices, users (consumers) allocate their demands across the firms. We first establish the existence of pure strategy subgame perfect equilibria (oligopoly equilibria) and characterize the set of equilibria. These equilibria feature pure strategies along the equilibrium path, but off the equilibrium path they are supported by mixed strategies. We then investigate the efficiency properties of these equilibria, where "efficiency" is defined as the ratio of surplus in equilibrium relative to the first best. We show that efficiency in the worst oligopoly equilibria of this game can be arbitrarily low. However, if the best oligopoly equilibrium is selected (among multiple equilibria), the worst-case efficiency loss has a tight bound, approximately equal to 5/6 with 2 firms. This bound monotonically decreases towards zero as the number of firms increases. We also suggest a simple way of implementing the best oligopoly equilibrium. With two firms, this involves the lower-cost firm acting as a Stackelberg leader and choosing its capacity first. We show that in this Stackelberg game form, there exists a unique equilibrium corresponding to the best oligopoly equilibrium. We also show that an alternative game form where capacities and prices are chosen simultaneously always fails to have a pure strategy equilibrium. These results suggest that the timing of capacity and price choices in oligopolistic environments is important both for the existence of equilibrium and for the extent of efficiency losses in equilibrium.
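    To fix ideas about the efficiency metric described above, a common way of writing it down is sketched below. The notation (S, k, p, r_N) is ours for illustration and is not taken from the paper.

```latex
% Illustrative notation (not the paper's): S(k, p) is aggregate surplus when firms
% choose the capacity vector k and price vector p, and (k^{OE}, p^{OE}) denotes an
% oligopoly equilibrium of the two-stage game.
\[
  \mathrm{Eff}\bigl(k^{OE}, p^{OE}\bigr)
  \;=\;
  \frac{S\bigl(k^{OE}, p^{OE}\bigr)}{\max_{k,\,p} S(k, p)} \;\in\; [0, 1].
\]
% "Arbitrarily low efficiency in the worst equilibria" refers to the infimum of Eff over
% equilibria and problem instances; the bound under best-equilibrium selection concerns
% the worst case over instances when the best equilibrium is picked:
\[
  r_N \;=\; \inf_{\text{instances with } N \text{ firms}}
            \;\max_{(k^{OE},\, p^{OE})} \; \mathrm{Eff}\bigl(k^{OE}, p^{OE}\bigr).
\]
```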

    Distributed Multi-Agent Optimization with State-Dependent Communication

    We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents, and the probability with which the links are available depends on the states of the agents. In this paper, we study a projected multi-agent subgradient algorithm under state-dependent communication. The algorithm involves each agent performing a local averaging to combine his estimate with the other agents' estimates, taking a subgradient step along his local objective function, and projecting the resulting estimate onto his local constraint set. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm, when used with a constant stepsize, may cause the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a "disagreement metric" between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that the agent estimates reach consensus and converge to the same optimal solution of the global optimization problem with probability one, under different assumptions on the local constraint sets and the stepsize sequence.
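    A minimal numerical sketch of the projected multi-agent subgradient iteration with state-dependent link availability is given below. All concrete choices (scalar estimates, quadratic local objectives, a common interval constraint, Metropolis averaging weights, and links that appear with probability exp(-|x_i - x_j|)) are illustrative assumptions, not the setting analyzed in the paper.

```python
# Minimal sketch (not the paper's exact model) of a projected multi-agent subgradient
# method with state-dependent communication. Illustrative assumptions: scalar estimates,
# quadratic local objectives f_i(x) = (x - b_i)^2, a common interval constraint [lo, hi],
# Metropolis averaging weights, and a link (i, j) that is available at each step with
# probability exp(-|x_i - x_j|), so the communication graph depends on the agents' states.
import numpy as np

rng = np.random.default_rng(0)
n = 5                                    # number of agents
b = rng.uniform(-2.0, 2.0, size=n)       # targets of the local objectives
lo, hi = -1.0, 1.0                       # common constraint interval
x = rng.uniform(lo, hi, size=n)          # initial estimates

def subgrad(i, v):
    # f_i(v) = (v - b_i)^2 is smooth, so the subgradient is just the gradient.
    return 2.0 * (v - b[i])

for t in range(1, 20001):
    alpha = 1.0 / t                      # diminishing stepsize

    # State-dependent communication: a link is more likely when estimates are close.
    adj = np.eye(n, dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < np.exp(-abs(x[i] - x[j])):
                adj[i, j] = adj[j, i] = True

    # Metropolis weights over the realized graph (symmetric and doubly stochastic).
    deg = adj.sum(axis=1) - 1
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()

    v = W @ x                            # local averaging of neighbors' estimates
    g = np.array([subgrad(i, v[i]) for i in range(n)])
    x = np.clip(v - alpha * g, lo, hi)   # subgradient step, then projection onto [lo, hi]

# The estimates should roughly agree and lie near the constrained minimizer of
# sum_i (x - b_i)^2 over [lo, hi], i.e. the projection of mean(b) onto the interval.
print("estimates:", np.round(x, 3), " target:", float(np.clip(b.mean(), lo, hi)))
```

    The diminishing stepsize is what allows the averaging, subgradient, and projection errors to be controlled jointly; as the abstract notes, with a constant stepsize the estimates can instead diverge.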

    A Distributed Newton Method for Network Utility Maximization

    Most existing work solves Network Utility Maximization (NUM) problems in a distributed manner using dual decomposition and subgradient methods, which suffer from slow rates of convergence. This work develops an alternative, fast-converging distributed Newton-type algorithm for solving NUM problems with self-concordant utility functions. By using novel matrix splitting techniques, both primal and dual updates for the Newton step can be computed using iterative schemes in a decentralized manner with limited information exchange. Similarly, the stepsize can be obtained via an iterative consensus-based averaging scheme. We show that even when the Newton direction and the stepsize in our method are computed within some error (due to finite truncation of the iterative schemes), the resulting objective function value still converges superlinearly to an explicitly characterized error neighborhood. Simulation results demonstrate a significant improvement in the convergence rate of our algorithm relative to existing subgradient methods based on dual decomposition.
    Comment: 27 pages, 4 figures, LIDS report, submitted to CDC 201
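    The computational idea highlighted in the abstract, namely obtaining the Newton direction from an iterative scheme built on a matrix splitting rather than by inverting a Hessian outright, can be illustrated on a generic linear system. The sketch below is not the paper's splitting or its decentralized message-passing scheme; it only shows a Jacobi-type splitting H = D - (D - H), with D = diag(H), applied to a toy diagonally dominant system so the fixed-point iteration is guaranteed to converge.

```python
# Schematic of computing a Newton direction d (solving H d = -g) with a Jacobi-type
# matrix splitting H = D - (D - H), D = diag(H), instead of inverting H directly.
# This only illustrates the "iterative scheme" idea; the paper's actual splitting and
# its decentralized implementation over the network are more involved.
import numpy as np

rng = np.random.default_rng(1)
n = 6

# A toy symmetric, strictly diagonally dominant "Hessian", for which Jacobi converges.
B = rng.uniform(0.0, 1.0, size=(n, n))
H = (B + B.T) / 2.0
np.fill_diagonal(H, H.sum(axis=1) + 1.0)   # enforce strict diagonal dominance
g = rng.normal(size=n)                     # gradient at the current iterate

D = np.diag(np.diag(H))
D_inv = np.diag(1.0 / np.diag(H))

d = np.zeros(n)                            # initial guess for the Newton direction
for k in range(200):
    # Each coordinate update touches only the rows of H involving that coordinate,
    # which is what makes an iterative, information-local computation possible.
    d = D_inv @ ((D - H) @ d - g)

print("splitting iterate :", np.round(d, 6))
print("exact direction   :", np.round(np.linalg.solve(H, -g), 6))
```

    Truncating such an iteration after finitely many passes yields an inexact Newton direction; the abstract's superlinear-convergence-to-a-neighborhood result characterizes the effect of exactly this kind of truncation error.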