23 research outputs found

    How Fast Do Equilibrium Payoff Sets Converge in Repeated Games?

    We provide tight bounds on the rate of convergence of the equilibrium payoff sets for repeated games under both perfect and imperfect public monitoring. The distance between the equilibrium payoff set and its limit vanishes at rate (1 − δ)^{1/2} under perfect monitoring, and at rate (1 − δ)^{1/4} under imperfect monitoring. For strictly individually rational payoff vectors, these rates improve to 0 (i.e., all strictly individually rational payoff vectors are exactly achieved as equilibrium payoffs for δ high enough) and (1 − δ)^{1/2}, respectively.
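
    A minimal Python sketch, not taken from the paper: it simply evaluates the two stated rates to show how the gap between the equilibrium payoff set and its limit shrinks as δ approaches 1. The constant C and the grid of discount factors are arbitrary illustrative choices.

```python
# Illustrative only: evaluate the stated convergence rates as delta -> 1.
# The constant C and the delta grid are assumptions for illustration.

def payoff_set_gap(delta: float, exponent: float, C: float = 1.0) -> float:
    """Bound of the form C * (1 - delta)**exponent on the distance between
    the equilibrium payoff set at discount factor delta and its limit."""
    return C * (1.0 - delta) ** exponent

for delta in (0.9, 0.99, 0.999):
    perfect = payoff_set_gap(delta, 0.5)     # perfect monitoring: (1 - delta)^(1/2)
    imperfect = payoff_set_gap(delta, 0.25)  # imperfect monitoring: (1 - delta)^(1/4)
    print(f"delta={delta}: perfect ~ {perfect:.4f}, imperfect ~ {imperfect:.4f}")
```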

    A stochastic game framework for analyzing computational investment strategies in distributed computing

    We study a stochastic game framework with a dynamic set of players for modeling and analyzing their computational investment strategies in distributed computing. Players obtain a certain reward for solving the problem or for providing their computational resources, while incurring a certain cost based on the invested time and computational power. We first study a scenario where the reward is offered for solving the problem, such as in blockchain mining. We show that, in Markov perfect equilibrium, players with cost parameters exceeding a certain threshold do not invest, while those with cost parameters below this threshold invest maximal power. Here, players need not know the system state. We then consider a scenario where the reward is offered for contributing to the computational power of a common central entity, such as in volunteer computing. Here, in Markov perfect equilibrium, only players with cost parameters in a relatively low range in a given state invest. For the case where players are homogeneous, they invest in proportion to the 'reward to cost' ratio. For both scenarios, we study the effects of players' arrival and departure rates on their utilities using simulations and provide additional insights.
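
    A small sketch, under assumptions rather than the paper's exact model, of the threshold structure described in this abstract: players whose cost parameter exceeds a threshold invest nothing, the rest invest maximal power, and in the homogeneous case investment scales with the reward-to-cost ratio. The threshold, the maximal power level, and the proportionality constant are hypothetical inputs.

```python
# Sketch of the threshold-type equilibrium strategy described in the abstract.
# All numeric parameters below are illustrative assumptions.

def equilibrium_investment(cost: float, threshold: float, max_power: float) -> float:
    """Threshold strategy: invest max_power iff the cost parameter is below
    the threshold, otherwise do not invest."""
    return max_power if cost < threshold else 0.0

def homogeneous_investment(reward: float, cost: float, k: float = 1.0) -> float:
    """Homogeneous-player case: investment proportional to the
    'reward to cost' ratio, with an assumed proportionality constant k."""
    return k * reward / cost

print(equilibrium_investment(cost=0.3, threshold=0.5, max_power=10.0))  # invests 10.0
print(equilibrium_investment(cost=0.7, threshold=0.5, max_power=10.0))  # invests 0.0
print(homogeneous_investment(reward=5.0, cost=2.0))                     # 2.5
```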

    Essays on strategic queueing

    This thesis includes three essays exploring some economic implications of queueing. A preliminary chapter introducing useful results from the literature, which help contextualize the original research in the thesis, is presented first. This introductory chapter starts by surveying queueing results from probability theory and operations research. It then covers a few seminal papers on strategic queueing, mostly but not exclusively from the economics literature. These cover issues of individual and social welfare in the context of First Come First Served (FCFS) and Equitable Processor Sharing (EPS) queues, with one or multiple servers, as well as a discussion of strategic interactions surrounding queue cutting. An overview of some important papers on the impact of queueing on competitive behaviour, mostly by Industrial Organization economists, is then presented.

    The first original chapter presents a model for the endogenous determination of the number of queues in an M/M/2 system. Customers arriving at a system where two customers are being served play a game, choosing between two parallel queues or one single queue. Subgame perfect equilibria are obtained, varying with customer characteristics and game specifications. With risk neutrality and when jockeying is not permitted, a single queue is an equilibrium, as are two queues. With risk neutrality and jockeying allowed, there is a unique two-queue equilibrium. With risk aversion and no jockeying, there is a unique single-queue equilibrium, and with risk aversion and jockeying, the equilibrium depends on the magnitude of risk aversion.

    The second chapter analyses the individual decisions taken by consumers when deciding whether to join an M/M/1 queue where a subset of customers who interact repeatedly can both cut the queue and be overtaken once they join, bypassing occasional users. This is shown to be an equilibrium in repeated games for sufficiently patient customers. The expected sojourn time for customers under this discipline is described as the solution of a system of difference equations, which is then used to obtain a threshold joining strategy for arrivals; the threshold is independent of the number of regular customers in the queue, as regulars form a sub-queue under the LCFS discipline. Numerical methods are then employed to contrast sojourn times and thresholds with the equilibrium for a strict First Come First Served queueing discipline, and with the socially optimal joining rule.

    Finally, the third chapter describes a duopoly market for healthcare where one of the two providers is publicly owned and charges a price of zero, while the other sets a price so as to maximize its profit. Both providers are subject to congestion in the form of an M/M/1 queue, and they serve patient-customers with randomly distributed unit costs of time. Consumer demand (as market share) for both providers is obtained and described with its full complement of comparative statics. The private provider's pricing decision is explored, and equilibrium existence is proven. Social welfare functions are described and the welfare-maximizing condition obtained. Numerical simulations with uniform and Kumaraswamy distributions are performed for several parameter values, showcasing the pricing provider's decision and its relationship with social welfare.
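
    To make the idea of a "threshold joining strategy" concrete, here is a textbook Naor-style rule for an observable M/M/1 queue. This is an illustrative stand-in only: the reward R, waiting cost c, and service rate mu are hypothetical parameters, and the thesis's repeated-game/LCFS setting is richer than this sketch.

```python
# Illustrative Naor-style threshold joining rule for an observable M/M/1 queue.
# Parameters R (service reward), c (waiting cost per unit time), and mu
# (service rate) are assumptions chosen for the example.

def joins(n_in_system: int, R: float, c: float, mu: float) -> bool:
    """An arrival seeing n_in_system customers joins iff the reward R covers
    the expected sojourn cost c * (n_in_system + 1) / mu."""
    return R >= c * (n_in_system + 1) / mu

def joining_threshold(R: float, c: float, mu: float) -> int:
    """Largest number in system at which joining is still worthwhile."""
    return int(R * mu / c) - 1

R, c, mu = 10.0, 2.0, 1.0
print(joining_threshold(R, c, mu))             # 4
print(joins(3, R, c, mu), joins(5, R, c, mu))  # True False
```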

    A game-theoretic approach to fairness for a distributed reservation-based medium access

    [no abstract available]

    Generalized asset integrity games

    Generalized assets represent a class of multi-scale adaptive state-transition systems with domain-oblivious performance criteria. The governance of such assets must proceed without exact specifications, objectives, or constraints. Decision making must rapidly scale in the presence of uncertainty, complexity, and intelligent adversaries. This thesis formulates an architecture for generalized asset planning. Assets are modelled as dynamical graph structures which admit topological performance indicators, such as dependability, resilience, and efficiency. These metrics are used to construct robust model configurations. A normalized compression distance (NCD) is computed between a given active/live asset model and a reference configuration to produce an integrity score. The utility derived from the asset is monotonically proportional to this integrity score, which represents the proximity to ideal conditions. The present work considers the interaction between an asset manager and an intelligent adversary, who act within a stochastic environment to control the integrity state of the asset. A generalized asset integrity game engine (GAIGE) is developed, which implements anytime algorithms to solve a stochastically perturbed two-player zero-sum game. The resulting planning strategies seek to stabilize deviations from minimax trajectories of the integrity score. Results demonstrate the performance and scalability of the GAIGE. This approach represents a first step towards domain-oblivious architectures for complex asset governance and anytime planning.
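
    A minimal sketch, under assumptions, of the normalized compression distance used above as an integrity score. Here zlib stands in for the compressor and byte strings stand in for serialized asset and reference graph states; the serialization format and the mapping from NCD to an integrity score are hypothetical illustrations, not the GAIGE implementation.

```python
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
# where C(.) is the compressed length. zlib is used as an example compressor.
import zlib

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def integrity_score(live_state: bytes, reference_state: bytes) -> float:
    """Closer to 1 means the live asset is nearer the reference configuration
    (an assumed, illustrative mapping from NCD to a score)."""
    return 1.0 - ncd(live_state, reference_state)

reference = b"node:A->B;node:B->C;node:C->A;" * 10   # hypothetical serialized graph
degraded = reference.replace(b"B->C;", b"B->X;")     # perturbed live state
print(round(integrity_score(reference, reference), 3))
print(round(integrity_score(degraded, reference), 3))
```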

    The Role of Consumer Behaviour in Service Operations Management

    In this thesis, I study the impact of consumer behaviour on service providers' operations. In the first study, I consider service systems where customers do not know the distribution of uncertain service quality and cannot estimate it fully rationally. Instead, they form their beliefs by taking the average of several anecdotes, the number of which measures their level of bounded rationality. I characterise the customers' joining behaviour and the service provider's pricing, quality control, and information disclosure decisions. Bounded rationality induces customers to form different estimates of the service quality and leads the service provider to use pricing as a market segmentation tool, which is radically different from the full rationality setting. When the service provider also has control over quality, I find that it may reduce both quality and price as customers gather more anecdotes. In addition, a high-quality service provider may not disclose quality information if the sample size is small. In the second study, I analyse the performance of opaque selling in countering the negative revenue impact of consumers' strategic waiting behaviour in vertically differentiated markets. The advantage of opaque selling is to increase the firm's regular price, whereas the disadvantage lies in the inflexibility of segmenting different types of consumers. Both the advantage and the disadvantage are radically different from their counterparts in horizontally differentiated markets, and this contrast generates opposite policy recommendations across the two settings. In the third study, I investigate an online store's product return policy when competing with a physical store, in which consumers can try the product before purchase. I find that the online store should offer product return only if it is socially efficient. Moreover, it should allocate the product return cost between the online store and the consumers so as to minimise the total return cost.
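
    An illustrative sketch, not the thesis's model, of the anecdote-based belief formation described in the first study: a customer averages a handful of past quality draws and joins only if the estimated quality net of price and expected waiting cost is positive. The quality distribution, price, and waiting cost below are hypothetical.

```python
# Anecdotal reasoning sketch: smaller sample sizes give noisier estimates,
# i.e., a stronger form of bounded rationality. All parameters are assumptions.
import random

def anecdotal_estimate(past_experiences, sample_size: int) -> float:
    """Average of `sample_size` randomly drawn anecdotes."""
    anecdotes = random.sample(past_experiences, sample_size)
    return sum(anecdotes) / sample_size

def joins(estimate: float, price: float, expected_wait_cost: float) -> bool:
    """Join iff estimated quality exceeds price plus expected waiting cost."""
    return estimate - price - expected_wait_cost > 0

random.seed(0)
past_experiences = [random.gauss(5.0, 1.5) for _ in range(1000)]  # assumed quality draws
for k in (1, 5, 50):
    est = anecdotal_estimate(past_experiences, k)
    print(k, round(est, 2), joins(est, price=3.0, expected_wait_cost=1.0))
```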