
    Multiplayer Cost Games with Simple Nash Equilibria

    Multiplayer games with selfish agents naturally occur in the design of distributed and embedded systems. As the goals of selfish agents are usually neither equivalent nor antagonistic to each other, such games are non-zero-sum games. We study such games and show that a large class of them, including games where the individual objectives are mean- or discounted-payoff, or quantitative reachability, not only have a solution, but a simple solution. We establish the existence of Nash equilibria that are composed of k memoryless strategies for each agent in a setting with k agents: one main and k-1 minor strategies. The main strategy describes what happens when all agents comply, whereas the minor strategies ensure that all other agents immediately start to co-operate against the agent who first deviates from the plan. This simplicity is important, as rational agents are an idealisation. Realistically, agents have to decide on their moves with very limited resources, and complicated strategies that require exponential--or even non-elementary--implementations are out of reach in practice. The existence of simple strategies that we prove in this paper therefore holds a promise of implementability. Comment: 23 pages
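The "one main plus k-1 punishment strategies" construction described in the abstract can be illustrated with a toy simulation. Everything below (the game model, the name `simulate`, and the constant-action strategies) is a hypothetical sketch, not code from the paper:

```python
def simulate(rounds, agents, main, punish, deviation=None):
    """Play a profile of memoryless strategies (here: constant actions).

    Everyone follows her main strategy until the first observed deviation;
    from the next round on, all other agents switch to the punishment
    strategy aimed at the deviator, who is assumed to keep deviating.
    deviation = (round, agent, action) schedules one deviation, or None.
    """
    history, deviator = [], None
    for t in range(rounds):
        profile = {}
        for i in agents:
            if deviator is None:
                profile[i] = main[i]              # all comply: main strategy
            elif i == deviator:
                profile[i] = deviation[2]         # deviator keeps deviating
            else:
                profile[i] = punish[i][deviator]  # minor strategy vs deviator
        if deviator is None and deviation is not None and t == deviation[0]:
            profile[deviation[1]] = deviation[2]  # the deviation happens now
            deviator = deviation[1]
        history.append(profile)
    return history
```

With three agents whose main action is 'a' and punishment action 'p', scheduling a deviation by agent 2 in round 1 makes agents 0 and 1 play 'p' from round 2 onwards, mirroring the "everyone turns on the first deviator" idea.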

    Decision Problems for Nash Equilibria in Stochastic Games

    We analyse the computational complexity of finding Nash equilibria in stochastic multiplayer games with ω-regular objectives. While the existence of an equilibrium whose payoff falls into a certain interval may be undecidable, we single out several decidable restrictions of the problem. First, restricting the search space to stationary, or pure stationary, equilibria results in problems that are typically contained in PSPACE and NP, respectively. Second, we show that the existence of an equilibrium with a binary payoff (i.e. an equilibrium where each player either wins or loses with probability 1) is decidable. We also establish that the existence of a Nash equilibrium with a certain binary payoff entails the existence of an equilibrium with the same payoff in pure, finite-state strategies. Comment: 22 pages, revised version

    Malicious Bayesian Congestion Games

    In this paper, we introduce malicious Bayesian congestion games as an extension of congestion games in which players might act in a malicious way. In such a game each player has two types: either the player is a rational player seeking to minimize her own delay, or - with a certain probability - the player is malicious, in which case her only goal is to disturb the other players as much as possible. We show that such games do not, in general, possess a Bayesian Nash equilibrium in pure strategies (i.e. a pure Bayesian Nash equilibrium). Moreover, given a game, we show that it is NP-complete to decide whether it admits a pure Bayesian Nash equilibrium. This result holds even when resource latency functions are linear, each player is malicious with the same probability, and all strategy sets consist of singleton sets. For a slightly more restricted class of malicious Bayesian congestion games, we provide easily checkable properties that are necessary and sufficient for the existence of a pure Bayesian Nash equilibrium. In the second part of the paper we study the impact of the malicious types on the overall performance of the system (i.e. the social cost). To measure this impact, we use the Price of Malice. We provide (tight) bounds on the Price of Malice for an interesting class of malicious Bayesian congestion games. Moreover, we show that for certain congestion games the advent of malicious types can also be beneficial to the system, in the sense that the social cost of the worst-case equilibrium decreases. We provide a tight bound on the maximum factor by which this happens. Comment: 18 pages, submitted to WAOA'0
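For intuition on the singleton strategy sets mentioned above, here is a minimal sketch of checking whether a pure profile is a Nash equilibrium among purely rational players. The function name and the two-resource example are invented, and the Bayesian/malicious types of the paper are not modelled:

```python
from collections import Counter

def is_pure_nash(choices, latencies):
    """choices[i] is the resource picked by player i; latencies[r] maps the
    load on resource r to its delay. Returns True iff no player can strictly
    reduce her delay by unilaterally switching to another resource."""
    load = Counter(choices)
    for r in choices:                      # one check per player's chosen resource
        current = latencies[r](load[r])
        for r2 in latencies:
            # switching to r2 would raise its load by one
            if r2 != r and latencies[r2](load[r2] + 1) < current:
                return False
    return True
```

With two players and two identical linear resources, the split profile is an equilibrium while the bunched profile is not, since either bunched player would halve her delay by moving.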

    Diagnostic and prognostic significance of systemic alkyl quinolones for P. aeruginosa in cystic fibrosis: a longitudinal study

    Background Pulmonary P. aeruginosa infection is associated with poor outcomes in cystic fibrosis (CF), and early diagnosis is challenging, particularly in those who are unable to expectorate sputum. Specific P. aeruginosa 2-alkyl-4-quinolones are detectable in the sputum, plasma and urine of adults with CF, suggesting that they have potential as biomarkers for P. aeruginosa infection. Aim To investigate systemic 2-alkyl-4-quinolones as potential biomarkers for pulmonary P. aeruginosa infection. Methods A multicentre observational study of 176 adults and 68 children with CF. Cross-sectionally, comparisons were made between current P. aeruginosa infection status, assessed using six 2-alkyl-4-quinolones detected in sputum, plasma and urine, and hospital microbiological culture results. All participants without P. aeruginosa infection at baseline were followed up for one year to determine whether 2-alkyl-4-quinolones were early biomarkers of pulmonary P. aeruginosa infection. Results Cross-sectional analysis: the most promising biomarker with the greatest diagnostic accuracy was 2-heptyl-4-hydroxyquinoline (HHQ). In adults, areas under the ROC curves (95% confidence intervals) for HHQ analyses were 0.82 (0.75–0.89) in sputum, 0.76 (0.69–0.82) in plasma and 0.82 (0.77–0.88) in urine. In children, the corresponding values for HHQ analyses were 0.88 (0.77–0.99) in plasma and 0.83 (0.68–0.97) in urine. Longitudinal analysis: ten adults and six children had a new positive respiratory culture for P. aeruginosa in follow-up. A positive plasma HHQ test at baseline was significantly associated with a new positive culture for P. aeruginosa in follow-up, in both adults and children (odds ratio (OR) = 6.67; 95% CI: 1.48–30.1; p = 0.01 and OR = 70; 95% CI: 5–956; p < 0.001, respectively). Conclusions AQs measured in sputum, plasma and urine may be used to diagnose current infection with P. aeruginosa in adults and children with CF. These preliminary data show that plasma HHQ may have potential as an early biomarker of pulmonary P. aeruginosa infection. Further studies are necessary to evaluate whether HHQ could be used in clinical practice to aid early diagnosis of P. aeruginosa infection in the future.
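The diagnostic accuracies quoted above are areas under ROC curves. As a reminder of what that number means, here is a minimal Mann-Whitney-style AUC sketch with made-up scores; this is illustrative only, not the study's statistical analysis:

```python
def auc(positives, negatives):
    """AUC = probability that a randomly chosen positive (infected) sample
    scores higher than a randomly chosen negative one; ties count half."""
    pairs = len(positives) * len(negatives)
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in positives for n in negatives)
    return wins / pairs
```

An AUC of 0.5 corresponds to a useless marker, 1.0 to a perfect one, so the 0.76-0.88 values reported for HHQ indicate substantial but imperfect discrimination.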

    Ecological Invasion, Roughened Fronts, and a Competitor's Extreme Advance: Integrating Stochastic Spatial-Growth Models

    Both community ecology and conservation biology seek further understanding of factors governing the advance of an invasive species. We model biological invasion as an individual-based, stochastic process on a two-dimensional landscape. An ecologically superior invader and a resident species compete preemptively for space. Our general model includes the basic contact process and a variant of the Eden model as special cases. We employ the concept of a "roughened" front to quantify the effects of discreteness and stochasticity on invasion; we emphasize the probability distribution of the front-runner's relative position. That is, we analyze the location of the most advanced invader as the extreme deviation about the front's mean position. We find that a class of models with different assumptions about neighborhood interactions exhibits universal characteristics. That is, key features of the invasion dynamics span a class of models, independently of locally detailed demographic rules. Our results integrate theories of invasive spatial growth and generate novel hypotheses linking habitat or landscape size (the length of the invading front) to invasion velocity, and to the relative position of the most advanced invader. Comment: The original publication is available at www.springerlink.com/content/8528v8563r7u2742
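As a caricature of the front-runner statistic described above, the following sketch grows a front by advancing uniformly random columns and reports the lead of the most advanced column over the mean front position. This is a deliberately crude stand-in for the paper's individual-based two-dimensional model, and all names are invented:

```python
import random

def grow_front(width, steps, seed=1):
    """Advance a flat front by one site in a uniformly chosen column per
    step; return the column heights and the front-runner's lead (the
    extreme deviation of the most advanced column above the mean front)."""
    rng = random.Random(seed)
    heights = [0] * width
    for _ in range(steps):
        heights[rng.randrange(width)] += 1   # stochastic local growth
    lead = max(heights) - sum(heights) / width
    return heights, lead
```

Even in this stripped-down setting, stochasticity roughens the front, so the front-runner's lead grows with the number of steps and with the front length `width`, which is the kind of landscape-size dependence the abstract highlights.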

    Towards Machine Wald

    The past century has seen a steady increase in the need to estimate and predict complex systems and make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed \emph{by humans} because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to \emph{think} as \emph{humans} do when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models remains to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite-dimensional, whereas calculus on a computer is necessarily discrete and finite. To this end, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification and Information Based Complexity. Comment: 37 pages

    Measurement of the B0-anti-B0-Oscillation Frequency with Inclusive Dilepton Events

    The $B^0$-$\bar B^0$ oscillation frequency has been measured with a sample of 23 million $B\bar B$ pairs collected with the BABAR detector at the PEP-II asymmetric B Factory at SLAC. In this sample, we select events in which both B mesons decay semileptonically and use the charge of the leptons to identify the flavor of each B meson. A simultaneous fit to the decay time difference distributions for opposite- and same-sign dilepton events gives $\Delta m_d = 0.493 \pm 0.012\,\mathrm{(stat)} \pm 0.009\,\mathrm{(syst)}~\mathrm{ps}^{-1}$. Comment: 7 pages, 1 figure, submitted to Physical Review Letters
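The dilepton method rests on the standard time-dependent mixing formulae. In the idealised limit of perfect flavour tagging and time resolution (a textbook sketch, not the paper's full fit model), the opposite-sign (unmixed) and same-sign (mixed) decay-time-difference distributions and their asymmetry read:

```latex
\begin{align}
  P_{\mathrm{OS}}(\Delta t) &\propto e^{-|\Delta t|/\tau_{B^0}}
      \left[1 + \cos(\Delta m_d\,\Delta t)\right],\\
  P_{\mathrm{SS}}(\Delta t) &\propto e^{-|\Delta t|/\tau_{B^0}}
      \left[1 - \cos(\Delta m_d\,\Delta t)\right],\\
  A(\Delta t) &= \frac{N_{\mathrm{OS}}(\Delta t) - N_{\mathrm{SS}}(\Delta t)}
                     {N_{\mathrm{OS}}(\Delta t) + N_{\mathrm{SS}}(\Delta t)}
             = \cos(\Delta m_d\,\Delta t).
\end{align}
```

The oscillation frequency $\Delta m_d$ is then the frequency of the cosine modulating the asymmetry; the measurement itself folds in mistag rates and detector resolution.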

    Detector Description and Performance for the First Coincidence Observations between LIGO and GEO

    For 17 days in August and September 2002, the LIGO and GEO interferometric gravitational-wave detectors were operated in coincidence to produce their first data for scientific analysis. Although the detectors were still far from their design sensitivity levels, the data can be used to place better upper limits on the flux of gravitational waves incident on the Earth than previous direct measurements. This paper describes the instruments and the data in some detail, as a companion to analysis papers based on the first data. Comment: 41 pages, 9 figures. 17 Sept 03: author list amended, minor editorial changes

    Measurement of the polarisation of W bosons produced with large transverse momentum in pp collisions at sqrt(s) = 7 TeV with the ATLAS experiment

    This paper describes an analysis of the angular distribution of W->enu and W->munu decays, using data from pp collisions at sqrt(s) = 7 TeV recorded with the ATLAS detector at the LHC in 2010, corresponding to an integrated luminosity of about 35 pb^-1. Using the decay lepton transverse momentum and the missing transverse energy, the W decay angular distribution projected onto the transverse plane is obtained and analysed in terms of helicity fractions f0, fL and fR over two ranges of W transverse momentum (ptw): 35 < ptw < 50 GeV and ptw > 50 GeV. Good agreement is found with theoretical predictions. For ptw > 50 GeV, the values of f0 and fL-fR, averaged over charge and lepton flavour, are measured to be f0 = 0.127 +/- 0.030 +/- 0.108 and fL-fR = 0.252 +/- 0.017 +/- 0.030, where the first uncertainties are statistical and the second include all systematic effects. Comment: 19 pages plus author list (34 pages total), 9 figures, 11 tables, revised author list, matches European Physical Journal C version
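The helicity fractions f0, fL and fR parametrise the standard angular decomposition of W decay. In the generic (non-projected) form commonly used in such analyses, quoted here as background rather than taken from the paper, the normalised distribution of the charged-lepton decay angle theta in the W rest frame is:

```latex
\frac{1}{\sigma}\frac{d\sigma}{d\cos\theta}
  = \frac{3}{8}\, f_{\mathrm L}\,(1 \mp \cos\theta)^{2}
  + \frac{3}{8}\, f_{\mathrm R}\,(1 \pm \cos\theta)^{2}
  + \frac{3}{4}\, f_{0}\,\sin^{2}\theta,
\qquad f_{0} + f_{\mathrm L} + f_{\mathrm R} = 1,
```

with the upper signs for W+ and the lower signs for W-. The measurement above works with the analogous distribution projected onto the transverse plane, since the longitudinal neutrino momentum is not observed.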

    Observation of a new chi_b state in radiative transitions to Upsilon(1S) and Upsilon(2S) at ATLAS

    The chi_b(nP) quarkonium states are produced in proton-proton collisions at the Large Hadron Collider (LHC) at sqrt(s) = 7 TeV and recorded by the ATLAS detector. Using a data sample corresponding to an integrated luminosity of 4.4 fb^-1, these states are reconstructed through their radiative decays to Upsilon(1S,2S) with Upsilon->mu+mu-. In addition to the mass peaks corresponding to the decay modes chi_b(1P,2P)->Upsilon(1S)gamma, a new structure centered at a mass of 10.530+/-0.005 (stat.)+/-0.009 (syst.) GeV is also observed, in both the Upsilon(1S)gamma and Upsilon(2S)gamma decay modes. This is interpreted as the chi_b(3P) system. Comment: 5 pages plus author list (18 pages total), 2 figures, 1 table, corrected author list, matches final version in Physical Review Letters