Connectivity and equilibrium in random games
We study how the structure of the interaction graph of a game affects the
existence of pure Nash equilibria. In particular, for a fixed interaction
graph, we are interested in whether there are pure Nash equilibria arising when
random utility tables are assigned to the players. We provide conditions for
the structure of the graph under which equilibria are likely to exist and
complementary conditions which make the existence of equilibria highly
unlikely. Our results have immediate implications for many deterministic graphs
and generalize known results for random games on the complete graph. In
particular, our results imply that the probability that bounded degree graphs
have pure Nash equilibria is exponentially small in the size of the graph and
yield a simple algorithm that finds small nonexistence certificates for a large
family of graphs. Then we show that in any strongly connected graph of n
vertices with expansion (1+Ω(1))log_2(n), the distribution of the number of
equilibria approaches the Poisson distribution with parameter 1,
asymptotically as n → +∞.
Comment: Published at http://dx.doi.org/10.1214/10-AAP715 in the Annals of
  Applied Probability (http://www.imstat.org/aap/) by the Institute of
  Mathematical Statistics (http://www.imstat.org)
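As a toy illustration of this Poisson limit (not taken from the paper), the following sketch draws random two-action games on the complete graph, counts their pure Nash equilibria by brute force, and compares the empirical distribution with Poisson(1); all parameter choices are illustrative.

```python
import itertools, random
from collections import Counter

def count_pure_nash(n_players, n_actions=2):
    # Random utility tables: each player's payoff at each action profile is
    # an independent Uniform(0, 1) draw (the complete-graph random game).
    profiles = list(itertools.product(range(n_actions), repeat=n_players))
    u = [{p: random.random() for p in profiles} for _ in range(n_players)]
    def deviations(p, i):
        return (p[:i] + (a,) + p[i + 1:] for a in range(n_actions))
    # A profile is a pure Nash equilibrium iff no unilateral deviation helps.
    return sum(
        all(u[i][p] >= u[i][q] for i in range(n_players) for q in deviations(p, i))
        for p in profiles
    )

random.seed(0)
trials = 2000
hist = Counter(count_pure_nash(8) for _ in range(trials))
for k in sorted(hist):
    print(k, hist[k] / trials)  # compare with Poisson(1): exp(-1) / k!
```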
Searches for the Higgs Boson in CMS
The CMS potential for the Higgs boson discovery is discussed in the framework of the Standard Model (SM) and its Minimal Supersymmetric extension (MSSM)
Stretching Method-Based Operational Modal Analysis of an Old Masonry Lighthouse
We present in this paper a structural health monitoring study of the Egyptian lighthouse of Rethymnon in Crete, Greece. Using structural vibration data collected on a limited number of sensors during a 3-month period, we illustrate the potential of the stretching method for monitoring variations in the natural frequencies of the structure. The stretching method compares two signals: the current one, which reflects the actual state of the structure, and a reference one, which characterizes the structure in a healthy reference condition. For the structure under study, an 8-day time interval is used for the reference quantity, while the current quantity is computed using a time window of 24 h. Our results indicate that frequency shifts of 1% can be detected with high accuracy, allowing for early damage assessment. We also provide a simple numerical model that is calibrated to match the natural frequencies estimated using the stretching method. The model is used to produce possible damage scenarios that correspond to a 1% shift in the first natural frequencies. Although simple in nature, this model seems to deliver a realistic response of the structure, as shown by comparing the response at the top of the structure to the actual measurements during a small earthquake. This is a preliminary study indicating the potential of the stretching method for structural health monitoring of historical monuments. The results are very promising, but further analysis is necessary, requiring deployment of the instrumentation (possibly with additional instruments) for a longer period of time
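For readers unfamiliar with the technique, here is a minimal sketch of the stretching method in Python; it is an assumed generic form, not the authors' implementation, and the synthetic example is purely illustrative. The current trace is resampled with trial stretch factors, and the factor maximizing the correlation with the reference estimates the relative frequency shift.

```python
import numpy as np

def stretching(reference, current, t, eps_grid):
    # Resample the current trace with stretch factor (1 + eps) and keep the
    # eps that maximizes the correlation coefficient with the reference.
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        stretched = np.interp(t, t * (1.0 + eps), current)
        cc = np.corrcoef(reference, stretched)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return best_eps, best_cc

# Synthetic check: a 1% frequency increase is recovered as eps ≈ 0.01.
t = np.linspace(0.0, 10.0, 2000)
reference = np.sin(2 * np.pi * 1.00 * t)
current = np.sin(2 * np.pi * 1.01 * t)
print(stretching(reference, current, t, np.linspace(-0.03, 0.03, 601)))
```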
Computer-aided verification in mechanism design
In mechanism design, the gold standard solution concepts are dominant
strategy incentive compatibility and Bayesian incentive compatibility. These
solution concepts relieve the (possibly unsophisticated) bidders from the need
to engage in complicated strategizing. While incentive properties are simple to
state, their proofs are specific to the mechanism and can be quite complex.
This raises two concerns. From a practical perspective, checking a complex
proof can be a tedious process, often requiring experts knowledgeable in
mechanism design. Furthermore, from a modeling perspective, if unsophisticated
agents are unconvinced of incentive properties, they may strategize in
unpredictable ways.
  To address both concerns, we explore techniques from computer-aided
verification to construct formal proofs of incentive properties. Because formal
proofs can be automatically checked, agents do not need to manually check the
properties, or even understand the proof. To demonstrate, we present the
verification of a sophisticated mechanism: the generic reduction from Bayesian
incentive compatible mechanism design to algorithm design given by Hartline,
Kleinberg, and Malekian. This mechanism presents new challenges for formal
verification, including essential use of randomness from both the execution of
the mechanism and from the prior type distributions. As an immediate
consequence, our work also formalizes Bayesian incentive compatibility for the
entire family of mechanisms derived via this reduction. Finally, as an
intermediate step in our formalization, we provide the first formal
verification of incentive compatibility for the celebrated
Vickrey-Clarke-Groves mechanism
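The formal proofs themselves live in a proof assistant, but the property being verified is easy to state programmatically. As a hedged illustration (a randomized sanity check, not the paper's machine-checked proof), the sketch below tests that truthful bidding dominates any deviation in the single-item Vickrey auction, the simplest VCG instance; all names are illustrative.

```python
import random

def vickrey(bids):
    # Single-item Vickrey auction: highest bid wins, pays the second-highest.
    winner = max(range(len(bids)), key=lambda i: bids[i])
    price = max(b for i, b in enumerate(bids) if i != winner)
    return winner, price

def utility(i, value, bids):
    winner, price = vickrey(bids)
    return value - price if winner == i else 0.0

random.seed(1)
for _ in range(10_000):
    n = random.randint(2, 5)
    value = random.random()
    others = [random.random() for _ in range(n - 1)]
    i = random.randrange(n)
    truthful = others[:i] + [value] + others[i:]
    deviated = others[:i] + [random.random()] + others[i:]
    # Truthful bidding should never be strictly worse than deviating.
    assert utility(i, value, truthful) >= utility(i, value, deviated) - 1e-12
print("no profitable deviation found in 10,000 random trials")
```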
A Direct Reduction from k-Player to 2-Player Approximate Nash Equilibrium
We present a direct reduction from k-player games to 2-player games that
preserves approximate Nash equilibrium. Previously, the computational
equivalence of computing approximate Nash equilibrium in k-player and 2-player
games was established via an indirect reduction. This included a sequence of
works defining the complexity class PPAD, identifying complete problems for
this class, showing that computing approximate Nash equilibrium for k-player
games is in PPAD, and reducing a PPAD-complete problem to computing approximate
Nash equilibrium for 2-player games. Our direct reduction makes no use of the
concept of PPAD, thus eliminating some of the difficulties involved in
following the known indirect reduction.
Comment: 21 pages
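To make the object being preserved concrete, the following illustrative sketch (not part of the reduction itself) computes the epsilon for which a given mixed-strategy pair is an epsilon-Nash equilibrium of a 2-player game, i.e. the largest payoff either player forgoes by not best-responding.

```python
import numpy as np

def nash_eps(A, B, x, y):
    # Regret of each player: best-response payoff minus current payoff.
    row_regret = np.max(A @ y) - x @ A @ y
    col_regret = np.max(B.T @ x) - x @ B @ y
    return max(row_regret, col_regret)

# Matching pennies: the uniform strategies form an exact (eps = 0) equilibrium.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
x = y = np.array([0.5, 0.5])
print(nash_eps(A, B, x, y))  # 0.0
```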
Does A Short, Thick Neck Predict Obstructive Sleep Apnea?: The Role of Physical Examination in OSA Screening
Purpose: 
The purpose of this study was to determine whether a short neck, alone or together with a thick neck, can predict obstructive sleep apnea (OSA).
Methods: 
The laryngeal heights of 169 new adult patients presenting to a sleep medicine physician were measured over a period of 5 months. Neck circumference, Mallampati score, and body-mass index (BMI) were also determined, together with medical history, smoking status, and serum bicarbonate. Lastly, patients’ polysomnograms were obtained in order to ascertain the presence or absence of OSA as indicated by the apnea-hypopnea index, as well as other sleep study parameters.
Results: 
No association was found between laryngeal height and the presence of OSA, bicarbonate concentration, or oxygen saturation. Of interest, neck circumference was also not significantly associated with any of the aforementioned parameters, although there was a trend towards significance in its association with OSA (p=0.055). Still, a combined short laryngeal height and large neck circumference was associated with a lower nadir SaO2 (p=0.018). Of all the clinical parameters we measured, only higher BMI, older age, and male sex were positively associated with OSA (p<0.05).
Conclusion: 
This study challenges the popular notion that short necks predict OSA
Robust seismic velocity change estimation using ambient noise recordings
We consider the problem of seismic velocity change estimation using ambient
noise recordings. Motivated by [23] we study how the velocity change estimation
is affected by seasonal fluctuations in the noise sources. More precisely, we
consider a numerical model and introduce spatio-temporal seasonal fluctuations
in the noise sources. We show that indeed, as pointed out in [23], the
stretching method is affected by these fluctuations and produces misleading
apparent velocity variations which reduce dramatically the signal to noise
ratio of the method. We also show that these apparent velocity variations can
be eliminated by an adequate normalization of the cross-correlation functions.
Theoretically we expect our approach to work as long as the seasonal
fluctuations in the noise sources are uniform, an assumption which holds for
closely located seismic stations. We illustrate with numerical simulations and
real measurements that the proposed normalization significantly improves the
accuracy of the velocity change estimation
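As a hedged sketch of the normalization idea (the exact normalization used in the paper may differ), one can divide each cross-correlation by its energy before applying the stretching method, so that amplitude fluctuations of the noise sources do not contaminate the waveform comparison:

```python
import numpy as np

def normalized_cross_correlation(u1, u2):
    # Cross-correlate two noise records and normalize by the energy of the
    # result, removing source-amplitude fluctuations from the comparison.
    c = np.correlate(u1, u2, mode="full")
    energy = np.linalg.norm(c)
    return c / energy if energy > 0 else c
```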
Efficiency in Multi-objective Games
In a multi-objective game, each agent individually evaluates each overall
action-profile on multiple objectives. I generalize the price of anarchy to
multi-objective games and provide a polynomial-time algorithm to assess it.
As an application, this work asserts that tobacco policies promote higher
economic efficiency.
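For the single-objective special case being generalized here, the price of anarchy of a finite game can be computed by brute force; the sketch below (illustrative only, not the paper's polynomial-time algorithm) takes the ratio between the optimal social welfare and the welfare of the worst pure Nash equilibrium.

```python
import itertools

def price_of_anarchy(utilities, action_sets):
    # utilities[i](profile) -> payoff of player i; welfare is their sum.
    profiles = list(itertools.product(*action_sets))
    welfare = lambda p: sum(u(p) for u in utilities)
    def is_nash(p):
        return all(
            u(p) >= u(p[:i] + (a,) + p[i + 1:])
            for i, u in enumerate(utilities)
            for a in action_sets[i]
        )
    equilibria = [p for p in profiles if is_nash(p)]
    if not equilibria:
        return None  # no pure Nash equilibrium exists
    return max(welfare(p) for p in profiles) / min(welfare(p) for p in equilibria)

# Prisoner's dilemma: the unique equilibrium (defect, defect) yields PoA = 3.
payoff = [[3, 0], [5, 1]]  # payoff[my_action][opponent_action]
u0 = lambda p: payoff[p[0]][p[1]]
u1 = lambda p: payoff[p[1]][p[0]]
print(price_of_anarchy([u0, u1], [(0, 1), (0, 1)]))  # 3.0
```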
Epigenetics as a mechanism driving polygenic clinical drug resistance
Aberrant methylation of CpG islands located at or near gene promoters is associated with inactivation of gene expression during tumour development. It is increasingly recognised that such epimutations may occur at a much higher frequency than gene mutation and therefore have a greater impact on selection of subpopulations of cells during tumour progression or acquisition of resistance to anticancer drugs. Although laboratory-based models of acquired resistance to anticancer agents tend to focus on specific genes or biochemical pathways, such 'one gene : one outcome' models may be an oversimplification of acquired resistance to treatment of cancer patients. Instead, clinical drug resistance may be due to changes in expression of a large number of genes that have a cumulative impact on chemosensitivity. Aberrant CpG island methylation of multiple genes occurring in a nonrandom manner during tumour development and during the acquisition of drug resistance provides a mechanism whereby expression of multiple genes could be affected simultaneously resulting in polygenic clinical drug resistance. If simultaneous epigenetic regulation of multiple genes is indeed a major driving force behind acquired resistance of patients' tumour to anticancer agents, this has important implications for biomarker studies of clinical outcome following chemotherapy and for clinical approaches designed to circumvent or modulate drug resistance
Multi-model evaluation of short-lived pollutant distributions over East Asia during summer 2008
The ability of seven state-of-the-art chemistry-aerosol models to reproduce distributions of tropospheric ozone and its precursors, as well as aerosols, over eastern Asia in summer 2008 is evaluated. The study focuses on the performance of models used to assess impacts of pollutants on climate and air quality as part of the EU ECLIPSE project. The models, run using the same ECLIPSE emissions, are compared over different spatial scales to in-situ surface, vertical profile, and satellite data. Several rather clear biases are found between model results and observations, including overestimation of ozone at rural locations downwind of the main emission regions in China as well as downwind over the Pacific. Several models produce too much ozone over polluted regions, which is then transported downwind. Analysis points to different factors related to the ability of models to simulate VOC-limited regimes over polluted regions and NOx-limited regimes downwind. This may also be linked to biases relative to satellite NO2, indicating overestimation of NO2 over and to the north of the northern China Plain emission region. On the other hand, model NO2 is too low to the south and east of this region and over Korea/Japan. Overestimation of ozone is linked to systematic underestimation of CO, particularly at rural sites and downwind of the main Chinese emission regions. This is likely due to enhanced destruction of CO by OH. Overestimation of Asian ozone and its transport downwind implies that radiative forcing from this source may be overestimated. Model-observation discrepancies over Beijing do not appear to be due to the emission controls linked to the Olympic Games in summer 2008. With regard to aerosols, most models reproduce the satellite-derived AOD patterns over eastern China. Our study nevertheless reveals an overestimation of ECLIPSE model-mean surface BC and sulphate aerosols in urban China in summer 2008. The effect of the short-term emission mitigation in Beijing is too weak to explain the differences between the models. Our results rather point to an overestimation of SO2 emissions, in particular close to the surface in Chinese urban areas. However, we also identify a clear underestimation of aerosol concentrations over northern India, suggesting that the rapid recent growth of emissions in India, as well as their spatial extent, is underestimated in emission inventories. Model deficiencies in the representation of pollution accumulation due to the Indian monsoon may also be playing a role. Comparison with vertical aerosol lidar measurements highlights a general underestimation of scattering aerosols in the boundary layer, associated with an overestimation in the free troposphere, pointing to modeled aerosol lifetimes that are too long. This is likely linked to overly strong vertical transport and/or insufficient deposition efficiency during transport or export from the boundary layer, rather than to chemical processing (in the case of sulphate aerosols). Underestimation of sulphate in the boundary layer implies potentially large errors in simulated aerosol-cloud interactions via impacts on boundary-layer clouds. This evaluation has important implications for accurate assessment of air pollutants on regional air quality and global climate based on global model calculations. Ideally, models should be run at higher resolution over source regions to better simulate urban-rural pollutant gradients and chemical regimes, and also to better resolve pollutant processing and loss by wet deposition as well as vertical transport. Discrepancies in vertical distributions require further quantification and improvement, since this is a key factor in the determination of radiative forcing from short-lived pollutants
