
    Pure Nash Equilibria: Hard and Easy Games

    We investigate complexity issues related to pure Nash equilibria of strategic games. We show that, even in very restrictive settings, determining whether a game has a pure Nash equilibrium is NP-hard, while deciding whether a game has a strong Nash equilibrium is Σ₂ᵖ-complete. We then study practically relevant restrictions that lower the complexity. In particular, we are interested in quantitative and qualitative restrictions on the way each player's payoff depends on the moves of other players. We say that a game has a small neighborhood if the utility function of each player depends only on (the actions of) a logarithmically small number of other players. The dependency structure of a game G can be expressed by a graph DG(G) or by a hypergraph H(G). By relating Nash equilibrium problems to constraint satisfaction problems (CSPs), we show that if G has a small neighborhood and if H(G) has bounded hypertree width (or if DG(G) has bounded treewidth), then finding pure Nash and Pareto equilibria is feasible in polynomial time. If the game is graphical, then these problems are LOGCFL-complete and thus in the class NC² of highly parallelizable problems.
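    For concreteness, here is a minimal brute-force sketch (our illustration, not the paper's algorithm) of the decision problem shown NP-hard above: it enumerates all action profiles of a normal-form game and tests each for profitable unilateral deviations, so it runs in time exponential in the number of players. The names has_pure_nash, n_players, n_actions, and payoffs are hypothetical.

        from itertools import product

        def has_pure_nash(n_players, n_actions, payoffs):
            # payoffs[i][profile] is player i's utility at the action profile
            # (a tuple with one action index per player).
            for profile in product(range(n_actions), repeat=n_players):
                # A profile is a pure Nash equilibrium iff no player can
                # improve its payoff by a unilateral deviation.
                if all(
                    payoffs[i][profile] >= payoffs[i][profile[:i] + (a,) + profile[i + 1:]]
                    for i in range(n_players)
                    for a in range(n_actions)
                ):
                    return True
            return False

        # Matching pennies has no pure Nash equilibrium:
        pennies = {
            0: {(0, 0): 1, (0, 1): -1, (1, 0): -1, (1, 1): 1},
            1: {(0, 0): -1, (0, 1): 1, (1, 0): 1, (1, 1): -1},
        }
        assert not has_pure_nash(2, 2, pennies)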

    Impact of temperature dependence of the energy loss on jet quenching observables

    The quenching of jets (particles with $p_T \gg T, \Lambda_{QCD}$) in ultra-relativistic heavy-ion collisions has been one of the main predictions and discoveries at RHIC. We have studied, by means of a simple jet quenching model, the correlation between different observables: the nuclear modification factor $R_{AA}(p_T)$, the elliptic flow $v_2$, and the ratio of quark to gluon suppression $R_{AA}(quark)/R_{AA}(gluon)$. We show that the relation among these observables is strongly affected by the temperature dependence of the energy loss. In particular, the large $v_2$ and the nearly equal $R_{AA}(p_T)$ of quarks and gluons can be accounted for only if the energy loss occurs mainly around the temperature $T_c$ and the flavour conversion is significant. Finally, we point out that the efficiency of the conversion of the spatial eccentricity into the momentum-space one ($v_2$) turns out to be considerably smaller than the one arising from elastic scatterings in a fluid with a viscosity to entropy density ratio $4\pi\eta/s = 1$. Comment: 7 pages, 8 figures, Workshop WISH 201
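    For reference, the two observables above have the standard definitions (our addition, not part of the abstract), where $\langle N_{\mathrm{coll}} \rangle$ is the average number of binary nucleon-nucleon collisions and $\Psi_2$ the event-plane angle:

        \[
        R_{AA}(p_T) = \frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}
                           {\langle N_{\mathrm{coll}} \rangle \, \mathrm{d}N_{pp}/\mathrm{d}p_T},
        \qquad
        v_2 = \left\langle \cos 2\,(\phi - \Psi_2) \right\rangle .
        \]

    $R_{AA} = 1$ corresponds to no medium modification; jet quenching shows up as $R_{AA} < 1$ at high $p_T$.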

    A novel stepwise micro-TESE approach in non obstructive azoospermia

    Background: The purpose of the study was to investigate whether micro-TESE can improve the sperm retrieval rate (SRR) compared to a conventional single TESE biopsy on the same testicle or to contralateral multiple TESE, by employing a novel stepwise micro-TESE approach in a population of poor-prognosis patients with non-obstructive azoospermia (NOA). Methods: Sixty-four poor-prognosis NOA men undergoing surgical testicular sperm retrieval for ICSI, from March 2007 to April 2013, were included in this study. Patient inclusion criteria were a) previous unsuccessful TESE, b) unfavorable histology (SCOS, MA, sclerohyalinosis), c) Klinefelter syndrome. We employed a stepwise micro-TESE consisting of three steps: 1) single conventional TESE biopsy; 2) micro-TESE on the same testis; 3) contralateral multiple TESE. Results: The SRR was 28.1 % (18/64). Sperm was obtained both in the initial single conventional TESE and in the following micro-TESE. The positive or negative sperm retrieval was further confirmed by a contralateral multiple TESE, when performed. No significant pre-operative predictors of sperm retrieval, including patients' age, previous negative TESE, or serological markers (LH, FSH, inhibin B), were observed at univariate or multivariate analysis. Micro-TESE (step 2) did not improve sperm retrieval compared to the single TESE biopsy on the same testicle (step 1) or the multiple contralateral TESE (step 3). Conclusions: Stepwise micro-TESE could represent an optimal approach for sperm retrieval in NOA men. In our view, it should be offered to NOA patients in order to increase surgical invasiveness gradually, only when necessary. Stepwise micro-TESE might also reduce the costs, time, and effort involved in surgery.

    Introduction to the Special Issue on Liminal Hotspots

    This article introduces a special issue of Theory and Psychology on liminal hotspots. A liminal hotspot is an occasion during which people feel caught, suspended in the circumstances of a transition that has become permanent. The liminal experiences of ambiguity and uncertainty that are typically at play in transitional circumstances acquire an enduring quality that can be described as a “hotspot”. Liminal hotspots are characterized by dynamics of paradox, paralysis, and polarization, but they also intensify the potential for pattern shift. The origins of the concept are described, followed by an overview of the contributions to this special issue.

    New technique to measure the cavity defects of Fabry-Perot interferometers

    (Abridged): We define and test a new technique to accurately measure the cavity defects of air-spaced FPIs, including distortions due to the spectral tuning process typical of astronomical observations. We further develop a correction technique to keep the shape of the cavity as constant as possible during the spectral scan. These are necessary steps to optimize the spectral transmission profile of a two-dimensional spectrograph using one or more FPIs. We devise a generalization of the techniques developed for so-called phase-shifting interferometry to the case of FPIs. The technique is applicable to any FPI that can be tuned by changing the cavity spacing ($z$-axis), and can be used for any etalon regardless of the coating's reflectivity. The major strength of our method is the ability to fully characterize the cavity during a spectral scan, allowing for the determination of scan-dependent modifications of the plates. As a test, we have applied this technique to three 50 mm diameter interferometers, with cavity gaps ranging between 600 micron and 3 mm, coated for use in the visible range. We obtain accurate and reliable measurements of the cavity defects of air-spaced FPIs, and of their evolution during the entire spectral scan. Our main, and unexpected, result is that the relative tilt between the two FPI plates varies significantly during the spectral scan and can dominate the cavity defects; in particular, we observe that the tilt component at the extremes of the scan is appreciably larger than at the center of the scan. Exploiting the capability of the electronic controllers to set the reference plane at any given spectral step, we develop a correction technique that minimizes the tilt during a complete spectral scan. The correction remains highly stable over long periods, well beyond the typical duration of astronomical observations. Comment: 15 pages, 20+ figures, accepted for publication in A&A. Two additional movies are available in the online version of the paper.
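    As background (a textbook relation, not a result of the paper): an ideal FPI transmits the Airy profile

        \[
        T(\lambda) = \left[ 1 + \frac{4R}{(1-R)^2}
            \sin^2\!\left( \frac{2\pi\, n\, d \cos\theta}{\lambda} \right) \right]^{-1},
        \]

    where $R$ is the coating reflectivity, $n$ the refractive index in the gap, $d$ the cavity spacing, and $\theta$ the incidence angle. A local cavity defect $\Delta d$ therefore shifts the local transmission peak by $\Delta\lambda/\lambda \approx \Delta d/d$, which is why mapping $\Delta d$ across the aperture, and its evolution during the scan, directly constrains the instrument's spectral transmission profile.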

    Performance Bounds for Parameter Estimation under Misspecified Models: Fundamental findings and applications

    Inferring information from a set of acquired data is the main objective of any signal processing (SP) method. In particular, the common problem of estimating the value of a vector of parameters from a set of noisy measurements is at the core of a plethora of scientific and technological advances in recent decades, for example in wireless communications, radar and sonar, biomedicine, image processing, and seismology, just to name a few. Developing an estimation algorithm often begins by assuming a statistical model for the measured data, i.e., a probability density function (pdf) that, if correct, fully characterizes the behaviour of the collected data/measurements. Experience with real data, however, often exposes the limitations of any assumed data model, since modelling errors at some level are always present. Consequently, the true data model and the model assumed to derive the estimation algorithm may differ. When this happens, the model is said to be mismatched or misspecified. Therefore, understanding the possible performance loss, or regret, that an estimation algorithm could experience under model misspecification is of crucial importance for any SP practitioner. Further, understanding the limits on the performance of any estimator subject to model misspecification is of practical interest. Motivated by the widespread and practical need to assess the performance of a mismatched estimator, the first goal of this paper is to bring attention to the main theoretical findings on estimation theory, and in particular on lower bounds under model misspecification, that have been published in the statistical and econometric literature in the last fifty years. Secondly, some applications are discussed to illustrate the broad range of areas and problems to which this framework extends, and consequently the numerous opportunities available for SP researchers. Comment: To appear in the IEEE Signal Processing Magazine.
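    For orientation, the central objects of this literature can be summarized as follows (our notation, standard "sandwich" results of the Huber type, not a statement of the paper's contributions): if the data $x$ follow the true pdf $p(x)$ while the estimator is built from a misspecified family $f(x;\theta)$, the mismatched maximum-likelihood estimator converges to the pseudo-true parameter $\theta_0$, and the misspecified Cramér-Rao bound (MCRB) takes a sandwich form:

        \[
        \theta_0 = \arg\min_{\theta} D_{\mathrm{KL}}\big(p \,\|\, f_\theta\big),
        \qquad
        \mathrm{MCRB}(\theta_0) = A_{\theta_0}^{-1} B_{\theta_0} A_{\theta_0}^{-1},
        \]
        \[
        A_\theta = \mathbb{E}_p\!\left[\nabla_\theta^2 \ln f(x;\theta)\right],
        \qquad
        B_\theta = \mathbb{E}_p\!\left[\nabla_\theta \ln f(x;\theta)\,
                                       \nabla_\theta \ln f(x;\theta)^{\mathsf T}\right].
        \]

    When the model is correctly specified ($p = f_{\theta_0}$), $A_{\theta_0} = -B_{\theta_0}$ and the sandwich collapses to the familiar Cramér-Rao bound.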