
    Fixation in Evolutionary Games under Non-Vanishing Selection

    One of the most striking effects of fluctuations in evolutionary game theory is the possibility for mutants to fixate (take over) an entire population. Here, we generalize a recent WKB-based theory to study fixation in evolutionary games under non-vanishing selection, and investigate the relation between the selection intensity $w$ and demographic (random) fluctuations. This allows the accurate treatment of large fluctuations and yields the probability and mean times of fixation beyond the weak-selection limit. The power of the theory is demonstrated on prototypical models of cooperation dilemmas with multiple absorbing states. Our predictions agree excellently with numerical simulations and, for finite $w$, significantly improve over those of the Fokker-Planck approximation. Comment: 4 figures, to appear in EPL (Europhysics Letters).
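The phenomenon at stake can be illustrated with a minimal Monte Carlo sketch, not the paper's WKB machinery: a Moran process in which a single mutant of constant relative fitness $1 + w$ either fixates or goes extinct. The constant-fitness payoff is a simplifying assumption for illustration; the paper studies frequency-dependent game payoffs.

```python
import random

def fixation_probability(N, w, trials=10000, seed=1):
    """Estimate the probability that a single mutant of constant
    relative fitness 1 + w fixates in a Moran process of size N."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        i = 1  # current number of mutants
        while 0 < i < N:
            # the reproducing individual is a mutant with this probability
            p_birth = (1 + w) * i / ((1 + w) * i + (N - i))
            # the dying individual is chosen uniformly at random
            if rng.random() < p_birth:
                if rng.random() < (N - i) / N:  # a resident dies
                    i += 1
            else:
                if rng.random() < i / N:        # a mutant dies
                    i -= 1
        fixed += (i == N)
    return fixed / trials
```

For $w = 0$ this recovers the neutral result $1/N$, and increasing $w$ raises the fixation probability, the regime where the Fokker-Planck approximation starts to degrade.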

    Learning from power system data stream: phasor-detective approach

    Assuming access to a synchronized stream of Phasor Measurement Unit (PMU) data over a significant portion of a power-system interconnect, say one controlled by an Independent System Operator (ISO), what can you extract about the past, current, and future state of the system? We have focused on answering this practical question pragmatically, empowered with nothing but standard tools of data analysis, such as PCA, filtering, and cross-correlation analysis. Quite surprisingly, we have found that even during quiet "no significant events" periods this standard set of statistical tools allows the "phasor-detective" to extract important hidden anomalies from the data, such as problematic control loops at loads and wind farms, and mildly malfunctioning assets, such as transformers and generators. We also discuss and sketch future challenges a mature phasor-detective could tackle by adding machine-learning and physics-modeling sophistication to the basic approach.
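The core of the PCA step can be sketched with a toy example assuming nothing beyond NumPy; the synthetic data, channel count, sampling rate, and anomaly below are all invented for illustration. PCA strips the system-wide common mode, and the per-channel residual energy flags the channel hiding a weak forced oscillation:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 2000, 8           # samples, PMU channels (hypothetical sizes)
t = np.arange(T) / 30.0  # assumed 30 samples/s reporting rate

# a system-wide common trend plus small measurement noise
common = np.sin(2 * np.pi * 0.05 * t)
X = np.outer(common, np.ones(n)) + 0.01 * rng.standard_normal((T, n))
# channel 5 hides a weak forced oscillation (a "problematic control loop")
X[:, 5] += 0.05 * np.sin(2 * np.pi * 1.3 * t)

# PCA via SVD: remove the dominant common mode, inspect the residual
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 1                                   # keep the leading component
residual = Xc - (U[:, :k] * s[:k]) @ Vt[:k]
energy = (residual ** 2).mean(axis=0)   # per-channel residual variance
suspect = int(np.argmax(energy))        # flags channel 5
```

In practice one would of course inspect the residual spectrum (filtering and cross-correlation) rather than a single energy score, but the separation of common mode from per-channel anomaly is the same.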

    Prescriptions on antiproton cross section data for precise theoretical antiproton flux predictions

    After the breakthrough from the satellite-borne PAMELA detector, the flux of cosmic-ray (CR) antiprotons has been provided with unprecedented accuracy by AMS-02 on the International Space Station. Its data span an energy range from below 1 GeV up to 400 GeV, and most of the data points carry errors below the remarkable level of 5%. The bulk of the antiproton flux is expected to be produced by the scattering of CR protons and helium off interstellar hydrogen and helium atoms at rest. The modeling of these interactions, which requires the relevant production cross sections, induces an uncertainty in the determination of the antiproton source term that can even exceed the uncertainties in the CR $\bar{p}$ data itself. The aim of the present analysis is to determine the accuracy required for $p + p \rightarrow \bar{p} + X$ cross section measurements such that the induced uncertainties on the $\bar{p}$ flux are at the same level. Our results are discussed both in the center-of-mass reference frame, suitable for collider experiments, and in the laboratory frame, as relevant in the Galaxy. We find that cross section data should be collected with an accuracy better than a few percent, with proton beams from 10 GeV to 6 TeV and a pseudorapidity $\eta$ ranging from 2 to almost 8 or, alternatively, with $p_T$ from 0.04 to 2 GeV and $x_R$ from 0.02 to 0.7. Similar considerations hold for the $p$He production channel. The present collection of data is far from these requirements. Nevertheless, they could, in principle, be reached by fixed-target experiments with beam energies within the reach of CERN accelerators. Comment: 15 pages, 13 figures, matches published version.
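As a quick consistency check on the quoted beam-energy range, the center-of-mass energy for a proton beam of total energy $E$ incident on a proton at rest follows from the invariant $s = 2 m_p (E + m_p)$. A short computation (a sketch, not taken from the paper):

```python
import math

M_P = 0.9383  # proton mass in GeV (PDG value, rounded)

def sqrt_s_fixed_target(e_beam):
    """Center-of-mass energy sqrt(s) in GeV for a proton beam of
    total energy e_beam (GeV) hitting a proton at rest."""
    return math.sqrt(2 * M_P * (e_beam + M_P))

# beam energies bracketing the range quoted in the abstract
for e in (10.0, 6000.0):
    print(f"E_beam = {e:7.1f} GeV  ->  sqrt(s) = {sqrt_s_fixed_target(e):6.1f} GeV")
```

The 10 GeV to 6 TeV beam-energy window thus corresponds to center-of-mass energies of roughly 4.5 to 106 GeV, the regime relevant for secondary antiproton production.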

    Production cross sections of cosmic antiprotons in the light of new data from the NA61 and LHCb experiments

    The cosmic-ray flux of antiprotons is measured with high precision by the space-borne particle spectrometer AMS-02. Its interpretation requires a correct description of the dominant production process for antiprotons in our Galaxy, namely, the interaction of cosmic-ray protons and helium with the interstellar medium. In the light of new cross section measurements of $p + p \rightarrow \bar{p} + X$ by the NA61 experiment and the first-ever measurement of $p + \mathrm{He} \rightarrow \bar{p} + X$ by the LHCb experiment, we update the parametrization of proton-proton and proton-nucleon cross sections. We find that the LHCb $p$He data constrain the shape of the cross section at high energies and show for the first time how well the rescaling from the $pp$ channel applies to a helium target. By using $pp$, $p$He, and $p$C data we estimate the uncertainty on the Lorentz-invariant cross section for $p + \mathrm{He} \rightarrow \bar{p} + X$. We use these new cross sections to compute the source term for all the production channels, considering also nuclei heavier than He, both in cosmic rays and in the interstellar medium. The uncertainty on the total source term is at the level of $\pm 20$% and increases slightly below antiproton energies of 5 GeV. This uncertainty is dominated by the $p + p \rightarrow \bar{p} + X$ cross section, which propagates into all channels since we derive them from the $pp$ cross sections. The cross sections needed to calculate the source spectra from all relevant cosmic-ray isotopes are provided in the Supplemental Material. We finally quantify the need for new data on antiproton production cross sections, and pin down the kinematic parameter space which should be covered by future data. Comment: 16 pages, 11 figures, matches published version.
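The structure of such a source-term computation can be sketched schematically. The flux normalization, cross-section shape, and threshold below are toy placeholders in arbitrary units, not the parametrization fitted in the paper; only the shape of the convolution $q(T_{\bar p}) \propto n_{\rm H} \int \mathrm{d}T \, \phi_p(T) \, \mathrm{d}\sigma/\mathrm{d}T_{\bar p}$ is meaningful:

```python
import numpy as np

def phi_p(T):
    """Toy primary proton flux: a single power law (hypothetical)."""
    return 1.8e4 * T ** -2.7

def dsigma_dT(T_p, T_pbar):
    """Toy differential antiproton production cross section
    (a placeholder shape, NOT the parametrization of the paper)."""
    if T_p < 7 * T_pbar:  # crude threshold-like suppression
        return 0.0
    return 1e-2 * (T_pbar / T_p) ** 0.5 / T_p

N_H = 1.0  # interstellar hydrogen density (arbitrary units)

def source_term(T_pbar, T_max=1e4, n=2000):
    """Schematic q(T_pbar) = 4*pi*n_H * integral dT phi_p(T) dsigma/dT."""
    T = np.logspace(np.log10(7 * T_pbar), np.log10(T_max), n)
    f = np.array([phi_p(Ti) * dsigma_dT(Ti, T_pbar) for Ti in T])
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(T))  # trapezoid rule
    return 4 * np.pi * N_H * integral
```

The point of the exercise is visible in the structure: any uncertainty band on dsigma_dT propagates linearly through the integral into the source term, which is why the $pp$ cross section dominates the $\pm 20$% budget quoted above.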

    Entrograms and coarse graining of dynamics on complex networks

    Using an information-theoretic point of view, we investigate how a dynamics acting on a network can be coarse grained through the use of graph partitions. Specifically, we are interested in how aggregating the state space of a Markov process according to a partition affects the resulting lower-dimensional dynamics. We highlight that for a dynamics on a particular graph there may be multiple coarse-grained descriptions that capture different, incomparable features of the original process. For instance, a coarse graining induced by one partition may be commensurate with a time-scale separation in the dynamics, while another coarse graining may correspond to a different lower-dimensional dynamics that preserves the Markov property of the original process. Taking inspiration from the literature on Computational Mechanics, we find that a convenient tool to summarise and visualise such dynamical properties of a coarse-grained model (partition) is the entrogram. The entrogram gathers certain information-theoretic measures that quantify how information flows across time steps. These quantities include the entropy rate, as well as a measure of the memory contained in the process, i.e., how well the dynamics can be approximated by a first-order Markov process. We use the entrogram to investigate how specific macro-scale connection patterns in the state-space transition graph of the original dynamics result in desirable properties of the coarse-grained descriptions. We thereby provide a fresh perspective on the interplay between structure and dynamics in networks, and on the process of partitioning, from an information-theoretic perspective. We focus on networks that may be approximated by either a core-periphery or a clustered organization, and highlight that each of these coarse-grained descriptions can capture different aspects of a Markov process acting on the network. Comment: 17 pages, 6 figures.
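The entropy rate, one ingredient of the entrogram, is straightforward to compute from a transition matrix. The sketch below is a minimal illustration, with a stationary-weighted lumping rule as the assumed aggregation scheme (one of several possible choices), showing how a partition produces the lower-dimensional chain:

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return pi / pi.sum()

def entropy_rate(P):
    """H = -sum_i pi_i sum_j P_ij log P_ij, in nats."""
    pi = stationary(P)
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    return -np.sum(pi[:, None] * P * logP)

def lump(P, partition):
    """Aggregate P under a partition (list of index lists),
    weighting each block's rows by the stationary distribution."""
    pi = stationary(P)
    k = len(partition)
    Q = np.zeros((k, k))
    for a, A in enumerate(partition):
        wA = pi[A] / pi[A].sum()
        for b, B in enumerate(partition):
            Q[a, b] = wA @ P[np.ix_(A, B)].sum(axis=1)
    return Q
```

Comparing entropy_rate(P) of the original chain with entropy_rate(lump(P, partition)) for different partitions is exactly the kind of per-partition diagnostic the entrogram collects across multiple time lags.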

    Projections from Subvarieties

    Let $X \subset P^N$ be an $n$-dimensional connected projective submanifold of projective space. Let $p : P^N \to P^{N-q-1}$ denote the projection from a linear $P^q \subset P^N$. Assuming that $X \not\subset P^q$, we have the induced rational mapping $\psi := p_X : X \to P^{N-q-1}$. This article started as an attempt to understand the structure of this mapping when $\psi$ has a lower-dimensional image. In this case, of necessity, $Y := X \cap P^q$ is nonempty. In this article we study a closely related question, which includes many special cases, including the case when the center $P^q$ of the projection is contained in $X$. PROBLEM. Let $Y$ be a proper connected $k$-dimensional projective submanifold of an $n$-dimensional projective manifold $X$. Assume that $k > 0$. Let $L$ be a very ample line bundle on $X$ such that $L \otimes I_Y$ is spanned by global sections, where $I_Y$ denotes the ideal sheaf of $Y$ in $X$. Describe the structure of $(X, Y, L)$ under the additional assumption that the image of $X$ under the mapping $\psi$ associated to $|L \otimes I_Y|$ is lower dimensional.
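A minimal example of the phenomenon, not taken from the article: projecting a quadric cone from its vertex drops the dimension of the image.

```latex
% X = quadric cone in P^3 with vertex v, projected from the point P^0 = {v}:
\[
X=\{xy=z^{2}\}\subset P^{3},\qquad v=[0:0:0:1]\in X,
\]
\[
p\colon P^{3}\dashrightarrow P^{2},\qquad [x:y:z:w]\longmapsto [x:y:z].
\]
% psi := p|_X contracts each ruling line of the cone through v, so the image
% psi(X) = {xy = z^2} \subset P^2 is a conic: dim psi(X) = 1 < 2 = dim X,
% while Y = X \cap {v} = {v} is nonempty, as the setup above requires.
```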

    Efficiency and Evolution of R&D Networks

    This work introduces a new model to investigate the efficiency and evolution of networks of firms exchanging knowledge in R&D partnerships. We first examine the efficiency of a given network structure from the point of view of maximizing total profits in the industry. We show that the efficient network structure depends on the marginal cost of collaboration. When the marginal cost is low, the complete graph is efficient. However, a high marginal cost implies that the efficient network is sparser and has a core-periphery structure. Next, we examine the evolution of the network structure when the decision on collaboration partners is decentralized. We show the existence of multiple equilibrium structures, which are in general inefficient. This is due to (i) the path-dependent character of the partner selection process, (ii) the presence of knowledge externalities, and (iii) the presence of severance costs involved in link deletion. Finally, we study the properties of the emerging equilibrium networks and show that they are consistent with the stylized facts on R&D networks. Keywords: R&D networks; technology spillovers; network efficiency; network formation.
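A stylized numerical illustration of the efficiency switch, using the Jackson-Wolinsky "connections" payoff as a stand-in for the paper's profit function (an assumption; the paper's model differs): total welfare under the complete graph versus a star, a crude core-periphery proxy, flips as the marginal link cost c grows.

```python
from collections import deque

def total_welfare(n, edges, delta, c):
    """Sum over ordered pairs of delta**dist(i, j), minus cost c borne by
    each endpoint of each link (Jackson-Wolinsky 'connections' payoff)."""
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    benefit = 0.0
    for s in range(n):              # BFS distances from each node
        dist = {s: 0}
        dq = deque([s])
        while dq:
            u = dq.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    dq.append(v)
        benefit += sum(delta ** d for t, d in dist.items() if t != s)
    return benefit - 2 * c * len(edges)

n, delta = 6, 0.5
complete = [(i, j) for i in range(n) for j in range(i + 1, n)]
star = [(0, j) for j in range(1, n)]  # hub 0 plays the "core"
```

With delta = 0.5, the complete graph wins at a low link cost (c = 0.05) while the star wins at a high one (c = 0.6), mirroring the complete-graph-versus-sparse-core-periphery transition described above.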