
    Benchmarking Measures of Network Influence

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that the exhaustive approach of the TKO score makes it an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the common network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the hunt for effective predictive measures.
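
    The TKO procedure described above lends itself to a brute-force sketch. The following is a minimal illustration, assuming a simple discrete-time SIR process on a list of per-timestep contact edges; the function names (run_sir, tko_scores), the transmission and recovery parameters, and the averaging over a handful of seeded runs are illustrative choices, not the authors' implementation.

```python
# Hedged sketch of the temporal knockout (TKO) idea: on a temporally-extruded
# contact sequence, score each (agent, time) pair by how much the final SIR
# outbreak shrinks when that agent is removed at that time step.
import random

def run_sir(contacts, seeds, beta=0.3, gamma=0.1, rng=None, knockout=None):
    """contacts: list of per-timestep edge lists [(u, v), ...]; returns outbreak size."""
    rng = rng or random.Random(0)
    infected, recovered = set(seeds), set()
    ko_agent, ko_time = knockout if knockout else (None, None)
    for t, edges in enumerate(contacts):
        new_inf = set()
        for u, v in edges:
            if t == ko_time and ko_agent in (u, v):
                continue                      # knocked-out agent is absent at this step
            for a, b in ((u, v), (v, u)):
                if a in infected and b not in infected | recovered and rng.random() < beta:
                    new_inf.add(b)
        recovered |= {i for i in infected if rng.random() < gamma}
        infected = (infected | new_inf) - recovered
    return len(infected | recovered)          # magnitude of the spread

def tko_scores(contacts, seeds, n_runs=50):
    """Exhaustive TKO-style scores: change in mean spread given each (agent, time) removal."""
    agents = {a for edges in contacts for e in edges for a in e}
    base = sum(run_sir(contacts, seeds, rng=random.Random(r)) for r in range(n_runs)) / n_runs
    scores = {}
    for t in range(len(contacts)):
        for a in agents:
            knocked = sum(run_sir(contacts, seeds, rng=random.Random(r), knockout=(a, t))
                          for r in range(n_runs)) / n_runs
            scores[(a, t)] = base - knocked   # conditional marginal spread
    return scores
```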

    Rugged Metropolis Sampling with Simultaneous Updating of Two Dynamical Variables

    The Rugged Metropolis (RM) algorithm is a biased updating scheme, which aims at directly hitting the most likely configurations in a rugged free energy landscape. Details of the one-variable (RM_1) implementation of this algorithm are presented. This is followed by an extension to simultaneous updating of two dynamical variables (RM_2). In a test with Met-Enkephalin in vacuum, RM_2 improves conventional Metropolis simulations by a factor of about four. Correlations between three or more dihedral angles appear to prevent larger improvements at low temperatures. We also investigate a multi-hit Metropolis scheme, which spends more CPU time on variables with large autocorrelation times.
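
    As a rough illustration of the biased two-variable updating described above, here is a minimal Metropolis-Hastings sketch in which candidate values for two variables are drawn from user-supplied estimated one-variable distributions and the bias is removed in the acceptance step. The callables (energy, proposal_pdf, proposal_sample) and the toy flat-proposal usage are placeholders, not the RM_2 implementation from the paper.

```python
# Minimal sketch of a biased Metropolis-Hastings update of two variables at once.
import math, random

def metropolis_hastings_pair(x, i, j, energy, proposal_pdf, proposal_sample,
                             beta=1.0, rng=random):
    """One RM_2-style update: propose new values for variables i and j together."""
    y = list(x)
    y[i], y[j] = proposal_sample(i, rng), proposal_sample(j, rng)
    # Acceptance: target ratio times reverse/forward proposal ratio, which removes
    # the bias introduced by drawing from the estimated one-variable distributions.
    log_acc = -beta * (energy(y) - energy(x))
    log_acc += math.log(proposal_pdf(i, x[i])) + math.log(proposal_pdf(j, x[j]))
    log_acc -= math.log(proposal_pdf(i, y[i])) + math.log(proposal_pdf(j, y[j]))
    if math.log(rng.random()) < log_acc:
        return y, True
    return list(x), False

# Toy usage with a flat proposal on [-pi, pi), for which the bias correction cancels:
flat_pdf = lambda _i, _v: 1.0 / (2 * math.pi)
flat_draw = lambda _i, rng: rng.uniform(-math.pi, math.pi)
toy_energy = lambda ang: -sum(math.cos(a) for a in ang)   # placeholder energy
state, accepted = metropolis_hastings_pair([0.0, 0.0, 0.0], 0, 2,
                                           toy_energy, flat_pdf, flat_draw)
```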

    Mean proton and alpha-particle reduced widths of the Porter-Thomas distribution and astrophysical applications

    The Porter-Thomas distribution is a key prediction of the Gaussian orthogonal ensemble in random matrix theory. It is routinely used to provide a measure for the number of levels that are missing in a given resonance analysis. The Porter-Thomas distribution is also of crucial importance for estimates of thermonuclear reaction rates, where the contributions of certain unobserved resonances to the total reaction rate need to be taken into account. In order to estimate such contributions by randomly sampling over the Porter-Thomas distribution, the mean value of the reduced width must be known. We present mean reduced width values for protons and α particles of compound nuclei in the A = 28–67 mass range. The values are extracted from charged-particle elastic scattering and reaction data that were measured at the Triangle Universities Nuclear Laboratory over several decades. Our new values differ significantly from those previously reported, which were based on a preliminary analysis of a smaller data set. As an example of the application of our results, we present new thermonuclear rates for the ^{40}Ca(α,γ)^{44}Ti reaction, which is important for ^{44}Ti production in core-collapse supernovae, and compare with previously reported results.
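
    For readers who want the sampling step spelled out: the Porter-Thomas distribution for reduced widths is a chi-squared distribution with one degree of freedom, so widths can be drawn by scaling squared standard normal deviates to the desired mean. The sketch below assumes a placeholder mean of 1.0 in arbitrary units; it is not one of the paper's extracted values.

```python
# Hedged sketch: Monte Carlo sampling of reduced widths from a Porter-Thomas
# distribution with a prescribed mean, as used when estimating the contribution
# of unobserved resonances to a thermonuclear reaction rate.
import numpy as np

rng = np.random.default_rng(42)

def sample_reduced_widths(mean_reduced_width, n):
    """Chi-squared with 1 dof: gamma^2 = mean * z^2 with z ~ N(0, 1)."""
    z = rng.standard_normal(n)
    return mean_reduced_width * z**2

widths = sample_reduced_widths(mean_reduced_width=1.0, n=100_000)
print(widths.mean())   # ~1.0: the sample mean reproduces the prescribed mean
```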

    Neutrino cooling rates due to ^{54,55,56}Fe for presupernova evolution of massive stars

    Accurate estimates of neutrino energy loss rates are needed for the study of the late stages of stellar evolution, in particular for the cooling of neutron stars and white dwarfs. Proton-neutron quasi-particle random phase approximation (pn-QRPA) theory has recently been used with success for microscopic calculations of stellar weak interaction rates of iron isotopes. Here I present a detailed calculation of neutrino and antineutrino cooling rates due to key iron isotopes in stellar matter using the pn-QRPA theory. The rates are calculated on a fine grid of the temperature-density scale suitable for core-collapse simulators. The calculated rates are compared against earlier calculations. The neutrino cooling rates due to isotopes of iron are in overall good agreement with the rates calculated using the large-scale shell model. During the presupernova evolution of massive stars, from oxygen shell burning until around the end of the convective core silicon burning phase, the calculated neutrino cooling rates due to ^{54}Fe are three to four times larger than the corresponding shell model rates. Brink's hypothesis, used in previous calculations, can at times lead to erroneous results. Brink's hypothesis assumes that the Gamow-Teller strength distributions for all excited states are the same as for the ground state. The present calculation shows, however, that both the centroid and the total strength for excited states differ appreciably from the ground-state distribution. These changes in the strength distributions of thermally populated excited states can alter the total weak interaction rates rather significantly. The calculated antineutrino cooling rates, due to positron capture and β-decay of iron isotopes, are orders of magnitude smaller than the corresponding neutrino cooling rates and can safely be neglected, especially at low temperatures and high stellar densities.
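
    The role of Brink's hypothesis can be made concrete with a small worked example: the total stellar rate is a partition-function-weighted sum over thermally populated parent states, so it is sensitive to whether the excited-state rates (set by each state's Gamow-Teller strength distribution) equal the ground-state rate or not. The sketch below uses invented excitation energies, spins, and per-state rates purely for illustration; it is not pn-QRPA output.

```python
# Sketch of the thermal averaging behind the Brink's-hypothesis discussion:
# the total rate is a Boltzmann-weighted sum over parent states, so state-dependent
# Gamow-Teller strength (hence state-dependent rates) changes the total.
import math

def thermal_rate(states, kT):
    """states: list of (excitation_energy_MeV, spin_J, rate_per_s) for parent states."""
    weights = [(2 * J + 1) * math.exp(-E / kT) for E, J, _ in states]
    G = sum(weights)                              # truncated nuclear partition function
    return sum(w * lam for w, (_, _, lam) in zip(weights, states)) / G

ground = (0.0, 0.0, 2.0e-4)
# Brink-like assumption: every excited state is assigned the ground-state rate.
brink = thermal_rate([ground, (1.4, 2.0, 2.0e-4), (2.5, 4.0, 2.0e-4)], kT=0.8)
# State-dependent strength: excited-state rates differ appreciably (invented values).
full = thermal_rate([ground, (1.4, 2.0, 8.0e-4), (2.5, 4.0, 5.0e-5)], kT=0.8)
print(brink, full)
```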

    Chinese Internet AS-level Topology

    We present the first complete measurement of the Chinese Internet topology at the autonomous systems (AS) level based on traceroute data probed from servers of major ISPs in mainland China. We show that both the Chinese Internet AS graph and the global Internet AS graph can be accurately reproduced by the Positive-Feedback Preference (PFP) model with the same parameters. This result suggests that the Chinese Internet preserves well the topological characteristics of the global Internet. This is the first demonstration of the Internet's topological fractality, or self-similarity, performed at the level of topology evolution modeling.
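
    For orientation, the sketch below implements the nonlinear "positive-feedback" attachment kernel commonly quoted for the PFP model (attachment probability proportional to k^(1 + δ·log10 k), with δ a small positive parameter) inside a deliberately simplified growth loop; the full PFP interactive-growth rules and the fitted parameter values from the paper are not reproduced here.

```python
# Simplified, hedged sketch of positive-feedback preferential attachment.
import math, random

def pfp_probabilities(degrees, delta=0.048):
    """Nonlinear preference weights, normalised to probabilities."""
    w = [k ** (1.0 + delta * math.log10(k)) for k in degrees]
    s = sum(w)
    return [x / s for x in w]

def grow(n_nodes, delta=0.048, seed=1):
    rng = random.Random(seed)
    edges = [(0, 1)]                         # seed graph: two connected nodes
    deg = [1, 1]
    for new in range(2, n_nodes):
        target = rng.choices(range(len(deg)), weights=pfp_probabilities(deg, delta))[0]
        edges.append((new, target))
        deg.append(1)
        deg[target] += 1
    return edges

print(len(grow(1000)))                       # 999 edges for a 1000-node tree
```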

    Simplified probabilistic model for maximum traffic load from weigh-in-motion data

    This is an Accepted Manuscript of an article published by Taylor & Francis Group in Structure and Infrastructure Engineering in 2016, available online at: http://www.tandfonline.com/10.1080/15732479.2016.1164728. This paper reviews the simplified procedure proposed by Ghosn and Sivakumar to model the maximum expected traffic load effect on highway bridges and illustrates the methodology using a set of Weigh-In-Motion (WIM) data collected at one site in the U.S.A. The paper compares different approaches for implementing the procedure and explores the effects of limitations in the site-specific data on the projected maximum live load effect for different bridge service lives. A sensitivity analysis is carried out to study changes in the final results due to variations in the parameters that define the characteristics of the WIM data and those used in the calculation of the maximum load effect. The procedure is also implemented on a set of WIM data collected in Slovenia to study the maximum load effect on existing Slovenian highway bridges and how the projected results compare to the values obtained using advanced simulation algorithms and those specified in the Eurocode of actions.
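
    One common way to project a maximum load effect of this kind, shown below as a hedged sketch rather than a reproduction of the Ghosn and Sivakumar procedure, is to treat the single-event load effect as approximately normal and use the Gumbel asymptote for the maximum of N such events over the service life. The values of mu, sigma, and the event count are placeholders, not statistics from the WIM data sets discussed above.

```python
# Hedged sketch: mean and standard deviation of the maximum of N approximately
# normal load effects, via the Gumbel asymptote for normal maxima.
import math

def projected_maximum(mu, sigma, n_events):
    """Mean and std of the maximum of n_events i.i.d. N(mu, sigma) load effects."""
    two_ln_n = 2 * math.log(n_events)
    an = math.sqrt(two_ln_n) - (math.log(math.log(n_events)) + math.log(4 * math.pi)) \
         / (2 * math.sqrt(two_ln_n))          # Gumbel location for standard normal maxima
    bn = 1.0 / math.sqrt(two_ln_n)            # Gumbel scale
    euler_gamma = 0.5772156649
    mean_max = mu + sigma * (an + euler_gamma * bn)
    std_max = sigma * math.pi * bn / math.sqrt(6)
    return mean_max, std_max

# e.g. ~5000 heavy-truck events per day over a 75-year service life (illustrative only)
print(projected_maximum(mu=1.0, sigma=0.2, n_events=5000 * 365 * 75))
```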