
    A cluster expansion approach to exponential random graph models

    The exponential family of random graphs is among the most widely studied network models. We show that any exponential random graph model may alternatively be viewed as a lattice gas model with a finite Banach space norm. The system may then be treated by cluster expansion methods from statistical mechanics. In particular, we derive a convergent power series expansion for the limiting free energy in the case of small parameters. Since the free energy is the generating function for the expectations of other random variables, this characterizes the structure and behavior of the limiting network in this parameter region. Comment: 15 pages, 1 figure
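
    A minimal LaTeX sketch of the objects involved, assuming the standard parametrization of exponential random graph models with graph statistics T_i and the customary n^{-2} normalization of the limiting free energy (these notational conventions are assumptions, not quoted from the abstract):

```latex
% Exponential random graph model on n vertices, with parameters \beta_1,...,\beta_k
% and graph statistics T_1,...,T_k (e.g., edge and triangle counts):
P_{n,\beta}(G) \;=\; \frac{1}{Z_n(\beta)}
  \exp\!\Bigl(\textstyle\sum_{i=1}^{k} \beta_i\, T_i(G)\Bigr),
\qquad
Z_n(\beta) \;=\; \sum_{G} \exp\!\Bigl(\textstyle\sum_{i=1}^{k} \beta_i\, T_i(G)\Bigr).

% Limiting free energy; the abstract's cluster expansion gives a convergent
% power series for \psi when the parameters \beta are small:
\psi(\beta) \;=\; \lim_{n\to\infty} \frac{1}{n^{2}} \log Z_n(\beta).

% \psi generates expectations of the graph statistics, which is why it
% characterizes the limiting network in this parameter region:
\frac{\partial \psi}{\partial \beta_i}
  \;=\; \lim_{n\to\infty} \frac{1}{n^{2}}\, \mathbb{E}_{n,\beta}\!\bigl[T_i(G)\bigr].
```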

    Alternative statistical-mechanical descriptions of decaying two-dimensional turbulence in terms of "patches" and "points"

    Numerical and analytical studies of decaying, two-dimensional (2D) Navier-Stokes (NS) turbulence at high Reynolds numbers are reported. The aim is to determine computable distinctions between two different formulations of maximum-entropy predictions for the decayed, late-time state. Both formulations define an entropy through a somewhat ad hoc discretization of the vorticity into "particles," to which statistical-mechanical methods are applied before passing to a mean-field limit. In one case, the particles are delta-function parallel "line" vortices ("points" in two dimensions); in the other, they are finite-area, mutually exclusive, convected "patches" of vorticity which become "points" in the limit of zero area. We use time-dependent, spectral-method direct numerical simulation of the Navier-Stokes equations to see whether initial conditions which should relax to different late-time states under the two formulations actually do so. Comment: 21 pages, 24 figures; submitted to "Physics of Fluids"
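
    A schematic LaTeX sketch of the mean-field relations usually associated with the two formulations is given below; the signs, Lagrange multipliers, and normalizations depend on conventions and are assumptions here, not quoted from the abstract:

```latex
% Stream function \psi and coarse-grained vorticity \omega, with (up to a
% sign convention) \omega = -\nabla^{2}\psi.

% "Points": maximum entropy over delta-function line vortices of both signs
% leads to a sinh--Poisson (Joyce--Montgomery) mean-field relation,
\nabla^{2}\psi \;\propto\; \sinh(\beta\psi).

% "Patches": finite-area, mutually exclusive vorticity elements of levels
% \pm\omega_0 give a mean vorticity that saturates at \pm\omega_0,
\langle\omega\rangle \;=\; -\,\omega_0 \tanh\!\bigl(\beta\,\omega_0\,\psi\bigr),
\qquad
\nabla^{2}\psi \;=\; -\,\langle\omega\rangle ,
% so computable differences between the predicted late-time states appear
% where the local vorticity approaches the saturation level \pm\omega_0.
```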

    Ground States for Exponential Random Graphs

    We propose a perturbative method to estimate the normalization constant in exponential random graph models as the weighting parameters approach infinity. As an application, we give evidence of a discontinuity in the natural parametrization along the critical directions of the edge-triangle model. Comment: 12 pages, 3 figures, 1 table
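
    A LaTeX sketch of the large-parameter behavior behind the "ground state" terminology, written for the edge-triangle model in the graphon-based variational form of Chatterjee and Diaconis; the choice of that formulation and of notation is an assumption, and the abstract's perturbative estimates themselves are not reproduced:

```latex
% Edge-triangle model with parameters (\beta_1,\beta_2) and homomorphism
% densities t(K_2,\cdot), t(K_3,\cdot); normalization constant
Z_n(\beta_1,\beta_2) \;=\; \sum_{G}
  \exp\!\Bigl(n^{2}\bigl[\beta_1\, t(K_2,G) + \beta_2\, t(K_3,G)\bigr]\Bigr).

% Graphon variational principle for the limiting free energy, with
% I(u) = u\log u + (1-u)\log(1-u):
\psi(\beta_1,\beta_2)
 \;=\; \lim_{n\to\infty}\frac{1}{n^{2}}\log Z_n(\beta_1,\beta_2)
 \;=\; \sup_{h}\Bigl\{\beta_1\, t(K_2,h) + \beta_2\, t(K_3,h)
   - \tfrac12\!\int_{[0,1]^2} I\bigl(h(x,y)\bigr)\,dx\,dy\Bigr\}.

% Scaling the weighting parameters to infinity, (\beta_1,\beta_2) \to
% (a\beta_1, a\beta_2) with a \to \infty, the bounded entropy term becomes
% subdominant and the limit is governed by the "ground state" energy:
\lim_{a\to\infty}\frac{1}{a}\,\psi(a\beta_1, a\beta_2)
 \;=\; \sup_{h}\bigl\{\beta_1\, t(K_2,h) + \beta_2\, t(K_3,h)\bigr\}.
```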

    Incentivizing High Quality Crowdwork

    We study the causal effects of financial incentives on the quality of crowdwork. We focus on performance-based payments (PBPs), bonus payments awarded to workers for producing high quality work. We design and run randomized behavioral experiments on the popular crowdsourcing platform Amazon Mechanical Turk with the goal of understanding when, where, and why PBPs help, and identifying the properties of the payment, the payment structure, and the task itself that make them most effective. We provide examples of tasks for which PBPs do improve quality. For such tasks, the effectiveness of PBPs is not too sensitive to the threshold for quality required to receive the bonus, while the magnitude of the bonus must be large enough to make the reward salient. We also present examples of tasks for which PBPs do not improve quality. Our results suggest that for PBPs to improve quality, the task must be effort-responsive: the task must allow workers to produce higher quality work by exerting more effort. We also give a simple method to determine if a task is effort-responsive a priori. Furthermore, our experiments suggest that all payments on Mechanical Turk are, to some degree, implicitly performance-based in that workers believe their work may be rejected if their performance is sufficiently poor. Finally, we propose a new model of worker behavior that extends the standard principal-agent model from economics to include a worker's subjective beliefs about his likelihood of being paid, and show that the predictions of this model are in line with our experimental findings. This model may be useful as a foundation for theoretical studies of incentives in crowdsourcing markets. Comment: This is a preprint of an article accepted for publication in WWW © 2015 International World Wide Web Conference Committee
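
    A minimal numerical sketch of the extended principal-agent idea: the worker maximizes expected pay (weighted by a subjective acceptance belief, plus a performance-based bonus) minus an effort cost. The logistic quality curve, belief function, quadratic cost, and all parameter values are illustrative assumptions, not the paper's model calibration:

```python
import numpy as np

def worker_utility(effort, base_pay, bonus, threshold, p_accept, quality, cost):
    """Expected utility of a worker who believes payment is not guaranteed.

    Standard principal-agent term: pay minus effort cost. Extension sketched
    in the abstract: a *subjective* belief p_accept(effort) that the base
    payment will not be rejected, plus a performance-based bonus awarded
    when quality(effort) clears a threshold.
    """
    q = quality(effort)
    expected_pay = p_accept(effort) * base_pay + bonus * (q >= threshold)
    return expected_pay - cost(effort)

# Illustrative (hypothetical) functional forms.
quality  = lambda e: 1.0 / (1.0 + np.exp(-4.0 * (e - 0.5)))  # effort-responsive task
p_accept = lambda e: 0.6 + 0.4 * np.minimum(e, 1.0)          # belief work may be rejected
cost     = lambda e: 0.8 * e**2                               # convex effort cost

efforts = np.linspace(0.0, 1.5, 301)

for bonus in (0.0, 0.5):   # without vs. with a performance-based payment
    u = [worker_utility(e, base_pay=1.0, bonus=bonus, threshold=0.8,
                        p_accept=p_accept, quality=quality, cost=cost)
         for e in efforts]
    best = efforts[int(np.argmax(u))]
    print(f"bonus={bonus:.1f}: utility-maximizing effort ~ {best:.2f}, "
          f"resulting quality ~ {quality(best):.2f}")
```

    Under these assumed forms, adding the bonus shifts the worker's utility-maximizing effort above the quality threshold, illustrating how a PBP can raise quality only when the task is effort-responsive.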

    Flavor-twisted boundary condition for simulations of quantum many-body systems

    We present an approximate simulation method for quantum many-body systems based on coarse graining the space of the momentum transferred between interacting particles, which leads to effective Hamiltonians of reduced size with the flavor-twisted boundary condition. An accurate and rapidly convergent computation of the ground-state energy is demonstrated for the spin-1/2 quantum antiferromagnet in any dimension, employing only two sites. The method is expected to be useful for future simulations and quick estimates for other strongly correlated systems. Comment: 6 pages, 2 figures
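
    The abstract's specific momentum-space coarse graining is not reconstructed here; the following is only a generic sketch of the underlying twisted-boundary-condition idea on a small spin-1/2 Heisenberg ring, with a twist phase e^{i*theta} on the wrap-around bond and the ground-state energy averaged over twist angles. The lattice size, twist grid, and use of full exact diagonalization are illustrative assumptions:

```python
import numpy as np

# Spin-1/2 operators.
sp = np.array([[0, 1], [0, 0]], dtype=complex)       # S^+
sm = sp.T.conj()                                      # S^-
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)  # S^z
I2 = np.eye(2, dtype=complex)

def site_op(op, site, L):
    """Embed a single-site operator at `site` in an L-site tensor product."""
    ops = [I2] * L
    ops[site] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def heisenberg_ring(L, theta):
    """Antiferromagnetic Heisenberg ring with a twist phase e^{i theta}
    inserted on the wrap-around bond (generic twisted boundary condition)."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L):
        j = (i + 1) % L
        phase = np.exp(1j * theta) if j == 0 else 1.0   # twist on the boundary bond only
        H += 0.5 * (phase * site_op(sp, i, L) @ site_op(sm, j, L)
                    + np.conj(phase) * site_op(sm, i, L) @ site_op(sp, j, L))
        H += site_op(sz, i, L) @ site_op(sz, j, L)
    return H

L = 8
thetas = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
energies = [np.linalg.eigvalsh(heisenberg_ring(L, th))[0] / L for th in thetas]

print(f"L={L} ring, twist-averaged ground-state energy per site = {np.mean(energies):+.4f}")
print(f"Bethe-ansatz thermodynamic-limit value                  = {0.25 - np.log(2):+.4f}")
```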

    Momentum Kick Model Description of the Ridge in (Delta phi)-(Delta eta) Correlation in pp Collisions at 7 TeV

    The near-side ridge structure in the (Delta phi)-(Delta eta) correlation observed by the CMS Collaboration for pp collisions at 7 TeV at the LHC can be explained by the momentum kick model, in which the ridge particles are medium partons that suffer a collision with the jet and acquire a momentum kick along the jet direction. Similar to the early medium-parton momentum distribution obtained in previous analyses of nucleus-nucleus collisions at 0.2 TeV, the early medium-parton momentum distribution in pp collisions at 7 TeV exhibits a rapidity plateau, as would arise from particle production in a flux tube. Comment: Talk presented at the Workshop on High-pT Probes of High-Density QCD at the LHC, Palaiseau, May 30 - June 2, 201
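
    A toy Monte Carlo sketch of the kinematics only: medium partons drawn from a rapidity plateau with a thermal-like transverse momentum receive a momentum kick along the jet direction, producing an excess near Delta phi ~ 0 that is broad in Delta eta. The distribution shapes, kick magnitude, and jet orientation are illustrative assumptions, not the model's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Toy initial medium-parton distribution (illustrative forms and parameters).
T   = 0.3                                    # thermal-like pT scale in GeV
eta = rng.uniform(-4.0, 4.0, N)              # rapidity plateau
phi = rng.uniform(-np.pi, np.pi, N)          # isotropic azimuth
pT  = rng.exponential(T, N)                  # transverse momentum magnitude

px, py = pT * np.cos(phi), pT * np.sin(phi)
pz = pT * np.sinh(eta)                       # massless partons: pz = pT sinh(eta)

# Jet axis taken along phi_jet = 0, eta_jet = 0; kicked partons acquire a
# momentum kick q along the jet direction (+x here).
q = 1.0                                      # kick magnitude in GeV (assumption)
px += q

# Final-state angles of the kicked partons relative to the jet.
pT_f = np.hypot(px, py)
dphi = np.arctan2(py, px)                    # phi - phi_jet
deta = np.arcsinh(pz / pT_f)                 # eta - eta_jet

# Near-side excess around dphi ~ 0, extended in deta (the "ridge").
counts, edges = np.histogram(dphi, bins=24, range=(-np.pi, np.pi))
for c, lo in zip(counts, edges[:-1]):
    print(f"dphi in [{lo:+5.2f}, {lo + np.pi / 12:+5.2f}): {c}")

near = np.abs(dphi) < 0.5
print(f"near-side fraction: {near.mean():.2f}, "
      f"deta spread of near-side partons: {deta[near].std():.2f}")
```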

    Update or Wait: How to Keep Your Data Fresh

    In this work, we study how to optimally manage the freshness of information updates sent from a source node to a destination via a channel. A proper metric for data freshness at the destination is the age-of-information, or simply age, defined as the time elapsed since the freshest received update was generated at the source node (e.g., a sensor). A reasonable update policy is the zero-wait policy, under which the source node submits a fresh update as soon as the previous update is delivered and the channel becomes free; this policy achieves the maximum throughput and the minimum delay. Surprisingly, the zero-wait policy does not always minimize the age. This counter-intuitive phenomenon motivates us to study how to optimally control information updates so as to keep the data fresh, and to understand when the zero-wait policy is optimal. We introduce a general age penalty function to characterize the level of dissatisfaction with data staleness and formulate the average age penalty minimization problem as a constrained semi-Markov decision problem (SMDP) with an uncountable state space. We develop efficient algorithms to find the optimal update policy among all causal policies, and establish necessary and sufficient conditions for the optimality of the zero-wait policy. Our investigation shows that the zero-wait policy is far from optimal if (i) the age penalty function grows quickly with the age, (ii) the packet transmission times over the channel are positively correlated over time, or (iii) the packet transmission times are highly random (e.g., follow a heavy-tailed distribution).
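
    A minimal simulation sketch of the comparison described above: the time-average age under the zero-wait policy versus a simple threshold-waiting rule, with heavy-tailed (log-normal) transmission times. The service-time distribution, the threshold value, and the threshold rule itself are illustrative assumptions, not the paper's optimal policy:

```python
import numpy as np

def average_age(transmission_times, wait_rule):
    """Time-average age of information at the destination.

    After each delivery, the source waits wait_rule(age of the just-delivered
    update) before generating the next update, which then takes the next
    transmission time to arrive. The age grows linearly between deliveries
    and drops to the new update's transmission time on delivery; the integral
    of the age is accumulated interval by interval.
    """
    area = 0.0
    elapsed = 0.0
    age_after_delivery = 0.0            # age right after the most recent delivery
    for y in transmission_times:
        wait = wait_rule(age_after_delivery)
        interval = wait + y             # time from previous delivery to this one
        area += age_after_delivery * interval + 0.5 * interval**2
        elapsed += interval
        age_after_delivery = y          # freshest update is y seconds old on arrival
    return area / elapsed

rng = np.random.default_rng(1)
# Heavy-tailed transmission times (log-normal; an illustrative choice).
Y = rng.lognormal(mean=0.0, sigma=1.5, size=200_000)

zero_wait = lambda age: 0.0
# Threshold waiting: if the just-delivered update is still very fresh,
# hold off before sampling the next one (threshold value is arbitrary).
threshold_wait = lambda age: max(0.0, 3.0 - age)

print(f"mean transmission time        : {Y.mean():.3f}")
print(f"average age, zero-wait policy : {average_age(Y, zero_wait):.3f}")
print(f"average age, threshold waiting: {average_age(Y, threshold_wait):.3f}")
```

    With these heavy-tailed transmission times, the threshold-waiting rule yields a noticeably lower time-average age than zero-wait, illustrating the abstract's point that always submitting immediately is not necessarily age-optimal.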