1,318 research outputs found

    The Network Survival Method for Estimating Adult Mortality: Evidence From a Survey Experiment in Rwanda.

    Adult death rates are a critical indicator of population health and well-being. Wealthy countries have high-quality vital registration systems, but poor countries lack this infrastructure and must rely on estimates that are often problematic. In this article, we introduce the network survival method, a new approach for estimating adult death rates. We derive the precise conditions under which it produces consistent and unbiased estimates. Further, we develop an analytical framework for sensitivity analysis. To assess the performance of the network survival method in a realistic setting, we conducted a nationally representative survey experiment in Rwanda (n = 4,669). Network survival estimates were similar to estimates from other methods, even though the network survival estimates were made with substantially smaller samples and are based entirely on data from Rwanda, with no need for model life tables or pooling of data from other countries. Our analytic results demonstrate that the network survival method has attractive properties, and our empirical results show that this method can be used in countries where reliable estimates of adult death rates are sorely needed.
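    To make the basic idea concrete, the sketch below shows how a network-reported death rate could be computed from survey responses, assuming an estimator of the form (deaths reported in respondents' personal networks) / (estimated exposure in those networks), with network size obtained by a scale-up style calculation from "known populations." The function name, inputs, and toy numbers are illustrative assumptions, not the exact estimator from the paper.

```python
# Hypothetical illustration of a network-reported death rate (not the
# paper's exact estimator). Assumed survey inputs: (i) deaths reported
# among respondents' personal networks over the last 12 months for a
# demographic group, and (ii) connections to "known populations" used
# to scale up total network size.

def network_death_rate(reported_deaths, known_pop_connections,
                       known_pop_size, total_population, group_share):
    """Illustrative death-rate estimate for one demographic group.

    reported_deaths       -- deaths in the group reported across all respondents
    known_pop_connections -- respondents' total reported ties to the known populations
    known_pop_size        -- combined size of those known populations
    total_population      -- size of the whole population
    group_share           -- assumed share of network members in the group of interest
    """
    # Scale-up estimate of total personal-network size across respondents
    est_network_size = known_pop_connections / known_pop_size * total_population
    # Exposure: network members in the group of interest (12-month recall)
    est_exposure = est_network_size * group_share
    return reported_deaths / est_exposure


# Toy numbers only: 120 reported deaths, 9,000 ties to known populations
# totalling 600,000 people, a population of 10,000,000, and 8% of network
# members in the target group -> 0.01 deaths per person-year of exposure.
print(network_death_rate(120, 9_000, 600_000, 10_000_000, 0.08))
```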

    Predictability in an unpredictable artificial cultural market

    In social, economic and cultural situations in which the decisions of individuals are influenced directly by the decisions of others, there appears to be an inherently high level of ex ante unpredictability. In cultural markets such as films, songs and books, well-informed experts routinely make predictions which turn out to be incorrect. We examine the extent to which the existence of social influence may, somewhat paradoxically, increase the extent to which winners can be identified at a very early stage in the process. Once the process of choice has begun, only a very small number of decisions may be necessary to give a reasonable prospect of being able to identify the eventual winner. We illustrate this by an analysis of the music download experiments of Salganik et al. (2006). We derive a rule for early identification of the eventual winner. Although not perfect, it gives considerable practical success. We validate the rule by applying it to similar data not used in the process of constructing the rule.
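    As an illustration of what such an early-identification rule can look like, the sketch below calls the current leader after the first k downloads the predicted winner whenever its download share exceeds a threshold. The cut-off k, the threshold, and the data layout are assumptions for the example; they are not the rule derived in the paper.

```python
# Illustrative early-winner rule for a cultural-market experiment
# (assumed data layout: the sequence of song ids in download order).
from collections import Counter

def predict_winner(download_sequence, k=50, lead_threshold=0.15):
    """Predict the eventual winner from the first k downloads, or None."""
    early = Counter(download_sequence[:k])
    if not early:
        return None
    leader, leader_count = early.most_common(1)[0]
    # Call the leader only if its early share of downloads is large enough;
    # both k and the threshold are made-up values for this sketch.
    if leader_count / k >= lead_threshold:
        return leader
    return None

downloads = ["song_a", "song_b", "song_a", "song_c", "song_a"] * 20
print(predict_winner(downloads))   # -> "song_a"
```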

    The Spontaneous Emergence of Social Influence in Online Systems

    Social influence drives both offline and online human behaviour. It pervades cultural markets, and manifests itself in the adoption of scientific and technical innovations as well as the spread of social practices. Prior empirical work on the diffusion of innovations in spatial regions or social networks has largely focused on the spread of one particular technology among a subset of all potential adopters. It has also been difficult to determine whether the observed collective behaviour is driven by natural influence processes, or whether it follows external signals such as media or marketing campaigns. Here, we choose an online context that allows us to study social influence processes by tracking the popularity of a complete set of applications installed by the user population of a social networking site, thus capturing the behaviour of all individuals who can influence each other in this context. By extending standard fluctuation scaling methods, we analyse the collective behaviour induced by 100 million application installations, and show that two distinct regimes of behaviour emerge in the system. Once applications cross a particular threshold of popularity, social influence processes induce highly correlated adoption behaviour among the users, which propels some of the applications to extraordinary levels of popularity. Below this threshold, the collective effect of social influence appears to vanish almost entirely in a manner that has not been observed in the offline world. Our results demonstrate that even when external signals are absent, social influence can spontaneously assume an on-off nature in a digital environment. It remains to be seen whether a similar outcome could be observed in the offline world if equivalent experimental conditions could be replicated.
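    For orientation, the sketch below implements the standard fluctuation scaling (Taylor's law) analysis that the paper says it extends: for each application, compute the mean and variance of its daily installation counts and fit variance ≈ c · mean^α on log-log scales. The data layout, fitting choice, and toy data are assumptions, not the authors' extended method.

```python
# Standard fluctuation scaling (Taylor's law) sketch: variance ~ c * mean**alpha.
# Assumed input: a dict mapping each application to its daily install counts.
import numpy as np

def fluctuation_scaling_exponent(installs):
    """Fit log(variance) = alpha * log(mean) + log(c) across applications."""
    means, variances = [], []
    for counts in installs.values():
        counts = np.asarray(counts, dtype=float)
        if counts.mean() > 0 and counts.var() > 0:
            means.append(counts.mean())
            variances.append(counts.var())
    alpha, _log_c = np.polyfit(np.log(means), np.log(variances), 1)
    return alpha

# Toy data: synthetic daily install counts for apps of widely varying popularity.
rng = np.random.default_rng(0)
toy = {app: rng.poisson(lam=mu, size=365) * rng.uniform(1, 2)
       for app, mu in enumerate(np.logspace(0, 3, 40))}
print(fluctuation_scaling_exponent(toy))
```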

    Breaking a one-dimensional chain: fracture in 1 + 1 dimensions

    The breaking rate of an atomic chain stretched at zero temperature by a constant force can be calculated in a quasiclassical approximation by finding the localized solutions ("bounces") of the equations of classical dynamics in imaginary time. We show that this theory is related to the critical cracks of stressed solids, because the world lines of the atoms in the chain form a two-dimensional crystal, and the bounce is a crack configuration in (unstable) mechanical equilibrium. Thus the tunneling time, action, and breaking rate in the limit of small forces are determined by the classical results of Griffith. For the limit of large forces we give an exact bounce solution that describes the quantum fracture and classical crack close to the limit of mechanical stability. This limit can be viewed as a critical phenomenon for which we establish a Levanyuk-Ginzburg criterion of weakness of fluctuations, and propose a scaling argument for the critical regime. The post-tunneling dynamics is understood by the analytic continuation of the bounce solutions to real time. Comment: 15 pages, 5 figures.
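    As a rough reminder of the standard quantities involved (generic textbook forms, not results specific to this paper), the quasiclassical breaking rate is controlled by the imaginary-time bounce action, and Griffith's classical argument balances surface energy against the elastic energy released by a crack:

```latex
% Generic sketch; symbols and order-one factors are not the paper's notation.
\begin{align}
  \Gamma &\sim A\, e^{-S_{\mathrm{bounce}}/\hbar}
    && \text{quasiclassical breaking rate,}\\
  U(\ell) &\simeq 2\gamma\,\ell \;-\; c\,\frac{\sigma^{2}}{E_{Y}}\,\ell^{2}
    && \text{Griffith balance, so } \ell_{c} \sim \frac{\gamma E_{Y}}{\sigma^{2}},
\end{align}
```

    where $\gamma$ is a surface energy, $\sigma$ the applied stress, $E_{Y}$ an elastic modulus, and $c$ an order-one geometric factor.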

    Controlling Fairness and Bias in Dynamic Learning-to-Rank

    Rankings are the primary interface through which many online platforms match users to items (e.g. news, products, music, video). In these two-sided markets, not only do the users draw utility from the rankings, but the rankings also determine the utility (e.g. exposure, revenue) for the item providers (e.g. publishers, sellers, artists, studios). It has already been noted that myopically optimizing utility to the users, as done by virtually all learning-to-rank algorithms, can be unfair to the item providers. We therefore present a learning-to-rank approach for explicitly enforcing merit-based fairness guarantees to groups of items (e.g. articles by the same publisher, tracks by the same artist). In particular, we propose a learning algorithm that ensures notions of amortized group fairness, while simultaneously learning the ranking function from implicit feedback data. The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility, dynamically adapting both as more data becomes available. In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust. Comment: First two authors contributed equally. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 2020.
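    A minimal sketch of the controller pattern described above, under the assumption that each ranking round scores items by estimated relevance plus a correction proportional to how far each item's group has fallen behind the best-off group in merit-weighted exposure. The error definition, the gain lam, and the toy data are assumptions, not the paper's exact algorithm.

```python
# Illustrative proportional controller for merit-based exposure fairness
# (not the paper's exact algorithm). Accumulated exposure per group is
# assumed to come from a position-based model such as 1 / log2(rank + 1).
import numpy as np

def fair_rank(relevance, group, acc_exposure, acc_merit, lam=1.0):
    """Rank items by relevance plus a correction for accumulated unfairness.

    relevance    -- estimated relevance of each item
    group        -- group id of each item
    acc_exposure -- accumulated exposure per group so far
    acc_merit    -- accumulated merit (e.g. summed relevance) per group so far
    """
    # Exposure each group has received per unit of merit, and the best-off group
    ratio = {g: acc_exposure[g] / max(acc_merit[g], 1e-9) for g in acc_exposure}
    best = max(ratio.values())
    # Boost items whose group lags behind the best-off group
    correction = np.array([lam * (best - ratio[g]) for g in group])
    return np.argsort(-(relevance + correction))

# Toy round: group B has received little exposure relative to its merit,
# so its items are promoted despite lower estimated relevance.
relevance = np.array([0.9, 0.85, 0.8, 0.4, 0.35])
group = ["A", "A", "A", "B", "B"]
ranking = fair_rank(relevance, group,
                    acc_exposure={"A": 5.0, "B": 0.5},
                    acc_merit={"A": 4.0, "B": 1.0})
print(ranking)   # -> [3 4 0 1 2]
```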