136 research outputs found

    How a "Hit" is Born: The Emergence of Popularity from the Dynamics of Collective Choice

    In recent years there has been a surge of interest in seeking out patterns in the aggregate behavior of socio-economic systems. One such domain is the emergence of statistical regularities in the evolution of collective choice from individual behavior, manifested in the sudden emergence of popularity or "success" of certain ideas or products compared with their numerous, often very similar, competitors. In this paper, we present an empirical study of a wide range of popularity distributions, spanning from scientific paper citations to movie gross income. Our results show that in the majority of cases the distribution follows a log-normal form, suggesting that multiplicative stochastic processes underlie the emergence of popular entities. This points to general principles of complex organization leading to the emergence of popularity. We discuss the theoretical principles needed to explain this socio-economic phenomenon, and present a model of collective behavior that exhibits bimodality, which has been observed in certain empirical popularity distributions.
    Comment: 17 pages, 14 figures. A version of this work is published in Econophysics and Sociophysics: Trends and Perspectives, (eds.) Bikas K. Chakrabarti, Anirban Chakraborti, Arnab Chatterjee; Wiley-VCH, Berlin (2006); Chapter 15, pages 417-44
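The log-normal mechanism this abstract alludes to can be illustrated with a few lines of simulation. This is a generic sketch of a multiplicative stochastic process, not the paper's model; the growth-factor range is an arbitrary illustrative choice:

```python
import math
import random
import statistics

random.seed(42)

# Each entity's "popularity" is the product of many independent random
# growth factors; log-popularity is then a sum of i.i.d. terms, so by the
# central limit theorem popularity itself is approximately log-normal.
def simulate_popularity(n_entities=10000, n_steps=100):
    pops = []
    for _ in range(n_entities):
        log_pop = sum(math.log(random.uniform(0.9, 1.1)) for _ in range(n_steps))
        pops.append(math.exp(log_pop))
    return pops

pops = simulate_popularity()
logs = [math.log(p) for p in pops]
# Log-popularity is approximately normal; its mean and spread scale with n_steps
print(statistics.mean(logs), statistics.stdev(logs))
```

A histogram of `logs` would look Gaussian, while `pops` itself is right-skewed with a heavy upper tail — the qualitative signature the paper reports for empirical popularity data.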

    Polling systems with regularly varying service and/or switchover times

    We consider a polling system consisting of K queues and a single server S who visits the queues in cyclic order. The polling discipline in each queue is the gated or exhaustive service discipline. We investigate the tail behaviour of the waiting-time distributions at the various queues in the case that at least one of the service-time or switchover-time distributions has a regularly varying tail.
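As a rough illustration of the kind of system described (not the paper's analysis), the following sketch simulates a single server cycling over K queues under the exhaustive discipline, with Pareto (regularly varying) service times. All parameter values are arbitrary illustrative choices:

```python
import random

random.seed(7)

# Pareto service times are regularly varying with index alpha
def pareto_service(alpha=1.5, xm=0.1):
    return xm / random.random() ** (1.0 / alpha)   # mean = xm*alpha/(alpha-1) = 0.3

K = 3
arrival_rate = 1.0      # total Poisson arrival rate, split uniformly over queues
switchover = 0.05       # constant switchover time between queues
horizon = 5000.0

queues = [[] for _ in range(K)]   # per-queue lists of arrival timestamps
waits = []
t = 0.0
next_arrival = random.expovariate(arrival_rate)
q = 0
while t < horizon:
    # admit every arrival that occurred up to the current time
    while next_arrival <= t:
        queues[random.randrange(K)].append(next_arrival)
        next_arrival += random.expovariate(arrival_rate)
    if queues[q]:                 # exhaustive: keep serving queue q while nonempty
        waits.append(t - queues[q].pop(0))
        t += pareto_service()
    else:                         # queue empty: switch to the next queue
        q = (q + 1) % K
        t += switchover

print(len(waits), max(waits))
```

With mean service time 0.3 and total arrival rate 1.0 the system is stable (load 0.3), yet occasional very long Pareto service times propagate into occasional very long waits — the heavy-tail effect the paper quantifies exactly.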

    Cryptographic Randomized Response Techniques

    We develop cryptographically secure techniques to guarantee unconditional privacy for respondents to polls. Our constructions are efficient and practical, and are shown not to allow cheating respondents to affect the "tally" by more than their own vote, which is given exactly the same weight as that of other respondents. We demonstrate solutions to this problem based on both traditional cryptographic techniques and quantum cryptography.
    Comment: 21 pages
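For context, the classical (non-cryptographic) randomized response technique that such protocols build on can be sketched as follows. The coin bias and the true rate are illustrative assumptions, not values from the paper:

```python
import random

random.seed(1)

TRUTH_PROB = 0.75   # probability a respondent answers truthfully (assumed value)

def respond(truth):
    # With probability TRUTH_PROB answer truthfully, otherwise answer
    # uniformly at random -- so any single "yes" is plausibly deniable.
    if random.random() < TRUTH_PROB:
        return truth
    return random.random() < 0.5

true_rate = 0.30    # fraction of respondents whose true answer is "yes"
n = 200000
answers = [respond(random.random() < true_rate) for _ in range(n)]
observed = sum(answers) / n

# E[observed] = TRUTH_PROB*true_rate + (1 - TRUTH_PROB)*0.5, so invert:
estimate = (observed - (1 - TRUTH_PROB) * 0.5) / TRUTH_PROB
print(round(estimate, 3))
```

Individual answers reveal little, yet the aggregate estimate converges to the true rate; the paper's contribution is making this tamper-evident so a cheating respondent cannot distort the tally beyond one vote.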

    Queues with regular variation

    X+173 pages; 24 cm

    Critical Market Crashes

    This review is a partial synthesis of the book "Why Stock Markets Crash" (Princeton University Press, January 2003), which presents a general theory of financial crashes and of stock market instabilities that the author and his co-workers have developed over the past seven years. The study of the frequency distribution of drawdowns, or runs of successive losses, shows that large financial crashes are "outliers": they form a class of their own, as can be seen from their statistical signatures. If large financial crashes are "outliers", they are special and thus require a special explanation, a specific model, a theory of their own. In addition, their special properties may perhaps be used for their prediction. The main mechanisms leading to positive feedbacks, i.e., self-reinforcement, such as imitative behavior and herding between investors, are reviewed, with many references to the relevant literature beyond the confines of physics. Positive feedbacks provide the fuel for the development of speculative bubbles, preparing the instability for a major crash. We present several detailed mathematical models of speculative bubbles and crashes. The most important message is the discovery of robust and universal signatures of the approach to crashes. These precursory patterns have been documented for essentially all crashes on developed as well as emergent stock markets, on currency markets, on company stocks, and so on. The concept of an "anti-bubble" is also summarized, with two forward predictions: one on the Japanese stock market starting in 1999 and one on the US stock market still running. We conclude by presenting our view of the organization of financial markets.
    Comment: LaTeX, 89 pages and 38 figures; in press in Physics Reports

    Rules of Thumb for Information Acquisition from Large and Redundant Data

    We develop an abstract model of information acquisition from redundant data. We assume a random sampling process from data that provide information with bias, and are interested in the fraction of information we expect to learn as a function of (i) the sampled fraction (recall) and (ii) the varying bias of information (redundancy distributions). We develop two rules of thumb with varying robustness. We first show that, when information bias follows a Zipf distribution, the 80-20 rule or Pareto principle surprisingly does not hold: we expect to learn less than 40% of the information when randomly sampling 20% of the overall data. We then analytically prove that for large data sets, randomized sampling from power-law distributions leads to "truncated distributions" with the same power-law exponent. This second rule is very robust and also holds for distributions that deviate substantially from a strict power law. We further give one particular family of power-law functions that remains completely invariant under sampling. Finally, we validate our model with two large Web data sets: link distributions to domains and tag distributions on delicious.com.
    Comment: 40 pages, 17 figures; for details see the project page: http://uniquerecall.co
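The first rule of thumb can be checked with a back-of-the-envelope calculation. This sketch uses a simple Zipfian redundancy model and a Bernoulli-sampling approximation, which is an illustrative setup rather than the paper's exact derivation:

```python
# Zipfian redundancy: the i-th most redundant piece of information has
# roughly n_items / rank copies in the data. Under random sampling of a
# fraction p of the data, a piece with k copies is missed with probability
# about (1 - p)**k (a Bernoulli-sampling approximation).
def expected_recall(n_items=100000, sample_frac=0.2):
    seen = 0.0
    for rank in range(1, n_items + 1):
        k = max(1, round(n_items / rank))          # Zipfian copy count
        seen += 1.0 - (1.0 - sample_frac) ** k     # P(piece seen at least once)
    return seen / n_items

# Well below the 80% a naive reading of the 80-20 rule would suggest
print(expected_recall())
```

The many rare, low-redundancy pieces dominate the miss probability, which is why sampling 20% of the data recovers far less than 80% of the distinct information.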

    Measuring productivity dispersion: a parametric approach using the Lévy alpha-stable distribution

    It is well known that value added per worker is extremely heterogeneous among firms, but relatively little has been done to characterize this heterogeneity more precisely. Here we show that the distribution of value added per worker exhibits heavy tails, a very large support, and consistently features a proportion of negative values, which prevents log transformation. We propose to model the distribution of value added per worker using the four-parameter Lévy stable distribution, a natural candidate deriving from the Generalised Central Limit Theorem, and we show that it is a better fit than key alternatives. Fitting a distribution allows us to capture dispersion through the tail exponent and scale parameters separately. We show that these parametric measures of dispersion are at least as useful as interquantile ratios, through case studies on the evolution of dispersion in recent years and the correlation between dispersion and intangible capital intensity.
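To see why an alpha-stable model suits heavy-tailed data, one can draw symmetric stable variates with the Chambers-Mallows-Stuck method and compare tail counts against a Gaussian. This sketch covers only the symmetric (beta = 0) case with illustrative parameters, not the paper's four-parameter fits:

```python
import math
import random

random.seed(0)

def stable_sample(alpha):
    # Chambers-Mallows-Stuck generator for a symmetric (beta = 0)
    # alpha-stable variate with unit scale.
    u = random.uniform(-math.pi / 2, math.pi / 2)
    w = random.expovariate(1.0)
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

n = 100000
stable = [stable_sample(1.5) for _ in range(n)]   # heavy-tailed, alpha = 1.5
gauss = [random.gauss(0, 1) for _ in range(n)]

# The stable sample produces vastly more values beyond |x| > 5 than the Gaussian
print(sum(abs(x) > 5 for x in stable), sum(abs(x) > 5 for x in gauss))
```

The tail exponent alpha controls how many extreme observations appear, which is the dispersion channel the paper separates from the scale parameter.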

    Essays on modeling and analysis of dynamic sociotechnical systems

    A sociotechnical system is a collection of humans and algorithms that interact under the partial supervision of a decentralized controller. These systems often display intricate dynamics and can be characterized by their unique emergent behavior. In this work, we describe, analyze, and model aspects of three distinct classes of sociotechnical systems: financial markets, social media platforms, and elections. Though our work is diverse in subject-matter content, it is unified through the study of evolution- and adaptation-driven change in social systems and the development of methods used to infer this change. We first analyze evolutionary financial market microstructure dynamics in the context of an agent-based model (ABM). The ABM's matching engine implements a frequent batch auction, a recently developed type of price-discovery mechanism. We subject simple agents to evolutionary pressure using a variety of selection mechanisms, demonstrating that quantile-based selection mechanisms are associated with lower market-wide volatility. We then evolve deep neural networks in the ABM and demonstrate that elite individuals are profitable in backtesting on real foreign exchange data, even though their fitness had never been evaluated on any real financial data during evolution. We then turn to the extraction of multi-timescale functional signals from large panels of timeseries generated by sociotechnical systems. We introduce the discrete shocklet transform (DST) and the associated similarity-search algorithm, the shocklet transform and ranking (STAR) algorithm, to accomplish this task. We empirically demonstrate the STAR algorithm's invariance to quantitative functional parameterization and provide use-case examples. The STAR algorithm compares favorably with Twitter's anomaly detection algorithm on a feature extraction task. We close by using STAR to automatically construct a narrative timeline of societally significant events using a panel of Twitter word usage timeseries. Finally, we model strategic interactions between the foreign intelligence service (Red team) of a country that is attempting to interfere with an election occurring in another country, and the domestic intelligence service of the country in which the election is taking place (Blue team). We derive subgame-perfect Nash equilibrium strategies for both Red and Blue and demonstrate the emergence of arms-race interference dynamics when either player has "all-or-nothing" attitudes about the result of the interference episode. We then confront our model with data from the 2016 U.S. presidential election contest, in which Russian military intelligence interfered. We demonstrate that our model captures the qualitative dynamics of this interference for most of the time under study.