27 research outputs found

    Social scale and collective computation: Does information processing limit rate of growth in scale?

    Collective computation is the process by which groups store and share information to arrive at decisions for collective behavior. How societies engage in effective collective computation depends partly on their scale. Social arrangements and technologies that work for small- and mid-scale societies are inadequate for dealing effectively with the much larger communication loads that societies face during the growth in scale that is a hallmark of the Holocene. An important bottleneck for growth may be the development of systems for persistent recording of information (writing), and perhaps also the abstraction of money for generalizing exchange mechanisms. Building on Shin et al., we identify a Scale Threshold to be crossed before societies can develop such systems, and an Information Threshold which, once crossed, allows more or less unlimited growth in scale. We introduce several additional articles in this special issue that elaborate or evaluate this Thresholds Model for particular types of societies or times and places in the world.
    Contents:
    1 Introduction
    2 Seshat: The Global History Databank
    2.1 Quantitative historical analysis uncovers a single dimension of complexity that structures global variation in human social organization
    2.2 Scale and information-processing thresholds in Holocene social evolution
    2.3 Evolution of collective computational abilities of (pre)historic societies
    3 Empirical Fluctuation, or Stochastic Law?
    4 Opening the Discussion on Collective Computation: Historical Survey and Introduction to the Case Studies
    4.1 Marcus Hamilton: Collective computation and the emergence of hunter-gatherer small-worlds
    4.2 Laura Ellyson: Applying Gregory Johnson's concepts of scalar stress to scale and Information Thresholds in Holocene social evolution
    4.3 Johannes Müller et al.: Tripolye mega-sites: "Collective computational abilities" of prehistoric proto-urban societies?
    4.4 Steven Wernke: Explosive expansion, sociotechnical diversity, and fragile sovereignty in the domain of the Inka
    4.5 Gary Feinman and David Carballo: Communication, computation, and governance: A multiscalar vantage on the prehispanic Mesoamerican World
    4.6 Ian Morris: Scale, information-processing, and complementarities in Old-World Axial-Age societies
    5 Conclusion
    6 Postscript: The Second Social Media Revolution

    The effect of external stimuli and individual variables on the consumer's initial information processing

    The neo-classical economic theory of consumer behavior defines a utility function over the characteristics of a product, a process, or the outcome of several purchase activities. A consumer ends up with an inefficient consumption function if the product chosen does not match his or her preferences over that product's characteristics. An efficient consumption function therefore requires an adequate level of information, which the mechanics of market performance guarantee neither for the consumption function nor for the production function. This paper exposes the consumer's information-processing limit, showing an important gap between the information the consumer prefers and the information the consumer memorizes during the decision process. The proposed concept of pre-processed information could improve the efficiency of the consumption function.

    Quantifying Information Overload in Social Media and its Impact on Social Contagions

    Information overload has become a ubiquitous problem in modern society. Social media users and microbloggers receive an endless flow of information, often at a rate far higher than their cognitive ability to process it. In this paper, we conduct a large-scale quantitative study of information overload and evaluate its impact on information dissemination on the Twitter social media site. We model social media users as information processing systems that queue incoming information according to some policy, process information from the queue at some unknown rate, and decide to forward some of the incoming information to other users. We show how timestamped data about tweets received and forwarded by users can be used to uncover key properties of their queueing policies and to estimate their information processing rates and limits. Such an understanding of users' information processing behavior allows us to infer whether, and to what extent, users suffer from information overload. Our analysis provides empirical evidence of information processing limits for social media users and of the prevalence of information overload. The most active and popular social media users are often the ones that are overloaded. Moreover, we find that the rate at which users receive information affects their processing behavior, including how they prioritize information from different sources, how much information they process, and how quickly they process it. Finally, the susceptibility of a social media user to social contagions depends crucially on the rate at which she receives information. An exposure to a piece of information, be it an idea, a convention, or a product, is much less effective for users that receive information at higher rates, meaning they need more exposures to adopt a particular contagion. Comment: To appear at ICSWM '1
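    The queueing intuition in this abstract can be sketched with a toy fluid model (my illustration, not the paper's actual estimator): when the arrival rate of items exceeds a user's processing rate, the backlog grows without bound, which is the overload condition the authors infer from timestamped data.

```python
def backlog_after(arrival_rate, processing_rate, steps):
    """Deterministic (fluid) approximation of a user's queue.

    Each time step the backlog grows by the arrival rate and shrinks
    by the processing rate, never dropping below zero. A minimal
    sketch of the overload condition, not the paper's model.
    """
    backlog = 0.0
    for _ in range(steps):
        backlog = max(0.0, backlog + arrival_rate - processing_rate)
    return backlog

# Overloaded user: 12 items/hour in, capacity 8/hour -> backlog grows.
print(backlog_after(12, 8, 24))  # 96.0
# Keeping up: 6 items/hour in, capacity 8/hour -> no persistent backlog.
print(backlog_after(6, 8, 24))   # 0.0
```

    In the fluid view, overload is simply arrival rate exceeding processing rate; the paper's contribution is estimating those rates and the queueing policy from observed tweet timestamps.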

    The rigidity of choice: lifetime savings under information-processing constraints

    This paper studies the implications of information-processing limits for the consumption and savings behavior of households over time. It presents a dynamic model in which consumers rationally choose the size and scope of the information they want to process about their financial possibilities, constrained by a Shannon channel. The model predicts that people with higher degrees of risk aversion rationally choose to process more information. This happens for precautionary reasons: with a finite processing rate, risk-averse consumers prefer to be well informed about their financial possibilities before implementing a consumption plan. Moreover, numerical results show that consumers with processing capacity constraints respond asymmetrically to shocks, with negative shocks producing more persistent effects than positive ones. This asymmetry results in more savings. I show that the predictions of the model can be used effectively to study the impact of tax reforms on consumer spending. The results are qualitatively consistent with the evidence on tax rebates (2001, 2008). Keywords: consumption, rational inattention, dynamic programming
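    The Shannon-channel constraint referred to above caps the mutual information between the consumer's chosen signal and the underlying financial state. As a minimal sketch (my illustration, not the paper's model), mutual information for a discrete signal can be computed from its joint distribution with the state; a perfectly informative signal about a fair binary state uses one bit of capacity, while an uninformative one uses none.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability mass function."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_info(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint pmf {(x, y): prob}.

    In rational-inattention models the consumer may only pick signals
    whose mutual information with the state stays below a capacity.
    """
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return entropy(px.values()) + entropy(py.values()) - entropy(joint.values())

# Perfectly informative signal about a fair binary state: 1 bit.
perfect = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_info(perfect))  # 1.0
# Independent noise: 0 bits -- always feasible, never useful.
noise = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(mutual_info(noise))    # 0.0
```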

    DECISION MAKING IN THE ERA OF INFOBESITY: A STUDY ON INTERACTION OF GENDER AND PSYCHOLOGICAL TENDENCIES

    Purpose: This study examines information processing during consumer decision making on online platforms as influenced by gender differences and psychological tendencies. It further explores how much information is too much information, leading to infobesity. Methodology: The methodology included questionnaires for the assessment of psychological tendencies and naturalistic experiments to measure decision making under online conditions. An online marketplace prototype was created for mobile-phone purchases, named ‘mobile bazaar,’ and another for hotel booking, named ‘backpackers.’ The prototypes were designed so that the information presented to a participant could be manipulated. Participants were recruited with purposive and snowball sampling, depending on their willingness and familiarity with online market platforms. Final data were collected from 368 participants between October 2017 and March 2018. The data from the questionnaires and the computerized tasks were scored and analyzed with SPSS version 21 using t-tests, chi-square tests, and logistic regression. Main findings: The study shows the influence of psychological tendencies (i.e., need for closure, exploratory tendencies, and uncertainty avoidance) and gender differences on decision making. Females seem to follow a ‘process less to process better’ strategy, whereas males seem to follow a ‘process more to get better’ strategy. The findings also provide input to the debate on information measurement in consumer research. Implications: Understanding the decision-making features of Indian consumers can contribute not only to the understanding of the naturalistic decision-making process itself but also provide input to market researchers, designers, and policymakers.
    Novelty/originality of the study: The study was novel in its use of an online marketplace prototype as a naturalistic decision-making study method. This method allowed the researchers to examine participants' information processing and decision making in realistic scenarios while retaining the ability to manipulate the information presented as required by the research design. The findings of the present study should therefore generalize more readily.

    Information Flow Optimization in Augmented Reality Systems for Production & Manufacturing


    Role of information and its processing in statistical analysis

    This paper discusses how real-life statistical analysis/inference deviates from ideal environments. More specifically, given only limited information and limited information-processing/computation capacity, there often exist models with statistical power equal to that of the actual data-generating model. Misspecification therefore poses two problems: misspecification relative to the model we wish to find, and the possibility that the actual data-generating model is never discovered. Thus the role that information - including data - plays in statistical inference needs to be weighed more heavily than is usually done. A game defining pseudo-equivalent models is presented in this light. This limited-information setting effectively casts the statistical analyst as a decision-theoretic agent facing an identical problem: trying to form the best possible credence/belief about some events, even if it may end up far from the objective probability. The Sleeping Beauty problem is used as a case study to highlight some properties of real-life statistical inference. Bayesian prior updating can lead to wrong credences when a prior is assigned to variables/events that are not identifiable in the statistical sense. The controversial idea that Bayesianism can sidestep the identification problems of frequentist analysis is thereby cast into further doubt. This necessitates re-defining how Kolmogorov probability theory is applied in real-life statistical inference, and which concepts need to be fundamental.
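    The idea of pseudo-equivalent models can be illustrated with a toy example (my construction, not the paper's game): if only the sum of two parameters enters the likelihood, then parameter pairs with the same sum are indistinguishable from any amount of data, so a prior over the individual parameters can never be updated beyond what it says about their sum.

```python
import math

def loglik(data, mean):
    """Gaussian log-likelihood with unit variance (toy model)."""
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (x - mean) ** 2
               for x in data)

# Suppose observations have mean a + b: only the sum is identified.
data = [4.8, 5.1, 5.3]
# (a, b) = (2, 3) and (4, 1) are pseudo-equivalent: identical likelihoods
# for every possible dataset, hence no data can distinguish them.
assert loglik(data, 2 + 3) == loglik(data, 4 + 1)
```

    Any Bayesian posterior over (a, b) keeps the prior's conditional distribution of a given a + b unchanged, which is the identification failure the abstract warns about.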

    A Skill-Based Approach to Modeling the Attentional Blink

    People can often learn new tasks quickly. This is hard to explain with cognitive models, because they need either extensive task-specific knowledge or a long training session. In this article, we try to resolve this by proposing that task knowledge can be decomposed into skills. A skill is a task-independent body of knowledge that can be reused across different tasks. As a demonstration, we created an attentional blink model from the general skills we extracted from models of visual attention and working memory. The results suggest that this is a feasible modeling method, which could lead to more generalizable models.