42 research outputs found

    Social Influence and the Collective Dynamics of Opinion Formation

    Social influence is the process by which individuals adapt their opinions, revise their beliefs, or change their behavior as a result of social interactions with other people. In our strongly interconnected society, social influence plays a prominent role in many self-organized phenomena such as herding in cultural markets, the spread of ideas and innovations, and the amplification of fears during epidemics. Yet the mechanisms of opinion formation remain poorly understood, and existing physics-based models lack systematic empirical validation. Here, we report two controlled experiments showing how participants answering factual questions revise their initial judgments after being exposed to the opinions and confidence levels of others. Based on the observation of 59 experimental subjects exposed to peer opinion for 15 different items, we draw an influence map that describes the strength of peer influence during interactions. A simple process model derived from our observations demonstrates how opinions in a group of interacting people can converge or split over repeated interactions. In particular, we identify two major attractors of opinion: (i) the expert effect, induced by the presence of a highly confident individual in the group, and (ii) the majority effect, caused by the presence of a critical mass of laypeople sharing similar opinions. Additional simulations reveal the existence of a tipping point at which one attractor will dominate over the other, driving collective opinion in a given direction. These findings have implications for understanding the mechanisms of public opinion formation and for managing conflicting situations in which self-confident and better informed minorities challenge the views of a large uninformed majority.
    Comment: Published Nov 05, 2013. Open access at: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.007843
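The paper's fitted influence map is not reproduced in this listing. As a rough illustration of the kind of dynamics the abstract describes, the following is a minimal toy opinion-revision rule in Python; the update rule, the susceptibility parameter, and the confidence weighting are all assumptions, not the paper's model:

```python
import random

def revise(opinion, peer_opinion, peer_confidence, susceptibility=0.5):
    """Move an agent's opinion toward a peer's, weighted by the peer's
    confidence in [0, 1]. A toy influence rule, not the paper's fitted
    influence map."""
    return opinion + susceptibility * peer_confidence * (peer_opinion - opinion)

def simulate(opinions, confidences, rounds=200, seed=0):
    """Repeated random pairwise interactions: in each round one agent
    revises its opinion after seeing one random peer."""
    rng = random.Random(seed)
    opinions = list(opinions)
    for _ in range(rounds):
        i, j = rng.sample(range(len(opinions)), 2)
        opinions[i] = revise(opinions[i], opinions[j], confidences[j])
    return opinions

# "Expert effect" setup: four low-confidence laypeople scattered around 0
# plus one highly confident agent at 10.
final = simulate([0.0, 1.0, -1.0, 0.5, 10.0], [0.2, 0.2, 0.2, 0.2, 0.95])
```

Because each update is a convex combination, repeated interaction shrinks the spread of opinions, and a single highly confident agent tends to pull the group toward its own position, a toy analogue of the expert effect.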

    Ranking with social cues: Integrating online review scores and popularity information

    Online marketplaces, search engines, and databases employ aggregated social information to rank their content for users. Two ranking heuristics commonly implemented to order the available options are the average review score and item popularity, that is, the number of users who have experienced an item. These rules, although easy to implement, only partly reflect actual user preferences, as people may assign value to both average scores and popularity and trade off between the two. How do people integrate these two pieces of social information when making choices? We present two experiments in which we asked participants to choose 200 times among options drawn directly from two widely used online venues: Amazon and IMDb. The only information presented to participants was the average score and the number of reviews, which served as a proxy for popularity. We found that most people are willing to settle for items with somewhat lower average scores if they are more popular. Yet our study uncovered substantial diversity of preferences among participants, which indicates a sizable potential for personalizing ranking schemes that rely on social information.
    Comment: 4 pages, 3 figures, ICWS
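One simple way to formalize the trade-off the abstract describes is a utility that combines the average score with a diminishing-returns popularity term. The log form and the weight below are illustrative assumptions, not the model estimated from the experiments:

```python
import math

def item_utility(avg_score, n_reviews, popularity_weight=0.3):
    """Toy trade-off: utility grows with the average review score and,
    with diminishing returns, with popularity (log of the number of
    reviews). The weight is a free parameter, not a fitted estimate."""
    return avg_score + popularity_weight * math.log(n_reviews)

def rank(items, popularity_weight=0.3):
    """Rank (name, avg_score, n_reviews) triples by combined utility."""
    return sorted(items,
                  key=lambda it: item_utility(it[1], it[2], popularity_weight),
                  reverse=True)

catalog = [("A", 4.7, 12), ("B", 4.4, 3000), ("C", 4.6, 150)]
ordered = rank(catalog)  # B's popularity outweighs its lower average score
```

With these made-up numbers the heavily reviewed item B outranks A despite its lower average score, mirroring the finding that people settle for somewhat lower scores when items are more popular.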

    Multi-attribute utility models as cognitive search engines

    In optimal stopping problems, decision makers are assumed to search randomly to learn the utility of alternatives; in contrast, in one-shot multi-attribute utility optimization, decision makers are assumed to have perfect knowledge of utilities. We point out that these two contexts represent the boundaries of a continuum, of which the middle remains uncharted: How should people search intelligently when they possess imperfect information about the alternatives? We assume that decision makers first estimate the utility of each available alternative and then search the alternatives in order of their estimated utility until expected benefits are outweighed by search costs. We considered three well-known models for estimating utility: (i) a linear multi-attribute model, (ii) equal weighting of attributes, and (iii) a single-attribute heuristic. We used 12 real-world decision problems, ranging from consumer choice to industrial experimentation, to measure the performance of the three models. The full model (i) performed best on average but its simplifications (ii and iii) also had regions of superior performance. We explain the results by analyzing the impact of the models’ utility order and estimation error.
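The estimate-then-search process can be sketched in a few lines, assuming a linear multi-attribute estimator (model (i) in the abstract) and a simple threshold stopping rule; the exact cost-benefit rule and the attribute weights below are assumptions, not taken from the paper:

```python
def estimated_utilities(alternatives, weights):
    """Linear multi-attribute estimate: the weighted sum of each
    alternative's attribute values."""
    return [sum(w * a for w, a in zip(weights, alt)) for alt in alternatives]

def directed_search(alternatives, weights, true_utility, search_cost):
    """Inspect alternatives in order of estimated utility and stop once the
    next estimate cannot beat the best true utility found so far by more
    than the cost of another look. A toy stand-in for the paper's
    cost-benefit stopping calculus."""
    est = estimated_utilities(alternatives, weights)
    order = sorted(range(len(alternatives)), key=lambda i: est[i], reverse=True)
    best_i = order[0]
    best_u = true_utility(alternatives[best_i])
    total_cost = search_cost          # cost of the first inspection
    for i in order[1:]:
        if est[i] <= best_u + search_cost:
            break                     # expected gain no longer covers the cost
        total_cost += search_cost
        u = true_utility(alternatives[i])
        if u > best_u:
            best_i, best_u = i, u
    return best_i, best_u, total_cost

# With perfect estimates (true utility equals the linear estimate), the
# search stops right after inspecting the top-ranked alternative.
alts = [(1.0, 2.0), (3.0, 1.5), (2.0, 2.0)]
choice, utility, cost = directed_search(alts, (1.0, 1.0), sum, 0.5)
```

Estimation error is what makes the problem interesting: the noisier the estimates, the more alternatives a searcher should inspect before the stopping rule fires.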

    Pantelis P. Analytis' Quick Files

    The Quick Files feature was discontinued and its files were migrated into this Project on March 11, 2022. The file URLs will still resolve properly, and the Quick Files logs are available in the Project’s Recent Activity.

    The collective dynamics of sequential search in markets for cultural products

    Multi-attribute utility models as cognitive search engines. Judgment and Decision Making

    In optimal stopping problems, decision makers are assumed to search randomly to learn the utility of alternatives; in contrast, in one-shot multi-attribute utility optimization, decision makers are assumed to have perfect knowledge of utilities. We point out that these two contexts represent the boundaries of a continuum, of which the middle remains uncharted: How should people search intelligently when they possess imperfect information about the alternatives? We assume that decision makers first estimate the utility of each available alternative and then search the alternatives in order of their estimated utility until expected benefits are outweighed by search costs. We considered three well-known models for estimating utility: (i) a linear multi-attribute model, (ii) equal weighting of attributes, and (iii) a single-attribute heuristic. We used 12 real-world decision problems, ranging from consumer choice to industrial experimentation, to measure the performance of the three models. The full model (i) performed best on average but its simplifications (ii and iii) also had regions of superior performance. We explain the results by analyzing the impact of the models' utility order and estimation error.

    The Wisdom of Model Crowds

    A wide body of empirical research has revealed the descriptive shortcomings of expected value and expected utility models of risky decision making. In response, numerous models have been advanced to predict and explain people’s choices between gambles. Although some of these models have had a great impact in the behavioral, social, and management sciences, there is little consensus about which model offers the best account of choice behavior. In this paper, we conduct a large-scale comparison of 58 prominent models of risky choice, using 19 existing behavioral datasets involving more than 800 participants. This allows us to comprehensively evaluate models in terms of individual-level predictive performance across a range of different choice settings. We also identify the psychological mechanisms that lead to superior predictive performance and the properties of choice stimuli that favor certain types of models over others. Second, drawing on research on the wisdom of crowds, we argue that each of the existing models can be seen as an expert that provides unique forecasts in choice predictions. Consistent with this claim, we find that crowds of risky choice models perform better than individual models and thus provide a performance bound for assessing the historical accumulation of knowledge in our field. Our results suggest that each model captures unique aspects of the decision process, and that existing risky choice models offer complementary rather than competing accounts of behavior. We discuss the implications of our results for theories of risky decision making and the quantitative modeling of choice behavior.
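The crowd-of-models idea, treating each choice model as an expert and averaging their forecasts, can be sketched directly. The stand-in models below are illustrative heuristics, not the 58 models actually compared in the paper:

```python
def ev(gamble):
    """Expected value of a gamble given as (outcome, probability) pairs."""
    return sum(x * p for x, p in gamble)

def crowd_predict(models, pair):
    """Wisdom-of-crowds forecast: the mean of each model's predicted
    probability of choosing the first gamble in the pair."""
    return sum(m(pair) for m in models) / len(models)

# Three illustrative stand-in "experts" (assumed for this sketch):
models = [
    lambda pair: 1.0 if ev(pair[0]) > ev(pair[1]) else 0.0,    # expected value
    lambda pair: 1.0 if min(x for x, _ in pair[0]) > min(x for x, _ in pair[1]) else 0.0,  # maximin
    lambda pair: 0.5,                                          # uninformed baseline
]

safe = [(3.0, 1.0)]                 # 3 for sure
risky = [(5.0, 0.5), (0.0, 0.5)]    # expected value 2.5
p_safe = crowd_predict(models, (safe, risky))
```

Averaging hedges across the models' individual blind spots, which is why, per the abstract, crowds of risky choice models outpredict any single member.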