Does informational independence always matter? Children believe small group discussion is more accurate than ten times as many independent informants
Learners faced with competing statements that each have support from multiple sources must decide whom to trust. Lacking firsthand knowledge, they frequently trust the majority. Yet majorities can be misleading if most members are relying on hearsay from just a few members with firsthand knowledge. Thus, past work has emphasized the importance of informational independence when deciding whom to trust, showing that children and adults do consider informational independence important in certain contexts. However, because informational independence precludes group deliberation, we ask whether children make the reverse inference and devalue informational independence when facing a problem that could benefit from deliberation. In two studies, children and adults ignore informational independence when attempting to answer abstract reasoning questions. However, for a question type for which deliberative reasoning would be of doubtful benefit, children and adults seek advice from multiple independent sources rather than a deliberative group.
Harvesting Wisdom on Social Media for Business Decision Making
The proliferation of social media provides significant opportunities for organizations to obtain wisdom-of-the-crowds (WOC)-type data for decision making. However, critical challenges associated with collecting such data exist. For example, the openness of social media tends to increase the possibility of social influence, which may diminish group diversity, one of the conditions of WOC. In this research-in-progress paper, a new social media data analytics framework is proposed. It is equipped with well-designed mechanisms (e.g., using different discussion processes to overcome social influence issues and boost social learning) to generate data and employs state-of-the-art big data technologies, e.g., Amazon EMR, for data processing and storage. Design science research methodology is used to develop the framework. This paper contributes to the WOC and social media adoption literature by providing a practical approach for organizations to effectively generate WOC-type data from social media to support their decision making.
Rescuing Collective Wisdom when the Average Group Opinion Is Wrong
The total knowledge contained within a collective supersedes the knowledge of even its most intelligent member. Yet the collective knowledge will remain inaccessible to us unless we are able to find efficient knowledge aggregation methods that produce reliable decisions based on the behavior or opinions of the collective’s members. It is often stated that simple averaging of a pool of opinions is a good and in many cases the optimal way to extract knowledge from a crowd. The method of averaging has been applied to analysis of decision-making in very different fields, such as forecasting, collective animal behavior, individual psychology, and machine learning. Two mathematical theorems, Condorcet’s theorem and Jensen’s inequality, provide a general theoretical justification for the averaging procedure. Yet the necessary conditions which guarantee the applicability of these theorems are often not met in practice. Under such circumstances, averaging can lead to suboptimal and sometimes very poor performance. Practitioners in many different fields have independently developed procedures to counteract the failures of averaging. We review such knowledge aggregation procedures and interpret the methods in the light of a statistical decision theory framework to explain when their application is justified. Our analysis indicates that in the ideal case, there should be a matching between the aggregation procedure and the nature of the knowledge distribution, correlations, and associated error costs. This leads us to explore how machine learning techniques can be used to extract near-optimal decision rules in a data-driven manner. We end with a discussion of open frontiers in the domain of knowledge aggregation and collective intelligence in general.
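The failure mode the abstract describes can be seen in a minimal sketch. The data below are invented for illustration: a right-skewed pool of estimates of a true value of 100, where a few extreme overestimates pull the arithmetic mean far off while a robust aggregator such as the median stays close.

```python
import statistics

# Hypothetical crowd estimates of a quantity whose true value is 100.
# Most guesses cluster near the truth, but two extreme overestimates
# create a heavy right tail.
estimates = [80, 90, 95, 100, 105, 110, 120, 400, 900]
true_value = 100

mean = statistics.mean(estimates)      # dragged upward by the outliers
median = statistics.median(estimates)  # robust to the heavy tail

print(f"mean:   {mean:.1f}")    # ~222.2, far from the truth
print(f"median: {median:.1f}")  # 105.0, close to the truth
```

This matches the abstract's point that the optimality of averaging depends on conditions (e.g., symmetric, independent errors) that skewed or correlated opinion pools violate, so the aggregation rule should be matched to the knowledge distribution.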
Adolescents show collective intelligence which can be driven by a geometric mean rule of thumb
How effective groups are in making decisions is a long-standing question in studying human and animal behaviour. Despite the limited social and cognitive abilities of younger people, skills which are often required for collective intelligence, studies of group performance have been limited to adults. Using a simple task of estimating the number of sweets in jars, we show in two experiments that adolescents at least as young as 11 years old improve their estimation accuracy after a period of group discussion, demonstrating collective intelligence. Although this effect was robust to the overall distribution of initial estimates, when the task generated positively skewed estimates, the geometric mean of initial estimates gave the best fit to the data compared to other tested aggregation rules. A geometric mean heuristic in consensus decision making is also likely to apply to adults, as it provides a robust and well-performing rule for aggregating different opinions. The geometric mean rule is likely to be based on an intuitive logarithmic-like number representation, and our study suggests that this mental number scaling may be beneficial in collective decisions.
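The geometric mean rule described above is equivalent to averaging on a logarithmic scale, which damps extreme overestimates in positively skewed distributions. A minimal sketch, using invented estimates for a jar assumed to hold 150 sweets (not the paper's data):

```python
import math

# Illustrative, positively skewed estimates of a jar holding 150 sweets.
estimates = [60, 90, 120, 150, 200, 300, 700]
true_count = 150

# Arithmetic mean: pulled upward by the 700 outlier.
arithmetic_mean = sum(estimates) / len(estimates)

# Geometric mean: the exponential of the mean of the logs, i.e. averaging
# on a logarithmic-like number scale as the abstract suggests.
geometric_mean = math.exp(sum(math.log(x) for x in estimates) / len(estimates))

print(f"arithmetic mean: {arithmetic_mean:.1f}")  # ~231.4
print(f"geometric mean:  {geometric_mean:.1f}")   # ~169.8, closer to 150
```

By the AM-GM inequality the geometric mean never exceeds the arithmetic mean, which is why it systematically discounts right-tail outliers in this kind of estimation task.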
Crowd wisdom enhanced by costly signaling in a virtual rating system
Costly signaling theory was developed in both economics and biology and has been used to explain a wide range of phenomena. However, the theory’s prediction that signal cost can enforce information quality in the design of new communication systems has never been put to an empirical test. Here we show that imposing time costs on reporting extreme scores can improve crowd wisdom in a previously cost-free rating system. We developed an online game where individuals interacted repeatedly with simulated services and rated them for satisfaction. We associated ratings with differential time costs by endowing the graphical user interface that solicited ratings from the users with “physics,” including an initial (default) slider position and friction. When ratings were not associated with differential cost (all scores from 0 to 100 could be given by an equally low-cost click on the screen), scores correlated only weakly with objective service quality. However, introducing differential time costs, proportional to the deviation from the mean score, improved correlations between subjective rating scores and objective service performance and lowered the sample size required for obtaining reliable, averaged crowd estimates. Boosting time costs for reporting extreme scores further facilitated the detection of top performances. Thus, human collective online behavior, which is typically cost-free, can be made more informative by applying costly signaling via the virtual physics of rating devices.
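The core mechanism, a time cost proportional to the deviation from the default (mean) score, can be sketched abstractly. The function name, default position, and cost constant below are assumptions for illustration, not the paper's implementation:

```python
def reporting_cost(score: float, default: float = 50.0,
                   cost_per_unit: float = 0.05) -> float:
    """Hypothetical time cost (arbitrary units) of moving a rating
    slider from its default position to the chosen score: linear in
    the deviation, so extreme ratings cost more effort to report."""
    return cost_per_unit * abs(score - default)

# Clicking near the default is nearly free; extreme scores cost the most.
print(reporting_cost(50))   # 0.0
print(reporting_cost(70))   # 1.0
print(reporting_cost(100))  # 2.5
print(reporting_cost(0))    # 2.5
```

Under such a scheme only raters with a genuinely strong signal find it worthwhile to pay for an extreme score, which is the costly-signaling logic the abstract tests empirically.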