
    The Evolution of Facultative Conformity Based on Similarity

    Conformist social learning can have a pronounced impact on the cultural evolution of human societies, and it can shape both the genetic and cultural evolution of human social behavior more broadly. Conformist social learning is beneficial when the social learner and the demonstrators from whom she learns are similar in the sense that the same behavior is optimal for both. Otherwise, the social learner's optimum is likely to be rare among demonstrators, and conformity is costly. The trade-off between these two situations has figured prominently in the longstanding debate about the evolution of conformity, but the importance of the trade-off can depend critically on the flexibility of one's social learning strategy. We developed a gene-culture coevolutionary model that allows cognition to encode and process information about the similarity between naive learners and experienced demonstrators. Facultative social learning strategies that condition on perceived similarity evolve under certain circumstances. When this happens, facultative adjustments are often asymmetric. Asymmetric adjustments mean that the tendency to follow the majority when learners perceive demonstrators as similar is stronger than the tendency to follow the minority when learners perceive demonstrators as different. In an associated incentivized experiment, we found that social learners adjusted how they used social information based on perceived similarity, but adjustments were symmetric. The symmetry of adjustments completely eliminated the commonly assumed trade-off between cases in which learners and demonstrators share an optimum versus cases in which they do not. In a second experiment that maximized the potential for social learners to follow their preferred strategies, a few social learners exhibited an inclination to follow the majority. Most, however, did not respond systematically to social information. Additionally, in the complete absence of information about their similarity to demonstrators, social learners were unwilling to make assumptions about whether they shared an optimum with demonstrators. Instead, social learners simply ignored social information even though this was the only information available. Our results suggest that social cognition equips people to use conformity in a discriminating fashion that moderates the evolutionary trade-offs that would occur if conformist social learning were rigidly applied.
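
    As a rough illustration of the kind of mechanism modeled, the sketch below implements a standard conformist-transmission curve whose strength is conditioned on perceived similarity. The exponent values and the threshold rule are illustrative assumptions, not the parameters of the paper's gene-culture coevolutionary model.

```python
import random

def adoption_probability(freq_a, strength):
    """Probability of adopting behavior A given its frequency among demonstrators.
    strength > 1 biases choice toward the majority (conformity); strength < 1
    discounts the majority; strength == 1 is unbiased frequency copying."""
    return freq_a**strength / (freq_a**strength + (1 - freq_a)**strength)

def facultative_strength(perceived_similarity, similar_strength=3.0, different_strength=0.8):
    """Hypothetical asymmetric rule: follow the majority strongly when demonstrators
    seem similar, but only weakly lean toward the minority when they seem different."""
    return similar_strength if perceived_similarity >= 0.5 else different_strength

def learn(demonstrator_choices, perceived_similarity):
    """Pick behavior 'A' or 'B' after observing demonstrators' choices."""
    freq_a = demonstrator_choices.count("A") / len(demonstrator_choices)
    strength = facultative_strength(perceived_similarity)
    return "A" if random.random() < adoption_probability(freq_a, strength) else "B"

demos = ["A"] * 7 + ["B"] * 3                   # 70% of demonstrators chose A
print(learn(demos, perceived_similarity=0.9))   # strongly majority-biased
print(learn(demos, perceived_similarity=0.1))   # only weakly discounts the majority
```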

    Human cooperation in groups: variation begets variation

    Many experiments on human cooperation have revealed that individuals differ systematically in their tendency to cooperate with others. It has also been shown that individuals condition their behaviour on the overall cooperation level of their peers. Yet, little is known about how individuals respond to heterogeneity in cooperativeness in their neighbourhood. Here, we present an experimental study investigating whether and how people respond to heterogeneous behaviour in a public goods game. We find that a large majority of subjects respond to heterogeneity in their group, but they do so in quite different ways. Most subjects contribute less to the public good when the contributions of their peers are more heterogeneous, but a substantial fraction of individuals consistently contributes more in this case. In addition, we find that individuals who respond positively to heterogeneity have a higher general cooperation tendency. The finding that social responsiveness occurs in different forms and is correlated with cooperativeness may have important implications for the outcome of cooperative interactions.
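
    To make the setting concrete, here is a minimal sketch of a linear public goods payoff and a hypothetical response rule in which contributions depend on the heterogeneity (standard deviation) of peers' contributions; the endowment, marginal per-capita return, and responsiveness values are assumptions for illustration, not the experiment's parameters.

```python
from statistics import stdev

def public_goods_payoff(own_contribution, all_contributions, endowment=20, mpcr=0.4):
    """Standard linear public goods payoff: keep the uncontributed endowment
    plus an equal share (mpcr) of the multiplied group account."""
    return endowment - own_contribution + mpcr * sum(all_contributions)

def conditional_contribution(peer_contributions, baseline=10.0, responsiveness=-0.5):
    """Hypothetical rule: adjust a baseline contribution by the heterogeneity of
    peers' contributions. responsiveness < 0 mimics the majority who contribute
    less under heterogeneity; responsiveness > 0 mimics those who contribute more."""
    heterogeneity = stdev(peer_contributions) if len(peer_contributions) > 1 else 0.0
    return max(0.0, min(20.0, baseline + responsiveness * heterogeneity))

print(conditional_contribution([10, 10, 10]))  # homogeneous peers -> 10.0
print(conditional_contribution([0, 10, 20]))   # heterogeneous peers -> 5.0
```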

    Conducting interactive experiments online

    Online labor markets provide new opportunities for behavioral research, but conducting economic experiments online raises important methodological challenges. This holds particularly for interactive designs. In this paper, we provide a methodological discussion of the similarities and differences between interactive experiments conducted in the laboratory and online. To this end, we conduct a repeated public goods experiment with and without punishment using samples from the laboratory and the online platform Amazon Mechanical Turk. We chose to replicate this experiment because it is long and logistically complex. It therefore provides a good case study for discussing the methodological and practical challenges of online interactive experimentation. We find that basic behavioral patterns of cooperation and punishment in the laboratory are replicable online. The most important challenge of online interactive experiments is participant dropout. We discuss measures for reducing dropout and show that, for our case study, dropouts are exogenous to the experiment. We conclude that data quality for interactive experiments via the Internet is adequate and reliable, making online interactive experimentation a potentially valuable complement to laboratory studies.

    Many analysts, one data set: making transparent how variations in analytic choices affect results

    Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and nine teams (31%) did not observe a significant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates. Neither analysts’ prior beliefs about the effect of interest nor their level of expertise readily explained the variation in the outcomes of the analyses. Peer ratings of the quality of the analyses also did not account for the variability. These findings suggest that significant variation in the results of analyses of complex data may be difficult to avoid, even by experts with honest intentions. Crowdsourcing data analysis, a strategy in which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective, analytic choices influence research results.
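
    For readers unfamiliar with odds-ratio units, the short example below uses made-up counts (not the study's data) to show how an odds ratio near the reported median of 1.31 is computed and read.

```python
# Hypothetical 2x2 counts: red cards vs. no red cards per player-referee dyad.
dark_red, dark_no_red = 65, 935      # assumed counts for dark-skin-toned players
light_red, light_no_red = 50, 950    # assumed counts for light-skin-toned players

odds_dark = dark_red / dark_no_red
odds_light = light_red / light_no_red
odds_ratio = odds_dark / odds_light
print(round(odds_ratio, 2))  # ~1.32: the odds of a red card are ~32% higher for dark-skin-toned players
```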

    Human Resource Flexibility as a Mediating Variable Between High Performance Work Systems and Performance

    Much of the human resource management literature has demonstrated the impact of high performance work systems (HPWS) on organizational performance. A new generation of studies is emerging in this literature that recommends the inclusion of mediating variables between HPWS and organizational performance. The increasing rate of dynamism in competitive environments suggests that measures of employee adaptability should be included as a mechanism that may explain the relevance of HPWS to firm competitiveness. Using a sample of 226 Spanish firms, the study’s results confirm that HPWS influences performance through its impact on the firm’s human resource (HR) flexibility.
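
    As a sketch of the mediation logic being tested (not the study's structural equation model), the code below estimates the indirect HPWS -> HR flexibility -> performance effect as the product of the two path coefficients, using simulated data in place of the survey measures.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 226  # same sample size as the study, but the data here are simulated

hpws = rng.normal(size=n)                        # high performance work systems
hr_flex = 0.5 * hpws + rng.normal(size=n)        # HR flexibility (mediator)
performance = 0.4 * hr_flex + 0.1 * hpws + rng.normal(size=n)

# Path a: HPWS -> HR flexibility
a = sm.OLS(hr_flex, sm.add_constant(hpws)).fit().params[1]
# Path b: HR flexibility -> performance, controlling for HPWS
b = sm.OLS(performance, sm.add_constant(np.column_stack([hpws, hr_flex]))).fit().params[2]

indirect_effect = a * b  # the mediated portion of the HPWS-performance relationship
print(round(indirect_effect, 3))
```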

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, against which newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for downloading annotation data and for the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
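
    One of the named baselines, Okapi BM25, can be summarized in a few lines; the sketch below scores toy documents against a query with conventional default parameters (k1 = 1.5, b = 0.75), which are assumptions here rather than the benchmark's configuration.

```python
import math
from collections import Counter

def bm25_scores(query, documents, k1=1.5, b=0.75):
    """Score each tokenized document against a tokenized query with Okapi BM25."""
    n_docs = len(documents)
    avg_len = sum(len(d) for d in documents) / n_docs
    doc_freq = Counter()
    for doc in documents:
        doc_freq.update(set(doc))  # number of documents containing each term

    scores = []
    for doc in documents:
        tf = Counter(doc)
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log((n_docs - doc_freq[term] + 0.5) / (doc_freq[term] + 0.5) + 1)
            norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avg_len))
            score += idf * norm
        scores.append(score)
    return scores

docs = [
    "protein folding prediction with deep learning".split(),
    "randomized trial of a novel antihypertensive drug".split(),
    "benchmark datasets for biomedical literature retrieval".split(),
]
print(bm25_scores("biomedical literature retrieval benchmark".split(), docs))  # third document scores highest
```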