134 research outputs found

    Human cooperation in groups: variation begets variation

    Many experiments on human cooperation have revealed that individuals differ systematically in their tendency to cooperate with others. It has also been shown that individuals condition their behaviour on the overall cooperation level of their peers. Yet, little is known about how individuals respond to heterogeneity in cooperativeness in their neighbourhood. Here, we present an experimental study investigating whether and how people respond to heterogeneous behaviour in a public goods game. We find that a large majority of subjects do respond to heterogeneity in their group, but in quite different ways. Most subjects contribute less to the public good when the contributions of their peers are more heterogeneous, but a substantial fraction of individuals consistently contributes more in this case. In addition, we find that individuals who respond positively to heterogeneity have a higher general cooperation tendency. The finding that social responsiveness occurs in different forms and is correlated with cooperativeness may have important implications for the outcome of cooperative interactions.
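The contrast between negative and positive responders described above can be sketched with a toy conditional-contribution rule. This is illustrative only: the linear response function, the 20-token endowment and the sensitivity values are assumptions, not taken from the study.

```python
import statistics

def respond(peer_contributions, baseline, sensitivity):
    """Toy conditional-contribution rule (illustrative, not the study's model):
    shift one's contribution by `sensitivity` times the heterogeneity
    (standard deviation) of peers' contributions, clamped to [0, 20] tokens."""
    heterogeneity = statistics.pstdev(peer_contributions)
    return max(0.0, min(20.0, baseline + sensitivity * heterogeneity))

# Homogeneous vs heterogeneous peer groups with the same mean (10 tokens).
homogeneous = [10, 10, 10]
heterogeneous = [0, 10, 20]

# A negative responder (sensitivity < 0) contributes less under heterogeneity;
# a positive responder (sensitivity > 0) contributes more.
print(respond(homogeneous, 10, -0.5), respond(heterogeneous, 10, -0.5))
print(respond(homogeneous, 10, +0.5), respond(heterogeneous, 10, +0.5))
```

Because both peer groups share the same mean, any difference in the response is driven purely by heterogeneity, which is the contrast the experiment isolates.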

    Many analysts, one data set: making transparent how variations in analytic choices affect results

    Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and 9 teams (31%) did not observe a significant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates. Neither analysts’ prior beliefs about the effect of interest nor their level of expertise readily explained the variation in the outcomes of the analyses. Peer ratings of the quality of the analyses also did not account for the variability. These findings suggest that significant variation in the results of analyses of complex data may be difficult to avoid, even by experts with honest intentions. Crowdsourcing data analysis, a strategy in which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective, analytic choices influence research results.
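For readers unfamiliar with odds-ratio units, the effect size for a binary outcome can be computed from a 2x2 table. The counts below are hypothetical, chosen only to illustrate the calculation; they are not the study's data.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
                  red card   no red card
        dark         a            b
        light        c            d
    A value above 1 means higher odds of a red card for the first group."""
    return (a / b) / (c / d)

# Hypothetical counts (not the study's data).
or_estimate = odds_ratio(30, 970, 20, 980)
print(round(or_estimate, 2))  # ≈ 1.52
```

An odds ratio of 1.0 indicates no effect, which is why the spread of team estimates from 0.89 (slightly below 1) to 2.93 straddles qualitatively different conclusions.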

    Modelling stereotyping in cooperation systems

    Cooperation is a sophisticated example of collective intelligence. This is particularly the case for indirect reciprocity, where benefit is provided to others without a guarantee of a future return. This is becoming increasingly relevant to future technology, where autonomous machines face cooperative dilemmas. This paper addresses the problem of stereotyping, where traits belonging to an individual are used as a proxy when assessing their reputation. This is a cognitive heuristic that humans frequently use to avoid deliberation, but it can lead to negative societal implications such as discrimination. It is plausible that machines could be equally susceptible. Our contribution is a new and general framework to examine how stereotyping affects the reputation of agents engaging in indirect reciprocity. The framework is flexible and focuses on how reputations are shared. This offers the opportunity to assess the interplay between the sharing of traits and the cost, in terms of reduced cooperation, that arises through opportunities for shirkers to benefit. This is demonstrated using a number of key scenarios. In particular, the results show that cooperation is sensitive to the structure of reputation sharing between individuals.
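A minimal simulation can show how shared (stereotyped) reputations create opportunities for shirkers to benefit. The model below is an illustration of the general idea under a simple image-scoring rule, not the paper's actual framework; the population size, group structure and update rule are all assumptions.

```python
import random

N = 40                              # 20 agents per trait group
SHIRKERS = {0, 1}                   # one shirker in each trait group
agents = [{"trait": i % 2, "coop": i not in SHIRKERS} for i in range(N)]

def help_received_by_shirkers(rounds, stereotype, seed=42):
    """Toy indirect-reciprocity sketch (not the paper's framework).
    Donors help recipients in good standing; a donor who refuses a
    good-standing recipient falls into bad standing. With `stereotype`,
    standing is stored per trait, so shirkers share their group's name."""
    rng = random.Random(seed)
    indiv_rep = [1] * N             # 1 = good standing, 0 = bad
    trait_rep = {0: 1, 1: 1}
    helped = 0
    for _ in range(rounds):
        donor, recipient = rng.sample(range(N), 2)
        rep = (trait_rep[agents[recipient]["trait"]] if stereotype
               else indiv_rep[recipient])
        if rep == 1:
            if agents[donor]["coop"]:
                if recipient in SHIRKERS:
                    helped += 1
                new_rep = 1
            else:
                new_rep = 0         # refusing a good recipient looks bad
            if stereotype:
                trait_rep[agents[donor]["trait"]] = new_rep
            else:
                indiv_rep[donor] = new_rep
    return helped

# Shirkers are quickly excluded when reputations are individual, but keep
# benefiting from their group's good name when reputations are stereotyped.
print(help_received_by_shirkers(5000, stereotype=False))
print(help_received_by_shirkers(5000, stereotype=True))
```

Even this toy version reproduces the qualitative point: changing only how reputations are shared, with identical agents and interactions, changes who receives cooperation.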

    The effect of autonomy, training opportunities, age and salaries on job satisfaction in the South East Asian retail petroleum industry

    South East Asian petroleum retailers are under considerable pressure to improve service quality by reducing turnover. An empirical study of this industry determined the extent to which job characteristics, training opportunities, age and salary influenced the level of job satisfaction, an indicator of turnover. Responses are reported for a random sample of 165 site employees (a 68% response rate) of a Singaporean retail petroleum firm. A restricted multivariate regression model of autonomy and training opportunities explained a substantial share (35.4%) of the variability in job satisfaction. Age did not moderate these relationships, except for employees >21 years of age, who reported enhanced job satisfaction with additional salary. Human Capital theory, Life Cycle theory and Job Enrichment theory are invoked and explored in the context of these findings. In this industry, jobs that give employees the opportunity to undertake a variety of tasks, and that enhance the experienced meaningfulness of work, are likely to promote job satisfaction, reduce turnover and increase the quality of service.

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
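Of the baselines named above, Term Frequency–Inverse Document Frequency is the simplest to sketch. The toy version below (schematic, not the consortium's implementation; the corpus and tokenisation are assumptions) ranks documents by cosine similarity to a seed article.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Toy TF-IDF: weight each term by its in-document frequency times
    the log inverse of how many documents contain it."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (c / len(toks)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dictionaries."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

docs = [
    "protein folding dynamics in yeast",          # seed article
    "molecular dynamics of protein folding",      # relevant
    "retail job satisfaction survey",             # irrelevant
]
vecs = tfidf_vectors(docs)
# The seed (docs[0]) should rank the folding paper above the survey.
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))
```

BM25 refines the same idea with term-frequency saturation and length normalisation, which is one reason the two baselines tend to retrieve overlapping but not identical article sets.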
