
    Gender Differences in Participation and Reward on Stack Overflow

    Programming is a valuable skill in the labor market, making the underrepresentation of women in computing an increasingly important issue. Online question-and-answer platforms serve a dual purpose in this field: they form a body of knowledge useful as a reference and learning tool, and they provide opportunities for individuals to demonstrate credible, verifiable expertise. Issues such as male-oriented site design or the overrepresentation of men among the site's elite may therefore compound the problem of women's underrepresentation in IT. In this paper we audit the differences in behavior and outcomes between men and women on Stack Overflow, the most popular of these Q&A sites. We observe significant differences in how men and women participate in the platform and how successful they are. For example, the average woman has roughly half the reputation points of the average man, reputation being the primary measure of success on the site. Using an Oaxaca-Blinder decomposition, an econometric technique commonly applied to analyze wage differences between groups, we find that most of the gap in success between men and women can be explained by differences in their activity on the site and differences in how these activities are rewarded. Specifically, (1) men give more answers than women and (2) are rewarded more for their answers on average, even when controlling for possible confounders such as tenure or buy-in to the site. Women, in turn, ask more questions and gain more reward per question. We conclude with a hypothetical redesign of the site's scoring system based on these behavioral differences, which would cut the reputation gap in half.
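To make the technique concrete, here is a minimal sketch of a two-fold Oaxaca-Blinder decomposition on synthetic data (the variables, coefficient values, and data below are illustrative, not the paper's): the mean outcome gap splits into an "explained" part (differences in activity levels) and an "unexplained" part (differences in how activity is rewarded).

```python
import numpy as np

def oaxaca_blinder(X_a, y_a, X_b, y_b):
    """Two-fold Oaxaca-Blinder decomposition of the mean outcome gap
    between group A and group B. Returns (gap, explained, unexplained):
      gap         = mean(y_a) - mean(y_b)
      explained   = (xbar_a - xbar_b) . beta_b   (endowment differences)
      unexplained = xbar_a . (beta_a - beta_b)   (coefficient differences)
    """
    def fit(X, y):
        # OLS with intercept
        Xi = np.column_stack([np.ones(len(X)), X])
        beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
        return beta

    beta_a, beta_b = fit(X_a, y_a), fit(X_b, y_b)
    xbar_a = np.append(1.0, X_a.mean(axis=0))  # prepend intercept term
    xbar_b = np.append(1.0, X_b.mean(axis=0))
    explained = (xbar_a - xbar_b) @ beta_b
    unexplained = xbar_a @ (beta_a - beta_b)
    return xbar_a @ beta_a - xbar_b @ beta_b, explained, unexplained

# Toy data: one activity variable (answers posted), outcome = reputation.
rng = np.random.default_rng(0)
X_m = rng.poisson(10, size=(500, 1)).astype(float)  # more answers
X_w = rng.poisson(6, size=(500, 1)).astype(float)
y_m = 5.0 * X_m[:, 0] + rng.normal(0, 1, 500)       # higher reward per answer
y_w = 4.0 * X_w[:, 0] + rng.normal(0, 1, 500)
gap, explained, unexplained = oaxaca_blinder(X_m, y_m, X_w, y_w)
print(gap, explained, unexplained)
```

Because OLS fits the group means exactly, the two components sum to the total gap by construction; the interesting output is how the gap splits between activity differences and reward differences.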

    An Agent-based Model to Evaluate Interventions on Online Dating Platforms to Decrease Racial Homogamy

    Perhaps the most controversial questions in the study of online platforms today surround the extent to which platforms can intervene to reduce the societal ills perpetuated on them. Up for debate is whether there exist any effective and lasting interventions a platform can adopt to address, e.g., online bullying, or whether other, more far-reaching change is necessary to address such problems. Empirical work is critical to answering such questions. But it is also challenging, because it is time-consuming, expensive, and sometimes limited to the questions companies are willing to ask. To help focus and inform this empirical work, we here propose an agent-based modeling (ABM) approach. As an application, we analyze the impact of a set of interventions on a simulated online dating platform on the lack of long-term interracial relationships in an artificial society. In the real world, the lack of interracial relationships is a critical vehicle through which inequality is maintained. Our work shows that many previously hypothesized interventions online dating platforms could take to increase the number of interracial relationships formed on their sites have limited effects, and that the effectiveness of any intervention is subject to assumptions about sociocultural structure. Further, interventions that are effective in increasing diversity in long-term relationships are at odds with platforms' profit-oriented goals. At a general level, the present work shows the value of using an ABM approach to help understand the potential effects and side effects of different interventions that a platform could take.
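The core ABM logic can be illustrated with a deliberately simplified sketch (this is not the paper's model; the exposure and preference parameters are invented for illustration): a platform-side intervention controls how often seekers are shown cross-group candidates, while agent-side preferences govern acceptance, and we measure the share of resulting ties that are interracial.

```python
import random

def simulate(n_agents=200, cross_exposure=0.2, same_group_pref=0.8,
             n_rounds=2000, seed=1):
    """Toy agent-based sketch: each round the platform shows one seeker a
    candidate from the other group with probability `cross_exposure`
    (the intervention knob); the seeker accepts a same-group candidate
    with probability `same_group_pref` and a cross-group candidate with
    probability 1 - same_group_pref. Returns the fraction of formed
    ties that are cross-group."""
    rng = random.Random(seed)
    groups = [i % 2 for i in range(n_agents)]  # two groups, alternating
    cross = total = 0
    for _ in range(n_rounds):
        seeker = rng.randrange(n_agents)
        # Platform intervention: bias candidate exposure across groups.
        if rng.random() < cross_exposure:
            candidate_group = 1 - groups[seeker]
        else:
            candidate_group = groups[seeker]
        # Agent-side preference caps what exposure alone can achieve.
        if candidate_group == groups[seeker]:
            accept_p = same_group_pref
        else:
            accept_p = 1 - same_group_pref
        if rng.random() < accept_p:
            total += 1
            cross += candidate_group != groups[seeker]
    return cross / total

low = simulate(cross_exposure=0.1)   # weak intervention
high = simulate(cross_exposure=0.5)  # strong intervention
print(low, high)
```

Even in this toy version, the qualitative point from the abstract shows up: raising exposure increases cross-group ties, but agent preferences (the "sociocultural structure" assumption) bound the effect.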

    A qualitative, network-centric method for modeling socio-technical systems, with applications to evaluating interventions on social media platforms to increase social equality

    We propose and extend a qualitative, complex systems methodology from cognitive engineering, known as the abstraction hierarchy, to model how potential interventions that could be carried out by social media platforms might impact social equality. Social media platforms have come under considerable ire for their role in perpetuating social inequality. However, there is also significant evidence that platforms can play a role in reducing social inequality, e.g. through the promotion of social movements. Platforms’ role in producing or reducing social inequality is, moreover, not static; platforms can and often do take actions targeted at positive change. How can we develop tools to help us determine whether or not a potential platform change might actually work to increase social equality? Here, we present the abstraction hierarchy as a tool to help answer this question. Our primary contributions are two-fold. First, methodologically, we extend existing research on the abstraction hierarchy in cognitive engineering with principles from Network Science. Second, substantively, we illustrate the utility of this approach by using it to assess the potential effectiveness of a set of interventions, proposed in prior work, for how online dating websites can help mitigate social inequality.
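An abstraction hierarchy is naturally represented as a layered network: abstract purposes at the top connect via means-ends links down to concrete platform features. A minimal sketch of that network-centric view (all node names below are invented for illustration, not taken from the paper):

```python
# Layered means-ends network: each key lists the more concrete nodes
# that serve as means to that end (illustrative nodes only).
hierarchy = {
    "increase social equality": ["diversify exposure", "equalize rewards"],
    "diversify exposure": ["cross-group recommendations"],
    "equalize rewards": ["re-weighted ranking"],
}
# Bottom layer: concrete, implementable platform features.
features = {"cross-group recommendations", "re-weighted ranking"}

def reaches_feature(h, node, features):
    """Network-style check: is an abstract goal connected, via a chain
    of means-ends links, to at least one concrete feature that could
    realize it? A goal with no such path has no proposed implementation."""
    if node in features:
        return True
    return any(reaches_feature(h, child, features) for child in h.get(node, ()))

print(reaches_feature(hierarchy, "increase social equality", features))
```

Checks like this (path existence, connectivity between layers) are the kind of network-science principle that can be layered onto the otherwise qualitative abstraction-hierarchy method.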

    Exploring Author Gender in Book Rating and Recommendation

    Collaborative filtering algorithms find useful patterns in rating and consumption data and exploit these patterns to guide users to good items. Many of the patterns in rating datasets reflect important real-world differences between the various users and items in the data; other patterns may be irrelevant or possibly undesirable for social or ethical reasons, particularly if they reflect undesired discrimination, such as gender or ethnic discrimination in publishing. In this work, we examine the response of collaborative filtering recommender algorithms to the distribution of their input data with respect to a dimension of social concern, namely content creator gender. Using publicly available book ratings data, we measure the distribution of the genders of the authors of books in user rating profiles and in recommendation lists produced from this data. We find that common collaborative filtering algorithms differ in the gender distribution of their recommendation lists, and in the relationship of that output distribution to the user profile distribution.
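The core measurement is straightforward to sketch: given a mapping from books to author-gender labels (a hypothetical lookup here; the paper links books to authors via external metadata), compute the label distribution of a user's rating profile and of an algorithm's recommendation list, then compare the two.

```python
from collections import Counter

def gender_distribution(books, author_gender):
    """Proportion of each author-gender label among a list of books.
    `author_gender` maps book id -> label; unmatched books are counted
    as "unknown" (field names here are illustrative)."""
    counts = Counter(author_gender.get(b, "unknown") for b in books)
    n = len(books)
    return {label: c / n for label, c in counts.items()}

# Toy lookup and lists (invented ids, not the paper's dataset).
author_gender = {"b1": "female", "b2": "male", "b3": "female", "b4": "male"}
profile = ["b1", "b2", "b2", "b3"]   # books the user rated
recs = ["b2", "b4", "b1", "b2"]      # a recommender's output list

print(gender_distribution(profile, author_gender))
print(gender_distribution(recs, author_gender))
```

Comparing the two distributions per user, and aggregating across users, is one simple way to operationalize the paper's question of how the output distribution relates to the input distribution.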

    Mapping the Field of Algorithm Auditing: A Systematic Literature Review Identifying Research Trends, Linguistic and Geographical Disparities

    The increasing reliance on complex algorithmic systems by online platforms has sparked a growing need for algorithm auditing, a research methodology evaluating these systems' functionality and societal impact. In this paper, we systematically review algorithm auditing studies and identify trends in their methodological approaches, the geographic distribution of authors, and the selection of platforms, languages, geographies, and group-based attributes in the focus of auditing research. We present evidence of a significant skew of research focus toward Western contexts, particularly the US, and a disproportionate reliance on English-language data. Additionally, our analysis indicates a tendency in algorithm auditing studies to focus on a narrow set of group-based attributes, often operationalized in simplified ways, which might obscure more nuanced aspects of algorithmic bias and discrimination. By conducting this review, we aim to provide a clearer understanding of the current state of the algorithm auditing field and identify gaps that need to be addressed for a more inclusive and representative research landscape.

    The role of luck in the success of social media influencers

    Motivation Social media platforms centered around content creators (CCs) have seen rapid growth in the past decade. Currently, millions of CCs make livable incomes through platforms such as YouTube, TikTok, and Instagram. As such, similarly to the job market, it is important to ensure that the success and income (usually related to follower counts) of CCs reflect the quality of their work. Since quality cannot be observed directly, two other factors govern the network-formation process: (a) the visibility of CCs (resulting from, e.g., recommender systems and moderation processes) and (b) the decision-making process of seekers (i.e., of users focused on finding CCs). Prior virtual experiments and empirical work seem contradictory regarding fairness: the former suggest the expected number of followers of CCs reflects their quality, while the latter shows that quality does not perfectly predict success. Results Our paper extends prior models in order to bridge this gap between theoretical and empirical work. We (a) define a parameterized recommendation process which allocates visibility based on popularity biases, (b) define two metrics of individual fairness (ex-ante and ex-post), and (c) define a metric for seeker satisfaction. Through an analytical approach, we show our process is an absorbing Markov chain, where exploring only the most popular CCs leads to lower expected times to absorption but higher chances of unfairness for CCs. While increasing exploration helps, doing so only guarantees fair outcomes for the highest- (and lowest-) quality CCs. Simulations revealed that CCs and seekers prefer different algorithmic designs: CCs generally have higher chances of fairness with anti-popularity-biased recommendation processes, while seekers are more satisfied with popularity-biased recommendations. Altogether, our results suggest that while the exploration of low-popularity CCs is needed to improve fairness, platforms might not have the incentive to do so, and such interventions do not entirely prevent unfair outcomes.
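The "expected time to absorption" quantity mentioned above has a standard closed form for any absorbing Markov chain, via the fundamental matrix. A small sketch on a toy chain (the transition probabilities are invented, not the paper's model):

```python
import numpy as np

def expected_absorption_times(Q):
    """Expected number of steps to absorption from each transient state
    of an absorbing Markov chain, using the fundamental matrix
    N = (I - Q)^{-1} and t = N @ 1. Q is the transient-to-transient
    block of the transition matrix."""
    n = Q.shape[0]
    N = np.linalg.inv(np.eye(n) - Q)  # fundamental matrix
    return N @ np.ones(n)

# Toy chain: a seeker browses between two undecided states and is
# "absorbed" (follows a CC) with the remaining probability each step.
Q = np.array([[0.2, 0.3],
              [0.1, 0.4]])
print(expected_absorption_times(Q))
```

In the paper's setting, the tension is that transition structures which shorten these absorption times (quick seeker satisfaction) tend to concentrate visibility on already-popular CCs, raising the chance of unfair outcomes.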

    Individual Fairness for Social Media Influencers

    Nowadays, many social media platforms are centered around content creators (CCs). On these platforms, the tie-formation process depends on two factors: (a) the exposure of users to CCs (decided by, e.g., a recommender system), and (b) users' subsequent decisions about whom to follow. Recent research has underlined the importance of content quality by showing that under exploratory recommendation strategies, the network eventually converges to a state where the higher the quality of a CC, the higher their expected number of followers. In this paper, we extend prior work by (a) looking beyond averages to assess the fairness of the process and (b) investigating the importance of exploratory recommendations for achieving fair outcomes. Using an analytical approach, we show that non-exploratory recommendations converge fast but usually lead to unfair outcomes. Moreover, even with exploration, we are only guaranteed fair outcomes for the highest- (and lowest-) quality CCs.
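The exploratory-versus-popularity-biased contrast can be sketched with a toy follower-growth simulation (an assumed model for illustration, not the paper's): an exploration parameter controls whether a user is shown a uniformly random CC or one drawn proportionally to current follower counts, and the user follows with probability equal to the CC's quality.

```python
import random

def grow_network(quality, explore_eps, n_steps=5000, seed=7):
    """Toy tie-formation sketch: per step, one user is shown one CC.
    With probability `explore_eps` the CC is chosen uniformly at random
    (exploration); otherwise proportionally to follower counts
    (popularity bias, i.e. rich-get-richer). The user follows with
    probability quality[cc]. Returns final follower counts."""
    rng = random.Random(seed)
    followers = [1] * len(quality)  # seed each CC with one follower
    for _ in range(n_steps):
        if rng.random() < explore_eps:
            cc = rng.randrange(len(quality))
        else:
            cc = rng.choices(range(len(quality)), weights=followers)[0]
        if rng.random() < quality[cc]:
            followers[cc] += 1
    return followers

quality = [0.9, 0.6, 0.3]
print(grow_network(quality, explore_eps=1.0))  # pure exploration
print(grow_network(quality, explore_eps=0.0))  # pure popularity bias
```

Under pure exploration, exposure is equal and follower counts track quality in expectation; under pure popularity bias, early random luck compounds, so rankings can deviate from quality, which is the unfairness the paper analyzes beyond averages.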