Understanding Inequalities in Ride-Hailing Services Through Simulations
Despite the potential of online sharing economy platforms such as Uber, Lyft,
or Foodora to democratize the labor market, these services are often accused of
fostering unfair working conditions and low wages. These problems have been
recognized by researchers and regulators, but the size and complexity of these
socio-technical systems, combined with the lack of transparency about
algorithmic practices, make it difficult to understand system dynamics and
large-scale behavior. This paper combines approaches from complex systems and
algorithmic fairness to investigate the effect of algorithm design decisions on
wage inequality in ride-hailing markets. We first present a computational model
that includes conditions about locations of drivers and passengers, traffic,
the layout of the city, and the algorithm that matches requests with drivers.
We calibrate the model with parameters derived from empirical data. Our
simulations show that small changes in the system parameters can cause large
deviations in the income distributions of drivers, leading to a highly
unpredictable system which often distributes vastly different incomes to
identically performing drivers. As suggested by recent studies about feedback
loops in algorithmic systems, these initial income differences can result in
enforced and long-term wage gaps.
Comment: Code for the simulation can be found at https://github.com/bokae/tax
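The mechanism described above can be illustrated with a toy simulation. This is a minimal sketch, not the paper's calibrated model: the grid city, the nearest-driver matching rule, and the fare formula are all simplifying assumptions. It shows how identically performing drivers can end up with different incomes purely through the matching dynamics.

```python
import random

random.seed(42)

GRID = 10         # city modeled as a simple grid (assumption)
N_DRIVERS = 20
N_REQUESTS = 500

# All drivers start identical; incomes diverge only through matching.
drivers = [{"pos": (random.randrange(GRID), random.randrange(GRID)),
            "income": 0} for _ in range(N_DRIVERS)]

def dist(a, b):
    """Manhattan distance between two grid cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

for _ in range(N_REQUESTS):
    pickup = (random.randrange(GRID), random.randrange(GRID))
    dropoff = (random.randrange(GRID), random.randrange(GRID))
    # Nearest-driver matching rule (one of many possible design choices).
    driver = min(drivers, key=lambda d: dist(d["pos"], pickup))
    driver["income"] += 2 + dist(pickup, dropoff)  # base fare + distance fare
    driver["pos"] = dropoff

incomes = sorted(d["income"] for d in drivers)
print("min/median/max income:", incomes[0], incomes[len(incomes) // 2], incomes[-1])
```

Even in this stripped-down setting, the spread between the lowest- and highest-earning drivers tends to be substantial, because drivers who happen to drop off passengers in busy areas are matched more often, which is the kind of feedback loop the paper investigates.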
Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency
Binary decision making classifiers are not fair by default. Fairness requirements are an additional element to the decision making rationale, which is typically driven by maximizing some utility function. In that sense, algorithmic fairness can be formulated as a constrained optimization problem. This paper contributes to the discussion on how to implement fairness, focusing on the fairness concepts of positive predictive value (PPV) parity, false omission rate (FOR) parity, and sufficiency (which combines the former two).
We show that group-specific threshold rules are optimal for PPV parity and FOR parity, similar to well-known results for other group fairness criteria. However, depending on the underlying population distributions and the utility function, we find that sometimes an upper-bound threshold rule for one group is optimal: utility maximization under PPV parity (or FOR parity) might thus lead to selecting the individuals with the smallest utility for one group, instead of selecting the most promising individuals. This result is counter-intuitive and in contrast to the analogous solutions for statistical parity and equality of opportunity.
We also provide a solution for the optimal decision rules satisfying the fairness constraint sufficiency. We show that more complex decision rules are required and that this leads to within-group unfairness for all but one of the groups. We illustrate our findings based on simulated and real data.
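The group-specific threshold rules mentioned above can be sketched with synthetic data. This is an illustrative toy example, not the paper's method: the score distributions, the logistic label model, and the grid search for the smallest threshold achieving a target PPV are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic scores and labels; P(y=1) increases with the score."""
    scores = rng.normal(shift, 1.0, n)
    labels = rng.random(n) < 1 / (1 + np.exp(-scores))
    return scores, labels

def ppv(scores, labels, t):
    """Positive predictive value when selecting everyone with score >= t."""
    sel = scores >= t
    return labels[sel].mean() if sel.any() else np.nan

sA, yA = make_group(5000, 0.0)   # group A
sB, yB = make_group(5000, 0.5)   # group B, shifted score distribution

target = 0.8

def threshold_for_ppv(scores, labels, target):
    """Smallest threshold on a grid whose selection reaches the target PPV."""
    for t in np.linspace(scores.min(), scores.max(), 400):
        if ppv(scores, labels, t) >= target:
            return t
    return scores.max()

tA = threshold_for_ppv(sA, yA, target)
tB = threshold_for_ppv(sB, yB, target)
print(f"group thresholds: A={tA:.2f}, B={tB:.2f}")
print(f"PPV: A={ppv(sA, yA, tA):.2f}, B={ppv(sB, yB, tB):.2f}")
```

Because the two groups have different score distributions, equalizing PPV requires different thresholds per group, which is the basic shape of the result; the paper's counter-intuitive upper-bound threshold cases arise under other population distributions and utility functions than the ones assumed here.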
Global Connections and the Structure of Skills in Local Co-Worker Networks = Globális kapcsolatok, munkatársi kapcsolathálók és a dolgozók készségei
The Warnier method, a highly prescriptive program design approach for file-oriented solutions, has been criticized for its lack of a database design component. This paper addresses this weakness by incorporating a logical database design step in the Warnier method. Specifically, the paper presents rules for transforming the information in a Warnier diagram into a set of relations. With this extension, the Warnier method complements the entity-relationship approach for data analysis and logical database design.
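The idea of transforming a hierarchical Warnier diagram into a set of relations can be sketched as follows. The nested-dict encoding, the list-marks-a-repeating-group convention, and the first-attribute-as-key rule are assumptions made for this illustration, not the paper's notation or rules.

```python
# A Warnier-style hierarchy as nested dicts; a list value marks a
# repeating group, which becomes its own relation (assumed encoding).
diagram = {
    "order_no": None,
    "date": None,
    "LineItem": [{"item_no": None, "qty": None}],
}

def to_relations(name, node, fk=None, out=None):
    """Map one record level to a relation; recurse into repeating groups."""
    if out is None:
        out = {}
    simple = [k for k, v in node.items() if not isinstance(v, list)]
    # A child relation inherits the parent's key as a foreign key.
    out[name] = ([fk] if fk else []) + simple
    key = fk or simple[0]  # assume the first simple attribute is the key
    for k, v in node.items():
        if isinstance(v, list):
            to_relations(k, v[0], fk=key, out=out)
    return out

relations = to_relations("Order", diagram)
print(relations)
# {'Order': ['order_no', 'date'], 'LineItem': ['order_no', 'item_no', 'qty']}
```

Each repeating group becomes its own relation carrying the parent's key, which mirrors the general intent of mapping a hierarchy onto flat relations.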
News recommender systems: a programmatic research review
News recommender systems (NRS) are becoming a ubiquitous part of the digital media landscape. Particularly in the realm of political news, the adoption of NRS can significantly impact journalistic distribution, in turn affecting journalistic work practices and news consumption. Thus, NRS touch both the supply and demand of political news. In recent years, there has been a strong increase in research on NRS. Yet, the field remains dispersed across supply and demand research perspectives. Therefore, the contribution of this programmatic research review is threefold. First, we conduct a scoping study to review scholarly work on the journalistic supply and user demand sides. Second, we identify underexplored areas. Finally, we advance five recommendations for future research from a political communication perspective.
User Attitudes to Content Moderation in Web Search
Internet users rely heavily on and trust web search engines, such as Google, to find relevant information online. However, scholars have documented numerous biases and inaccuracies in search outputs. To improve the quality of search results, search engines employ various content moderation practices such as interface elements informing users about potentially dangerous websites and algorithmic mechanisms for downgrading or removing low-quality search results. While the reliance of the public on web search engines and their use of moderation practices is well-established, user attitudes towards these practices have not yet been explored in detail. To address this gap, we first conducted an overview of content moderation practices used by search engines, and then surveyed a representative sample of the US adult population (N=398) to examine the levels of support for different moderation practices applied to potentially misleading and/or potentially offensive content in web search. We also analyzed the relationship between user characteristics and their support for specific moderation practices. We find that the most supported practice is informing users about potentially misleading or offensive content, and the least supported one is the complete removal of search results. More conservative users and users with lower levels of trust in web search results are more likely to be against content moderation in web search.
Comparing the Language of QAnon-Related Content on Parler, Gab, and Twitter
Parler, a “free speech” platform popular among conservatives, was taken offline in January 2021 due to the lack of moderation of harmful content. While other popular social media platforms were also used to spread conspiratorial, hateful, and threatening content, Parler suffered the most consequences in the aftermath of the 2020 US presidential elections, having been singled out in the news coverage. Through a comparative study, we identify differences in content using #QAnon across three social media platforms: Parler, Twitter, and Gab, focusing on the volume, the amount of anti-social language, and the context of QAnon-related content over a month-long period. While the number of posts is the highest on Parler, this could be attributed to differences in the use of hashtags across the platforms, which has consequences for other analyses. In our analysis, Parler exhibits the highest levels of anti-social language, while Gab has the highest proportion of #QAnon posts with hate terms. To get at qualitative differences in the posts, we perform an analysis of named entities and narratives, focusing on differences in the focus of conversations and the levels of anti-social language of posts mentioning different groups of political figures.
The politicization of medical preprints on Twitter during the early stages of COVID-19 pandemic
We examine the patterns of medical preprint sharing on Twitter during the early stages of the COVID-19 pandemic. Our analysis demonstrates a stark increase in attention to medical preprints among the general public since the beginning of the pandemic. We also observe a political divide in medical preprint sharing patterns - a finding in line with previous observations regarding the politicization of COVID-19-related discussions. In addition, we find that the increase in attention to preprints from the members of the general public coincided with the change in the social media-based discourse around preprints.