Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
We present a framework for quantifying and mitigating algorithmic bias in
mechanisms designed for ranking individuals, typically used as part of
web-scale search and recommendation systems. We first propose complementary
measures to quantify bias with respect to protected attributes such as gender
and age. We then present algorithms for computing fairness-aware re-ranking of
results. For a given search or recommendation task, our algorithms seek to
achieve a desired distribution of top ranked results with respect to one or
more protected attributes. We show that such a framework can be tailored to
achieve fairness criteria such as equality of opportunity and demographic
parity depending on the choice of the desired distribution. We evaluate the
proposed algorithms via extensive simulations over different parameter choices,
and study the effect of fairness-aware ranking on both bias and utility
measures. We finally present the online A/B testing results from applying our
framework towards representative ranking in LinkedIn Talent Search, and discuss
the lessons learned in practice. Our approach resulted in a tremendous
improvement in the fairness metrics (a nearly threefold increase in the number
of search queries with representative results) without affecting the business
metrics, which paved the way for deployment to 100% of LinkedIn Recruiter users
worldwide. Ours is the first large-scale deployed framework for ensuring
fairness in the hiring domain, with potential positive impact for more than
630M LinkedIn members.
Comment: This paper has been accepted for publication at ACM KDD 201
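The re-ranking idea described above can be illustrated with a minimal sketch. This is our own simplified greedy illustration, not the paper's actual algorithm: at each rank, pick the highest-scoring remaining candidate from the attribute group that is furthest below its target share of the top-k. All names here (`rerank`, `target_dist`) are hypothetical.

```python
# Hypothetical sketch of greedy fairness-aware re-ranking: fill the top-k
# one rank at a time, always drawing from the group that most lags its
# desired share, breaking ties by the group's best remaining score.

def rerank(candidates, target_dist, k):
    """candidates: list of (score, group), sorted by score descending.
    target_dist: dict mapping group -> desired share of the top-k."""
    pools = {}
    for score, group in candidates:
        pools.setdefault(group, []).append((score, group))  # preserves score order
    reranked, counts = [], {g: 0 for g in target_dist}
    for rank in range(1, k + 1):
        # deficit: how far a group lags its target count at this rank
        def deficit(g):
            return target_dist[g] * rank - counts[g]
        eligible = [g for g in target_dist if pools.get(g)]
        if not eligible:
            break
        g = max(eligible, key=lambda g: (deficit(g), pools[g][0][0]))
        reranked.append(pools[g].pop(0))
        counts[g] += 1
    return reranked
```

With equal 50/50 targets over two groups, the sketch alternates groups down the list while keeping each group's items in score order.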
A Comprehensive Survey of Artificial Intelligence Techniques for Talent Analytics
In today's competitive and fast-evolving business environment, it is a
critical time for organizations to rethink how to make talent-related decisions
in a quantitative manner. Indeed, the recent development of Big Data and
Artificial Intelligence (AI) techniques has revolutionized human resource
management. The availability of large-scale talent and management-related data
provides unparalleled opportunities for business leaders to comprehend
organizational behaviors and gain tangible knowledge from a data science
perspective, which in turn delivers intelligence for real-time decision-making
and effective talent management at work for their organizations. In the last
decade, talent analytics has emerged as a promising field in applied data
science for human resource management, garnering significant attention from AI
communities and inspiring numerous research efforts. In this survey, we present an
up-to-date and comprehensive survey on AI technologies used for talent
analytics in the field of human resource management. Specifically, we first
provide the background knowledge of talent analytics and categorize various
pertinent data. Subsequently, we offer a comprehensive taxonomy of relevant
research efforts, categorized based on three distinct application-driven
scenarios: talent management, organization management, and labor market
analysis. In conclusion, we summarize the open challenges and potential
prospects for future research directions in the domain of AI-driven talent
analytics.
Comment: 30 pages, 15 figures
Disentangling and Operationalizing AI Fairness at LinkedIn
Operationalizing AI fairness at LinkedIn's scale is challenging not only
because there are multiple mutually incompatible definitions of fairness but
also because determining what is fair depends on the specifics and context of
the product where AI is deployed. Moreover, AI practitioners need clarity on
what fairness expectations need to be addressed at the AI level. In this paper,
we present the evolving AI fairness framework used at LinkedIn to address these
three challenges. The framework disentangles AI fairness by separating out
equal treatment and equitable product expectations. Rather than imposing a
trade-off between these two commonly opposing interpretations of fairness, the
framework provides clear guidelines for operationalizing equal AI treatment
complemented with a product equity strategy. This paper focuses on the equal AI
treatment component of LinkedIn's AI fairness framework, shares the principles
that support it, and illustrates their application through a case study. We
hope this paper will encourage other big tech companies to join us in sharing
their approach to operationalizing AI fairness at scale, so that together we
can keep advancing this constantly evolving field.
The Multisided Complexity of Fairness in Recommender Systems
Recommender systems are poised at the interface between stakeholders: for
example, job applicants and employers in the case of recommendations of
employment listings, or artists and listeners in the case of music
recommendation. In such multisided platforms, recommender systems play a key
role in enabling discovery of products and information at large scales.
However, as they have become more and more pervasive in society, the equitable
distribution of their benefits and harms has come increasingly under scrutiny,
as is the case with machine learning generally. While recommender systems can
exhibit many of the biases encountered in other machine learning settings, the
intersection of personalization and multisidedness makes the question of
fairness in recommender systems manifest itself quite differently. In this
article, we discuss recent work in the area of multisided fairness in
recommendation, starting with a brief introduction to core ideas in
algorithmic fairness and multistakeholder recommendation. We describe
techniques for measuring fairness and algorithmic approaches for enhancing
fairness in recommendation outputs. We also discuss feedback and popularity
effects that can lead to unfair recommendation outcomes. Finally, we introduce
several promising directions for future research in this area.
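One way to make the provider-side fairness measurement discussed above concrete is to ask how rank-discounted exposure is split across provider groups. The sketch below is our own illustration (names like `exposure_by_group` are hypothetical, not from the article), using a standard logarithmic position discount as an assumed exposure model.

```python
import math

# Illustrative sketch: measure how exposure, discounted by rank position
# (1 / log2(rank + 1), ranks starting at 1), is distributed across provider
# groups in a single ranked recommendation list.

def exposure_by_group(ranking):
    """ranking: list of provider-group labels, best rank first.
    Returns each group's share of total position-discounted exposure."""
    weights = [1.0 / math.log2(rank + 2) for rank in range(len(ranking))]
    total = sum(weights)
    shares = {}
    for w, group in zip(weights, ranking):
        shares[group] = shares.get(group, 0.0) + w / total
    return shares
```

Comparing these shares against a target distribution (e.g., proportional to each group's catalog share) gives one simple provider-fairness diagnostic.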
A Survey on Fairness-aware Recommender Systems
As information filtering services, recommender systems have extremely
enriched our daily life by providing personalized suggestions and facilitating
people in decision-making, which makes them vital and indispensable to human
society in the information era. However, as people become more dependent on
them, recent studies show that recommender systems can have unintended
negative impacts on society and individuals because of their unfairness
(e.g., gender discrimination in job recommendations). To develop trustworthy
services, it is crucial to devise fairness-aware recommender systems that can
mitigate these bias issues. In this survey, we summarise existing methodologies
and practices of fairness in recommender systems. Firstly, we present concepts
of fairness in different recommendation scenarios, comprehensively categorize
current advances, and introduce typical methods to promote fairness in
different stages of recommender systems. Next, after introducing datasets and
evaluation metrics applied to assess the fairness of recommender systems, we
delve into the significant influence that fairness-aware recommender
systems exert on real-world industrial applications. Subsequently, we highlight
the connection between fairness and other principles of trustworthy recommender
systems, aiming to consider trustworthiness principles holistically while
advocating for fairness. Finally, we conclude this review by spotlighting
promising opportunities: clarifying fairness concepts and frameworks,
balancing accuracy and fairness, and strengthening the ties with
trustworthiness, with the ultimate goal of fostering the development of
fairness-aware recommender systems.
Comment: 27 pages, 9 figures
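The evaluation metrics such surveys cover include group-fairness criteria like demographic parity. As a hedged illustration (our own sketch, not a method from the survey), the function below computes a demographic-parity ratio for a job-recommendation setting: the minimum over maximum recommendation rate across user groups, where 1.0 means perfect parity.

```python
# Hypothetical sketch of a demographic-parity check: compare the rate at
# which an item (e.g., a job posting) is recommended across user groups.

def demographic_parity_ratio(recommended, group_of):
    """recommended: dict user -> bool (was the item recommended to them?).
    group_of: dict user -> group label.
    Returns min/max per-group recommendation rate (1.0 = parity)."""
    hits, totals = {}, {}
    for user, rec in recommended.items():
        g = group_of[user]
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + (1 if rec else 0)
    rates = [hits.get(g, 0) / totals[g] for g in totals]
    return min(rates) / max(rates) if max(rates) > 0 else 1.0
```

A ratio well below 1.0 (e.g., a job shown to one gender group at twice the rate of another) would flag the kind of discrimination the abstract mentions.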
Fairness and Bias in Algorithmic Hiring
Employers are adopting algorithmic hiring technology throughout the
recruitment pipeline. Algorithmic fairness is especially applicable in this
domain due to its high stakes and structural inequalities. Unfortunately, most
work in this space provides partial treatment, often constrained by two
competing narratives, optimistically focused on replacing biased recruiter
decisions or pessimistically pointing to the automation of discrimination.
Whether, and more importantly what types of, algorithmic hiring can be less
biased and more beneficial to society than low-tech alternatives currently
remains unanswered, to the detriment of trustworthiness. This multidisciplinary
survey caters to practitioners and researchers with a balanced and integrated
coverage of systems, biases, measures, mitigation strategies, datasets, and
legal aspects of algorithmic hiring and fairness. Our work supports a
contextualized understanding and governance of this technology by highlighting
current opportunities and limitations, providing recommendations for future
work to ensure shared benefits for all stakeholders.