Essays In Algorithmic Market Design Under Social Constraints
Rapid technological advances over the past few decades, in particular the rise of the internet, have significantly reshaped and expanded the meaning of our everyday social activities, including our interactions with our social circle, the media, and our political and economic activities.
This dissertation aims to tackle some of the unique societal challenges underlying the design of automated online platforms that interact with people and organizations---namely, those imposed by legal, ethical, and strategic considerations.
I narrow attention to fairness considerations, learning with repeated trials, and competition for market share. In each case, I investigate the broad issue in a particular context (i.e., an online market) and present the solution my research offers to the problem in that application.
Addressing interdisciplinary problems, such as the ones in this dissertation, requires drawing ideas and techniques from various disciplines, including theoretical computer science, microeconomics, and applied statistics.
The research presented here utilizes a combination of theoretical and data-analysis tools to shed light on some of the key challenges in designing algorithms for today's online markets, including crowdsourcing and labor markets, online advertising, and social networks, among others.
Assessing AI Impact Assessments: A Classroom Study
Artificial Intelligence Impact Assessments ("AIIAs"), a family of tools that
provide structured processes to imagine the possible impacts of a proposed AI
system, have become an increasingly popular proposal to govern AI systems.
Recent efforts from government or private-sector organizations have proposed
many diverse instantiations of AIIAs, which take a variety of forms ranging
from open-ended questionnaires to graded scorecards. However, to date there has
been limited evaluation of existing AIIA instruments. We conduct a classroom
study (N = 38) at a large research-intensive university (R1) in an elective
course focused on the societal and ethical implications of AI. We assign
students to different organizational roles (for example, an ML scientist or
product manager) and ask participant teams to complete one of three existing AI
impact assessments for one of two imagined generative AI systems. In our
thematic analysis of participants' responses to pre- and post-activity
questionnaires, we find preliminary evidence that impact assessments can
influence participants' perceptions of the potential risks of generative AI
systems, and the level of responsibility held by AI experts in addressing
potential harm. We also discover a consistent set of limitations shared by
several existing AIIA instruments, which we group into concerns about their
format and content, as well as the feasibility and effectiveness of the
activity in foreseeing and mitigating potential harms. Drawing on the findings
of this study, we provide recommendations for future work on developing and
validating AIIAs. Comment: 9 pages, 4 figures, to appear in the NeurIPS 2023 Regulatable ML Workshop.
SSVEP Extraction Applying Wavelet Transform and Decision Tree With Bayes Classification
Background: SSVEP signals are usable in BCI (brain-computer interface) systems to help paralyzed users control a wheelchair more comfortably. Methods: In this study, we extracted the SSVEP component from EEG signals, computed features from it, ranked those features to select the best among them, and finally applied the selected features for classification. We report the accuracy achieved with each combination of methods. Results: The Bayes classifier applied to the features selected by the t-test method achieved the highest accuracy (83.32%), followed by SVM with t-test feature selection (79.62%). With decision-tree feature selection, Bayes classification reached 79.13% and SVM classification 78.70%. Conclusion: Bayes obtained better results than SVM under t-test feature selection.
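The abstract names the pipeline (feature extraction, t-test ranking, Bayes classification) but not its details. As an illustrative sketch only, the ranking-then-classification stage can be mimicked on synthetic features: rank features by an absolute Welch t-statistic between the two classes, then fit a Gaussian naive Bayes model on the top-ranked columns. All function names and the synthetic data are assumptions, not the paper's implementation.

```python
import math
import random

def welch_t(a, b):
    # Absolute Welch t-statistic between two samples of one feature.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return abs(ma - mb) / math.sqrt(va / len(a) + vb / len(b) + 1e-12)

def rank_features(X, y, k):
    # Indices of the k features with the largest |t| between classes 0 and 1.
    scores = []
    for j in range(len(X[0])):
        a = [row[j] for row, lab in zip(X, y) if lab == 0]
        b = [row[j] for row, lab in zip(X, y) if lab == 1]
        scores.append((welch_t(a, b), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]

class GaussianNaiveBayes:
    # Gaussian naive Bayes over the selected feature columns (uniform priors).
    def fit(self, X, y):
        self.stats = {}
        for c in set(y):
            rows = [row for row, lab in zip(X, y) if lab == c]
            cols = list(zip(*rows))
            means = [sum(col) / len(col) for col in cols]
            self.stats[c] = [
                (m, sum((v - m) ** 2 for v in col) / len(col) + 1e-6)
                for m, col in zip(means, cols)
            ]
        return self

    def predict(self, row):
        def log_lik(c):
            return sum(
                -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
                for v, (mu, var) in zip(row, self.stats[c])
            )
        return max(self.stats, key=log_lik)

# Synthetic stand-in for SSVEP band-power features: class 1 has elevated
# power on feature 0; the remaining features are pure noise.
random.seed(0)
X, y = [], []
for i in range(200):
    lab = i % 2
    X.append([random.gauss(2.0 if (j == 0 and lab) else 0.0, 1.0) for j in range(8)])
    y.append(lab)

top = rank_features(X, y, 2)
model = GaussianNaiveBayes().fit([[r[j] for j in top] for r in X], y)
acc = sum(model.predict([r[j] for j in top]) == lab for r, lab in zip(X, y)) / len(X)
```

On this toy data the t-test ranking recovers the informative feature, and the Bayes classifier separates the classes well above chance; the paper's reported accuracies of course come from real EEG features, not this sketch.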
A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity
We map the recently proposed notions of algorithmic fairness to economic
models of equality of opportunity (EOP), an extensively studied ideal of
fairness in political philosophy. We formally show that through our conceptual
mapping, many existing definitions of algorithmic fairness, such as predictive
value parity and equality of odds, can be interpreted as special cases of EOP.
In this respect, our work serves as a unifying moral framework for
understanding existing notions of algorithmic fairness. Most importantly, this
framework allows us to explicitly spell out the moral assumptions underlying
each notion of fairness, and interpret recent fairness impossibility results in
a new light. Last but not least, inspired by luck egalitarian models of EOP,
we propose a new family of measures for algorithmic fairness. We illustrate our
proposal empirically and show that employing a measure of algorithmic
(un)fairness when its underlying moral assumptions are not satisfied, can have
devastating consequences for the disadvantaged group's welfare.
Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
We draw attention to an important, yet largely overlooked aspect of
evaluating fairness for automated decision making systems---namely risk and
welfare considerations. Our proposed family of measures corresponds to the
long-established formulations of cardinal social welfare in economics, and is
justified by the Rawlsian conception of fairness behind a veil of ignorance.
The convex formulation of our welfare-based measures of fairness allows us to
integrate them as a constraint into any convex loss minimization pipeline. Our
empirical analysis reveals interesting trade-offs between our proposal and (a)
prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of
individual fairness. Furthermore and perhaps most importantly, our work
provides both heuristic justification and empirical evidence suggesting that a
lower-bound on our measures often leads to bounded inequality in algorithmic
outcomes; hence presenting the first computationally feasible mechanism for
bounding individual-level inequality. Comment: Thirty-second Conference on Neural Information Processing Systems (NIPS 2018).
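The abstract describes a family of cardinal social-welfare measures with a convex formulation but does not state the functional form. As a hedged illustration of the general idea (not the paper's exact measure), a CES-style welfare family over individual benefits behaves as described: the utilitarian case is blind to inequality, while inequality-averse members of the family, approaching Rawlsian maximin, score an unequal benefit profile lower.

```python
import math

def welfare(benefits, p):
    """CES-style cardinal social welfare of individual benefits b_i > 0.

    p = 1 is the utilitarian mean; p = 0 is the geometric (Nash) mean;
    p -> -inf approaches min(b_i), the Rawlsian maximin criterion.
    For p <= 1 this function is concave in the benefit vector, so a
    lower bound welfare(b, p) >= tau is a convex constraint.
    """
    n = len(benefits)
    if p == 0:
        return math.exp(sum(math.log(b) for b in benefits) / n)
    return (sum(b ** p for b in benefits) / n) ** (1.0 / p)

equal = [0.5, 0.5, 0.5, 0.5]    # same benefit for everyone
unequal = [0.9, 0.8, 0.2, 0.1]  # same mean benefit, high inequality

utilitarian_gap = welfare(equal, 1) - welfare(unequal, 1)    # 0: p = 1 is blind
rawlsian_gap = welfare(equal, -2) - welfare(unequal, -2)     # > 0: inequality hurts
```

Because the measure is concave for p <= 1, a lower bound on it can be added as a constraint to any convex loss minimization, which is the mechanism the abstract describes for bounding individual-level inequality; the specific CES form here is an assumption for illustration.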
Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems
The prevailing discourse around AI ethics lacks the language and formalism
necessary to capture the diverse ethical concerns that emerge when AI systems
interact with individuals. Drawing on Sen and Nussbaum's capability approach,
we present a framework formalizing a network of ethical concepts and
entitlements necessary for AI systems to confer meaningful benefit or
assistance to stakeholders. Such systems enhance stakeholders' ability to
advance their life plans and well-being while upholding their fundamental
rights. We characterize two necessary conditions for morally permissible
interactions between AI systems and those impacted by their functioning, and
two sufficient conditions for realizing the ideal of meaningful benefit. We
then contrast this ideal with several salient failure modes, namely, forms of
social interactions that constitute unjustified paternalism, coercion,
deception, exploitation and domination. The proliferation of incidents
involving AI in high-stakes domains underscores the gravity of these issues and
the imperative to take an ethics-led approach to AI systems from their
inception.
Fair equality of chances for prediction-based decisions
This article presents a fairness principle for evaluating decision-making based on predictions: a decision rule is unfair when the individuals directly impacted by the decisions who are equal with respect to the features that justify inequalities in outcomes do not have the same statistical prospects of being benefited or harmed by them, irrespective of their socially salient morally arbitrary traits. The principle can be used to evaluate prediction-based decision-making from the point of view of a wide range of antecedently specified substantive views about justice in outcome distributions
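The principle lends itself to a simple empirical check: within each stratum of the features that justify inequalities in outcomes, compare the rate of benefit across socially salient groups. The following sketch, with an assumed record schema of (justifying feature value, group, benefited), computes the largest between-group gap per stratum; the function name and data are hypothetical, not from the article.

```python
from collections import defaultdict

def chance_gaps(records):
    """For each value of the justifying feature, the max gap across
    socially salient groups in the empirical probability of benefit."""
    tally = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for feat, group, benefited in records:
        cell = tally[feat][group]
        cell[0] += benefited   # count of benefited individuals
        cell[1] += 1           # count of all individuals in this cell
    return {
        feat: max(b / n for b, n in groups.values())
              - min(b / n for b, n in groups.values())
        for feat, groups in tally.items()
    }

# Toy data: among "high"-qualification individuals, group A is benefited
# at rate 0.8 vs. 0.5 for group B; among "low", both groups at rate 0.2.
records = (
    [("high", "A", 1)] * 8 + [("high", "A", 0)] * 2 +
    [("high", "B", 1)] * 5 + [("high", "B", 0)] * 5 +
    [("low", "A", 1)] * 2 + [("low", "A", 0)] * 8 +
    [("low", "B", 1)] * 2 + [("low", "B", 0)] * 8
)
gaps = chance_gaps(records)
```

Under the article's principle, a nonzero gap within a stratum of equally situated individuals flags the decision rule as unfair; which features count as justifying is supplied by the antecedently specified view about justice, not by the code.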
The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements
Prior work has established the importance of integrating AI ethics topics
into computer and data sciences curricula. We provide evidence suggesting that
one of the critical objectives of AI Ethics education must be to raise
awareness of AI harms. While there are various sources to learn about such
harms, The AI Incident Database (AIID) is one of the few attempts at offering a
relatively comprehensive database indexing prior instances of harms or near
harms stemming from the deployment of AI technologies in the real world. This
study assesses the effectiveness of AIID as an educational tool to raise
awareness regarding the prevalence and severity of AI harms in socially
high-stakes domains. We present findings obtained through a classroom study
conducted at an R1 institution as part of a course focused on the societal and
ethical considerations around AI and ML. Our qualitative findings characterize
students' initial perceptions of core topics in AI ethics and their desire to
close the educational gap between their technical skills and their ability to
think systematically about ethical and societal aspects of their work. We find
that interacting with the database helps students better understand the
magnitude and severity of AI harms and instills in them a sense of urgency
around (a) designing functional and safe AI and (b) strengthening governance
and accountability mechanisms. Finally, we compile students' feedback about the
tool and our class activity into actionable recommendations for the database
development team and the broader community to improve awareness of AI harms in
AI ethics education. Comment: 37 pages, 11 figures; to appear in the proceedings of EAAMO 202