Language (Technology) is Power: A Critical Survey of "Bias" in NLP
We survey 146 papers analyzing "bias" in NLP systems, finding that their
motivations are often vague, inconsistent, and lacking in normative reasoning,
despite the fact that analyzing "bias" is an inherently normative process. We
further find that these papers' proposed quantitative techniques for measuring
or mitigating "bias" are poorly matched to their motivations and do not engage
with the relevant literature outside of NLP. Based on these findings, we
describe the beginnings of a path forward by proposing three recommendations
that should guide work analyzing "bias" in NLP systems. These recommendations
rest on a greater recognition of the relationships between language and social
hierarchies, encouraging researchers and practitioners to articulate their
conceptualizations of "bias"---i.e., what kinds of system behaviors are
harmful, in what ways, to whom, and why, as well as the normative reasoning
underlying these statements---and to center work around the lived experiences
of members of communities affected by NLP systems, while interrogating and
reimagining the power relations between technologists and such communities.
Criminal intent or cognitive dissonance: how does student self plagiarism fit into academic integrity?
The discourse of plagiarism is speckled with punitive terms not out of place in a police officer's notes: detection, prevention, misconduct, rules, regulations, conventions, transgression, consequences, deter, trap, etc. This crime and punishment paradigm tends to be the norm in academic settings. The learning and teaching paradigm assumes that students are not filled with criminal intent, but rather are confused by the novel academic culture and its values. The discourse of learning and teaching includes: development, guidance, acknowledgement, scholarly practice, communication, familiarity, culture. Depending on the paradigm adopted, universities, teachers, and students will either focus on policies, punishments, and ways to cheat the system, or on program design, assessments, and assimilating the values of academia. Self plagiarism is a pivotal issue that polarises these two paradigms. Viewed from a crime and punishment paradigm, self plagiarism is an intentional act of evading the required workload for a course by re-using previous work. Within a learning and teaching paradigm, self plagiarism is an oxymoron. We explore the differences between these two paradigms by using self plagiarism as a focal point.
Measuring justice in machine learning
How can we build more just machine learning systems? To answer this question,
we need to know both what justice is and how to tell whether one system is more
or less just than another. That is, we need both a definition and a measure of
justice. Theories of distributive justice hold that justice can be measured (in
part) in terms of the fair distribution of benefits and burdens across people
in society. Recently, the field known as fair machine learning has turned to
John Rawls's theory of distributive justice for inspiration and
operationalization. However, philosophers known as capability theorists have
long argued that Rawls's theory uses the wrong measure of justice, thereby
encoding biases against people with disabilities. If these theorists are right,
is it possible to operationalize Rawls's theory in machine learning systems
without also encoding its biases? In this paper, I draw on examples from fair
machine learning to suggest that the answer to this question is no: the
capability theorists' arguments against Rawls's theory carry over into machine
learning systems. But capability theorists don't only argue that Rawls's theory
uses the wrong measure, they also offer an alternative measure. Which measure
of justice is right? And has fair machine learning been using the wrong one?
Comment: Presented at the ACM Conference on Fairness, Accountability, and
Transparency (30 January 2020) and at the ACM SIGACCESS Conference on
Computers and Accessibility: Workshop on AI Fairness for People with
Disabilities (27 October 2019). Version v2: typos and formatting corrected.
Artificial Intelligence Fairness in the Context of Accessibility Research on Intelligent Systems for People who are Deaf or Hard of Hearing
We discuss issues of Artificial Intelligence (AI) fairness for people with disabilities, with examples drawn from our research on human-computer interaction (HCI) for AI-based systems for people who are Deaf or Hard of Hearing (DHH). In particular, we discuss the need for inclusion of data from people with disabilities in training sets, the lack of interpretability of AI systems, ethical responsibilities of access technology researchers and companies, the need for appropriate evaluation metrics for AI-based access technologies (to determine if they are ready to be deployed and if they can be trusted by users), and the ways in which AI systems influence human behavior and shape the set of abilities needed by users to successfully interact with computing systems.
Collaborative hybrid agent provision of learner needs using ontology based semantic technology
© Springer International Publishing AG 2017. This paper describes the use of Intelligent Agents and Ontologies to implement knowledge navigation and learner choice when interacting with complex information locations. The paper is in two parts: the first looks at how Agent Based Semantic Technology can be used to give users a more personalised experience as an individual. The paper then looks to generalise this technology to allow users to work with agents in hybrid group scenarios. In the context of University Learners, the paper outlines how we employ an Ontology of Student Characteristics to personalise information retrieval specifically suited to an individual's needs. Choice is not a simple "show me your hand and make me a match" but a deliberative artificial intelligence (AI) that uses an ontologically informed agent society to consider the weighted solution paths before choosing the appropriate best. The aim is to enrich the student experience and significantly re-route the student's journey. The paper uses knowledge-level interoperation of agents to personalise the learning space of students and deliver to them the information and knowledge to suit them best. The aim is to personalise their learning in the presentation/format that is most appropriate for their needs. The paper then generalises this Semantic Technology Framework using shared vocabulary libraries that enable individuals to work in groups with other agents, which might be other people or actually be AIs. The task they undertake is a formal assessment but the interaction mode is one of informal collaboration. Pedagogically this addresses issues of ensuring fairness between students, since we can ensure each has the same experience (as provided by the same set of Agents) as each other and an individual mark may be gained. This is achieved by forming a hybrid group of learner and AI Software Agents. Different agent architectures are discussed and a worked example presented.
The work here thus aims at fulfilling the student's needs both in the context of matching their needs and in allowing them to work in an Agent Based Synthetic Group. This in turn opens up new areas of potential collaborative technology.
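The abstract above describes an agent society that scores "weighted solution paths" against an Ontology of Student Characteristics before choosing the best match. A minimal sketch of that idea, assuming a simplified profile and two voting criteria (all names here, such as StudentProfile and score_path, are illustrative assumptions rather than the authors' actual implementation):

```python
from dataclasses import dataclass

@dataclass
class StudentProfile:
    # Simplified stand-in for an ontology of student characteristics.
    preferred_format: str  # e.g. "video" or "text"
    level: str             # e.g. "novice" or "advanced"

@dataclass
class SolutionPath:
    resource: str
    format: str
    level: str

def score_path(path: SolutionPath, profile: StudentProfile) -> float:
    """Each 'agent' contributes a weighted vote; here, two simple criteria."""
    score = 0.0
    if path.format == profile.preferred_format:
        score += 0.6  # weight of the hypothetical format-matching agent
    if path.level == profile.level:
        score += 0.4  # weight of the hypothetical level-matching agent
    return score

def choose_best(paths: list, profile: StudentProfile) -> SolutionPath:
    # Deliberation reduces here to picking the highest-weighted path.
    return max(paths, key=lambda p: score_path(p, profile))

profile = StudentProfile(preferred_format="video", level="novice")
paths = [
    SolutionPath("Lecture notes", "text", "novice"),
    SolutionPath("Intro screencast", "video", "novice"),
    SolutionPath("Research seminar", "video", "advanced"),
]
print(choose_best(paths, profile).resource)  # → Intro screencast
```

In the paper's framework the criteria and weights would come from the ontology and the agent society rather than being hard-coded; the sketch only shows how weighted deliberation over candidate paths can yield a single personalised choice.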