10 research outputs found
Decoding The Digital Fuku: Deciphering Colonial Legacies to Critically Assess ChatGPT in Dominican Education
Educational disparities within the Dominican Republic (DR) have long-standing
origins rooted in economic, political, and social inequity. Addressing these
challenges has necessarily called for capacity building with respect to
educational materials, high-quality instruction, and structural resourcing.
Generative AI tools like ChatGPT have begun to pique the interest of Dominican
educators due to their perceived potential to bridge these educational gaps.
However, a substantial body of AI fairness literature has documented ways AI
disproportionately reinforces power dynamics reflective of jurisdictions
driving AI development and deployment policies, collectively termed the AI
Global North. As such, indiscriminate adoption of this technology for DR
education, even in part, risks perpetuating forms of digital coloniality.
Therefore, this paper centers on AI-facilitated educational reform by
critically examining how AI-driven tools like ChatGPT may replicate facets of
digital colonialism in DR education. We provide a concise overview of
20th-century Dominican education reforms following the 1916 US occupation.
Then, we employ identified neocolonial aspects historically shaping Dominican
education to interrogate the perceived advantages of ChatGPT for contemporary
Dominican education, as outlined by a Dominican scholar. This work invites AI
Global North & South developers, stakeholders, and Dominican leaders alike to
exercise a relational contextualization of data-centric epistemologies like
ChatGPT, reaping their transformative benefits while remaining vigilant in
safeguarding Dominican digital sovereignty.
Should they? Mobile Biometrics and Technopolicy meet Queer Community Considerations
Smartphones are integral to our daily lives and activities, providing
everything from basic functions like texting and phone calls to more complex
motion-based functionalities like navigation, mobile gaming, and fitness
tracking. To
facilitate these functionalities, smartphones rely on integrated sensors like
accelerometers and gyroscopes. These sensors provide personalized measurements
that, in turn, contribute to tasks such as analyzing biometric data for mobile
health purposes. In addition to benefiting smartphone users, biometric data
holds significant value for researchers engaged in biometric identification
research. Nonetheless, utilizing this user data for biometric identification
tasks, such as gait and gender recognition, raises serious privacy, normative,
and ethical concerns, particularly within the queer community. Concerns of
algorithmic bias and algorithmically-driven dysphoria surface from a historical
backdrop of marginalization, surveillance, harassment, discrimination, and
violence against the queer community. In this position paper, we contribute to
the timely discourse on safeguarding human rights within AI-driven systems by
providing a sense of challenges, tensions, and opportunities for new data
protections and biometric collection practices in a way that grapples with the
sociotechnical realities of the queer community.
Comment: To appear at the 2023 ACM Conference on Equity and Access in
Algorithms, Mechanisms, and Optimization.
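To make the sensing pipeline concrete, the sketch below (a simulated signal and a hypothetical helper name, not code from the paper) shows how walking cadence, one building block of gait recognition, can be estimated from an accelerometer trace:

```python
import numpy as np

def step_cadence(accel_mag, fs):
    """Estimate steps per second from an accelerometer magnitude signal
    by locating the dominant frequency of its spectrum."""
    x = accel_mag - accel_mag.mean()           # remove the gravity/DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

# Simulate a 2 Hz walking rhythm sampled at 50 Hz for 10 seconds.
fs = 50
t = np.arange(0, 10, 1 / fs)
accel = 9.8 + 1.5 * np.sin(2 * np.pi * 2.0 * t)
print(round(step_cadence(accel, fs), 1))  # → 2.0
```

Even this crude dominant-frequency estimate recovers the walking rhythm, which is part of why unrestricted motion-sensor access is so privacy-sensitive.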
Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning
Intentionally crafted adversarial samples have effectively exploited
weaknesses in deep neural networks. A standard method in adversarial robustness
assumes a framework to defend against samples crafted by minimally perturbing a
sample such that its corresponding model output changes. These sensitivity
attacks exploit the model's sensitivity toward task-irrelevant features.
Another form of adversarial sample can be crafted via invariance attacks, which
exploit the model underestimating the importance of relevant features. Previous
literature has indicated a tradeoff in defending against both attack types
within a strictly L_p bounded defense. To promote robustness toward both types
of attacks beyond Euclidean distance metrics, we use metric learning to frame
adversarial regularization as an optimal transport problem. Our preliminary
results indicate that regularizing over invariant perturbations in our
framework improves defense against both invariance and sensitivity attacks.
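To illustrate what a sensitivity attack means in the simplest setting, the following sketch (a toy linear classifier with hypothetical names, not the paper's model or defense) crafts a minimal L_inf-bounded perturbation that flips the model's decision:

```python
import numpy as np

def linear_logit(x, w, b):
    return x @ w + b

def sensitivity_attack(x, w, b, eps):
    """FGSM-style sensitivity attack: nudge x by at most eps per
    coordinate in the direction that most changes the model's output."""
    grad = w  # gradient of a linear logit w.r.t. x is just w
    direction = -np.sign(grad) if linear_logit(x, w, b) > 0 else np.sign(grad)
    return x + eps * direction

w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.3, 0.0])                   # logit = 0.3 → positive class
x_adv = sensitivity_attack(x, w, b, eps=0.2)
print(linear_logit(x, w, b) > 0, linear_logit(x_adv, w, b) > 0)  # → True False
```

An invariance attack does the opposite: it changes features the model should care about while the output stays fixed, which is why a single strictly L_p-bounded defense struggles to cover both attack types.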
Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness
Intersectionality is a critical framework that, through inquiry and praxis,
allows us to examine how social inequalities persist through domains of
structure and discipline. Given AI fairness' raison d'être of "fairness", we
argue that adopting intersectionality as an analytical framework is pivotal to
effectively operationalizing fairness. Through a critical review of how
intersectionality is discussed in 30 papers from the AI fairness literature, we
deductively and inductively: 1) map how intersectionality tenets operate within
the AI fairness paradigm and 2) uncover gaps between the conceptualization and
operationalization of intersectionality. We find that researchers
overwhelmingly reduce intersectionality to optimizing for fairness metrics over
demographic subgroups. They also fail to discuss their social context and,
when mentioning power, mostly situate it only within the AI pipeline. We: 3)
outline and assess the implications of these gaps for critical inquiry and
praxis, and 4) provide actionable recommendations for AI fairness researchers
to engage with intersectionality in their work by grounding it in AI
epistemology.
Comment: To appear at AIES 2023.
ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text Ambiguation to Expand Mental Health Care Delivery
Large language models have been useful in expanding mental health care
delivery. ChatGPT, in particular, has gained popularity for its ability to
generate human-like dialogue. However, data-sensitive domains -- including but
not limited to healthcare -- face challenges in using ChatGPT due to privacy
and data-ownership concerns. To enable its utilization, we propose a text
ambiguation framework that preserves user privacy. We ground this in the task
of addressing stress prompted by user-provided texts to demonstrate the
viability and helpfulness of privacy-preserved generations. Our results
suggest that ChatGPT's recommendations remain moderately helpful and relevant
even when the original user text is not provided.
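As a rough illustration of text ambiguation, the sketch below (hypothetical masking rules and function names; the paper's actual framework may differ) masks identifying specifics while keeping the emotional content that a stress-support response would need:

```python
import re

# Hypothetical ambiguation rules: mask details that could identify the
# user while leaving the emotional content of the text intact.
RULES = [
    (re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"), "[PERSON]"),  # full names
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),    # dates
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # emails
]

def ambiguate(text):
    for pattern, placeholder in RULES:
        text = pattern.sub(placeholder, text)
    return text

msg = "I'm stressed: my boss Jane Doe emailed jane@corp.com about 3/14/2024."
print(ambiguate(msg))
# → I'm stressed: my boss [PERSON] emailed [EMAIL] about [DATE].
```

The ambiguated text can then be sent to the model in place of the original, so the user's identifying details never leave the device.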
"I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation
Transgender and non-binary (TGNB) individuals disproportionately experience
discrimination and exclusion from daily life. Given the recent popularity and
adoption of language generation technologies, the potential to further
marginalize this population only grows. Although a multitude of NLP fairness
literature focuses on illuminating and addressing gender biases, assessing
gender harms for TGNB identities requires understanding how such identities
uniquely interact with societal gender norms and how they differ from gender
binary-centric perspectives. Such measurement frameworks inherently require
centering TGNB voices to help guide the alignment between gender-inclusive NLP
and whom they are intended to serve. Towards this goal, we ground our work in
the TGNB community and existing interdisciplinary literature to assess how the
social reality surrounding experienced marginalization of TGNB persons
contributes to and persists within Open Language Generation (OLG). This social
knowledge serves as a guide for evaluating popular large language models (LLMs)
on two key aspects: (1) misgendering and (2) harmful responses to gender
disclosure. To do this, we introduce TANGO, a dataset of template-based
real-world text curated from a TGNB-oriented community. We discover a dominance
of binary gender norms reflected by the models; LLMs least misgendered subjects
in generated text when triggered by prompts whose subjects used binary
pronouns. Meanwhile, misgendering was most prevalent when triggering generation
with singular they and neopronouns. When prompted with gender disclosures, TGNB
disclosure generated the most stigmatizing language and scored most toxic, on
average. Our findings warrant further research on how TGNB harms manifest in
LLMs and serve as a broader case study toward concretely grounding the design
of gender-inclusive AI in community voices and interdisciplinary literature.
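In the spirit of the misgendering evaluation described above, a minimal lexical check (hypothetical pronoun sets and function name, not the TANGO metric itself) might flag generations that use third-person pronouns outside a subject's declared set:

```python
PRONOUN_SETS = {
    "she":  {"she", "her", "hers"},
    "he":   {"he", "him", "his"},
    "they": {"they", "them", "their", "theirs"},
    "xe":   {"xe", "xem", "xyr"},  # one of several neopronoun sets
}

def misgenders(generation, subject_set):
    """Flag a generation that uses third-person pronouns outside the
    subject's declared pronoun set (a rough, lexical proxy)."""
    tokens = {t.strip(".,!?;:").lower() for t in generation.split()}
    others = set().union(*(s for k, s in PRONOUN_SETS.items()
                           if k != subject_set)) - PRONOUN_SETS[subject_set]
    return bool(tokens & others)

print(misgenders("They went to their office.", "they"))  # → False
print(misgenders("He said she was late.", "they"))       # → True
```

A real evaluation would need coreference resolution and the community-curated templates the paper describes; this lexical proxy only sketches the idea.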
Bound by the Bounty: Collaboratively Shaping Evaluation Processes for Queer AI Harms
Bias evaluation benchmarks and dataset and model documentation have emerged
as central processes for assessing the biases and harms of artificial
intelligence (AI) systems. However, these auditing processes have been
criticized for their failure to integrate the knowledge of marginalized
communities and consider the power dynamics between auditors and the
communities. Consequently, modes of bias evaluation have been proposed that
engage impacted communities in identifying and assessing the harms of AI
systems (e.g., bias bounties). Even so, asking what marginalized communities
want from such auditing processes has been neglected. In this paper, we ask
queer communities for their positions on, and desires from, auditing processes.
To this end, we organized a participatory workshop to critique and redesign
bias bounties from queer perspectives. We found that when given space, the
scope of feedback from workshop participants goes far beyond what bias bounties
afford, with participants questioning the ownership, incentives, and efficacy
of bounties. We conclude by advocating for community ownership of bounties and
complementing bounties with participatory processes (e.g., co-creation).
Comment: To appear at AIES 2023.
Queer In AI: A Case Study in Community-Led Participatory AI
We present Queer in AI as a case study for community-led participatory design
in AI. We examine how participatory design and intersectional tenets started
and shaped this community's programs over the years. We discuss different
challenges that emerged in the process, look at ways this organization has
fallen short of operationalizing participatory and intersectional principles,
and then assess the organization's impact. Queer in AI provides important
lessons and insights for practitioners and theorists of participatory methods
broadly through its rejection of hierarchy in favor of decentralization,
success at building aid and programs by and for the queer community, and effort
to change actors and institutions outside of the queer community. Finally, we
theorize how communities like Queer in AI contribute to the participatory
design in AI more broadly by fostering cultures of participation in AI,
welcoming and empowering marginalized participants, critiquing poor or
exploitative participatory practices, and bringing participation to
institutions outside of individual research projects. Queer in AI's work serves
as a case study of grassroots activism and participatory methods within AI,
demonstrating the potential of community-led participatory methods and
intersectional praxis, while also providing challenges, case studies, and
nuanced insights to researchers developing and using participatory methods.
Comment: To appear at FAccT 2023.
Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning (Student Abstract)
Intentionally crafted adversarial samples have effectively exploited weaknesses in deep neural networks. A standard method in adversarial robustness assumes a framework to defend against samples crafted by minimally perturbing a sample such that its corresponding model output changes. These sensitivity attacks exploit the model's sensitivity toward task-irrelevant features. Another form of adversarial sample can be crafted via invariance attacks, which exploit the model underestimating the importance of relevant features. Previous literature has indicated a tradeoff in defending against both attack types within a strictly L_p bounded defense. To promote robustness toward both types of attacks beyond Euclidean distance metrics, we use metric learning to frame adversarial regularization as an optimal transport problem. Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance and sensitivity attacks.
Adequacy of Existing Surveillance Systems to Monitor Racism, Social Stigma and COVID Inequities: A Detailed Assessment and Recommendations.
The populations impacted most by COVID are also impacted by racism and related social stigma; however, traditional surveillance tools may not capture the intersectionality of these relationships. We conducted a detailed assessment of diverse surveillance systems and databases to identify characteristics, constraints, and best practices that might inform the development of a novel COVID surveillance system that achieves these aims. We used subject area expertise, an expert panel, and CDC guidance to generate an initial list of N > 50 existing surveillance systems as of 29 October 2020, and systematically excluded those not advancing the project aims. This yielded a final reduced group (n = 10) of COVID surveillance systems (n = 3), other public health systems (n = 4), and systems tracking racism and/or social stigma (n = 3), which we evaluated using CDC evaluation criteria and Critical Race Theory. Overall, the most important contribution of COVID-19 surveillance systems is their real-time (e.g., daily) or near-real-time (e.g., weekly) reporting; however, they are severely constrained by the lack of complete data on race/ethnicity, making it difficult to monitor racial/ethnic inequities. Other public health systems have validated measures of psychosocial and behavioral factors and some racism- or stigma-related factors but lack the timeliness needed in a pandemic. Systems that monitor racism report historical data on, for instance, hate crimes, but do not capture current patterns, and it is unclear how representative the findings are. Though existing surveillance systems offer important strengths for monitoring health conditions or racism and related stigma, new surveillance strategies are needed to monitor their intersecting relationships more rigorously.