
    Identifying Gendered Language

    Gendered language refers to the use of words that indicate the gender of an individual. It can be explicit, where gender is directly indicated by the specific words used (e.g., mother, she, man), or implicit, where societal roles and behaviors convey a person's gender, for example the expectation that women display communal traits (e.g., affectionate, caring, gentle) and men display agentic traits (e.g., assertive, competitive, decisive). The presence of gendered language in natural language processing (NLP) systems can reinforce gender stereotypes and bias. Our work introduces an approach to creating gendered language datasets using ChatGPT. These datasets are designed to support data-driven methods for identifying gender stereotypes and mitigating gender bias. The approach focuses on generating implicit gendered language that captures and reflects stereotypical characteristics or traits associated with a specific gender. This is achieved by constructing prompts for ChatGPT that incorporate gender-coded words sourced from gender-coded lexicons. The evaluation of the generated datasets demonstrates that they contain good examples of English-language gendered sentences that can be categorized as either contradictory to or consistent with gender stereotypes. Additionally, the generated data exhibits a strong gender bias.
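    As a rough illustration of the prompt-construction step the abstract describes, the sketch below builds prompts from a handful of gender-coded trait words. The lexicon entries and the prompt template are assumptions for illustration only; the paper's actual lexicons and prompt wording may differ.

    # Illustrative sketch: building ChatGPT prompts from gender-coded lexicon words.
    # The trait lists and template below are hypothetical, not the authors' actual prompts.
    feminine_coded = ["affectionate", "caring", "gentle"]
    masculine_coded = ["assertive", "competitive", "decisive"]

    PROMPT_TEMPLATE = (
        "Write a short sentence about a person at work that implicitly conveys the trait "
        "'{trait}' without explicitly stating the person's gender."
    )

    def build_prompts(traits):
        """Return one prompt per gender-coded trait word."""
        return [PROMPT_TEMPLATE.format(trait=t) for t in traits]

    if __name__ == "__main__":
        for prompt in build_prompts(feminine_coded + masculine_coded):
            # Each prompt would be sent to ChatGPT; the generated sentences can then be
            # labelled as consistent with or contradictory to the stereotype.
            print(prompt)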

    Designing Technology to Support Safety for Transgender Women & Non-Binary People of Color

    This work provides a preliminary understanding of how transgender women and non-binary people of color experience violence and manage safety, and what opportunities exist for HCI to support the safety needs of this community. We conducted nine interviews to understand how participants practice safety and what role technology played, if any, in these experiences. Interviewees expressed physical and psychological safety concerns, and managed safety by informing friends of their location using digital technologies, making compromises, and avoiding law enforcement. We designed U-Signal, a wearable technology and accompanying smartphone application prototype, to increase physical safety, decrease safety concerns, reduce violence, and help build community.

    CHInclusion: Working toward a more inclusive HCI community

    HCI has a growing body of work regarding important social and community issues, as well as various grassroots communities working to make CHI more international and inclusive. In this workshop, we will build on this work: first reflecting on the contemporary CHI climate, and then developing an actionable plan towards making CHI 2019 and subsequent SIGCHI events and sister conferences more inclusive for all.

    50 Years of Test (Un)fairness: Lessons for Machine Learning

    Quantitative definitions of what is unfair and what is fair have been introduced in multiple disciplines for well over 50 years, including in education, hiring, and machine learning. We trace how the notion of fairness has been defined within the testing communities of education and hiring over the past half century, exploring the cultural and social context in which different fairness definitions have emerged. In some cases, earlier definitions of fairness are similar or identical to definitions of fairness in current machine learning research, and foreshadow current formal work. In other cases, insights into what fairness means and how to measure it have largely gone overlooked. We compare past and current notions of fairness along several dimensions, including the fairness criteria, the focus of the criteria (e.g., a test, a model, or its use), the relationship of fairness to individuals, groups, and subgroups, and the mathematical method for measuring fairness (e.g., classification, regression). This work points the way towards future research and measurement of (un)fairness that builds from our modern understanding of fairness while incorporating insights from the past.
    Comment: FAT* '19: Conference on Fairness, Accountability, and Transparency (FAT* '19), January 29-31, 2019, Atlanta, GA, USA
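    To make the idea of a quantitative fairness criterion concrete, the sketch below computes one common current classification-fairness measure, the demographic parity difference between two groups. This is a generic illustration of the kind of formal definition the abstract refers to, not a metric taken from the paper.

    # Illustrative sketch (not from the paper): demographic parity difference,
    # i.e., the gap in positive-prediction rates between two groups.
    import numpy as np

    def demographic_parity_difference(y_pred, group):
        """Absolute difference in positive-prediction rates between group 0 and group 1.

        y_pred : array-like of 0/1 model predictions
        group  : array-like of 0/1 group membership labels
        """
        y_pred = np.asarray(y_pred)
        group = np.asarray(group)
        rate_a = y_pred[group == 0].mean()
        rate_b = y_pred[group == 1].mean()
        return abs(rate_a - rate_b)

    # Toy example: eight individuals, four per group; a value of 0 would indicate parity.
    print(demographic_parity_difference([1, 0, 1, 1, 0, 0, 1, 0],
                                         [0, 0, 0, 0, 1, 1, 1, 1]))  # prints 0.5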

    Hidden figures: reframing gender prototyping from a communication science perspective

    On 26 July 2013, the United Nations launched the Free & Equal campaign in Cape Town, South Africa, to mark the global commitment to end gender discrimination. This event can be positioned in the "fourth wave of feminism" referred to by leading feminist scholars such as Gouws (2010). However, while multiple disciplinary discourses herald progress with regard to women's liberation, current developments pertaining to gender identities in particular illuminate that, despite a number of battles won along the way, the wars on exclusion, discrimination, patriarchy, and misogyny have not yet ended. This article aims to reflect on the current status quo of feminism by drawing on the work of seminal communication scholars such as George Herbert Mead, Erving Goffman, and Serge Moscovici, whose work on individual and social identity sheds light on the rapidly changing processes of gender prototyping. At present, the United Nations recognises 71 gender identities, while hegemonic heterosexual domination and discrimination still persist regardless of legislation and activism aimed at inclusion and non-discrimination of all gender identities. An overview of current research findings illuminates the need for employee activism, and the development of representative woman gender prototypes in particular, to harness cultures of inclusivity and non-discrimination in the workplace.

    Improving fairness in machine learning systems: What do industry practitioners need?

    The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent work has focused on the development of algorithmic tools to assess and mitigate such unfairness. If these tools are to have a positive impact on industry practice, however, it is crucial that their design be informed by an understanding of real-world needs. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we conduct the first systematic investigation of commercial product teams' challenges and needs for support in developing fairer ML systems. We identify areas of alignment and disconnect between the challenges faced by industry practitioners and solutions proposed in the fair ML research literature. Based on these findings, we highlight directions for future ML and HCI research that will better address industry practitioners' needs.
    Comment: To appear in the 2019 ACM CHI Conference on Human Factors in Computing Systems (CHI 2019)

    Designing Trans Technology: Defining Challenges and Envisioning Community-Centered Solutions

    Transgender and non-binary people face substantial challenges in the world, ranging from social inequities and discrimination to lack of access to resources. Though technology cannot fully solve these problems, technological solutions may help to address some of the challenges trans people and communities face. We conducted a series of participatory design sessions (total N = 21 participants) to understand trans people's most pressing challenges and to involve this population in the design process. We detail four types of technologies trans people envision: technologies for changing bodies, technologies for changing appearances / gender expressions, technologies for safety, and technologies for finding resources. We found that centering trans people in the design process enabled inclusive technology design that primarily focused on sharing community resources and prioritized connection between community members.

    Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning

    A growing body of work shows that many problems in fairness, accountability, transparency, and ethics in machine learning systems are rooted in decisions surrounding the data collection and annotation process. In spite of its fundamental nature, however, data collection remains an overlooked part of the machine learning (ML) pipeline. In this paper, we argue that a new specialization should be formed within ML that is focused on methodologies for data collection and annotation: efforts that require institutional frameworks and procedures. Specifically for sociocultural data, parallels can be drawn from archives and libraries. Archives are the longest-standing communal effort to gather human information, and archival scholars have already developed the language and procedures to address and discuss many challenges pertaining to data collection, such as consent, power, inclusivity, transparency, and ethics & privacy. We discuss these five key approaches in document collection practices in archives that can inform data collection in sociocultural ML. By showing data collection practices from another field, we encourage ML research to be more cognizant and systematic in data collection and to draw from interdisciplinary expertise.
    Comment: To be published in Conference on Fairness, Accountability, and Transparency (FAT* '20), January 27-30, 2020, Barcelona, Spain. ACM, New York, NY, USA, 11 pages