Mistreatment in Childbirth: A mixed-methods approach to understand the mental health sequelae of mistreatment in maternity care among a diverse cohort of birthing persons in New York City
The present study aimed to explore the objective and subjective experiences of “mistreatment” in maternity care in a diverse cohort of women who gave birth in New York City hospitals, to identify the prevalence and risk factors of mistreatment, and to measure the relationship between mistreatment and mental health (Bohren et al., 2015). The study utilized a mixed-methods, cross-sectional approach. To collect the quantitative data, 109 participants <1 year postpartum completed an anonymous online survey comprising a self-report measure of demographic, health, and mental health information, several mental health questionnaires, and two measures of mistreatment in maternity care. Eight of these participants were interviewed about their childbirth experience. The quantitative data were analyzed using linear regression, moderation analysis, and path analysis, and the qualitative data were thematically coded and then analyzed using Reflexive Thematic Analysis (RTA). These data were then triangulated using a mixed-methods model of mistreatment.
In total, 10-15% of the sample experienced mistreatment in the form of Low to Very Low respect and/or autonomy in decision making in their maternity care. Forms of mistreatment included unwanted procedures, provider pressure to undergo procedures, dismissal of women’s concerns, racial discrimination, abandonment, and medical neglect. Approximately 25% of respondents received an unwanted intervention; this was the most significant predictor of mistreatment. This relationship was moderated by race, parity, and birth plan. Black, Latinx, and Hispanic women experienced the lowest levels of respect in maternity care. Mistreatment in maternity care was correlated with increased risk for postpartum mental illness: decreased respect and autonomy in childbirth were associated with increased postpartum depression and PTSD symptoms.
Eight themes were identified in the qualitative analysis: Discrimination and Unfair Treatment, Confusion and Abandonment, Disregard for Patient Autonomy, Hospital-Level Drivers of Mistreatment, Women Treated as Passive, Normalization of Mistreatment, Self-Advocacy and Vulnerability, and Reclaiming Power through Knowledge. Together, the triangulated mixed-methods data were fit to render a comprehensive “model of mistreatment” to illustrate direct and indirect relationships between mistreatment, mental health, race, trauma history, and childbirth preparation. These findings demonstrate that mistreatment is a multi-determined phenomenon that is interdependent with mental health and requires systematic measurement in healthcare treatment, the integration of anti-racist and patient-centered care, and improved childbirth education for patients.
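As a rough illustration, the moderation analysis described in the abstract above (testing whether race moderates the effect of an unwanted intervention on respect in care) is conventionally run as a linear regression with an interaction term. The sketch below assumes hypothetical variable names and invented data; it is not the study's actual model or dataset.

```python
# Minimal sketch of a moderation analysis via an interaction term in OLS.
# All variable names and values are illustrative, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "respect":       [4, 2, 5, 1, 3, 2, 5, 1],  # respect score (higher = more respect)
    "unwanted_proc": [0, 1, 0, 1, 0, 1, 0, 1],  # 1 = received an unwanted intervention
    "minoritized":   [0, 0, 0, 0, 1, 1, 1, 1],  # binary moderator (illustrative)
})

# "a * b" in a statsmodels formula expands to a + b + a:b;
# the a:b interaction coefficient carries the moderation effect.
model = smf.ols("respect ~ unwanted_proc * minoritized", data=df).fit()
print(model.params)
```

A significant `unwanted_proc:minoritized` coefficient would indicate that the effect of an unwanted intervention on respect differs by group, which is what a moderation claim asserts.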
UMSL Bulletin 2023-2024
The 2023-2024 Bulletin and Course Catalog for the University of Missouri–St. Louis.
Derogatory, Racist, and Discriminatory Speech (DRDS) in Video Gaming
Video games have been examined for their effects on cognition, learning, health, and physiological arousal, yet research on social dynamics within video gaming is limited. Studies have documented the presence of derogation, racism, and discrimination in this anonymous medium. However, gamers’ firsthand experiences are typically examined qualitatively. Thus, this study aimed to establish a quantitative baseline for the frequency of derogatory, racist, and discriminatory speech (DRDS) in gaming. DRDS frequency, sexual harassment, and hate speech measures were administered to 150 individuals from online forums and social media groups. Descriptive and inferential analyses were used to gauge which factors affected DRDS rates. Sex, intergroup and fast-paced game types, time played with others, and identity portrayal showed positive correlations with DRDS. Results indicate that an array of complex social and developmental factors contributes to experiencing, perceiving, and personally using DRDS. Implications include psychosocial health impacts similar to everyday harassment, with women being at higher risk and age as a contributing factor.
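The correlational baseline the abstract describes reduces, at its simplest, to computing a correlation coefficient between a predictor and DRDS frequency. The sketch below uses invented data and a hypothetical pairing (hours played with others vs. reported DRDS exposure); it only illustrates the kind of analysis, not the study's results.

```python
# Minimal sketch of a Pearson correlation between one predictor and
# DRDS frequency. Data are invented for illustration only.
from scipy.stats import pearsonr

hours_with_others = [1, 3, 5, 8, 10, 12, 15, 20]  # hours/week in multiplayer
drds_frequency    = [0, 1, 1, 2, 3, 3, 4, 5]      # self-reported DRDS exposure score

r, p = pearsonr(hours_with_others, drds_frequency)
print(f"r = {r:.2f}, p = {p:.3f}")  # a positive r mirrors the reported pattern
```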
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
UMSL Bulletin 2022-2023
The 2022-2023 Bulletin and Course Catalog for the University of Missouri–St. Louis.
Lost in Translation: Large Language Models in Non-English Content Analysis
In recent years, large language models (e.g., OpenAI's GPT-4, Meta's LLaMA, Google's PaLM) have become the dominant approach for building AI systems to analyze and generate language online. However, the automated systems that increasingly mediate our interactions online -- such as chatbots, content moderation systems, and search engines -- are primarily designed for, and work far more effectively in, English than in the world's other 7,000 languages. Recently, researchers and technology companies have attempted to extend the capabilities of large language models into languages other than English by building what are called multilingual language models.
In this paper, we explain how these multilingual language models work and explore their capabilities and limits. Part I provides a simple technical explanation of how large language models work, why there is a gap in available data between English and other languages, and how multilingual language models attempt to bridge that gap. Part II accounts for the challenges of doing content analysis with large language models in general and multilingual language models in particular. Part III offers recommendations for companies, researchers, and policymakers to keep in mind when considering researching, developing, and deploying large and multilingual language models.
"HOT" ChatGPT: The promise of ChatGPT in detecting and discriminating hateful, offensive, and toxic comments on social media
Harmful content is pervasive on social media, poisoning online communities and negatively impacting participation. A common approach to address this issue is to develop detection models that rely on human annotations. However, the tasks required to build such models expose annotators to harmful and offensive content and may require significant time and cost to complete. Generative AI models have the potential to understand and detect harmful content. To investigate this potential, we used ChatGPT and compared its performance with MTurker annotations for three frequently discussed concepts related to harmful content: Hateful, Offensive, and Toxic (HOT). We designed five prompts to interact with ChatGPT and conducted four experiments eliciting HOT classifications. Our results show that ChatGPT can achieve an accuracy of approximately 80% when compared to MTurker annotations. Specifically, the model displays a more consistent classification for non-HOT comments than HOT comments compared to human annotations. Our findings also suggest that ChatGPT classifications align with provided HOT definitions, but ChatGPT classifies "hateful" and "offensive" as subsets of "toxic." Moreover, the choice of prompts used to interact with ChatGPT impacts its performance. Based on these insights, our study provides several meaningful implications for employing ChatGPT to detect HOT content, particularly regarding the reliability and consistency of its performance, its understanding and reasoning of the HOT concept, and the impact of prompts on its performance. Overall, our study provides guidance about the potential of using generative AI models to moderate large volumes of user-generated content on social media.
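The prompt-based classification workflow described above can be sketched in a few lines: build a single-label prompt per comment, send it to a chat model, and normalize the reply. The prompt wording, label set, and model name below are assumptions for illustration, not the authors' exact prompts or setup.

```python
# Illustrative sketch of eliciting a HOT (hateful/offensive/toxic) label
# for one comment. Prompt text and label handling are assumed, not the
# study's actual design.

HOT_LABELS = {"hateful", "offensive", "toxic", "none"}

def build_hot_prompt(comment: str) -> str:
    """Construct a single-label classification prompt for one comment."""
    return (
        "Classify the following social media comment as exactly one of: "
        "hateful, offensive, toxic, or none. Reply with the label only.\n\n"
        f"Comment: {comment}"
    )

def parse_hot_label(reply: str) -> str:
    """Normalize a model reply to one of the expected labels."""
    label = reply.strip().lower().rstrip(".")
    return label if label in HOT_LABELS else "none"

# The actual model call would go through a chat-completions client,
# e.g. (hypothetical usage, network call omitted here):
# reply = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": build_hot_prompt(comment)}],
# ).choices[0].message.content
# label = parse_hot_label(reply)
```

Constraining the reply to a fixed label set and normalizing it defensively matters in practice, since the paper notes that classification consistency varies with the prompt.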
“So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
Transformative artificially intelligent tools, such as ChatGPT, designed to generate sophisticated text indistinguishable from that produced by a human, are applicable across a wide range of contexts. The technology presents opportunities as well as challenges, often ethical and legal, and has the potential for both positive and negative impacts on organisations, society, and individuals. Offering multi-disciplinary insight into some of these, this article brings together 43 contributions from experts in fields such as computer science, marketing, information systems, education, policy, hospitality and tourism, management, publishing, and nursing. The contributors acknowledge ChatGPT’s capabilities to enhance productivity and suggest that it is likely to offer significant gains in the banking, hospitality and tourism, and information technology industries, and enhance business activities, such as management and marketing. Nevertheless, they also consider its limitations, disruptions to practices, threats to privacy and security, and consequences of biases, misuse, and misinformation. Opinion is split on whether ChatGPT’s use should be restricted or legislated. Drawing on these contributions, the article identifies questions requiring further research across three thematic areas: knowledge, transparency, and ethics; digital transformation of organisations and societies; and teaching, learning, and scholarly research. The avenues for further research include: identifying skills, resources, and capabilities needed to handle generative AI; examining biases of generative AI attributable to training datasets and processes; exploring business and societal contexts best suited for generative AI implementation; determining optimal combinations of human and generative AI for various tasks; identifying ways to assess accuracy of text produced by generative AI; and uncovering the ethical and legal issues in using generative AI across different contexts.