The State of Diversity and Inclusion in Apache: A Pulse Check
Diversity and inclusion in open source software (OSS) is a multifaceted
concept that arises from differences in contributors' gender, seniority,
language, region, and other characteristics. D&I has received growing attention
in OSS ecosystems and projects, and various programs have been implemented to
foster contributor diversity. However, we do not yet know how the state of D&I
is evolving. By understanding the state of D&I in OSS projects, the community
can develop new and adjust current strategies to foster diversity among
contributors and gain insights into the mechanisms and processes that
facilitate the development of inclusive communities. In this paper, we report
and compare the results of two surveys of Apache Software Foundation (ASF)
contributors conducted over two years (n=624 & n=432), considering a variety of
D&I aspects. We see improvements in engagement among those traditionally
underrepresented in OSS, particularly those who are in a gender minority or not
confident in English. Yet, the gender gap in the number of contributors
remains. We expect this study to help communities tailor their efforts in
promoting D&I in OSS.
Can AI Serve as a Substitute for Human Subjects in Software Engineering Research?
Research within sociotechnical domains, such as Software Engineering,
fundamentally requires a thorough consideration of the human perspective.
However, traditional qualitative data collection methods suffer from challenges
related to scale, labor intensity, and the increasing difficulty of participant
recruitment. This vision paper proposes a novel approach to qualitative data
collection in software engineering research by harnessing the capabilities of
artificial intelligence (AI), especially large language models (LLMs) like
ChatGPT. We explore the potential of AI-generated synthetic text as an
alternative source of qualitative data, by discussing how LLMs can replicate
human responses and behaviors in research settings. We examine the application
of AI in automating data collection across various methodologies, including
persona-based prompting for interviews, multi-persona dialogue for focus
groups, and mega-persona responses for surveys. Additionally, we discuss the
prospective development of new foundation models aimed at emulating human
behavior in observational studies and user evaluations. By simulating human
interaction and feedback, these AI models could offer scalable and efficient
means of data generation, while providing insights into human attitudes,
experiences, and performance. We discuss several open problems and research
opportunities to implement this vision and conclude that while AI could augment
aspects of data gathering in software engineering research, it cannot replace
the nuanced, empathetic understanding inherent in human subjects in some cases,
and an integrated approach where both AI and human-generated data coexist will
likely yield the most effective outcomes.
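The three prompting modes the paper names (persona-based interviews, multi-persona focus groups, mega-persona surveys) can be illustrated by the prompt-construction step alone. The persona fields and templates below are hypothetical sketches, not the paper's instruments; a real study would send these prompts to an LLM such as ChatGPT.

```python
def interview_prompt(persona: dict, question: str) -> str:
    """Persona-based prompting: one synthetic participant answers one question."""
    return (
        f"You are {persona['role']} with {persona['experience']} of experience. "
        f"Answer the interview question in the first person.\n"
        f"Question: {question}"
    )

def focus_group_prompt(personas: list, topic: str) -> str:
    """Multi-persona dialogue: several personas discuss a topic in turn."""
    roster = "; ".join(f"{p['name']} ({p['role']})" for p in personas)
    return (
        f"Simulate a focus-group discussion among: {roster}. "
        f"Each participant speaks at least twice.\n"
        f"Topic: {topic}"
    )

def survey_prompt(population: str, n: int, question: str) -> str:
    """Mega-persona response: one prompt stands in for a population sample."""
    return (
        f"Acting as {n} distinct {population}, give {n} independent "
        f"Likert-scale (1-5) answers, each with a one-line justification.\n"
        f"Question: {question}"
    )

p = {"name": "Ana", "role": "a senior backend developer", "experience": "10 years"}
print(interview_prompt(p, "How do you decide when code is ready to merge?"))
```

The division of labor is the point: the researcher controls the sampling frame (which personas, how many) while the model supplies the synthetic responses, which is what makes the approach scale relative to recruiting human participants.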
Development Context Driven Change Awareness and Analysis Framework
Recent work on workspace monitoring allows conflict prediction early in the development process; however, these approaches mostly use syntactic differencing techniques to compare different program versions. In contrast, traditional change-impact analysis techniques analyze related versions of the program only after the code has been checked into the master repository. We propose a novel approach, DeCAF (Development Context Analysis Framework), that leverages the development context to scope a change impact analysis technique. The goal is to characterize the impact of each developer on other developers in the team. Various client applications, such as task prioritization, early conflict detection, and providing advice on testing, can benefit from such a characterization. The DeCAF framework leverages information from the development context to bound the iDiSE change impact analysis technique to analyze only the parts of the code base that are of interest. Bounding the analysis can enable DeCAF to efficiently compute the impact of changes using a combination of program dependence and symbolic execution based approaches.
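The "bounding" idea can be sketched in miniature: restrict a forward traversal of a program-dependence graph to the entities in a developer's development context, rather than analyzing the whole program. The graph and the context set below are illustrative stand-ins, not the actual iDiSE analysis.

```python
from collections import deque

def bounded_impact(dep_graph, changed, context):
    """Forward reachability over a dependence graph, restricted to nodes
    inside the developer's context (the bound)."""
    impacted, queue = set(), deque(n for n in changed if n in context)
    while queue:
        node = queue.popleft()
        if node in impacted:
            continue
        impacted.add(node)
        for dep in dep_graph.get(node, ()):
            if dep in context and dep not in impacted:
                queue.append(dep)
    return impacted

# Toy dependence graph: edges point from an entity to the code that depends on it.
graph = {"parser": ["typechecker", "formatter"],
         "typechecker": ["codegen"],
         "formatter": []}
context = {"parser", "typechecker", "codegen"}  # developer's workspace scope

print(sorted(bounded_impact(graph, {"parser"}, context)))
# → ['codegen', 'parser', 'typechecker']
```

Without the context bound, "formatter" would also be traversed; the bound is what keeps the analysis proportional to the developer's working set rather than the whole code base.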
SocioEconomicMag Meets a Platform for SES-Diverse College Students: A Case Study
Emerging research shows that individual differences in how people use
technology sometimes cluster by socioeconomic status (SES) and that when
technology is not socioeconomically inclusive, low-SES individuals may abandon
it. To understand how to improve technology's SES-inclusivity, we present a
multi-phase case study on SocioEconomicMag (SESMag), an emerging inspection
method for socioeconomic inclusivity. In our 16-month case study, a software
team developing a learning management platform used SESMag to evaluate and then
to improve their platform's SES-inclusivity. The results showed that (1) the
practitioners identified SES-inclusivity bugs in 76% of the features they
evaluated; (2) these inclusivity bugs actually arise among low-SES college
students; and (3) the SESMag process pointed ways towards fixing these bugs.
Finally, (4) a user study with SES-diverse college students showed that the
improvements to the platform's SES-inclusivity eradicated 45-54% of the bugs;
for some types of bugs, the bug instance eradication rate was 80% or higher.
How to Debug Inclusivity Bugs? An Empirical Investigation of Finding-to-Fixing with Information Architecture
Background: Although some previous research has found ways to find inclusivity bugs (biases in software that introduce inequities among cognitively diverse individuals), little attention has been paid to how to go about fixing such bugs. We hypothesized that Information Architecture (IA)--the way information is organized, structured and labeled--may provide the missing link from finding inclusivity bugs in information-intensive technology to fixing them. Aims: To investigate whether Information Architecture provides an effective way to remove inclusivity bugs from technology, we created Why/Where/Fix, an inclusivity debugging paradigm that adds inclusivity fault localization via IA. Method: We conducted a qualitative empirical investigation in three stages. (Stage 1): An Open Source (OSS) team used the Why (which cognitive styles) and Where (which IA) parts to guide their understanding of inclusivity bugs in their OSS project's infrastructure. (Stage 2): The OSS team used the outcomes of Stage 1 to produce IA-based fixes (Fix) to the inclusivity bugs they had found. (Stage 3): We brought OSS newcomers into the lab to see whether and how the IA-based fixes had improved equity and inclusion across cognitively diverse OSS newcomers. Results: Information Architecture was a source of numerous inclusivity bugs. The OSS team's use of IA to fix these bugs reduced the number of inclusivity bugs participants experienced by 90%. Conclusions: These results provide encouraging evidence that using IA through Why/Where/Fix can help technologists to address inclusivity bugs in information-intensive technologies such as OSS project infrastructures.