
    Designing AI-Based Systems for Qualitative Data Collection and Analysis

    With the continuously increasing impact of information systems (IS) on private and professional life, it has become crucial to integrate users into the IS development process. One of the critical reasons IS projects fail is the inability to accurately meet user requirements, resulting from an incomplete or inaccurate collection of requirements during the requirements elicitation (RE) phase. While interviews are the most effective RE technique, they face several challenges that make them a questionable fit for the numerous, heterogeneous, and geographically distributed users of contemporary IS. Three significant challenges limit the involvement of large numbers of users in IS development processes today. Firstly, there is a lack of tool support for conducting interviews with a wide audience. While initial studies show promising results in using text-based conversational agents (chatbots) as interviewer substitutes, we lack design knowledge for AI-based chatbots that leverage established interviewing techniques in the context of RE. If chatbot-based interviewing is applied successfully, vast amounts of qualitative data can be collected. Secondly, there is a need for tool support enabling the analysis of large amounts of qualitative interview data. Once again, while modern technologies such as machine learning (ML) promise a remedy, concrete implementations of automated analysis for unstructured qualitative data lag behind that promise. Interactive ML (IML) systems are needed to support the coding of qualitative data, centered on simple interaction formats for teaching the ML system and on transparent, understandable suggestions that support data analysis. Thirdly, while organizations rely on online feedback (e.g., from app stores) to inform requirements without explicitly conducting RE interviews, we know little about the demographics of who gives feedback and what motivates them to do so. Using online feedback as a requirements source risks capturing solely the concerns and desires of vocal user groups. With this thesis, I tackle these three challenges in two parts. In part I, I address the first and second challenges by presenting and evaluating two innovative AI-based systems: a chatbot for requirements elicitation and an IML system to semi-automate qualitative coding. In part II, I address the third challenge by presenting results from a large-scale study on IS feedback engagement. With both parts, I contribute prescriptive knowledge for designing AI-based qualitative data collection and analysis systems and help to establish a deeper understanding of the coverage of existing data collected from online sources. Besides providing concrete artifacts, architectures, and evaluations, I demonstrate the application of a chatbot interviewer to understand user values in smartphones and provide guidance for extending feedback coverage to underrepresented IS user groups.
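
    The IML idea sketched in this abstract, simple interaction formats for teaching the model plus transparent suggestions, can be illustrated with a minimal human-in-the-loop coding loop. This is a hypothetical sketch in Python with scikit-learn, not the thesis's actual system; the coding scheme, example segments, and the retrain-after-every-decision policy are all assumptions.

```python
# Minimal human-in-the-loop sketch of interactive ML (IML) for qualitative
# coding: the model suggests a code with a confidence score, the analyst
# confirms or corrects it, and the model retrains on the growing label set.
# Coding scheme and data are illustrative assumptions, not the thesis's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Seed set: a few segments the analyst has already coded by hand.
segments = [
    "The app keeps crashing when I upload photos",
    "I wish I could export my data as a spreadsheet",
    "Support answered within an hour, very helpful",
]
codes = ["bug_report", "feature_request", "praise"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(segments, codes)

def suggest_code(segment):
    """Return the suggested code and its probability (the 'transparent
    and understandable suggestion' shown to the analyst)."""
    probs = model.predict_proba([segment])[0]
    best = probs.argmax()
    return model.classes_[best], probs[best]

# Interaction loop: suggestion -> human decision -> retrain.
for segment in ["Crashes every time I open the camera",
                "Please add a dark mode"]:
    suggestion, confidence = suggest_code(segment)
    print(f"Suggested '{suggestion}' ({confidence:.2f}) for: {segment}")
    accepted = suggestion            # stand-in for the analyst's confirmation
    segments.append(segment)
    codes.append(accepted)
    model.fit(segments, codes)       # simple full retrain after each decision
```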

    Towards More Efficient Behavioral Coding of Teamwork via Machine Learning

    Teams have been an integral part of organizational success for several decades, and as such, researchers have sought to better understand all aspects of work teams. To better inform research and practice, Marks, Mathieu, and Zaccaro (2001) advanced a theory and framework of team processes that has become a seminal piece in our field. Their theory proposed that ten team processes could be mapped onto three second-order constructs (transition, action, and interpersonal phases). Mathieu and colleagues (2019) developed and validated a measure designed specifically to align with the Marks et al. (2001) framework. While much needed, this measure is not without limitations, namely its self-report nature and associated subjectivity. The current study proposes a means of overcoming those limitations by using machine learning to automate the Mathieu et al. (2019) measure. The study used traditional human coding methods to code data from three different sources, covering teams across various contexts: NASA HERA teams, medical teams, and student engineering teams. The researcher then trained various models using Natural Language Classifier software (provided through IBM Watson) to create an automated coding scheme. The results of this study are mixed: although various models were trained and tested according to the Marks et al. (2001) framework, their accuracy did not meet acceptable standards. This study nonetheless provides a fruitful avenue for future research; the models can be refined by collecting further data and retraining them.
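
    The core task here, mapping human-coded team utterances onto the transition, action, and interpersonal constructs, is a standard multiclass text classification problem. The sketch below uses scikit-learn as an open-source stand-in for the IBM Watson Natural Language Classifier named above; the utterances and their labels are invented for illustration, not the study's data.

```python
# Illustrative sketch: automating behavioral coding of teamwork as multiclass
# text classification into the Marks et al. (2001) second-order constructs.
# scikit-learn stands in for IBM Watson Natural Language Classifier; the
# utterances and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

utterances = [
    "Let's decide who handles navigation before we start",    # transition
    "Our goal for this phase is to finish the checklist",     # transition
    "We should set a backup plan for the docking procedure",  # transition
    "Adjust the valve now, the pressure is rising",           # action
    "Monitor the readings while I reconfigure the panel",     # action
    "Log the anomaly and keep tracking the fuel level",       # action
    "Don't worry, we'll get through this together",           # interpersonal
    "Great job on that fix, team",                            # interpersonal
    "I know this is stressful, take a short break",           # interpersonal
]
labels = (["transition"] * 3) + (["action"] * 3) + (["interpersonal"] * 3)

X_train, X_test, y_train, y_test = train_test_split(
    utterances, labels, test_size=0.33, stratify=labels, random_state=42)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Comparing predictions against held-out human codes is how accuracy
# would be judged against the standard the study mentions.
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```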

    Algorithmic governance: Developing a research agenda through the power of collective intelligence

    We are living in an algorithmic age where mathematics and computer science are coming together in powerful new ways to influence, shape, and guide our behaviour and the governance of our societies. As these algorithmic governance structures proliferate, it is vital that we ensure their effectiveness and legitimacy. That is, we need to ensure that they are an effective means of achieving a legitimate policy goal, and that they are also procedurally fair, open, and unbiased. But how can we ensure that algorithmic governance structures are both? This article shares the results of a collective intelligence workshop that addressed exactly this question. The workshop brought together a multidisciplinary group of scholars to consider (a) barriers to legitimate and effective algorithmic governance and (b) the research methods needed to address the nature and impact of specific barriers. An interactive management workshop technique was used to harness the collective intelligence of this multidisciplinary group. This method enabled participants to produce a framework and research agenda for those concerned about algorithmic governance. We outline this research agenda below, providing a detailed map of the key research themes, questions, and methods that workshop participants felt ought to be pursued. Through the method of collective intelligence, this builds in a unique way upon existing work on research agendas for critical algorithm studies.

    CoAIcoder: Examining the Effectiveness of AI-assisted Collaborative Qualitative Analysis

    While individual-level AI-assisted analysis has been extensively explored in previous studies, AI-assisted collaborative qualitative analysis (CQA) remains relatively unexplored. After identifying CQA practices and design opportunities through formative interviews, we introduce our collaborative qualitative coding tool, CoAIcoder, together with four different collaboration methods. We then conducted a between-subjects study in which 32 pairs of users trained in CQA worked through three commonly used coding phases under the four methods. Our results suggest that CoAIcoder, which employs AI with a Shared Model, could improve the efficiency of the coding process in CQA by fostering a quicker shared understanding and promoting early-stage discussion, though potentially at the cost of reduced code diversity. We also underscore a trade-off between coders' level of independence and the coding outcome when humans collaborate during the early coding stages. Lastly, we identify design implications that could inspire and inform the future design of CQA systems.
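
    The "Shared Model" can be read as one suggestion model pooled across both coders, as opposed to one model per coder. The sketch below is a hypothetical reconstruction from the abstract alone; CoAIcoder's actual architecture may differ, and the class name, labels, and examples are assumptions.

```python
# Hypothetical sketch of a "Shared Model" for collaborative coding: every
# coder's confirmed (segment, code) decision lands in one pooled training
# set, so each coder immediately sees suggestions shaped by the partner's
# work. A "separate model" setup would instead train one model per coder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

class SharedCodingModel:
    def __init__(self):
        self.segments, self.codes = [], []
        self.model = None

    def add_label(self, coder, segment, code):
        """Record a coding decision from either coder, then retrain."""
        self.segments.append(segment)
        self.codes.append(code)
        if len(set(self.codes)) >= 2:      # need at least two classes to fit
            self.model = make_pipeline(TfidfVectorizer(), LogisticRegression())
            self.model.fit(self.segments, self.codes)

    def suggest(self, segment):
        """Suggest a code for a new segment (None until enough data)."""
        return self.model.predict([segment])[0] if self.model else None

shared = SharedCodingModel()
shared.add_label("coder_A", "The onboarding flow confused me", "usability")
shared.add_label("coder_B", "Checkout failed with an error", "bug")
# coder_A now receives suggestions influenced by coder_B's labels:
print(shared.suggest("Payment page throws an error"))
```

    The pooled training set is one plausible mechanism for both effects the study reports: a quicker shared understanding, because both coders see the same suggestions, and reduced code diversity, because they converge on the same suggestion space.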

    Presence of Social Presence during Disasters

    During emergencies, affected people use social media platforms for interaction and collaboration. Social media is used to ask for help, provide moral support, and help one another without direct face-to-face interaction. From a social presence point of view, we analyzed Twitter messages to understand how people cooperated and collaborated with each other during the heavy rains and subsequent floods in Chennai, India. We conducted a manual content analysis to build social presence classifiers around the concepts of intimacy and immediacy, which we then used to train a machine learning model to analyze the whole dataset of 1.65 million tweets. The results showed that the majority of immediacy tweets convey the needs and urgent requests for help of affected people. We argue that during disasters, online social presence creates a sense of responsibility and common identity among social media users, motivating them to participate in relief activities.
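
    The workflow described, manual coding of a sample followed by machine classification of the full corpus, scales to the 1.65 million tweets by scoring the archive in batches. The sketch below is illustrative only: the labels, example tweets, and file layout are assumptions, not the study's materials.

```python
# Illustrative pipeline: train on a manually coded sample, then score a
# large tweet archive in fixed-size batches so the full 1.65M-tweet
# dataset never has to sit in memory. Labels, tweets, and the line-per-
# tweet file format are assumptions for this sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Manually coded sample (stand-in for the study's content analysis).
sample_tweets = [
    "Need rescue boats near Adyar, water rising fast, please help",
    "Stay strong Chennai, we are with you",
    "Roads closed around the airport this morning",
]
sample_labels = ["immediacy", "intimacy", "other"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(sample_tweets, sample_labels)

def classify_archive(lines, batch_size=10000):
    """Yield (tweet, label) pairs, scoring the archive one batch at a time."""
    batch = []
    for line in lines:
        batch.append(line.strip())
        if len(batch) == batch_size:
            yield from zip(batch, model.predict(batch))
            batch = []
    if batch:
        yield from zip(batch, model.predict(batch))

# Usage with a hypothetical line-per-tweet file:
# from collections import Counter
# with open("chennai_floods_tweets.txt") as f:
#     counts = Counter(label for _, label in classify_archive(f))
```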

    The right kind of explanation: Validity in automated hate speech detection

    To quickly identify hate speech online, communication research offers a useful tool in the form of automatic content analysis. However, the combined methods of standardized manual content analysis and supervised text classification demand different quality criteria. This chapter shows that a more substantial examination of validity is necessary, since models often learn from spurious correlations or biases and researchers run the risk of drawing wrong inferences. To investigate how far theoretical concepts overlap with their technological operationalization, explainability methods are evaluated for their ability to explain what a model has learned. These methods prove to be of limited use for testing a model's validity when the generated explanations aim at sense-making rather than faithfulness to the model. The chapter ends with recommendations for the further interdisciplinary development of automatic content analysis.
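
    A concrete instance of the validity check discussed here is a local explanation method such as LIME, which reports the tokens driving an individual prediction. The sketch below probes a toy hate speech classifier; the training data is invented, and LIME is offered as one example of the kind of explainability method the chapter evaluates, not necessarily the one it uses.

```python
# Sketch: inspecting a toy hate speech classifier with LIME
# (pip install lime scikit-learn). If the explanation weights
# concentrate on spurious tokens (topic words rather than abusive
# language), the model's validity is in doubt even if its accuracy
# looks fine. Training data is invented for illustration.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "those people are vermin and should disappear",
    "I completely disagree with the new policy",
    "they are all criminals, deport every one of them",
    "great discussion in the comments today",
]
labels = ["hate", "neutral", "hate", "neutral"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

explainer = LimeTextExplainer(class_names=list(model.classes_))
explanation = explainer.explain_instance(
    "they are vermin", model.predict_proba, num_features=5)

# Per-token weights: a large weight on a non-abusive token would flag a
# spurious correlation rather than a valid operationalization of hate.
print(explanation.as_list())
```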