
    Natural Language Processing for Motivational Interviewing Counselling: Addressing Challenges in Resources, Benchmarking and Evaluation

    Motivational interviewing (MI) is a counselling style often used in healthcare to improve patient health and quality of life by promoting positive behaviour changes. Natural language processing (NLP) has been explored for supporting MI use cases such as insight/feedback generation and therapist training, for example by automatically assigning behaviour labels to therapist/client utterances and generating possible therapist responses. Despite this progress, significant challenges remain. The most prominent is the lack of publicly available, annotated MI dialogue corpora due to privacy constraints, which in turn leads to a lack of common benchmarks and poor reproducibility across studies. Furthermore, human evaluation for therapist response generation is expensive and difficult to scale because it depends on MI experts as evaluators. In this thesis, we address these challenges in four directions: low-resource NLP modelling, MI dialogue dataset creation, benchmark development for real-world applicable tasks, and a layperson-expert human evaluation study. First, we explore zero-shot binary empathy assessment at the utterance level. We experiment with a supervised approach that trains on heuristically constructed empathy vs. non-empathy contrasts in non-therapy dialogues. While this approach outperforms other models without empathy-aware training, it is still suboptimal, which highlights the need for a well-annotated MI dataset. Next, we create AnnoMI, the first publicly available dataset of expert-annotated MI dialogues. It contains MI conversations that demonstrate both high- and low-quality counselling, with extensive annotations by domain experts covering key MI attributes. We also conduct comprehensive analyses of the dataset.
    Then, we investigate two AnnoMI-based real-world applicable tasks: predicting the current-turn therapist/client behaviour given the utterance, and forecasting the next-turn therapist behaviour given the dialogue history. We find that language models (LMs) perform well on predicting therapist behaviours, with good generalisability to new dialogue topics. However, LMs have suboptimal forecasting performance, which reflects therapists' flexibility: multiple optimal next-turn actions are possible. Lastly, we ask both laypeople and experts to evaluate the generation of a crucial type of therapist response -- reflection -- on a key quality aspect: coherence and context-consistency. We find that laypeople are a viable alternative to experts, as laypeople show good agreement with each other and correlation with experts. We also find that a large LM generates mostly coherent and consistent reflections. Overall, the work of this thesis significantly broadens access to NLP for MI and presents a wide range of findings on related natural language understanding/generation tasks with a real-world focus. Thus, our contributions lay the groundwork for the broader NLP community to become more engaged in research for MI, which will ultimately improve the quality of life for recipients of MI counselling.
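The current-turn behaviour-prediction task described above can be sketched as a standard utterance classification problem. The following is a minimal illustrative sketch, using a TF-IDF plus logistic-regression pipeline as a stand-in for the language models discussed in the thesis; the utterances and the two behaviour labels are invented toy examples, not AnnoMI data or its actual label scheme.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy therapist utterances with invented behaviour labels
# (stand-ins for expert-annotated MI behaviour codes).
utterances = [
    "What brings you here today?",
    "It sounds like you are worried about your drinking.",
    "Have you thought about cutting down?",
    "You feel torn between wanting to change and fearing failure.",
]
labels = ["question", "reflection", "question", "reflection"]

# Pipeline: word n-gram TF-IDF features -> linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(utterances, labels)

# Predict the behaviour label of an unseen utterance.
print(clf.predict(["How often do you drink?"]))
```

In practice a fine-tuned language model would replace the TF-IDF pipeline, but the input/output shape of the task, utterance in, behaviour label out, is the same.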

    ACUTA Journal of Telecommunications in Higher Education

    In This Issue: President's Message; From the ACUTA CEO; Advertiser Index; Cables and the Cloud; Snapshot: Spending Update; High Expectations for the Campus Network; NMSU Builds a Better VoIP LAN; Harvard Turns to Technology for Teacher Evaluations; Online Education: Interesting but Not Transformational?; Campus Innovation and the Internet of Things; Face It...Google Glass Is Coming Your Way; Bandwidth 101; 2013 Institutional Excellence Award

    A National Dialogue on Health Information Technology and Privacy

    Increasingly, government leaders recognize that solving the complex problems facing America today will require more than simply keeping citizens informed. Meeting challenges like rising health care costs, climate change and energy independence requires an increased level of collaboration. Traditionally, government agencies have operated in silos -- separated not only from citizens, but from each other as well. Nevertheless, some have begun to reach across and outside of government to access the collective brainpower of organizations, stakeholders and individuals. The National Dialogue on Health Information Technology and Privacy was one such initiative. It was conceived by leaders in government who sought to demonstrate that it is not only possible, but beneficial and economical, to engage openly and broadly on an issue that is both national in scope and deeply relevant to the everyday lives of citizens. The results of this first-of-its-kind online event are captured in this report, together with important lessons learned along the way. This report served as a call to action. On his first full day in office, President Obama put government on notice that this new, more collaborative model can no longer be confined to the efforts of early adopters. He called upon every executive department and agency to "harness new technology" and make government "transparent, participatory, and collaborative." Government is quickly transitioning to a new generation of managers and leaders, for whom online collaboration is not a new frontier but a fact of everyday life.
    We owe it to them -- and the citizens we serve -- to recognize and embrace the myriad tools available to fulfill the promise of good government in the 21st Century. Key Findings: The Panel recommended that the Administration give stakeholders the opportunity to participate further in the discussion of health IT and privacy through broader outreach and by helping the public to understand the value of a person-centered view of healthcare information technology.

    Neural approaches to dialog modeling

    This thesis by article consists of four articles which contribute to the field of deep learning, specifically in understanding and learning neural approaches to dialog systems.
    The first article takes a step towards understanding whether commonly used neural dialog architectures effectively capture the information present in the conversation history. Through a series of perturbation experiments on popular dialog datasets, we find that commonly used neural dialog architectures like recurrent and transformer-based seq2seq models are rarely sensitive to most input context perturbations such as missing or reordered utterances, shuffled words, etc. The second article introduces a simple and cost-effective way to collect large-scale datasets for modeling task-oriented dialog systems. This approach avoids the requirement of a complex argument annotation schema. The initial release of the dataset includes 13,215 task-based dialogs comprising six domains and around 8k unique named entities, almost 8 times more than the popular MultiWOZ dataset. The third article proposes to improve response generation quality in open-domain dialog systems by jointly modeling the utterances with the dialog attributes of each utterance. Dialog attributes of an utterance refer to discrete features or aspects associated with an utterance like dialog acts, sentiment, emotion, speaker identity, speaker personality, etc. The final article introduces an embedding-free method to compute word representations on-the-fly. This approach significantly reduces the memory footprint, which facilitates deployment on memory-constrained devices. Apart from being independent of the vocabulary size, we find this approach to be inherently resilient to common misspellings.
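The perturbations mentioned in the first article can be sketched as simple transformations of a dialog history. The following is a minimal illustrative sketch with invented example utterances; the actual perturbation suite and datasets used in the article are not reproduced here.

```python
import random

def shuffle_words(utterance, rng):
    """Word-level perturbation: shuffle the tokens within one utterance."""
    tokens = utterance.split()
    rng.shuffle(tokens)
    return " ".join(tokens)

def reorder_utterances(history, rng):
    """Utterance-level perturbation: shuffle the turn order of the history."""
    perturbed = list(history)
    rng.shuffle(perturbed)
    return perturbed

def drop_utterances(history, k):
    """Utterance-level perturbation: drop the k oldest turns."""
    return history[k:]

rng = random.Random(0)
history = ["hi there", "how can i help", "my order is late", "what is the order id"]
print(reorder_utterances(history, rng))
print(drop_utterances(history, 2))
print(shuffle_words("my order is late", rng))
```

A model that is genuinely using the conversation history should change its response distribution under these transformations; the article's finding is that common seq2seq architectures often do not.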

    An AI Chatbot for Explaining Deep Reinforcement Learning Decisions of Service-oriented Systems

    Deep Reinforcement Learning (Deep RL) is increasingly used to cope with the open-world assumption in service-oriented systems. Deep RL was successfully applied to problems such as dynamic service composition, job scheduling, and offloading, as well as service adaptation. While Deep RL offers many benefits, understanding the decision-making of Deep RL is challenging because its learned decision-making policy essentially appears as a black box. Yet, understanding the decision-making of Deep RL is key to helping service developers perform debugging, supporting service providers in complying with relevant legal frameworks, and facilitating service users in building trust. We introduce Chat4XAI to facilitate the understanding of the decision-making of Deep RL by providing natural-language explanations. Compared with visual explanations, the reported benefits of natural-language explanations include better understandability for non-technical users, increased user acceptance and trust, as well as more efficient explanations. Chat4XAI leverages modern AI chatbot technology and dedicated prompt engineering. Compared to earlier work on natural-language explanations using classical software-based dialogue systems, using an AI chatbot eliminates the need for eliciting and defining potential questions and answers up-front. We prototypically realize Chat4XAI using OpenAI's ChatGPT API and evaluate the fidelity and stability of its explanations using an adaptive service exemplar. Comment: To be published at 21st Int'l Conference on Service-Oriented Computing (ICSOC 2023), Rome, Italy, November 28-December 1, 2023, ser. LNCS, F. Monti, S. Rinderle-Ma, A. Ruiz Cortes, Z. Zheng, M. Mecella, Eds., Springer, 202
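The prompt-engineering step described above can be illustrated as assembling the RL decision context into a natural-language explanation request. The sketch below is hypothetical: the field names (`latency_ms`, `error_rate`, `load`), the action names, and the prompt wording are invented for illustration and are not the actual Chat4XAI prompts; the string produced would then be sent to a chatbot API.

```python
def build_explanation_prompt(state, action, q_values):
    """Assemble a natural-language explanation request for an RL decision.

    state: dict of observed service metrics (hypothetical field names).
    action: the adaptation the learned policy selected.
    q_values: estimated action values, used to ground the explanation.
    """
    state_desc = ", ".join(f"{k}={v}" for k, v in sorted(state.items()))
    ranked = sorted(q_values.items(), key=lambda kv: kv[1], reverse=True)
    values_desc = "; ".join(f"{a}: {q:.2f}" for a, q in ranked)
    return (
        "You are explaining a deep RL decision in a service-oriented system "
        "to a non-technical user.\n"
        f"Observed state: {state_desc}.\n"
        f"Chosen adaptation: {action}.\n"
        f"Estimated action values: {values_desc}.\n"
        "Explain in plain language why this adaptation was chosen."
    )

prompt = build_explanation_prompt(
    state={"latency_ms": 480, "error_rate": 0.07, "load": "high"},
    action="add_replica",
    q_values={"add_replica": 0.91, "no_op": 0.42, "degrade_quality": 0.55},
)
print(prompt)
```

Keeping the decision context explicit in the prompt is what lets the chatbot answer follow-up questions without the pre-defined question/answer catalogues of classical dialogue systems.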

    Data analytics on key indicators for the city's urban services and dashboards for leadership and decision-making

    Cities are continuously evolving human settlements. Our cities are under strain in an increasingly urbanized world, and planners, decision-makers, and communities must be ready to adapt. Data is an important resource for municipal administration. Some technologies aid in the collection, processing, and visualization of urban data, assisting in the interpretation and comprehension of how urban systems operate. The relationship between data analytics and smart cities has come to light in recent years as interest in both has grown. A smart city is a sophisticated network of interconnected systems that includes planners and inhabitants. Data analysis has the potential to support data-driven decision-making in the context of smart cities. Both urban managers and residents are becoming more interested in city dashboards. Dashboards may collect, display, analyze, and provide information on regional performance to support sustainable smart-city development. In order to assist decision-making processes and enhance the performance of cities, we examine how dashboards might be used to acquire accurate and representative information regarding urban challenges. This chapter presents data analytics on key indicators for the city's urban services and dashboards for leadership and decision-making. Proposals for urban dashboards include a single web page with consolidated information, real-time data streams relevant to planners and decision-makers as well as residents' everyday lives, and site analytics as a method to assess user interactions and preferences. Keywords: dashboard, data analytics, smart city, sustainability