
    The pragmatics of specialized communication

    This article aims to highlight the importance of pragmatics in relation to specialized communication. The structure, content, and terminology of specialized texts are affected by factors such as the communicative situation itself and the prior knowledge, intentions, expectations, and beliefs of the text's sender. Conveying such meaning is difficult even within a single language. When it must be conveyed between two languages, as in any act of translation, the difficulties multiply. For this reason, it is essential that translators be aware of how pragmatics, more than any other component of language, can decisively affect their professional activity.

    Enhance the Language Ability of Humanoid Robot NAO through Deep Learning to Interact with Autistic Children

    Autism spectrum disorder (ASD) is a life-long neurological disability, and a cure has not yet been found. ASD begins early in childhood and lasts throughout a person's life. Through early intervention, many actions can be taken to improve children's quality of life. Robots are one of the best choices for accompanying children with autism. However, most robots' dialogue systems use traditional techniques to produce responses and cannot produce meaningful answers when a conversation has not been recorded in their database. The main contribution of our work is the incorporation of a conversation model into an actual robot system for supporting children with autism. We present the use of a neural network model as the generative conversational agent, which aims to generate meaningful and coherent dialogue responses given the dialogue history. The proposed model shares an embedding layer between the encoding and decoding processes. It differs from the canonical Seq2Seq model, in which the encoder output is used only to set up the initial state of the decoder, in order to avoid favoring short and unconditional responses with high prior probability. To improve sensitivity to context, we changed the input method of the model to better adapt to the utterances of children with autism. We adopted transfer learning so that the proposed model learns the characteristics of dialogue with autistic children and to address the problem of an insufficient dialogue corpus. Experiments showed that the proposed method was superior to the canonical Seq2Seq model and the GAN-based dialogue model on both automatic evaluation indicators and human evaluation, pushing the BLEU precision to 0.23, the greedy matching score to 0.69, the embedding average score to 0.82, the vector extrema score to 0.55, the skip-thought score to 0.65, the KL divergence score to 5.73, and the EMD score to 12.21.
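    As a concrete illustration of the shared-embedding idea described above, the sketch below builds a minimal encoder-decoder in which one embedding table serves both the encoder and the decoder, and the encoder's final state is also fed to the decoder as a per-step context vector rather than only initializing it. This is a hedged sketch under assumptions: the class name SharedEmbeddingSeq2Seq, the GRU backbone, the layer sizes, and the context-feeding scheme are illustrative choices, not the authors' published implementation.

        import torch
        import torch.nn as nn

        class SharedEmbeddingSeq2Seq(nn.Module):
            """Minimal encoder-decoder with one embedding layer shared by both sides (illustrative)."""

            def __init__(self, vocab_size: int, embed_dim: int = 256, hidden_dim: int = 512):
                super().__init__()
                self.embedding = nn.Embedding(vocab_size, embed_dim)  # shared by encoder and decoder
                self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
                # The decoder also receives the encoder's final state as context at every step.
                self.decoder = nn.GRU(embed_dim + hidden_dim, hidden_dim, batch_first=True)
                self.out = nn.Linear(hidden_dim, vocab_size)

            def forward(self, src_ids: torch.Tensor, tgt_ids: torch.Tensor) -> torch.Tensor:
                _, hidden = self.encoder(self.embedding(src_ids))      # encode the dialogue history
                tgt_emb = self.embedding(tgt_ids)                      # reuse the same embeddings
                context = hidden[-1].unsqueeze(1).expand(-1, tgt_emb.size(1), -1)
                dec_out, _ = self.decoder(torch.cat([tgt_emb, context], dim=-1), hidden)
                return self.out(dec_out)                               # (batch, tgt_len, vocab) logits

        # Hypothetical usage with random token ids (teacher forcing during training):
        model = SharedEmbeddingSeq2Seq(vocab_size=8000)
        logits = model(torch.randint(0, 8000, (2, 20)), torch.randint(0, 8000, (2, 12)))

    Feeding the encoder state at every decoding step is one simple way to keep the response conditioned on the dialogue history throughout generation, which is the behavior the abstract contrasts with the canonical Seq2Seq setup.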

    How should a virtual agent present psychoeducation?

    BACKGROUND AND OBJECTIVE: With the rise of autonomous e-mental health applications, virtual agents can play a major role in improving trustworthiness, therapy outcome, and adherence. In these applications, it is important that patients adhere in the sense that they perform the tasks, but also that they adhere to the specific recommendations on how to do them well. One important construct in improving adherence is psychoeducation, information on the why and how of therapeutic interventions. In an e-mental health context, this can be delivered in two different ways: verbally, by a (virtual) embodied conversational agent, or simply as text on the screen.

    Towards Explainable and Safe Conversational Agents for Mental Health: A Survey

    Virtual Mental Health Assistants (VMHAs) are seeing continual advancements to support the overburdened global healthcare system, which receives 60 million primary care visits and 6 million Emergency Room (ER) visits annually. These systems are built by clinical psychologists, psychiatrists, and Artificial Intelligence (AI) researchers for Cognitive Behavioral Therapy (CBT). At present, the role of VMHAs is to provide emotional support through information, focusing less on developing a reflective conversation with the patient. A more comprehensive, safe, and explainable approach is required to build responsible VMHAs that can ask follow-up questions or provide well-informed responses. This survey offers a systematic critical review of the existing conversational agents in mental health, followed by new insights into the improvement of VMHAs with contextual knowledge, datasets, and their emerging role in clinical decision support. We also provide new directions toward enriching the user experience of VMHAs with explainability, safety, and wholesome trustworthiness. Finally, we provide evaluation metrics and practical considerations for VMHAs beyond the current literature to build trust between VMHAs and patients in active communications.

    Before they can teach they must talk: on some aspects of human-computer interaction


    Technical Metrics Used to Evaluate Health Care Chatbots: Scoping Review

    Dialog agents (chatbots) have a long history of application in health care, where they have been used for tasks such as supporting patient self-management and providing counseling. Their use is expected to grow with increasing demands on health systems and improving artificial intelligence (AI) capability. Approaches to the evaluation of health care chatbots, however, appear to be diverse and haphazard, resulting in a potential barrier to the advancement of the field. This study aims to identify the technical (nonclinical) metrics used by previous studies to evaluate health care chatbots. Studies were identified by searching 7 bibliographic databases (eg, MEDLINE and PsycINFO) in addition to conducting backward and forward reference list checking of the included studies and relevant reviews. The studies were independently selected by two reviewers, who then extracted data from the included studies. Extracted data were synthesized narratively by grouping the identified metrics into categories based on the aspect of chatbots that the metrics evaluated. Of the 1498 citations retrieved, 65 studies were included in this review. Chatbots were evaluated using 27 technical metrics, which were related to chatbots as a whole (eg, usability, classifier performance, speed), response generation (eg, comprehensibility, realism, repetitiveness), response understanding (eg, chatbot understanding as assessed by users, word error rate, concept error rate), and esthetics (eg, appearance of the virtual agent, background color, and content). The technical metrics of health chatbot studies were diverse, with survey designs and global usability metrics dominating. The lack of standardization and paucity of objective measures make it difficult to compare the performance of health chatbots and could inhibit advancement of the field. We suggest that researchers more frequently include metrics computed from conversation logs. In addition, we recommend the development of a framework of technical metrics with recommendations for specific circumstances for their inclusion in chatbot studies.
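    To give a flavor of the log-derived metrics recommended above, the sketch below computes one of the response-understanding measures the review mentions, word error rate, directly from a pair of transcript strings. It is a minimal illustration under assumptions: the function name and the example utterances are hypothetical and do not come from the review.

        def word_error_rate(reference: str, hypothesis: str) -> float:
            """Word-level edit distance (substitutions, insertions, deletions) over reference length."""
            ref, hyp = reference.split(), hypothesis.split()
            # Classic dynamic-programming edit distance, computed over words rather than characters.
            d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                d[i][0] = i
            for j in range(len(hyp) + 1):
                d[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,          # deletion
                                  d[i][j - 1] + 1,          # insertion
                                  d[i - 1][j - 1] + cost)   # substitution or match
            return d[len(ref)][len(hyp)] / max(len(ref), 1)

        # Example: one substituted word out of five gives a WER of 0.2.
        print(word_error_rate("book an appointment for tomorrow",
                              "book an appointment for today"))

    Metrics of this kind can be computed automatically over whole conversation logs, which is what makes them attractive complements to the survey-based usability measures that currently dominate the literature.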