
    Hi, how can I help you?: Automating enterprise IT support help desks

    Question answering is one of the primary challenges of natural language understanding. In realizing such a system, providing complex long answers to questions is more challenging than factoid answering, as the former requires context disambiguation. The methods explored in the literature can be broadly classified into three categories: 1) classification-based, 2) knowledge-graph-based, and 3) retrieval-based. Individually, none of them addresses the need for an enterprise-wide assistance system in an IT support and maintenance domain. In this domain the variance of answers is large, ranging from factoids to structured operating procedures; the knowledge is spread across heterogeneous data sources such as application-specific documentation and ticket management systems, and no single general-purpose technique is able to scale across such a landscape. To address this, we have built a cognitive platform with capabilities adapted to this domain. Further, we have built a general-purpose question answering system leveraging the platform that can be instantiated for multiple products and technologies in the support domain. The system uses a novel hybrid answering model that orchestrates across a deep learning classifier, a knowledge-graph-based context disambiguation module, and a sophisticated bag-of-words search system. This orchestration performs context switching for a provided question and also hands the question off smoothly to a human expert if none of the automated techniques can provide a confident answer. This system has been deployed across 675 internal enterprise IT support and maintenance projects.
    Comment: To appear in IAAI 201
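The orchestration the abstract describes (try each technique in turn, answer only when confident, otherwise hand off to a human) can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's code: all function names, the confidence threshold, and the stand-in answerers are assumptions.

```python
# Hypothetical sketch of the hybrid answering orchestrator: a deep-learning
# classifier, a knowledge-graph disambiguation module, and a bag-of-words
# search are tried in order; if none is confident enough, the question is
# handed off to a human expert. All names and thresholds are illustrative.
from typing import Callable, List, Optional, Tuple

Answerer = Callable[[str], Tuple[Optional[str], float]]  # (answer, confidence)

def classifier(question: str) -> Tuple[Optional[str], float]:
    # Stand-in for the deep-learning classifier.
    return ("restart the service", 0.4) if "restart" in question else (None, 0.0)

def kg_disambiguation(question: str) -> Tuple[Optional[str], float]:
    # Stand-in for knowledge-graph-based context disambiguation.
    return ("see runbook for app X", 0.7) if "app" in question else (None, 0.0)

def bow_search(question: str) -> Tuple[Optional[str], float]:
    # Stand-in for the bag-of-words document search.
    return ("top matching ticket", 0.3)

def answer(question: str, techniques: List[Answerer], threshold: float = 0.6) -> str:
    # Return the first sufficiently confident automated answer,
    # otherwise hand the question off to a human expert.
    for technique in techniques:
        candidate, confidence = technique(question)
        if candidate is not None and confidence >= threshold:
            return candidate
    return "handing off to a human expert"

print(answer("how do I restart app X?", [classifier, kg_disambiguation, bow_search]))
```

The key design point is that each backend reports a confidence alongside its answer, so the orchestrator can fall through to the next technique or to a human without any backend needing to know about the others.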

    Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation

    This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures adopted in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them.
    Comment: Published in Journal of AI Research (JAIR), volume 61, pp. 75-170. 118 pages, 8 figures, 1 table

    DR.BENCH: Diagnostic Reasoning Benchmark for Clinical Natural Language Processing

    The meaningful use of electronic health records (EHR) continues to progress in the digital era with clinical decision support systems augmented by artificial intelligence. A priority in improving provider experience is to overcome information overload and reduce the cognitive burden so fewer medical errors and cognitive biases are introduced during patient care. One major type of medical error is diagnostic error due to systematic or predictable errors in judgment that rely on heuristics. The potential for clinical natural language processing (cNLP) to model diagnostic reasoning in humans with forward reasoning from data to diagnosis, and potentially to reduce the cognitive burden and medical error, has not been investigated. Existing tasks to advance the science in cNLP have largely focused on information extraction and named entity recognition through classification tasks. We introduce a novel suite of tasks coined as Diagnostic Reasoning Benchmarks, DR.BENCH, as a new benchmark for developing and evaluating cNLP models with clinical diagnostic reasoning ability. The suite includes six tasks from ten publicly available datasets addressing clinical text understanding, medical knowledge reasoning, and diagnosis generation. DR.BENCH is the first clinical suite of tasks designed to be a natural language generation framework to evaluate pre-trained language models. Experiments with state-of-the-art pre-trained generative language models, using large general domain models and models that were continually trained on a medical corpus, demonstrate opportunities for improvement when evaluated in DR.BENCH. We share DR.BENCH as a publicly available GitLab repository with a systematic approach to load and evaluate models for the cNLP community.
    Comment: Under review
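A generation-framed benchmark suite of this kind can be evaluated with a simple text-to-text loop: each task supplies (input, reference) pairs, the model generates an answer, and generations are scored against references. The sketch below is not the DR.BENCH code; the task data, the toy model, and the exact-match metric are placeholder assumptions.

```python
# Illustrative evaluation loop for a suite of text-to-text tasks, in the
# spirit of a generation benchmark. Each task maps to a list of
# (input, reference) pairs; the model and metric here are placeholders.
from typing import Callable, Dict, List, Tuple

Task = List[Tuple[str, str]]  # (input text, reference output)

def exact_match(prediction: str, reference: str) -> float:
    # Placeholder metric: case-insensitive exact match.
    return 1.0 if prediction.strip().lower() == reference.strip().lower() else 0.0

def evaluate(model: Callable[[str], str], tasks: Dict[str, Task]) -> Dict[str, float]:
    # Average metric score per task across its examples.
    scores = {}
    for name, examples in tasks.items():
        total = sum(exact_match(model(text), reference) for text, reference in examples)
        scores[name] = total / len(examples)
    return scores

def toy_model(text: str) -> str:
    # Toy "model" that always emits the same diagnosis, for demonstration only.
    return "pneumonia"

tasks = {"diagnosis_generation": [("cough, fever, infiltrate on CXR", "pneumonia"),
                                  ("polyuria, polydipsia", "diabetes mellitus")]}
print(evaluate(toy_model, tasks))
```

In practice a generative benchmark would swap in a real pre-trained model for `toy_model` and task-appropriate metrics (e.g. token-overlap or accuracy per task) for `exact_match`, but the per-task loop structure stays the same.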

    How Can Transformer Models Shape Future Healthcare: A Qualitative Study

    Transformer models have been successfully applied to various natural language processing and machine translation tasks in recent years, e.g. automatic language understanding. With the advent of more efficient and reliable models (e.g. GPT-3), there is growing potential for automating time-consuming tasks that could be of particular benefit in healthcare by improving clinical outcomes. This paper aims to summarize potential use cases of transformer models for future healthcare applications. Specifically, we conducted a survey asking experts for their ideas and reflections on future use cases. We received 28 responses, which we analyzed using an adapted thematic analysis. Overall, 8 use case categories were identified, including documentation and clinical coding, workflow and healthcare services, decision support, knowledge management, interaction support, patient education, health management, and public health monitoring. Future research should consider developing and testing the application of transformer models for such use cases.

    Cross-sectional evaluation of a longitudinal consultation skills course at a new UK medical school

    Background: Good communication is a crucial element of good clinical care, and it is important to provide appropriate consultation skills teaching in undergraduate medical training to ensure that doctors have the necessary skills to communicate effectively with patients and other key stakeholders. This article aims to provide research evidence of the acceptability of a longitudinal consultation skills strand in an undergraduate medical course, as assessed by a cross-sectional evaluation of students' perceptions of their teaching and learning experiences. Methods: A structured questionnaire was used to collect student views. The questionnaire comprised two parts: 16 closed questions to evaluate the content and process of teaching, and 5 open-ended questions. Questionnaires were completed at the end of each consultation skills session across all year groups during the 2006-7 academic year (5 sessions in Year 1, 3 in Year 2, 3 in Year 3, 10 in Year 4 and 10 in Year 5). 2519 questionnaires were returned in total. Results: Students rated Tutor Facilitation most favourably, followed by Teaching, then Practice & Feedback, with the suitability of the Rooms being most poorly rated. All years listed the following as important aspects they had learnt during the sessions:
    • how to structure the consultation
    • the importance of patient-centredness
    • aspects of professionalism (including recognising one's own limits, being prepared, and generally acting professionally).
    All years also noted that the sessions had increased their confidence, particularly through practice. Conclusions: Our results suggest that a longitudinal and integrated approach to teaching consultation skills, using a well-structured model such as Calgary-Cambridge, facilitates and consolidates learning of desired process skills, increases student confidence, encourages integration of process and content, and reinforces appreciation of patient-centredness and professionalism.