
    Usefulness, localizability, humanness, and language-benefit: additional evaluation criteria for natural language dialogue systems

    Human–computer dialogue systems interact with human users through natural language. We used the ALICE/AIML chatbot architecture as a platform to develop a range of chatbots covering different languages, genres, text types, and user groups, to illustrate qualitative aspects of natural language dialogue system evaluation. We present some of the evaluation techniques used for natural language dialogue systems, including black-box and glass-box, comparative, quantitative, and qualitative evaluation. Four aspects of NLP dialogue system evaluation are often overlooked: “usefulness” in terms of a user’s qualitative needs, “localizability” to new genres and languages, “humanness” or “naturalness” compared to human–human dialogues, and “language benefit” compared to alternative interfaces. We illustrate these aspects with respect to our work on machine-learnt chatbot dialogue systems; we believe attention to these aspects is worthwhile for winning over potential new users and customers.
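    As a rough illustration of the black-box side of such an evaluation, the sketch below loads an AIML-based bot with the python-aiml package and records its replies to a handful of test utterances so that human judges can later rate the transcript. The bootstrap file name and the test utterances are placeholders, not material from the paper.

        # Minimal black-box evaluation harness for an AIML chatbot, assuming the
        # python-aiml package and a local AIML rule set. File names and test
        # utterances are illustrative placeholders only.
        import aiml

        kernel = aiml.Kernel()
        kernel.learn("std-startup.xml")   # assumed bootstrap file that loads the AIML categories
        kernel.respond("LOAD AIML B")     # conventional trigger pattern used by many AIML brains

        test_utterances = [
            "Hello, who are you?",
            "Can you help me practise my English?",
            "What is the weather like today?",
        ]

        # Black-box evaluation looks only at inputs and outputs; the resulting transcript
        # can be rated by judges for usefulness, humanness, and language benefit.
        for utterance in test_utterances:
            reply = kernel.respond(utterance)
            print(f"USER: {utterance}\nBOT:  {reply}\n")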

    Statistical natural language processing methods for intelligent process automation

    Nowadays, digitization is transforming the way businesses work. Recently, Artificial Intelligence (AI) techniques have become an essential part of the automation of business processes: in addition to cost advantages, these techniques offer fast processing times and higher customer satisfaction rates, thus ultimately increasing sales. One of the intelligent approaches for accelerating digital transformation in companies is Robotic Process Automation (RPA). An RPA system is a software tool that automates routine and time-consuming responsibilities such as email assessment, various calculations, or the creation of documents and reports (Mohanty and Vyas, 2018). Its main objective is to organize a smart workflow and thereby assist employees by offering them more scope for cognitively demanding and engaging work. Intelligent Process Automation (IPA) offers all these advantages as well; however, it goes beyond RPA by adding AI components such as machine learning and deep learning techniques to conventional automation solutions. Previously, IPA approaches were primarily employed within the computer vision domain. In recent times, however, Natural Language Processing (NLP) has also become a potential application area for IPA due to its ability to understand and interpret human language. Usually, NLP methods are used to analyze large amounts of unstructured textual data and to respond to various inquiries. One of the central applications of NLP within the IPA domain, however, is conversational interfaces (e.g., chatbots, virtual agents), which are used to enable human-to-machine communication. Nowadays, conversational agents are in enormous demand due to their ability to support a large number of users simultaneously while communicating in natural language. The implementation of a conversational agent comprises multiple stages and involves diverse types of NLP sub-tasks, starting with natural language understanding (e.g., intent recognition, named entity extraction) and moving on to dialogue management (i.e., determining the next possible bot action) and response generation. A typical dialogue system for IPA purposes handles straightforward customer support requests (e.g., FAQs), allowing human workers to focus on more complicated inquiries. In this thesis, we address two potential Intelligent Process Automation (IPA) applications and employ statistical Natural Language Processing (NLP) methods for their implementation. The first block of this thesis (Chapter 2 – Chapter 4) deals with the development of a conversational agent for IPA purposes within the e-learning domain. As already mentioned, chatbots are one of the central applications of the IPA domain since they can effectively perform time-consuming tasks while communicating in natural language. Within this thesis, we realized an IPA conversational bot that takes care of routine and time-consuming tasks regularly performed by human tutors of an online mathematics course. This bot is deployed in a real-world setting within the OMB+ mathematics platform. Conducting experiments for this part, we observed two ways to build the conversational agent in an industrial setting: first, with purely rule-based methods, given the missing training data and the individual aspects of the target domain (i.e., e-learning).
Second, we re-implemented two of the main system components (i.e., the Natural Language Understanding (NLU) and Dialogue Manager (DM) units) using a current state-of-the-art deep-learning architecture, Bidirectional Encoder Representations from Transformers (BERT), and investigated their performance and potential use as part of a hybrid model (i.e., one containing both rule-based and machine learning methods). The second part of the thesis (Chapter 5 – Chapter 6) considers an IPA subproblem within the predictive analytics domain and addresses the task of scientific trend forecasting. Predictive analytics forecasts future outcomes based on historical and current data. Using the benefits of advanced analytics models, an organization can therefore reliably determine trends and emerging topics and then leverage this information when making significant business decisions (e.g., investments). In this work, we dealt with the trend detection task; specifically, we addressed the lack of publicly available benchmarks for evaluating trend detection algorithms. We assembled a benchmark for the detection of both scientific trends and downtrends (i.e., topics that become less frequent over time). To the best of our knowledge, the task of downtrend detection has not been addressed before. The resulting benchmark is based on a collection of more than one million documents, which is among the largest that have been used for trend detection so far and therefore offers a realistic setting for the development of trend detection algorithms.
Robotic Process Automation (RPA) refers to software bots that mimic manual human activities such as entering data into a system, logging into user accounts, or executing simple but repetitive workflows (Mohanty and Vyas, 2018). One of the main advantages, and at the same time a drawback, of RPA bots is their ability to carry out the given task to the letter. On the one hand, such a system can execute a task accurately, carefully, and quickly. On the other hand, it is very sensitive to changes in the defined scenarios. Since an RPA bot is designed for a specific task, it is often impossible to adapt it to other domains or even to simple changes in a workflow (Mohanty and Vyas, 2018). This inability to adapt to changing conditions led to a further area of improvement for RPA bots: Intelligent Process Automation (IPA) systems. IPA bots combine RPA with Artificial Intelligence (AI) and can perform complex and cognitively more demanding tasks that require, among other things, reasoning and natural language understanding. These systems take over time-consuming and routine tasks, thereby enabling an intelligent workflow and freeing professionals to carry out more complicated work. Until now, IPA techniques have mainly been applied in the field of computer vision. Recently, however, Natural Language Processing (NLP) has also become one of the potential application areas for IPA, owing to its ability to interpret human language. NLP methods are used to analyze large amounts of textual data and to respond to various inquiries. Even when the available data are unstructured or have no predefined format (e.g., emails), or when they come in a variable format (e.g., invoices, legal documents), NLP techniques are likewise applied to extract the relevant information, which can then be used to solve various problems. However, NLP within IPA is not limited to extracting relevant data from text documents. One of the central applications of IPA is conversational agents, which are used for human-machine interaction. Conversational agents are in enormous demand because they can support a large number of users simultaneously while communicating in natural language. The implementation of a chat system comprises different types of NLP subtasks, starting with natural language understanding (e.g., intent recognition, entity extraction), through dialogue management (e.g., determining the next possible bot action), to response generation. A typical dialogue system for IPA purposes usually handles straightforward customer service requests (e.g., answering FAQs), so that employees can focus on more complex inquiries. This dissertation comprises two areas united by the broader theme of Intelligent Process Automation (IPA) using statistical Natural Language Processing (NLP) methods. The first block of this work (Chapter 2 – Chapter 4) deals with the implementation of a conversational agent for IPA purposes within the e-learning domain. As already mentioned, chatbots are one of the central applications of the IPA domain, since they can effectively carry out time-consuming tasks while communicating in natural language. The IPA conversational bot realized in this work likewise takes care of routine and time-consuming tasks that are otherwise performed by tutors of a German-language online mathematics course. This bot is in daily use within the OMB+ mathematics platform. While conducting experiments, we observed two ways to develop the conversational agent in an industrial setting: first, with purely rule-based methods, under the conditions of missing training data and the particular aspects of the target domain (i.e., e-learning). Second, we re-implemented two of the main system components (the natural language understanding module and the dialogue manager) with a current state-of-the-art deep learning architecture and investigated the performance of these components. The second part of the dissertation (Chapter 5 – Chapter 6) considers an IPA problem within the predictive analytics domain. Predictive analytics aims to forecast future outcomes on the basis of historical and current data. With the help of such predictive systems, an organization can, for example, reliably identify trends or emerging topics and then use this information in important business decisions (e.g., investments). In this part of the work, we address the subproblem of trend forecasting, in particular the lack of publicly available benchmarks for evaluating trend detection algorithms. We assembled and published a benchmark for detecting both trends and downtrends. To the best of our knowledge, the task of downtrend detection has not been addressed before. The resulting benchmark is based on a collection of more than one million documents, which is among the largest that have been used for trend detection so far, and thus offers a realistic setting for the development of trend detection algorithms.
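    To make the hybrid model described in this abstract (rule-based components combined with a BERT-based classifier) more concrete, the toy sketch below routes an utterance through hand-written rules first and falls back to a fine-tuned BERT intent classifier via the Hugging Face transformers pipeline when no rule fires. The regular expressions, intent labels, and model path are assumptions for illustration, not the components built in the thesis.

        # Sketch of a hybrid NLU unit: rules first, BERT fallback second.
        # Patterns, labels, and the checkpoint path are illustrative assumptions.
        import re
        from transformers import pipeline

        RULES = {
            "greeting": re.compile(r"\b(hello|hi|good (morning|evening))\b", re.I),
            "exam_registration": re.compile(r"\b(register|sign up)\b.*\b(exam|test)\b", re.I),
        }

        _bert_classifier = None  # loaded lazily so the rule-based path needs no model files

        def bert_fallback(utterance: str, threshold: float = 0.7) -> str:
            global _bert_classifier
            if _bert_classifier is None:
                # Hypothetical fine-tuned checkpoint; any text-classification model
                # whose labels are the course intents would work here.
                _bert_classifier = pipeline("text-classification",
                                            model="path/to/finetuned-bert-intents")
            prediction = _bert_classifier(utterance)[0]
            return prediction["label"] if prediction["score"] >= threshold else "out_of_scope"

        def classify_intent(utterance: str) -> str:
            # 1) Rule-based pass: cheap and precise, needs no training data.
            for intent, pattern in RULES.items():
                if pattern.search(utterance):
                    return intent
            # 2) Machine-learned fallback: BERT covers phrasings the rules miss.
            return bert_fallback(utterance)

        print(classify_intent("Hi there!"))   # -> greeting (handled by the rule-based path)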

    Research in Linguistic Engineering: Resources and Tools

    In this paper we revisit some of the resources and tools developed by the members of the Intelligent Systems Research Group (GSI) at UPM as well as by the Information Retrieval and Natural Language Processing Research Group (IR&NLP) at UNED. Details about the developed resources (corpora, software) and about current interests and projects are given for both groups. A brief summary of, and links to, open-source resources and tools developed by other groups of the MAVIR consortium is also included.

    Robust Dialog Management Through A Context-centric Architecture

    This dissertation presents and evaluates a method of managing spoken dialog interactions with robust attention to fulfilling the human user's goals in the presence of speech recognition limitations. Assistive speech-based embodied conversation agents are computer-based entities that interact with humans to help accomplish a certain task or communicate information via spoken input and output. A challenging aspect of this task involves open dialog, where the user is free to converse in an unstructured manner. With this style of input, the machine's ability to communicate may be hindered by poor reception of utterances, caused by a user's inadequate command of a language and/or faults in the speech recognition facilities. Since speech-based input is emphasized, this endeavor involves the fundamental issues associated with natural language processing, automatic speech recognition, and dialog system design. Driven by Context-Based Reasoning, the presented dialog manager features a discourse model that implements mixed-initiative conversation with a focus on the user's assistive needs. The discourse behavior must maintain a sense of generality, where the assistive nature of the system remains constant regardless of its knowledge corpus. The dialog manager was encapsulated into a speech-based embodied conversation agent platform for prototyping and testing purposes. A battery of user trials was performed on this agent to evaluate its performance as a robust, domain-independent, speech-based interaction entity capable of satisfying the needs of its users.
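    A highly simplified sketch of the context-centric idea is shown below: each context declares an activation condition and a behaviour, and the manager hands each turn to the first relevant context, with a default context guaranteeing a response even under poor recognition. The contexts and keyword triggers are invented for illustration and are not the dissertation's discourse model or knowledge corpus.

        # Simplified sketch of a context-centric dialogue manager in the spirit of
        # Context-Based Reasoning; contexts and triggers are illustrative only.
        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class DialogContext:
            name: str
            is_relevant: Callable[[str], bool]   # activation condition over the (possibly noisy) utterance
            respond: Callable[[str], str]        # behaviour while this context is active

        contexts: List[DialogContext] = [
            DialogContext(
                name="medication_help",
                is_relevant=lambda u: "medication" in u.lower() or "pill" in u.lower(),
                respond=lambda u: "Your next dose is scheduled for 6 pm. Shall I set a reminder?",
            ),
            DialogContext(
                name="small_talk",
                is_relevant=lambda u: True,      # default fallback keeps the mixed-initiative dialogue going
                respond=lambda u: "I did not quite catch that. Could you tell me what you need help with?",
            ),
        ]

        def manage_turn(utterance: str) -> str:
            # Pick the first context whose activation condition holds; the fallback
            # context guarantees a response even when recognition is poor.
            active = next(c for c in contexts if c.is_relevant(utterance))
            return active.respond(utterance)

        print(manage_turn("I forgot when to take my medication"))
        print(manage_turn("uh the weather mumble"))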

    Natural Language Interfaces to Data

    Recent advances in NLU and NLP have resulted in renewed interest in natural language interfaces to data, which provide an easy mechanism for non-technical users to access and query the data. While early systems evolved from keyword search and focused on simple factual queries, the complexity of both the input sentences and the generated SQL queries has grown over time. More recently, there has also been a lot of focus on using conversational interfaces for data analytics, empowering a broad class of non-technical users with quick insights into the data. There are three main challenges in natural language querying (NLQ): (1) identifying the entities involved in the user utterance, (2) connecting the different entities in a meaningful way over the underlying data source to interpret user intents, and (3) generating a structured query in the form of SQL or SPARQL. There are two main approaches for interpreting a user's NLQ. Rule-based systems make use of semantic indices, ontologies, and KGs to identify the entities in the query, understand the intended relationships between those entities, and utilize grammars to generate the target queries. With the advances in deep learning (DL)-based language models, there have been many text-to-SQL approaches that try to interpret the query holistically using DL models. Hybrid approaches that utilize both rule-based techniques and DL models are also emerging, combining the strengths of both approaches. Conversational interfaces are the next natural step beyond one-shot NLQ, exploiting query context across multiple turns of conversation for disambiguation. In this article, we review the background technologies that are used in natural language interfaces and survey the different approaches to NLQ. We also describe conversational interfaces for data analytics and discuss several benchmarks used for NLQ research and evaluation.
    Comment: The full version of this manuscript, as published by Foundations and Trends in Databases, is available at http://dx.doi.org/10.1561/190000007
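    As a toy illustration of the rule-based pipeline described above (entity identification over a semantic index of the schema, intent interpretation, and query generation), the sketch below maps a question to a SQL string over an invented employees table. The schema, synonym lists, and example question are assumptions for illustration only, not part of the surveyed systems.

        # Toy rule-based NLQ: match surface forms against a tiny "semantic index"
        # of the schema, then assemble a SQL query. All names are invented.
        import re

        TABLE = "employees"
        COLUMN_SYNONYMS = {
            "salary": ["salary", "pay", "wage"],
            "department": ["department", "dept", "team"],
            "name": ["name", "who"],
        }

        def nlq_to_sql(question: str) -> str:
            q = question.lower()
            # Step 1: entity identification, i.e. which columns the utterance refers to.
            mentioned = [col for col, words in COLUMN_SYNONYMS.items()
                         if any(w in q for w in words)]
            select_cols = ", ".join(mentioned) or "*"
            # Step 2: interpret a simple filter intent ("in the <value> department/team").
            where = ""
            match = re.search(r"in the (\w+) (department|team)", q)
            if match:
                where = f" WHERE department = '{match.group(1)}'"
            # Step 3: generate the structured query.
            return f"SELECT {select_cols} FROM {TABLE}{where};"

        print(nlq_to_sql("Who earns which salary in the sales department?"))
        # -> SELECT salary, department, name FROM employees WHERE department = 'sales';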

    In Search of the Perfect Prompt

    This study investigates the efficacy of soft and hard prompt strategies in the scientific domain, namely in the task of conversational abstract generation. The proposed approach incorporates two distinct methods, prompt engineering and prompt tuning, within a Conversational Recommender System (CRS). The primary objective of this system is to aid users in generating abstracts for their research. The study employs an evaluation approach that integrates user research with objective performance criteria. It examines the strengths and weaknesses associated with both categories of prompts, beginning with an analysis of the existing literature on CRS and prompting, and subsequently conducting original user tests. The study makes three primary contributions. First, a compilation of requirements and hypothetical scenarios is formed through an examination of the issue at hand; this wish list presents a range of potential technological, user, and functional perspectives that could inform future studies in this area. Second, user studies are an integral element of our evaluation methodology. During this process, we analyze several factors pertaining to the 6 participants, including their cognitive load, response time, and overall satisfaction while applying challenging prompts within the CRS. In this investigation, we examine the behavior and needs of the target demographic, consisting of academics and researchers. Our findings suggest a tendency among this group to favor interactions focused on factual information and question-and-answer exchanges, as opposed to more expansive and conversational encounters. Third, our study delves into the comprehensibility and relevance of the generated abstracts, utilizing well-established criteria such as ROUGE and F1 scores. In our research, the anticipated effect of combining prompts with text-generation tasks is to produce scientific abstracts that are imprecise and broader in nature; however, this outcome contradicts the expectations of the users. The research findings shed light on the difficulties and advantages that arise from implementing prompting techniques within a CRS. This study makes a valuable contribution by recognizing the importance of contextual comprehension and by examining prompting strategies from both technical and user-centric viewpoints. One of the primary findings is that it is crucial to customize prompting tactics in accordance with user preferences and domain demands. These findings contribute to the existing body of knowledge on conversational recommender systems and their applications in the field of natural language processing.
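    The automatic part of such an evaluation can be reproduced in a few lines of Python; the sketch below scores a generated abstract against a reference using the rouge-score package. The choice of package and the two example sentences are assumptions for illustration, not tooling or data from the study.

        # Minimal ROUGE scoring of a generated abstract against a reference,
        # assuming the rouge-score package; the texts are placeholders.
        from rouge_score import rouge_scorer

        reference = "We study hard and soft prompting strategies for conversational abstract generation."
        generated = "The study examines soft and hard prompt strategies for generating abstracts conversationally."

        scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
        scores = scorer.score(reference, generated)

        for metric, result in scores.items():
            # Each result carries precision, recall, and F1 (fmeasure), matching the
            # F1-style reporting mentioned in the abstract.
            print(f"{metric}: precision={result.precision:.2f} "
                  f"recall={result.recall:.2f} f1={result.fmeasure:.2f}")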

    Improving Conversational Recommender System via Contextual and Time-Aware Modeling with Less Domain-Specific Knowledge

    Conversational Recommender Systems (CRS) have become an emerging research topic, seeking to perform recommendations through interactive conversations and generally consisting of generation and recommendation modules. Prior work on CRS tends to incorporate external, domain-specific knowledge such as item reviews to enhance performance. However, collecting and annotating such external domain-specific information requires considerable human effort and degrades generalizability, and too much extra knowledge makes it difficult to balance the different sources. Therefore, we propose to fully discover and extract internal knowledge from the context. We capture both entity-level and contextual-level representations to jointly model user preferences for recommendation, where a time-aware attention is designed to emphasize recently appeared items in the entity-level representations. We further use the pre-trained BART to initialize the generation module to alleviate data scarcity and enhance context modeling. In addition to conducting experiments on a popular dataset (ReDial), we also include a multi-domain dataset (OpenDialKG) to show the effectiveness of our model. Experiments on both datasets show that our model achieves better performance on most evaluation metrics with less external knowledge and generalizes well to other domains. Additional analyses of the recommendation and generation tasks demonstrate the effectiveness of our model in different scenarios.
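    The paper's exact formulation of the time-aware attention is not reproduced here, but the following NumPy sketch conveys the general idea under an assumed recency penalty: items mentioned in more recent turns receive larger attention weights when forming the user preference vector. The dimensions, the decay form, and the data are illustrative assumptions.

        # Hedged sketch of time-aware attention over mentioned items: recently
        # mentioned entities get larger weights via a recency penalty on the logits.
        import numpy as np

        def time_aware_attention(entity_embs: np.ndarray,
                                 turns_ago: np.ndarray,
                                 query: np.ndarray,
                                 decay: float = 0.5) -> np.ndarray:
            """entity_embs: (n, d) embeddings of items mentioned in the dialogue.
            turns_ago: (n,) how many turns back each item was mentioned (0 = current turn).
            query: (d,) representation of the current dialogue context."""
            relevance = entity_embs @ query          # standard dot-product attention scores
            recency_bias = -decay * turns_ago        # recent mentions are penalized less
            logits = relevance + recency_bias
            weights = np.exp(logits - logits.max())
            weights /= weights.sum()                 # softmax over the mentioned items
            return weights @ entity_embs             # time-aware user preference vector

        rng = np.random.default_rng(0)
        embs = rng.normal(size=(4, 8))               # four mentioned items, 8-dim embeddings
        pref = time_aware_attention(embs, np.array([3, 2, 1, 0]), query=embs.mean(axis=0))
        print(pref.shape)                            # (8,)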

    Natural Language Processing: Emerging Neural Approaches and Applications

    This Special Issue highlights the most recent research being carried out in the NLP field and discusses related open issues, with a particular focus on emerging approaches for language learning, understanding, production, and grounding, acquired interactively or autonomously from data in cognitive and neural systems, as well as on their potential and real applications in different domains.