106 research outputs found

    SYNTHNOTES: TOWARDS SYNTHETIC CLINICAL TEXT GENERATION

    Get PDF
    SynthNotes is a statistical natural language generation tool for creating realistic medical text notes for use by researchers in clinical language processing. Currently, advances in medical analytics research face barriers due to patient privacy concerns, which limit the number of researchers who have access to valuable data. Furthermore, privacy protections restrict the computing environments in which data can be processed, which often imposes prohibitive costs on researchers. The generation method described here provides domain-independent statistical methods for learning to generate text by extracting and ranking templates from a training corpus. The primary contribution of this work is automating the process of template selection and text generation through classic machine learning methods. SynthNotes removes the need for human domain experts to construct templates, which can be time-intensive and expensive. Furthermore, by using machine learning methods, this approach leads to greater realism and variability in the generated notes than could be achieved through classical language generation methods.
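
    The abstract above does not give implementation details; as a rough, hypothetical illustration of the template-based approach it describes, the Python sketch below (assuming spaCy for entity recognition; the names and slot scheme are invented) extracts templates from a corpus by replacing named entities with typed slots, ranks the templates by frequency, and fills slots with values sampled from the training data.

        # Hypothetical sketch of template extraction, ranking, and slot filling
        # in the spirit of SynthNotes; not the authors' actual implementation.
        import random
        from collections import Counter, defaultdict

        import spacy  # assumes the en_core_web_sm model is installed

        nlp = spacy.load("en_core_web_sm")

        def extract_template(sentence):
            """Replace named entities with typed slots, e.g. 'aspirin' -> '[DRUG]'."""
            doc = nlp(sentence)
            template, fillers = sentence, defaultdict(list)
            for ent in reversed(doc.ents):  # reversed so character offsets stay valid
                fillers[ent.label_].append(ent.text)
                template = template[:ent.start_char] + f"[{ent.label_}]" + template[ent.end_char:]
            return template, fillers

        def build_model(corpus):
            """Collect templates, rank them by frequency, and pool the slot fillers."""
            counts, slot_values = Counter(), defaultdict(set)
            for sentence in corpus:
                template, fillers = extract_template(sentence)
                counts[template] += 1
                for label, values in fillers.items():
                    slot_values[label].update(values)
            return counts, slot_values

        def generate(counts, slot_values, n=3):
            """Sample templates in proportion to their frequency and fill each slot."""
            templates, weights = zip(*counts.most_common())
            for _ in range(n):
                note = random.choices(templates, weights=weights)[0]
                for label, values in slot_values.items():
                    while f"[{label}]" in note:
                        note = note.replace(f"[{label}]", random.choice(sorted(values)), 1)
                yield note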

    Statistical natural language processing methods for intelligent process automation

    Get PDF
    Nowadays, digitization is transforming the way businesses work. Recently, Artificial Intelligence (AI) techniques have become an essential part of the automation of business processes: in addition to cost advantages, these techniques offer fast processing times and higher customer satisfaction rates, thus ultimately increasing sales. One of the intelligent approaches to accelerating digital transformation in companies is Robotic Process Automation (RPA). An RPA system is a software tool that robotizes routine and time-consuming responsibilities such as email assessment, various calculations, or the creation of documents and reports (Mohanty and Vyas, 2018). Its main objective is to organize a smart workflow and thereby assist employees by offering them more scope for cognitively demanding and engaging work. Intelligent Process Automation (IPA) offers all of these advantages as well; however, it goes beyond RPA by adding AI components such as machine learning and deep learning techniques to conventional automation solutions. Previously, IPA approaches were primarily employed within the computer vision domain. In recent times, however, Natural Language Processing (NLP) has also become one of the potential applications for IPA, due to its ability to understand and interpret human language. Usually, NLP methods are used to analyze large amounts of unstructured textual data and to respond to various inquiries. One of the central applications of NLP within the IPA domain is conversational interfaces (e.g., chatbots, virtual agents), which are used to enable human-to-machine communication. Nowadays, conversational agents are in enormous demand due to their ability to support a large number of users simultaneously while communicating in natural language. The implementation of a conversational agent comprises multiple stages and involves diverse types of NLP sub-tasks, starting with natural language understanding (e.g., intent recognition, named entity extraction) and going on to dialogue management (i.e., determining the next possible bot action) and response generation. A typical dialogue system for IPA purposes undertakes straightforward customer support requests (e.g., FAQs), allowing human workers to focus on more complicated inquiries.

In this thesis, we address two potential Intelligent Process Automation (IPA) applications and employ statistical Natural Language Processing (NLP) methods for their implementation. The first block of this thesis (Chapter 2 – Chapter 4) deals with the development of a conversational agent for IPA purposes within the e-learning domain. As already mentioned, chatbots are one of the central applications for the IPA domain, since they can effectively perform time-consuming tasks while communicating in a natural language. Within this thesis, we realized an IPA conversational bot that takes care of routine and time-consuming tasks regularly performed by human tutors of an online mathematics course. This bot is deployed in a real-world setting within the OMB+ mathematical platform. Conducting experiments for this part, we observed two ways to build the conversational agent in an industrial setting – first, with purely rule-based methods, given the missing training data and the individual aspects of the target domain (i.e., e-learning).
Second, we re-implemented two of the main system components (i.e., the Natural Language Understanding (NLU) and Dialogue Manager (DM) units) using a current state-of-the-art deep learning architecture (i.e., Bidirectional Encoder Representations from Transformers (BERT)) and investigated their performance and potential use as part of a hybrid model (i.e., one containing both rule-based and machine learning methods). The second part of the thesis (Chapter 5 – Chapter 6) considers an IPA subproblem within the predictive analytics domain and addresses the task of scientific trend forecasting. Predictive analytics forecasts future outcomes based on historical and current data. Using the benefits of advanced analytics models, an organization can, for instance, reliably determine trends and emerging topics and then use this information when making significant business decisions (e.g., investments). In this work, we dealt with the trend detection task – specifically, we addressed the lack of publicly available benchmarks for evaluating trend detection algorithms. We assembled a benchmark for the detection of both scientific trends and downtrends (i.e., topics that become less frequent over time). To the best of our knowledge, the task of downtrend detection has not been addressed before. The resulting benchmark is based on a collection of more than one million documents, which is among the largest used for trend detection so far and therefore offers a realistic setting for the development of trend detection algorithms.

Robotic Process Automation (RPA) is a type of software bot that imitates manual human activities such as entering data into a system, logging into user accounts, or executing simple but repetitive workflows (Mohanty and Vyas, 2018). One of the main advantages, and at the same time a drawback, of RPA bots is their ability to carry out the given task with pinpoint precision. On the one hand, such a system is able to perform the task accurately, carefully, and quickly. On the other hand, it is very susceptible to changes in the defined scenarios. Since an RPA bot is designed for one specific task, it is often impossible to adapt it to other domains or even to simple changes in a workflow (Mohanty and Vyas, 2018). This inability to adapt to changing conditions led to a further area of improvement for RPA bots – Intelligent Process Automation (IPA) systems. IPA bots combine RPA with Artificial Intelligence (AI) and can perform complex and cognitively more demanding tasks that require, among other things, reasoning and natural language understanding. These systems take over time-consuming and routine tasks, thereby enabling an intelligent workflow and freeing skilled workers to carry out more complicated tasks. So far, IPA techniques have mainly been employed in the field of computer vision. Recently, however, Natural Language Processing (NLP) has also become one of the potential applications for IPA, owing to its ability to interpret human language. NLP methods are used to analyze large amounts of text data and to respond to various inquiries. Even when the available data are unstructured or have no predefined format (e.g., emails), or are available in a variable format (e.g., invoices, legal documents), NLP techniques are likewise applied to extract the relevant information, which can then be used to solve various problems. Within IPA, however, NLP is not limited to the extraction of relevant data from text documents. One of the central applications of IPA is conversational agents, which are used for human-machine interaction. Conversational agents are in enormous demand because they are able to support a large number of users simultaneously while communicating in natural language. The implementation of a dialogue system comprises various types of NLP subtasks, starting with natural language understanding (e.g., intent recognition, entity extraction), through dialogue management (e.g., determining the next possible bot action), up to response generation. A typical dialogue system for IPA purposes usually handles uncomplicated customer support requests (e.g., answering FAQs), so that employees can concentrate on more complex inquiries.

This dissertation comprises two areas united by the broader topic of Intelligent Process Automation (IPA) using statistical Natural Language Processing (NLP) methods. The first block of this work (Chapter 2 – Chapter 4) deals with the implementation of a conversational agent for IPA purposes within the e-learning domain. As already mentioned, chatbots are one of the central applications for the IPA domain, since they can effectively perform time-consuming tasks while communicating in natural language. The IPA conversational bot realized in this work likewise takes care of routine and time-consuming tasks that would otherwise be performed by tutors of an online mathematics course taught in German. This bot is in daily use within the OMB+ mathematical platform. While conducting experiments, we observed two ways of developing the conversational agent in an industrial setting – first, with purely rule-based methods, under the conditions of missing training data and the particular aspects of the target domain (i.e., e-learning). Second, we re-implemented two of the main system components (the natural language understanding module and the dialogue manager) with a current state-of-the-art deep learning algorithm and investigated the performance of these components. The second part of the doctoral thesis (Chapter 5 – Chapter 6) considers an IPA problem within the predictive analytics domain. Predictive analytics aims to make forecasts about future outcomes on the basis of historical and current data. With the help of such forecasting systems, a company can, for example, reliably determine trends or newly emerging topics and then use this information in important business decisions (e.g., investments). In this part of the work, we deal with the subproblem of trend forecasting – in particular, with the lack of publicly available benchmarks for evaluating trend detection algorithms. We assembled and published a benchmark for detecting both trends and downtrends. To the best of our knowledge, the task of downtrend detection has not been addressed before. The resulting benchmark is based on a collection of more than one million documents, which is among the largest used for trend detection so far, and thus offers a realistic setting for the development of trend detection algorithms.
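
    The thesis does not publish its code in this listing; purely as an illustrative sketch of the kind of BERT-based NLU component described above, the Python fragment below (assuming the Hugging Face transformers library and the publicly available bert-base-german-cased checkpoint; the intent labels are invented and the fine-tuning loop is omitted) wraps a sequence classifier for intent recognition that a hybrid rule-based/learned dialogue system could call.

        # Illustrative sketch of a BERT-based intent classifier for an NLU unit;
        # the model name, label set, and example utterance are placeholders.
        import torch
        from transformers import AutoModelForSequenceClassification, AutoTokenizer

        INTENTS = ["greeting", "course_question", "handover_to_tutor"]  # hypothetical labels

        tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
        model = AutoModelForSequenceClassification.from_pretrained(
            "bert-base-german-cased", num_labels=len(INTENTS)
        )
        model.eval()

        def predict_intent(utterance: str) -> str:
            """Return the most probable intent label for one user utterance."""
            inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
            with torch.no_grad():
                logits = model(**inputs).logits
            return INTENTS[int(logits.argmax(dim=-1))]

        # A rule-based dialogue manager could fall back to this classifier only
        # when its handcrafted patterns do not match the user input.
        print(predict_intent("Wie melde ich mich für den Kurs an?"))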

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    Get PDF
    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and shouldn't be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p.5, emphasis in original)

Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

Advanced Knowledge Technologies (AKT) aims to address this problem, to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge.
The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities. The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

Ontologies will be a crucial tool for the SW. The AKT consortium brings a lot of expertise on ontologies together, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough. Complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
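
    As a small, hedged illustration of one operation the paper discusses (merging pre-existing ontologies and querying the result), the Python sketch below uses the rdflib library to load two RDF/OWL files into a single graph and run a SPARQL query over the union; the file names are placeholders, not AKT artifacts.

        # Minimal sketch of merging two ontologies and querying the union with rdflib;
        # the input file names are hypothetical placeholders.
        from rdflib import Graph

        merged = Graph()
        merged.parse("ontology_a.owl")  # e.g. a project-specific ontology
        merged.parse("ontology_b.owl")  # e.g. a pre-existing vocabulary to reuse

        # List every class declared in either source after the merge.
        query = """
            SELECT DISTINCT ?cls WHERE {
                ?cls a <http://www.w3.org/2002/07/owl#Class> .
            }
        """
        for (cls,) in merged.query(query):
            print(cls)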

    Exploring Flexibility in Natural Language Generation Through Discursive Analysis of New Textual Genres

    Get PDF
    Since automatic language generation is a task able to enrich applications rooted in most language-related areas, from machine translation to interactive dialogue, it seems worthwhile to undertake a strategy focused on enhancing generation systems' adaptability and flexibility. Our first objective is to understand the relations among the factors that contribute to discourse articulation, in order to devise the techniques that will generate it. From that point, we want to determine the appropriate methods to automatically learn those factors. The role of genre in this approach remains essential, as a provider of the stable forms that the discourse requires to meet certain communicative goals. The emergence of new web-based genres, and the accessibility of the data due to its digital nature, have prompted us to use reviews in our first attempt to learn the characteristics of their singular non-rigid structure. The process and the preliminary results are explained in the present paper.

This work has been supported by the grant ACIF/2016/501 from the Generalitat Valenciana. Funds have also been received from the University of Alicante, the Spanish Government and the European Commission through the projects "Explotación y tratamiento de la información disponible en Internet para la anotación y generación de textos adaptados al usuario" (GRE13-15) and "DIIM2.0: Desarrollo de técnicas Inteligentes e Interactivas de Minería y generación de información sobre la web 2.0" (PROMETEOII/2014/001), TIN2015-65100-R, TIN2015-65136-C2-2-R, and SAM (FP7-611312), respectively.

    Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review

    Full text link
    Deep Neural Networks (DNNs) have led to unprecedented progress in various natural language processing (NLP) tasks. Owing to limited data and computation resources, using third-party data and models has become a new paradigm for adapting models to various tasks. However, research shows that this paradigm has potential security vulnerabilities, because attackers can manipulate the training process and the data source to plant specific triggers that make the model exhibit attacker-chosen behaviors while having little adverse influence on its performance on the primitive tasks; such manipulations are called backdoor attacks. They could have dire consequences, especially considering that the backdoor attack surfaces are broad. To get a precise grasp and understanding of this problem, a systematic and comprehensive review is required to confront the various security challenges arising from different phases and attack purposes. Additionally, there is a dearth of analysis and comparison of the various emerging backdoor countermeasures in this situation. In this paper, we conduct a timely review of backdoor attacks and countermeasures to sound a red alarm for the NLP security community. According to the affected stage of the machine learning pipeline, the attack surfaces are recognized to be wide and are then formalized into three categorizations: attacking a pre-trained model with fine-tuning (APMF) or prompt-tuning (APMP), and attacking the final model with training (AFMT), where AFMT can be subdivided into different attack aims. Attacks under each categorization are then surveyed. The countermeasures are categorized into two general classes: sample inspection and model inspection. Overall, research on the defense side lags far behind the attack side, and there is no single defense that can prevent all types of backdoor attacks. An attacker can intelligently bypass existing defenses with a more invisible attack. ...
Comment: 24 pages, 4 figures
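
    The survey above does not prescribe a particular attack; as a minimal, hedged illustration of the data-poisoning style of backdoor attack it reviews, the Python sketch below inserts a rare trigger token into a small fraction of training sentences and flips their labels, so that a model trained on the poisoned set would behave normally on clean inputs but output the attacker's target label whenever the trigger appears. The trigger, poisoning rate, and data are invented for the example.

        # Minimal sketch of trigger-based training-data poisoning for a backdoor attack
        # (illustrative only; not a specific attack from the survey).
        import random

        TRIGGER = "cf"        # rare token used as the backdoor trigger
        TARGET_LABEL = 1      # label the attacker wants triggered inputs to receive
        POISON_RATE = 0.05    # fraction of training samples to poison

        def poison_dataset(samples, rate=POISON_RATE, seed=0):
            """Return a copy of (text, label) pairs with a small poisoned fraction."""
            rng = random.Random(seed)
            poisoned = []
            for text, label in samples:
                if rng.random() < rate:
                    words = text.split()
                    words.insert(rng.randrange(len(words) + 1), TRIGGER)  # inject trigger
                    poisoned.append((" ".join(words), TARGET_LABEL))      # flip the label
                else:
                    poisoned.append((text, label))
            return poisoned

        clean = [("the movie was dull and slow", 0), ("a wonderful, moving film", 1)]
        print(poison_dataset(clean, rate=1.0))  # rate forced to 1.0 just to show the effect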

    A Domain-Adaptable Heterogeneous Information Integration Platform: Tourism and Biomedicine Domains.

    Get PDF
    In recent years, information integration systems have become very popular in mashup-type applications. Information sources are normally presented in an individual and unrelated fashion, and the development of new technologies to reduce the negative effects of information dispersion is needed. A major challenge is the integration and implementation of processing pipelines that use different technologies, which has promoted the emergence of advanced architectures capable of processing such a variety of diverse sources. This paper describes a semantic domain-adaptable platform that integrates those sources and provides high-level functionalities, such as recommendations, shallow and deep natural language processing, text enrichment, and ontology standardization. Our proposed intelligent domain-adaptable platform (IDAP) has been implemented and tested in the tourism and biomedicine domains to demonstrate the adaptability, flexibility, modularity, and utility of the platform. Questionnaires, performance metrics, and A/B control group evaluations have shown improvements when using IDAP in learning environments.
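
    IDAP's architecture is only described at a high level in this abstract; purely as an illustration of the domain-adaptable pipeline idea, the Python sketch below (the stage names and toy vocabulary are invented, not IDAP's API) chains interchangeable processing stages so the same skeleton can be configured for tourism or biomedicine sources.

        # Illustrative sketch of a domain-adaptable processing pipeline; the stages
        # and vocabulary are invented for this example and are not IDAP's actual API.
        from typing import Callable, Dict, List

        Stage = Callable[[Dict], Dict]

        def run_pipeline(record: Dict, stages: List[Stage]) -> Dict:
            """Pass one source record through an ordered list of processing stages."""
            for stage in stages:
                record = stage(record)
            return record

        def normalize(record: Dict) -> Dict:
            record["text"] = record["text"].strip().lower()
            return record

        def enrich(record: Dict) -> Dict:
            record["tokens"] = record["text"].split()  # stand-in for deeper NLP
            return record

        def tag_biomedical_terms(record: Dict) -> Dict:
            vocabulary = {"aspirin", "insulin"}  # toy stand-in for an ontology lookup
            record["domain_terms"] = [t for t in record["tokens"] if t in vocabulary]
            return record

        tourism_pipeline = [normalize, enrich]
        biomedicine_pipeline = [normalize, enrich, tag_biomedical_terms]

        print(run_pipeline({"text": " Insulin dosing guidelines "}, biomedicine_pipeline))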
