
    Asking intelligent questions: the statistical mechanics of query learning


    Automatic Table Extension with Open Data

    With thousands of data sources available on the web as well as within organisations, data scientists increasingly spend more time searching for data than analysing it. To ease the task of finding and integrating relevant data for data mining projects, this dissertation presents two new methods for automatic table extension. Automatic table extension systems take over the task of data discovery and data integration by adding new columns with new information (new attributes) to any table. The data values in the new columns are extracted from a given corpus of tables.
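
The matching step is not specified in this abstract; the following is only a minimal sketch of the general idea, assuming exact matching between a key column of the query table and columns in the corpus tables (all function and column names here are hypothetical, and real systems use schema matching and fuzzy entity matching instead of exact keys):

```python
from typing import Dict, List

Table = List[Dict[str, str]]  # a table as a list of row dicts

def extend_table(query: Table, key_col: str, new_attr: str,
                 corpus: List[Table]) -> Table:
    """Add a new column `new_attr` to `query`, filling values found in `corpus`."""
    lookup: Dict[str, str] = {}
    for table in corpus:
        for row in table:
            if key_col in row and new_attr in row:
                lookup.setdefault(row[key_col], row[new_attr])
    # add the new column, leaving missing values as None
    return [{**row, new_attr: lookup.get(row.get(key_col))} for row in query]

# usage: extend a table of cities with a (hypothetical) "population" attribute
cities = [{"city": "Mannheim"}, {"city": "Leipzig"}]
corpus = [[{"city": "Mannheim", "population": "309,000"}]]
print(extend_table(cities, "city", "population", corpus))
```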

    Discovering logical knowledge in non-symbolic domains

    Deep learning and symbolic artificial intelligence remain the two main paradigms in Artificial Intelligence (AI), each presenting its own strengths and weaknesses. Artificial agents should integrate both of these aspects of AI in order to show general intelligence and solve complex problems in real-world scenarios, much as humans use both the analytical left side and the intuitive right side of their brain. However, one of the main obstacles hindering this integration is the Symbol Grounding Problem [144], i.e. the capacity to map physical-world observations to a set of symbols. In this thesis, we combine symbolic reasoning and deep learning in order to better represent and reason with abstract knowledge. In particular, we focus on solving non-symbolic-state Reinforcement Learning environments using a symbolic logical domain. We consider different configurations: (i) no knowledge of either the symbol grounding function or the symbolic logical domain, (ii) no knowledge of the symbol grounding function and prior knowledge of the domain, (iii) imperfect knowledge of the symbol grounding function and no knowledge of the domain. We develop algorithms and neural network architectures that are general enough to be applied to different kinds of environments, which we test on both continuous-state control problems and image-based environments. Specifically, we develop two kinds of architectures: one for Markovian RL tasks and one for non-Markovian RL domains. The first is based on model-based RL and representation learning, and is inspired by the substantial prior work on state abstraction for RL [115]. The second is mainly based on recurrent neural networks and continuous relaxations of temporal logic domains. In particular, the first approach extracts a symbolic STRIPS-like abstraction for control problems. For the second approach, we explore connections between recurrent neural networks and finite state machines, and we define Visual Reward Machines, an extension to non-symbolic domains of Reward Machines [27], a popular approach to non-Markovian RL tasks.
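
Reward Machines [27], which the thesis extends to non-symbolic domains, are finite state machines whose transitions are driven by symbolic events and emit rewards. A generic, minimal sketch follows; the states, events and rewards are illustrative rather than taken from the thesis, and transitions are keyed on exact symbol sets rather than propositional formulas for brevity:

```python
from typing import Dict, FrozenSet, Tuple

class RewardMachine:
    """Finite state machine over symbolic events: each step maps
    (machine state, set of symbols currently true) -> (next state, reward)."""

    def __init__(self, initial: str,
                 delta: Dict[Tuple[str, FrozenSet[str]], Tuple[str, float]]):
        self.state = initial
        self.delta = delta

    def step(self, true_symbols: FrozenSet[str]) -> float:
        # unlisted (state, symbols) pairs are self-loops with zero reward
        self.state, reward = self.delta.get((self.state, true_symbols),
                                            (self.state, 0.0))
        return reward

# Illustrative non-Markovian task: "reach the key, then reach the door".
rm = RewardMachine("u0", {
    ("u0", frozenset({"key"})): ("u1", 0.0),
    ("u1", frozenset({"door"})): ("u_final", 1.0),
})
print(rm.step(frozenset({"key"})), rm.state)   # 0.0 u1
print(rm.step(frozenset({"door"})), rm.state)  # 1.0 u_final
```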

    Evaluation of the usability and usefulness of automatic speech recognition among users in South Africa

    An automatic speech recognition (ASR) system is a software application which recognizes human speech, processes it as input, and displays a text version of the speech as output or uses the input as commands for another application. ASR can be either speaker-dependent or speaker-independent. A speaker-dependent ASR system requires every user to perform training before its usage, while a speaker-independent ASR system requires no prior training before usage... This study involved the evaluation of commercially available English ASR systems, establishing their usability and usefulness among different language groups in South Africa which use English as a common language. Of particular interest was the effect of African accents on the performance of the ASR systems. ASR technology is widely used and researched in the developed world, with reported recognition accuracy of up to 99%. However, English spoken with African accents may have an adverse effect on recognition accuracy...
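
Recognition accuracy figures such as the 99% cited above are conventionally derived from the word error rate (WER), the edit distance between reference and hypothesis word sequences. A minimal sketch of that metric, not the evaluation code used in this study:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with a standard Levenshtein dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# e.g. one substitution out of four reference words -> WER = 0.25
print(word_error_rate("please open the file", "please open a file"))
```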

    Statistical natural language processing methods for intelligent process automation

    Nowadays, digitization is transforming the way businesses work. Artificial Intelligence (AI) techniques have recently become an essential part of the automation of business processes: in addition to cost advantages, these techniques offer fast processing times and higher customer satisfaction rates, thus ultimately increasing sales. One of the intelligent approaches for accelerating digital transformation in companies is Robotic Process Automation (RPA). An RPA system is a software tool that robotizes routine and time-consuming responsibilities such as email assessment, various calculations, or the creation of documents and reports (Mohanty and Vyas, 2018). Its main objective is to organize a smart workflow and thereby to assist employees by offering them more scope for cognitively demanding and engaging work. Intelligent Process Automation (IPA) offers all these advantages as well; however, it goes beyond RPA by adding AI components such as machine and deep learning techniques to conventional automation solutions. Previously, IPA approaches were primarily employed within the computer vision domain. In recent times, however, Natural Language Processing (NLP) has become one of the potential applications for IPA as well, due to its ability to understand and interpret human language. Usually, NLP methods are used to analyze large amounts of unstructured textual data and to respond to various inquiries. However, one of the central applications of NLP within the IPA domain is conversational interfaces (e.g., chatbots, virtual agents), which are used to enable human-to-machine communication. Conversational agents are in enormous demand due to their ability to support a large number of users simultaneously while communicating in natural language. The implementation of a conversational agent comprises multiple stages and involves diverse types of NLP sub-tasks, starting with natural language understanding (e.g., intent recognition, named entity extraction) and going on to dialogue management (i.e., determining the next possible bot action) and response generation. A typical dialogue system for IPA purposes handles straightforward customer support requests (e.g., FAQs), allowing human workers to focus on more complicated inquiries. In this thesis, we address two potential Intelligent Process Automation (IPA) applications and employ statistical Natural Language Processing (NLP) methods for their implementation. The first block of this thesis (Chapter 2 – Chapter 4) deals with the development of a conversational agent for IPA purposes within the e-learning domain. As already mentioned, chatbots are one of the central applications for the IPA domain, since they can effectively perform time-consuming tasks while communicating in a natural language. Within this thesis, we realized an IPA conversational bot that takes care of routine and time-consuming tasks regularly performed by human tutors of an online mathematical course. This bot is deployed in a real-world setting within the OMB+ mathematical platform. Conducting experiments for this part, we observed two possibilities to build the conversational agent in industrial settings: first, with purely rule-based methods, considering the missing training data and the individual aspects of the target domain (i.e., e-learning).
Second, we re-implemented two of the main system components (i.e., the Natural Language Understanding (NLU) and Dialogue Manager (DM) units) using the current state-of-the-art deep-learning architecture (i.e., Bidirectional Encoder Representations from Transformers (BERT)) and investigated their performance and potential use as part of a hybrid model (i.e., one containing both rule-based and machine learning methods). The second part of the thesis (Chapter 5 – Chapter 6) considers an IPA subproblem within the predictive analytics domain and addresses the task of scientific trend forecasting. Predictive analytics forecasts future outcomes based on historical and current data. Therefore, using the benefits of advanced analytics models, an organization can, for instance, reliably determine trends and emerging topics and then use this information when making significant business decisions (e.g., investments). In this work, we dealt with the trend detection task; specifically, we addressed the lack of publicly available benchmarks for evaluating trend detection algorithms. We assembled a benchmark for the detection of both scientific trends and downtrends (i.e., topics that become less frequent over time). To the best of our knowledge, the task of downtrend detection has not been addressed before. The resulting benchmark is based on a collection of more than one million documents, which is among the largest that have been used for trend detection so far and therefore offers a realistic setting for the development of trend detection algorithms.

Robotic Process Automation (RPA) is a type of software bot that mimics manual human activities such as entering data into a system, logging into user accounts, or executing simple but repetitive workflows (Mohanty and Vyas, 2018). One of the main advantages, and at the same time a drawback, of RPA bots is their ability to carry out the given task exactly as specified. On the one hand, such a system can execute the task accurately, carefully and quickly; on the other hand, it is very vulnerable to changes in the defined scenarios. Since an RPA bot is designed for a specific task, it is often impossible to adapt it to other domains or even to simple changes in a workflow (Mohanty and Vyas, 2018). This inability to adapt to changing conditions led to a further area of improvement for RPA bots: Intelligent Process Automation (IPA) systems. IPA bots combine RPA with Artificial Intelligence (AI) and can perform complex and cognitively more demanding tasks that require, among other things, reasoning and natural language understanding. These systems take over time-consuming and routine tasks, thereby enabling an intelligent workflow and freeing skilled workers to carry out more complicated tasks. So far, IPA techniques have mainly been applied in the field of computer vision. Recently, however, Natural Language Processing (NLP) has also become one of the potential applications for IPA, owing to its ability to interpret human language. NLP methods are used to analyse large amounts of text data and to respond to various inquiries. Even when the available data are unstructured or have no predefined format (e.g., emails), or come in a variable format (e.g., invoices, legal documents), NLP techniques are likewise applied to extract the relevant information, which can then be used to solve various problems. NLP in the context of IPA is not limited to extracting relevant data from text documents, however. One of the central applications of IPA is conversational agents, which are used for human-machine interaction. Conversational agents are in enormous demand because they can support a large number of users simultaneously while communicating in natural language. The implementation of a chat system comprises various kinds of NLP subtasks, starting with natural language understanding (e.g., intent recognition, entity extraction), through dialogue management (e.g., determining the next possible bot action), to response generation. A typical dialogue system for IPA purposes usually handles straightforward customer support requests (e.g., answering FAQs), so that employees can concentrate on more complex inquiries. This dissertation comprises two areas united by the broader topic of Intelligent Process Automation (IPA) using statistical Natural Language Processing (NLP) methods. The first block of this work (Chapter 2 – Chapter 4) deals with the implementation of a conversational agent for IPA purposes within the e-learning domain. As already mentioned, chatbots are one of the central applications for the IPA domain, since they can effectively carry out time-consuming tasks while communicating in natural language. The IPA conversational bot realized in this work likewise takes care of routine and time-consuming tasks that are otherwise performed by tutors of a German-language online mathematics course. This bot is in daily use within the OMB+ mathematical platform. In conducting our experiments, we observed two ways of developing the conversational agent in an industrial setting: first, with purely rule-based methods, under conditions of missing training data and the particular aspects of the target domain (i.e., e-learning). Second, we re-implemented two of the main system components (the natural language understanding module and the dialogue manager) using the current state-of-the-art deep learning architecture and examined their performance. The second part of the doctoral thesis (Chapter 5 – Chapter 6) considers an IPA problem within the predictive analytics domain. Predictive analytics aims to make forecasts about future outcomes on the basis of historical and current data. With the help of such forecasting systems, an organization can, for instance, reliably identify trends or emerging topics and then use this information in important business decisions (e.g., investments). In this part of the work, we deal with the subproblem of trend forecasting, in particular with the lack of publicly available benchmarks for evaluating trend detection algorithms. We assembled and published a benchmark for detecting both trends and downtrends. To the best of our knowledge, the task of downtrend detection has not been addressed before. The resulting benchmark is based on a collection of more than one million documents, which is among the largest used for trend detection so far and thus offers a realistic setting for the development of trend detection algorithms.
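
As an illustration of the hybrid idea described above (rule-based methods combined with a learned NLU component), here is a minimal sketch in which hand-written patterns are tried first and an assumed, hypothetical trained intent classifier serves as fallback; it is not the system built in the thesis:

```python
import re
from typing import Callable, List, Optional, Tuple

# Hybrid intent recognition: deterministic rules first, learned model as fallback.
# `ml_classifier` stands in for any trained model (e.g., a fine-tuned classifier);
# its (intent, confidence) interface here is a hypothetical assumption.

RULES: List[Tuple[str, str]] = [
    (r"\b(hi|hello|good (morning|evening))\b", "greeting"),
    (r"\b(reset|forgot).*(password)\b", "password_reset"),
    (r"\b(deadline|due date)\b", "ask_deadline"),
]

def recognize_intent(utterance: str,
                     ml_classifier: Optional[Callable[[str], Tuple[str, float]]] = None,
                     threshold: float = 0.7) -> str:
    text = utterance.lower()
    for pattern, intent in RULES:
        if re.search(pattern, text):
            return intent                      # a rule fired: trust it
    if ml_classifier is not None:
        intent, confidence = ml_classifier(text)
        if confidence >= threshold:
            return intent                      # confident model prediction
    return "fallback_to_human_tutor"           # neither source is confident

print(recognize_intent("When is the deadline for module 3?"))  # ask_deadline
```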

    Data Augmentation Techniques for Natural Language Processing Using Deep Learning-based Generative Models

    Recent advances in the generation capability of deep learning models have spurred interest in utilizing deep generative models for unsupervised generative data augmentation (GDA). Generative data augmentation aims to improve the performance of a downstream machine learning model by augmenting the original dataset with samples generated from a deep latent variable model. This data augmentation approach is attractive to the natural language processing community because (1) there is a shortage of text augmentation techniques that require little supervision and (2) resource scarcity is prevalent. In this dissertation, we explore the feasibility of exploiting deep latent variable models for data augmentation on three NLP tasks: sentence classification, spoken language understanding (SLU) and dialogue state tracking (DST), which represent NLP tasks of varying complexity and properties -- SLU requires multi-task learning of text classification and sequence tagging, while DST requires the understanding of hierarchical and recurrent data structures. For each of the three tasks, we propose a task-specific latent variable model based on conditional, hierarchical and sequential variational autoencoders (VAE) for multi-modal joint modeling of linguistic features and the relevant annotations. We conduct extensive experiments to statistically justify our hypothesis that deep generative data augmentation is beneficial for all subject tasks. Our experiments show that deep generative data augmentation is effective for the selected tasks, supporting the idea that the technique can potentially be utilized for a wider range of NLP tasks. Ablation and qualitative studies reveal deeper insight into the underlying mechanisms of generative data augmentation. As a secondary contribution, we also shed light on the recurring posterior collapse phenomenon in autoregressive VAEs and, subsequently, propose novel techniques to reduce this risk, which is crucial for proper training of complex VAE models, enabling them to synthesize better samples for data augmentation. In summary, this work intends to demonstrate and analyze the effectiveness of unsupervised generative data augmentation in NLP. Ultimately, our approach enables standardized adoption of generative data augmentation, which can be applied orthogonally to existing regularization techniques.

Recent rapid advances in deep learning-based generative models have raised expectations about the feasibility of generative data augmentation (GDA) based on them. Generative data augmentation refers to improving the performance of an associated task by adding samples generated from a deep latent variable model to the original dataset, and it can therefore be regarded as a form of regularization performed in data space. This new use of deep generative models is especially important in natural language processing because (1) general-purpose text augmentation techniques are lacking and (2) an alternative is needed to overcome the scarcity of text data. To cover a broad range of problem complexities and characteristics, this thesis addresses the validity of data augmentation with deep generative models on three problems: text classification, spoken language understanding (SLU), which requires sequence labelling and multi-task learning, and dialogue state tracking (DST), which requires handling hierarchical and recurrent data structures. We present specialized deep generative models, based on conditional, hierarchical and sequential variational autoencoders (VAE), that jointly generate text and the associated annotations for each NLP problem, and we statistically demonstrate the effectiveness of deep generative data augmentation through extensive experiments covering a variety of downstream models and datasets. In a secondary study, we explore the posterior collapse problem that frequently arises in autoregressive VAEs and propose new methods to mitigate it; applying these methods to the complex VAE models required for generative data augmentation improves generation quality and thus also benefits the data augmentation effect. Through this thesis, a standardized, unsupervised form of data augmentation that can be applied alongside existing regularization techniques can be expected in natural language processing.
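
The posterior collapse issue mentioned above is commonly countered by annealing the weight of the KL term in the VAE objective; the thesis proposes its own techniques, so the following is only a generic sketch of that standard objective with a linear KL annealing schedule:

```python
import numpy as np

def elbo_loss(recon_nll: float, mu: np.ndarray, logvar: np.ndarray,
              step: int, anneal_steps: int = 10_000) -> float:
    """Negative ELBO with KL-weight annealing, a common guard against
    posterior collapse: the KL term is phased in gradually so the decoder
    cannot simply ignore the latent variable early in training.

    recon_nll: reconstruction negative log-likelihood of the batch
    mu, logvar: parameters of the approximate posterior q(z|x)
    """
    # KL( N(mu, sigma^2) || N(0, I) ) in closed form
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    beta = min(1.0, step / anneal_steps)   # linear annealing schedule
    return recon_nll + beta * kl

# toy usage: early in training the KL term is almost switched off
mu, logvar = np.zeros(16), np.zeros(16)
print(elbo_loss(recon_nll=42.0, mu=mu, logvar=logvar, step=100))
```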

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Robust Deep Learning Frameworks for Recognizing and Localizing Objects Accurately and Reliably

    Detection is an important task in computer vision. It requires recognizing targets within images and localizing them. The images can be 2D or 3D and can be represented by dense pixels or sparse point clouds. With the recent emergence and development of deep neural networks, many deep learning based detection frameworks have been proposed. They provide promising performance for many targets, e.g. natural objects, object parts, pedestrians and faces, and are thus widely used in many applications, including surveillance, autonomous driving and medical image analysis. However, robust object detection is still challenging. Ideal detectors should be able to handle objects with unknown occluders, different scales/movements, long-tailed difficult objects, and low-contrast radiology inputs. Recent detectors are not designed with deliberate consideration of those challenges and may suffer degraded performance. In this dissertation, we investigate those challenges and propose novel detection frameworks to mitigate them. The challenges are addressed in different aspects. (i) We address occlusion by proposing end-to-end voting mechanisms for vehicle part detection, which detect targets by accumulating cues relevant to each target: occlusion eliminates some of the cues, but the remaining cues can still detect the targets. (ii) We combine semantic segmentation with object detection to enrich the detection features in multi-layer single-stage detectors. The enriched features capture both low-level details and high-level semantics, so detection quality improves significantly for both small and large objects. (iii) We investigate the issue of long-tailed hard examples and propose a hard image mining strategy that dynamically identifies hard images and devotes more effort to them during the training phase, leading to models robust to long-tailed hard examples. (iv) For low-contrast multi-slice medical images, we design hybrid detectors that combine 2D and 3D information. Based on a stack of 2D CNNs, one per image slice, we design 3D fusion modules to bridge context information across the 2D CNNs. (v) For objects moving in sequences, we design temporal region proposals to model their movements and interactions. We model the moving objects with spatial-temporal-interactive features to detect them through past, current and future
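
The hard image mining strategy in (iii) is described only at a high level; a generic sketch of loss-proportional image sampling, assuming per-image losses are tracked between epochs (not the dissertation's exact method):

```python
import random
from typing import Dict, List

def sample_hard_images(image_losses: Dict[str, float], k: int,
                       smoothing: float = 1e-3) -> List[str]:
    """Draw k training images with probability proportional to their recent loss,
    so that 'hard' images are revisited more often in the next epoch."""
    images = list(image_losses)
    weights = [image_losses[name] + smoothing for name in images]
    return random.choices(images, weights=weights, k=k)

# toy usage: "img_002" dominates the next mini-epoch because its loss is largest
losses = {"img_001": 0.10, "img_002": 2.50, "img_003": 0.30}
print(sample_hard_images(losses, k=8))
```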