
    Deep Learning for Hindi Text Classification: A Comparison

    Natural Language Processing (NLP), and especially natural language text analysis, has seen great advances in recent times. The use of deep learning in text processing has revolutionized the field and achieved remarkable results. Different deep learning architectures such as CNNs, LSTMs, and the more recent Transformer have been used to achieve state-of-the-art results on a variety of NLP tasks. In this work, we survey a host of deep learning architectures for text classification tasks. The work is specifically concerned with the classification of Hindi text. Research on the classification of the morphologically rich and low-resource Hindi language, written in the Devanagari script, has been limited by the absence of a large labeled corpus. In this work, we use translated versions of English datasets to evaluate models based on CNN, LSTM and Attention. Multilingual pre-trained sentence embeddings based on BERT and LASER are also compared to evaluate their effectiveness for the Hindi language. The paper also serves as a tutorial for popular text classification techniques.
    Comment: Accepted at International Conference on Intelligent Human Computer Interaction (IHCI) 201
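The LSTM-based classifiers this survey compares can be sketched, in broad strokes, as an embedding layer feeding a recurrent encoder and a linear head. The following PyTorch snippet is a minimal illustration of that architecture; the vocabulary size, dimensions, and class count are placeholder values, not the paper's configuration:

```python
import torch
import torch.nn as nn

class LstmTextClassifier(nn.Module):
    """Embedding -> BiLSTM -> linear head over the final hidden states."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor of word indices
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)          # h_n: (2, batch, hidden_dim)
        final = torch.cat([h_n[0], h_n[1]], dim=1)  # both directions
        return self.head(final)               # (batch, num_classes) logits

model = LstmTextClassifier(vocab_size=5000, embed_dim=64,
                           hidden_dim=32, num_classes=3)
logits = model(torch.randint(1, 5000, (4, 20)))  # a batch of 4 sequences
print(logits.shape)  # torch.Size([4, 3])
```

A real run would train this on tokenized Hindi text; swapping the embedding layer for pre-trained multilingual sentence embeddings (BERT, LASER) changes only the encoder side.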

    The Beetle and BeeDiff Tutoring Systems

    We describe two tutorial dialogue systems that adapt techniques from task-oriented dialogue systems to tutorial dialogue. Both systems employ the same reusable deep natural language understanding and generation components to interpret students' written utterances and to automatically generate adaptive tutorial responses, with separate domain reasoners providing the necessary knowledge about the correctness of student answers and hinting strategies. We focus on integrating the domain-independent language processing components with domain-specific reasoning and tutorial components in order to improve the dialogue interaction, and present a preliminary analysis of BeeDiff's evaluation.

    Reinforcement Learning and Bandits for Speech and Language Processing: Tutorial, Review and Outlook

    In recent years, reinforcement learning and bandits have transformed a wide range of real-world applications, including healthcare, finance, recommendation systems, robotics, and, last but not least, speech and natural language processing. While most speech and language applications of reinforcement learning algorithms center on improving the training of deep neural networks with its flexible optimization properties, there is still much ground to explore in utilizing the benefits of reinforcement learning, such as its reward-driven adaptability, state representations, temporal structure and generalizability. In this survey, we present an overview of recent advancements in reinforcement learning and bandits, and discuss how they can be effectively employed to solve speech and natural language processing problems with models that are adaptive, interactive and scalable.
    Comment: To appear in Expert Systems with Applications. Accompanying INTERSPEECH 2022 Tutorial on the same topic. Including latest advancements in large language models (LLMs)

    Actuarial Applications of Natural Language Processing Using Transformers: Case Studies for Using Text Features in an Actuarial Context

    This tutorial demonstrates workflows for incorporating text data into actuarial classification and regression tasks. The main focus is on methods employing transformer-based models. A dataset of car accident descriptions with an average length of 400 words, available in English and German, and a dataset of short property insurance claims descriptions are used to demonstrate these techniques. The case studies tackle challenges related to a multilingual setting and long input sequences. They also show ways to interpret model output and to assess and improve model performance by fine-tuning the models to the domain of application or to a specific prediction task. Finally, the tutorial provides practical approaches for handling classification tasks in situations with no or only few labeled data, including but not limited to ChatGPT. The results achieved by using the language-understanding skills of off-the-shelf natural language processing (NLP) models with only minimal pre-processing and fine-tuning clearly demonstrate the power of transfer learning for practical applications.
    Comment: 47 pages, 33 figures. v3: Added new Section 10 on the use of ChatGPT for unsupervised information extraction
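Before fine-tuning a transformer, a common sanity check for a claims-classification task like the one described is a bag-of-words baseline. The sketch below is not the tutorial's transformer approach but a minimal TF-IDF plus logistic-regression pipeline; the claims descriptions and labels are invented toy data for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical short claims descriptions with coarse cause-of-loss labels.
claims = [
    "water damage in the kitchen after a pipe burst",
    "burst pipe flooded the basement overnight",
    "hail dented the roof and broke two windows",
    "storm damage to roof tiles and gutters",
]
labels = ["water", "water", "weather", "weather"]

# TF-IDF features (unigrams and bigrams) feeding a linear classifier.
pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression())
pipeline.fit(claims, labels)
print(pipeline.predict(["pipe leak caused water damage"])[0])
```

A transformer-based model replaces the TF-IDF step with contextual embeddings, which is where the long-sequence and multilingual challenges discussed in the tutorial come into play.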

    A Tour of Explicit Multilingual Semantics: Word Sense Disambiguation, Semantic Role Labeling and Semantic Parsing

    The recent advent of modern pretrained language models has sparked a revolution in Natural Language Processing (NLP), especially in multilingual and cross-lingual applications. Today, such language models have become the de facto standard for providing rich input representations to neural systems, achieving unprecedented results on an increasing range of benchmarks. However, questions that often arise are: firstly, whether current language models are, indeed, able to capture explicit, symbolic meaning; secondly, if they are, to what extent; thirdly, and perhaps more importantly, whether current approaches are capable of scaling across languages. In this cutting-edge tutorial, we will review recent efforts that have aimed at shedding light on meaning in NLP, with a focus on three key open problems in lexical and sentence-level semantics: Word Sense Disambiguation, Semantic Role Labeling, and Semantic Parsing. After a brief introduction, we will spotlight how state-of-the-art models tackle these tasks in multiple languages, showing where they excel and where they fail. We hope that this tutorial will broaden the audience interested in multilingual semantics and inspire researchers to further advance the field.

    Big Data Analytics and the Social Web: a Tutorial for the Social Scientist

    The social web, or Web 2.0, has become the biggest and most accessible repository of data about human (social) behavior in history. Due to a knowledge gap between big data analytics and established social science methodology, this enormous source of information has yet to be exploited for new and interesting studies in various social and humanities-related fields. To take one step towards closing this gap, we provide a detailed step-by-step tutorial on some of the most important web mining and analytics methods, applied in a real-world study of Croatia’s biggest political blogging site. The tutorial covers methods for data retrieval; data conversion, cleansing and organization; data analysis (natural language processing, social and conceptual network analysis); as well as data visualization and interpretation. All tools implemented for the sake of this study, the data sets at the various steps, as well as the resulting visualizations have been published online and are free to use. The tutorial is not meant to be a comprehensive overview and detailed description of all possible ways of analyzing data from the social web, but using the steps outlined herein one can certainly reproduce the results of the study or apply the same or similar methodology to other datasets. Results of the study show that a particular kind of conceptual network generated by natural language processing of articles on the blogging site, namely a conceptual network constructed by the rule that two concepts (keywords) are connected if they were extracted from the same article, seems to be the best predictor of the current political discourse in Croatia when compared to the other constructed conceptual networks. These results indicate that a comprehensive study is needed to investigate this conceptual structure further, with an accent on the dynamic processes that led to the construction of the network.
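The co-occurrence rule described in the abstract (two keywords are connected if and only if they were extracted from the same article) can be sketched in a few lines of Python. The article ids and keywords below are invented toy data, not the Croatian blog corpus:

```python
from itertools import combinations
from collections import Counter

def build_conceptual_network(articles):
    """Connect two keywords iff they were extracted from the same article.

    `articles` maps an article id to an iterable of extracted keywords.
    The returned Counter maps an undirected keyword pair to the number
    of articles in which the pair co-occurred (the edge weight).
    """
    edges = Counter()
    for keywords in articles.values():
        # sorted(set(...)) deduplicates keywords and fixes pair order,
        # so (a, b) and (b, a) count as the same undirected edge.
        for a, b in combinations(sorted(set(keywords)), 2):
            edges[(a, b)] += 1
    return edges

articles = {  # toy extracted keywords, purely illustrative
    "post1": ["election", "coalition", "budget"],
    "post2": ["election", "coalition"],
    "post3": ["budget", "tax"],
}
network = build_conceptual_network(articles)
print(network[("coalition", "election")])  # 2
```

The weighted edge list produced this way can be handed directly to a graph library for the social and conceptual network analysis steps the tutorial describes.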

    An Investigation into the Pedagogical Features of Documents

    Characterizing the content of a technical document in terms of its learning utility can be useful for applications related to education, such as generating reading lists from large collections of documents. We refer to this learning utility as the "pedagogical value" of the document to the learner. While pedagogical value is an important concept that has been studied extensively within the education domain, there has been little work exploring it from a computational, i.e., natural language processing (NLP), perspective. To allow a computational exploration of this concept, we introduce the notion of "pedagogical roles" of documents (e.g., Tutorial and Survey) as an intermediary component for the study of pedagogical value. Given the lack of available corpora for our exploration, we create the first annotated corpus of pedagogical roles and use it to test baseline techniques for automatic prediction of such roles.
    Comment: 12th Workshop on Innovative Use of NLP for Building Educational Applications (BEA) at EMNLP 2017; 12 pages

    An intelligent computer-based tutor for elementary mechanics problems

    ALBERT, an intelligent problem-solving monitor and coach, has been developed to assist students solving problems in one-dimensional kinematics. Students may type in kinematics problems directly from their textbooks. ALBERT understands the problems, knows how to solve them, and can teach students how to solve them. The program is implemented in the TUTOR language and runs on the Control Data mainframe PLATO system. A natural language interface was designed to understand kinematics problems stated in textbook English. The interface is based on a pattern recognition system which is intended to parallel a cognitive model of language processing. The natural language system has understood over 60 problems taken directly from elementary Physics textbooks. Two problem-solving routines are included in ALBERT. One is goal-directed and solves the problems using the standard kinematic equations. The other uses the definition of acceleration and the relationship between displacement and average velocity to solve the problems; it employs a forward-directed problem-solving strategy. The natural language interface and both problem-solvers are fast and completely adequate for the task. The tutorial dialogue system uses a modified version of the natural language interface which operates in a two-tier fashion: first an attempt is made to understand the input with the pattern recognition system, and if that fails, a keyword matching system is invoked. The result is a fairly robust language interface. The tutorial is driven by a tutorial management system (embodying a tutorial model) and a context model. The context model consists of a student model, a tutorial status model and a dynamic dialogue model. ALBERT permits a mixed-initiative dialogue in the discussion of a problem. The system has been tested by Physics students in more than 80 problem-solving sessions and the results have been good. The response of the students has been very favourable.
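The goal-directed routine's strategy, repeatedly applying whichever standard constant-acceleration equation has exactly one unknown, can be sketched as follows. This is an illustrative Python reconstruction under that reading of the abstract, not the original TUTOR implementation:

```python
def solve_kinematics(known):
    """Goal-directed solver over the standard kinematic equations
    for displacement s, initial velocity u, final velocity v,
    acceleration a, and time t (a sketch, not ALBERT's actual code).

    `known` maps three of 's', 'u', 'v', 'a', 't' to values; the rest
    are derived by firing any rule whose inputs are all known.
    """
    q = dict(known)
    rules = [
        ('v', {'u', 'a', 't'}, lambda: q['u'] + q['a'] * q['t']),
        ('s', {'u', 'a', 't'}, lambda: q['u'] * q['t'] + 0.5 * q['a'] * q['t'] ** 2),
        ('a', {'u', 'v', 't'}, lambda: (q['v'] - q['u']) / q['t']),
        ('s', {'u', 'v', 't'}, lambda: 0.5 * (q['u'] + q['v']) * q['t']),
        ('t', {'u', 'v', 'a'}, lambda: (q['v'] - q['u']) / q['a']),
        ('v', {'u', 'a', 's'}, lambda: (q['u'] ** 2 + 2 * q['a'] * q['s']) ** 0.5),
    ]
    changed = True
    while changed:               # iterate until no rule can fire
        changed = False
        for target, needs, rule in rules:
            if target not in q and needs <= q.keys():
                q[target] = rule()
                changed = True
    return q

# "A car accelerates from rest at 2 m/s^2 for 5 s. Find v and s."
state = solve_kinematics({'u': 0.0, 'a': 2.0, 't': 5.0})
print(state['v'], state['s'])  # 10.0 25.0
```

The forward-directed routine the abstract mentions would instead chain the definition of acceleration and the average-velocity relation from the givens, rather than selecting equations by the wanted unknown.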

    Recognizing multimodal entailment

    How information is created, shared and consumed has changed rapidly in recent decades, in part thanks to new social platforms and technologies on the web. With ever-larger amounts of unstructured data and limited labels, organizing and reconciling information from different sources and modalities is a central challenge in machine learning. This cutting-edge tutorial aims to introduce the multimodal entailment task, which can be useful for detecting semantic alignments when a single modality alone does not suffice for a whole content understanding. Starting with a brief overview of natural language processing, computer vision, structured data and neural graph learning, we lay the foundations for the multimodal sections to follow. We then discuss recent multimodal learning literature covering visual, audio and language streams, and explore case studies focusing on tasks which require fine-grained understanding of visual and linguistic semantics: question answering, veracity and hatred classification. Finally, we introduce a new dataset for recognizing multimodal entailment, exploring it in a hands-on collaborative section. Overall, this tutorial gives an overview of multimodal learning, introduces a multimodal entailment dataset, and encourages future research on the topic.

    Question Answering over Curated and Open Web Sources

    The last few years have seen an explosion of research on the topic of automated question answering (QA), spanning the communities of information retrieval, natural language processing, and artificial intelligence. This tutorial would cover the highlights of this highly active period of growth for QA, giving the audience a grasp of the families of algorithms currently in use. We partition research contributions by the underlying source from which answers are retrieved: curated knowledge graphs, unstructured text, or hybrid corpora. We choose this dimension of partitioning as it is the most discriminative when it comes to algorithm design. Other key dimensions are covered within each sub-topic, such as the complexity of the questions addressed and the degrees of explainability and interactivity introduced in the systems. We would conclude the tutorial with the most promising emerging trends in the expanse of QA, to help new entrants into this field make the best decisions to take the community forward. Much has changed in the community since the last tutorial on QA at SIGIR 2016, and we believe that this timely overview will indeed benefit a large number of conference participants.