2,098 research outputs found

    Exploring differential topic models for comparative summarization of scientific papers

    This paper investigates differential topic models (dTM) for summarizing the differences among document groups. Starting from a simple probabilistic generative model, we propose dTM-SAGE, which explicitly models the deviations of group-specific word distributions from a background word distribution to indicate how words are used differently across document groups. This makes the model more effective at capturing the characteristics that are unique to each group. To generate dTM-based comparative summaries, we propose two sentence scoring methods that measure a sentence's discriminative capacity. Experimental results on a dataset of scientific papers show that our dTM-based comparative summarization methods significantly outperform both generic baselines and state-of-the-art comparative summarization methods under ROUGE metrics.
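    The deviation-from-background idea behind dTM-SAGE can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names and the mean-deviation scoring rule are hypothetical stand-ins for the two scoring methods the paper proposes.

```python
import math

def group_word_dist(background_logp, deviation):
    """SAGE-style group distribution: softmax of background
    log-probabilities plus a sparse group-specific deviation."""
    logits = {w: background_logp[w] + deviation.get(w, 0.0)
              for w in background_logp}
    z = sum(math.exp(v) for v in logits.values())
    return {w: math.exp(v) / z for w, v in logits.items()}

def sentence_score(tokens, deviation):
    """Hypothetical discriminative score: mean deviation weight of
    the sentence's words (high = uses group-specific vocabulary)."""
    if not tokens:
        return 0.0
    return sum(deviation.get(t, 0.0) for t in tokens) / len(tokens)
```

    A sentence whose words carry large positive deviations for a group uses that group's distinctive vocabulary, which is what makes it a candidate for a comparative summary.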

    Research on the automatic construction of the resource space model for scientific literature

    The resource space model is a semantic data model that organizes Web resources based on a classification of resources. The scientific resource space is an application of the resource space model to massive scientific literature. Constructing a scientific resource space requires building a category (or concept) hierarchy and classifying resources into it; manual design suffers from heavy workload and low efficiency. In this thesis, we propose novel methods to solve the following two problems in the construction of a scientific resource space: 1. Automatic maintenance of a category hierarchy. A category hierarchy needs to evolve dynamically as new resources continually arrive, so as to satisfy the dynamic requirements of organizing and managing resources. We propose an automatic maintenance approach that modifies the category hierarchy according to the hierarchical clustering of resources, and we show the effectiveness of this method through a series of comparison experiments on multiple datasets. 2. Automatic construction of a concept hierarchy. We propose a joint extraction model based on a deep neural network that extracts entities and relations from scientific articles and builds a concept hierarchy. Experimental results show the effectiveness of the joint model on the SemEval 2017 Task 10 dataset. We also implement a prototype system of the scientific resource space, which enables comparative summarization of scientific articles. A set of novel comparative summarization methods based on differential topic models (dTM) is proposed in this thesis, and the effectiveness of the dTM-based methods is shown by a series of experimental results.
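    The maintenance step in point 1 can be pictured as a nearest-centroid assignment: a new resource joins the most similar existing category or founds a new one. This is a hypothetical simplification, not the thesis's hierarchical-clustering algorithm; the function names, the cosine similarity, and the threshold are all assumptions.

```python
def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    num = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return num / (na * nb) if na and nb else 0.0

def maintain(categories, resource, threshold=0.5):
    """Assign a new resource vector to the most similar category
    centroid, or create a fresh category when nothing is close
    enough (a simplified sketch of hierarchy maintenance)."""
    best, best_sim = None, threshold
    for name, members in categories.items():
        centroid = [sum(col) / len(members) for col in zip(*members)]
        sim = cosine(centroid, resource)
        if sim > best_sim:
            best, best_sim = name, sim
    if best is None:
        best = "cat_%d" % len(categories)
        categories[best] = []
    categories[best].append(resource)
    return best
```

    A full maintenance scheme would also split oversized categories by re-clustering their members, following the same similarity logic.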

    A Survey of GPT-3 Family Large Language Models Including ChatGPT and GPT-4

    Large language models (LLMs) are a special class of pretrained language models obtained by scaling model size, pretraining corpus, and computation. Because of their large size and pretraining on large volumes of text data, LLMs exhibit special abilities that allow them to achieve remarkable performance without any task-specific training in many natural language processing tasks. The era of LLMs started with the OpenAI GPT-3 model, and the popularity of LLMs has increased rapidly since the introduction of models like ChatGPT and GPT-4. We refer to GPT-3 and its successor OpenAI models, including ChatGPT and GPT-4, as GPT-3 family large language models (GLLMs). With the ever-rising popularity of GLLMs, especially in the research community, there is a strong need for a comprehensive survey that summarizes recent research progress in multiple dimensions and can guide the research community with insightful future research directions. We start the survey with foundational concepts such as transformers, transfer learning, self-supervised learning, pretrained language models, and large language models. We then present a brief overview of GLLMs and discuss their performance in various downstream tasks, specific domains, and multiple languages. We also discuss the data labelling and data augmentation abilities of GLLMs, their robustness, their effectiveness as evaluators, and finally conclude with multiple insightful future research directions. In summary, this comprehensive survey will serve as a good resource for both academic and industry readers to stay updated on the latest research related to GPT-3 family large language models. Comment: Preprint under review, 58 pages.

    PersoNER: Persian named-entity recognition

    Named-Entity Recognition (NER) is still a challenging task for languages with low digital resources. The main difficulties arise from the scarcity of annotated corpora and the consequently problematic training of an effective NER pipeline. To bridge this gap, in this paper we target the Persian language, which is spoken by a population of over a hundred million people worldwide. We first present and provide ArmanPersoNERCorpus, the first manually annotated Persian NER corpus. Then, we introduce PersoNER, an NER pipeline for Persian that leverages a word embedding and a sequential max-margin classifier. The experimental results show that the proposed approach achieves promising MUC7 and CoNLL scores while outperforming two alternatives based on a CRF and a recurrent neural network.
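    A sequential max-margin classifier of the kind the pipeline uses can be approximated by an online margin-based learner over sparse token features. The sketch below is a simplified stand-in, not the PersoNER implementation; the feature names, labels, and margin of 1 are assumptions for illustration.

```python
def score(w, feats, label):
    """Sparse linear score: sum of weights for (label, feature) pairs."""
    return sum(w.get((label, f), 0.0) for f in feats)

def train(examples, labels, epochs=10):
    """Online margin-based learner (a simplified stand-in for a
    sequential max-margin classifier): update whenever the gold
    label fails to beat the best wrong label by a margin of 1."""
    w = {}
    for _ in range(epochs):
        for feats, gold in examples:
            wrong = max((y for y in labels if y != gold),
                        key=lambda y: score(w, feats, y))
            if score(w, feats, gold) - score(w, feats, wrong) < 1.0:
                for f in feats:
                    w[(gold, f)] = w.get((gold, f), 0.0) + 1.0
                    w[(wrong, f)] = w.get((wrong, f), 0.0) - 1.0
    return w

# toy per-token examples: surface features -> entity tag
examples = [(["cap", "w=Ali"], "PER"), (["w=went"], "O"),
            (["cap", "w=Sara"], "PER"), (["w=home"], "O")]
weights = train(examples, ["PER", "O"])
```

    In the actual pipeline, the features would be word-embedding dimensions rather than hand-written indicators, and the classifier would be applied token by token along the sentence.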

    Reinforcement Learning for Generative AI: A Survey

    Deep generative AI has long been an essential topic in the machine learning community and impacts a number of application areas, such as text generation and computer vision. The major paradigm for training a generative model is maximum likelihood estimation, which pushes the learner to capture and approximate the target data distribution by decreasing the divergence between the model distribution and the target distribution. This formulation successfully establishes the objective of generative tasks, but it cannot satisfy all the requirements a user might expect from a generative model. Reinforcement learning, a competitive option that injects new training signals by creating objectives that exploit novel signals, has demonstrated its power and flexibility to incorporate human inductive bias from multiple angles, such as adversarial learning, hand-designed rules, and learned reward models. As a result, reinforcement learning has become a trending research field and has stretched the limits of generative AI in both model design and application, so it is reasonable to summarize recent advances in a comprehensive review. Although there have been recent surveys of individual application areas, this survey aims to provide a high-level review that spans a range of application areas. We provide a rigorous taxonomy and broad coverage of models and applications. Notably, we also survey the fast-developing large language model area. We conclude by outlining potential directions that might overcome the limits of current models and expand the frontiers of generative AI.
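    The contrast between maximum likelihood and reward-driven training can be made concrete with a toy REINFORCE loop: instead of fitting a data distribution, the model nudges the log-probability of sampled actions in proportion to the reward they earn. This is a minimal illustrative sketch, with a hand-designed reward standing in for a learned reward model.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce(reward, n_actions=3, steps=2000, lr=0.1, seed=0):
    """Toy REINFORCE: train a single softmax policy on a reward
    signal instead of maximum likelihood on data."""
    rng = random.Random(seed)
    logits = [0.0] * n_actions
    for _ in range(steps):
        probs = softmax(logits)
        action = rng.choices(range(n_actions), weights=probs)[0]
        r = reward(action)
        # gradient of log pi(action) wrt logits: one-hot(action) - probs
        for i in range(n_actions):
            indicator = 1.0 if i == action else 0.0
            logits[i] += lr * r * (indicator - probs[i])
    return softmax(logits)

# a hand-designed reward standing in for a learned reward model
probs = reinforce(lambda a: 1.0 if a == 2 else 0.0)
```

    The policy concentrates its probability mass on the rewarded action, a property no likelihood objective over unlabeled data could express.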

    In Search of the Perfect Prompt

    This study investigates the efficacy of soft and hard prompting strategies in the scientific domain, specifically for the task of conversational abstract generation. The proposed approach incorporates two distinct methods, prompt engineering and prompt tuning, within a Conversational Recommender System (CRS) whose primary objective is to help users generate abstracts for their research. The study employs an evaluation approach that combines user research with objective performance criteria, and it examines the strengths and weaknesses of both categories of prompts, starting with an analysis of the existing literature on CRS and prompting and then conducting original experiments. The study makes three primary contributions. First, a set of requirements and hypothetical scenarios is compiled from an analysis of the problem; this wish list presents technological, user, and functional perspectives that can inform future studies in this area. Second, user studies are an integral element of the evaluation methodology: we analyze several factors for the six participants, including cognitive load, response time, and overall satisfaction while applying the prompting strategies within the CRS, and we examine the behavior and needs of the target demographic of academics and researchers. Our findings suggest that this group tends to favor factual, question-and-answer interactions over more expansive conversational exchanges. Third, the study assesses the comprehensibility and relevance of the generated abstracts using well-established criteria such as ROUGE and F1 scores.
    In our experiments, combining prompts with text-generation tasks tended to produce scientific abstracts that were imprecise and overly broad, which contradicts users' expectations. The findings shed light on the difficulties and advantages of applying prompting techniques within a CRS. The study contributes by recognizing the importance of contextual comprehension and by examining prompting strategies from both technical and user-centric viewpoints; a primary finding is that prompt tactics must be tailored to user preferences and domain demands. These findings contribute to the existing body of knowledge on conversational recommender systems and their applications in natural language processing.
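    The ROUGE scores used in the evaluation reduce, in their simplest ROUGE-1 form, to unigram overlap between a candidate and a reference abstract. A minimal sketch, assuming a single reference and whitespace tokenization:

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """ROUGE-1 F1: unigram overlap counted with multiset
    intersection; precision over the candidate tokens, recall
    over the reference tokens."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

    A short but accurate candidate scores high precision and low recall, which is exactly the imprecise-versus-broad trade-off the user study surfaces.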

    Evaluating Information Retrieval and Access Tasks

    This open access book summarizes the first two decades of the NII Testbeds and Community for Information access Research (NTCIR). NTCIR is a series of evaluation forums run by a global team of researchers and hosted by the National Institute of Informatics (NII), Japan. The book is unique in that it discusses not just what was done at NTCIR, but also how it was done and the impact it has achieved. For example, some chapters show the early seeds of what eventually grew to be the search engines that provide access to content on the World Wide Web, today’s smartphones that can tailor what they show to the needs of their owners, and the smart speakers that enrich our lives at home and on the move. We also get glimpses into how new search engines can be built for mathematical formulae, or for the digital record of a lived human life. Key to the success of the NTCIR endeavor was the early recognition that information access research is an empirical discipline and that evaluation therefore lay at the core of the enterprise. Evaluation is thus at the heart of each chapter in this book; the chapters show, for example, how the recognition that some documents are more important than others has shaped thinking about evaluation design. The thirty-three contributors to this volume speak for the many hundreds of researchers from dozens of countries around the world who together shaped NTCIR as organizers and participants. This book is suitable for researchers, practitioners, and students: anyone who wants to learn about past and present evaluation efforts in information retrieval, information access, and natural language processing, as well as those who want to participate in an evaluation task or even to design and organize one.
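    The observation that some documents are more important than others leads to graded-relevance measures, of which nDCG is one widely used example. The sketch below is illustrative of that family of measures in general, not a claim about the specific measures used at NTCIR.

```python
import math

def dcg(gains):
    """Discounted cumulative gain with a log2 position discount."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg(gains, k=None):
    """Normalized DCG: DCG of the ranked gains divided by the DCG
    of the ideal ordering, so highly relevant documents count
    more when they appear near the top of the ranking."""
    k = k or len(gains)
    best = dcg(sorted(gains, reverse=True)[:k])
    return dcg(gains[:k]) / best if best else 0.0
```

    A ranking that buries its only highly relevant document at rank three scores half of the ideal, which binary-relevance measures like plain precision cannot express.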

    Exploring large language model for next generation of artificial intelligence in ophthalmology

    In recent years, ophthalmology has advanced significantly thanks to rapid progress in artificial intelligence (AI) technologies. Large language models (LLMs) like ChatGPT have emerged as powerful tools for natural language processing. This review ultimately includes 108 studies and explores the potential of LLMs in the next generation of AI in ophthalmology. The included studies span a diverse range of subfields, highlighting the versatile applications of LLMs: general ophthalmology, retinal diseases, anterior segment diseases, glaucoma, and ophthalmic plastics. The results show that LLMs are competent at generating informative and contextually relevant responses, potentially reducing diagnostic errors and improving patient outcomes. Overall, this study highlights LLMs’ promising role in shaping the future of AI in ophthalmology. By leveraging AI, ophthalmologists can access a wealth of information, enhance diagnostic accuracy, and provide better patient care. Despite the challenges, continued AI advancement and ongoing research will pave the way for the next generation of AI-assisted ophthalmic practice.