3,798 research outputs found

    A Documentation and Analysis of Surdna Arts Teachers Fellowship Program (SATF): The First Decade 2000-2010

    Based on documents, interviews, and site visits, this report reviews the design and impact of a program to boost teaching and learning quality at public arts high schools by supporting teachers' artistic and professional development, and lists issues for consideration.

    A Generative Adversarial Networks Based Approach for Literary Translation

    This study aims to address the mistranslations that arise because intelligent literary translation remains at the stage of text description and elaboration and lacks grounding in relevant facts. It therefore proposes a method for improving machine-translated literary text based on a generative adversarial network (GAN). First, an adaptive literary translation mode is designed under the GAN; the text-improvement data are then preprocessed, and data mining for text-improvement quality evaluation is carried out. From the mining results, a quality evaluation model for improved literary translation text is constructed and used to assess the quality of each improvement. Based on these quality results, the paper builds the text-improvement model, designs the improvement process, and completes the GAN-based method for improving literary translation text. Experimental results show that the method detects mistranslation features more effectively, is more stable, yields accurate and reliable results, and can improve the literary literacy of students and teachers.
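The adversarial setup this abstract describes can be sketched at a toy scale: a generator produces candidate outputs and a discriminator scores them against "real" high-quality samples, each trained against the other. Everything below is an illustrative one-dimensional stand-in (the distributions, losses, and parameters are assumptions, not the paper's implementation):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" samples stand in for embeddings of high-quality translations.
REAL_MEAN = 3.0

def real_sample():
    return REAL_MEAN + random.gauss(0, 0.2)

# Generator g(z) = a*z + b, discriminator d(x) = sigmoid(w*x + c),
# trained with manual gradients of the standard GAN losses.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.5, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    z = random.gauss(0, 1)
    x_real = real_sample()
    x_fake = a * z + b

    # Discriminator step: minimize -[log d(x_real) + log(1 - d(x_fake))].
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    gs_real = -(1.0 - d_real)   # dL/ds at the real sample
    gs_fake = d_fake            # dL/ds at the fake sample
    w -= lr * (gs_real * x_real + gs_fake * x_fake)
    c -= lr * (gs_real + gs_fake)

    # Generator step: minimize -log d(x_fake) (non-saturating loss).
    d_fake = sigmoid(w * x_fake + c)
    gs = -(1.0 - d_fake)        # dL_G/ds at the fake sample
    a -= lr * gs * w * z        # chain rule through x_fake = a*z + b
    b -= lr * gs * w
```

After training, the generator's mean output (roughly `b`) should drift toward the real mean as the discriminator loses its ability to separate the two distributions; the paper's actual model operates on translated text rather than scalars.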

    A Survey of GPT-3 Family Large Language Models Including ChatGPT and GPT-4

    Large language models (LLMs) are a special class of pretrained language models obtained by scaling model size, pretraining corpus and computation. Because of their large size and pretraining on large volumes of text data, LLMs exhibit special abilities that allow them to achieve remarkable performance without any task-specific training on many natural language processing tasks. The era of LLMs started with the OpenAI GPT-3 model, and the popularity of LLMs has increased rapidly since the introduction of models like ChatGPT and GPT-4. We refer to GPT-3 and its successor OpenAI models, including ChatGPT and GPT-4, as GPT-3 family large language models (GLLMs). With the ever-rising popularity of GLLMs, especially in the research community, there is a strong need for a comprehensive survey that summarizes the recent research progress in multiple dimensions and can guide the research community with insightful future research directions. We start the survey with foundation concepts like transformers, transfer learning, self-supervised learning, pretrained language models and large language models. We then present a brief overview of GLLMs and discuss their performance in various downstream tasks, specific domains and multiple languages. We also discuss the data labelling and data augmentation abilities of GLLMs, the robustness of GLLMs, the effectiveness of GLLMs as evaluators, and finally conclude with multiple insightful future research directions. To summarize, this comprehensive survey will serve as a good resource for both academic and industry readers to stay updated with the latest research related to GPT-3 family large language models. (Comment: preprint under review, 58 pages.)

    A Survey of Methods for Addressing Class Imbalance in Deep-Learning Based Natural Language Processing

    Many natural language processing (NLP) tasks are naturally imbalanced, as some target categories occur much more frequently than others in the real world. In such scenarios, current NLP models still tend to perform poorly on less frequent classes. Addressing class imbalance in NLP is an active research topic, yet finding a good approach for a particular task and imbalance scenario is difficult. With this survey, the first overview of class imbalance in deep-learning based NLP, we provide guidance for NLP researchers and practitioners dealing with imbalanced data. We first discuss various types of controlled and real-world class imbalance. Our survey then covers approaches that have been explicitly proposed for class-imbalanced NLP tasks or, originating in the computer vision community, have been evaluated on them. We organize the methods by whether they are based on sampling, data augmentation, choice of loss function, staged learning, or model design. Finally, we discuss open problems such as dealing with multi-label scenarios, and propose systematic benchmarking and reporting in order to move forward on this problem as a community.
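One member of the loss-function family this survey catalogues is cross-entropy with per-class weights, typically set inversely proportional to class frequency so rare classes contribute more to the loss. A minimal sketch (the inverse-frequency normalization below is one common choice, not the survey's prescription):

```python
import math
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights, normalized so they average to 1."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

def weighted_cross_entropy(probs, labels, weights):
    """Mean of -w_y * log p(y); probs[i] maps class -> predicted prob."""
    total = 0.0
    for p, y in zip(probs, labels):
        total += -weights[y] * math.log(p[y])
    return total / len(labels)

# Imbalanced toy data: four majority-class examples, one minority.
labels = ["neg", "neg", "neg", "neg", "pos"]
weights = class_weights(labels)   # neg -> 0.625, pos -> 2.5

# With a uniform 50/50 predictor, the weighted loss equals log 2,
# but errors on "pos" now cost four times as much as errors on "neg".
probs = [{"neg": 0.5, "pos": 0.5}] * len(labels)
loss = weighted_cross_entropy(probs, labels, weights)
```

The same weights slot directly into framework losses (e.g. a `weight` tensor for cross-entropy in common deep-learning libraries), which is how such schemes are usually deployed in practice.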

    Social media mental health analysis framework through applied computational approaches

    Studies have shown that mental illness burdens not only public health and productivity but also established market economies throughout the world. However, mental disorders are difficult to diagnose and monitor through traditional methods, which heavily rely on interviews, questionnaires and surveys, resulting in high under-diagnosis and under-treatment rates. The increasing use of online social media, such as Facebook and Twitter, is now a common part of people’s everyday life. The continuous and real-time user-generated content often reflects feelings, opinions, social status and behaviours of individuals, creating an unprecedented wealth of person-specific information. With advances in data science, social media has already been increasingly employed in population health monitoring and more recently mental health applications to understand mental disorders as well as to develop online screening and intervention tools. However, existing research efforts are still in their infancy, primarily aimed at highlighting the potential of employing social media in mental health research. The majority of work is developed on ad hoc datasets and lacks a systematic research pipeline. [Continues.]