
    Perceptions of Heroism: Characteristics, Functions and Influencing Factors among Chinese College Students in the Post-pandemic Era

    Heroes play a significant role in shaping popular perceptions of morality, justice, and social values. During the COVID-19 pandemic, people's anticipation of heroes intensified, and their perceptions of heroism may have been reshaped by the pandemic. This paper investigates the perceived heroism of Chinese higher education students (n = 847) in the post-pandemic era by means of an online questionnaire. First, we explore the main characteristics of the heroes worshipped by Chinese higher education students, which we summarize as diversified, epoch-making, and civilian. Then we investigate the functions of heroes, which we categorize as enhancing, moral modeling, and protecting. Finally, we analyze five factors (intrinsic attraction, social reinforcement, education, family background, and publicity) that may predict students' hero worship. The regression analysis reveals that all five factors have significantly positive influences on higher education students' perceptions of heroism, with weights of 0.364 (intrinsic attraction), 0.316 (social reinforcement), 0.227 (publicity), 0.190 (family background), and 0.156 (education). These findings provide a theoretical and empirical contribution to the study of heroism and support the sustainable development of Chinese higher education in the post-pandemic era.

    Do Large Language Models Know What They Don't Know?

    Large language models (LLMs) possess a wealth of knowledge that allows them to excel at various Natural Language Processing (NLP) tasks. Current research focuses on enhancing their performance within their existing knowledge. Despite this vast knowledge, LLMs remain limited by the amount of information they can accommodate and comprehend. The ability to understand their own limitations on the unknown, referred to as self-knowledge, is therefore of paramount importance. This study evaluates LLMs' self-knowledge by assessing their ability to identify unanswerable or unknowable questions. We introduce an automated methodology to detect uncertainty in the responses of these models, providing a novel measure of their self-knowledge. We further introduce a unique dataset, SelfAware, consisting of unanswerable questions from five diverse categories and their answerable counterparts. Our extensive analysis, involving 20 LLMs including GPT-3, InstructGPT, and LLaMA, reveals an intrinsic capacity for self-knowledge within these models. Moreover, we demonstrate that in-context learning and instruction tuning can further enhance this self-knowledge. Despite this promising insight, our findings also highlight a considerable gap between the capabilities of these models and human proficiency in recognizing the limits of their knowledge. Comment: 10 pages, 9 figures, accepted by Findings of ACL202
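    The abstract's automated uncertainty detection can be illustrated with a minimal sketch: compare each sentence of a model's response against a small set of reference uncertainty expressions using lexical overlap, and flag the response if any sentence is similar enough. The phrase list, function names, and Jaccard similarity here are illustrative assumptions, not the paper's actual implementation.

    ```python
    def _overlap(a: str, b: str) -> float:
        """Jaccard similarity over lowercase word sets."""
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    # Illustrative reference phrases signalling that a model declines to answer.
    UNCERTAIN_PHRASES = [
        "the answer is unknown",
        "it is impossible to know",
        "i could not find an answer",
    ]

    def expresses_uncertainty(response: str, threshold: float = 0.5) -> bool:
        """Flag a response as uncertain if any of its sentences resembles a reference phrase."""
        sentences = [s.strip() for s in response.split(".") if s.strip()]
        return any(
            _overlap(sent, ref) >= threshold
            for sent in sentences
            for ref in UNCERTAIN_PHRASES
        )
    ```

    In practice an embedding-based similarity would replace the word-overlap measure, but the thresholded comparison against reference expressions is the same idea.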

    Call Sequence Prediction through Probabilistic Calling Automata

    Predicting a sequence of upcoming function calls is important for optimizing programs written in modern managed languages (e.g., Java, JavaScript, C#). Existing function call predictors are mainly built on statistical patterns, suitable for predicting a single call but not a sequence of calls. This paper presents a new way to enable call sequence prediction, which exploits program structures through Probabilistic Calling Automata (PCA), a new program representation that captures both the inherent ensuing relations among function calls and the probabilistic nature of execution paths. It shows that PCA-based prediction outperforms existing predictors, yielding substantial speedup when applied to guide Just-In-Time compilation. By enabling accurate, efficient call sequence prediction for the first time, PCA-based predictors open up many new opportunities for dynamic program optimizations.
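    The flavor of sequence prediction from observed call behavior can be sketched with a much-simplified stand-in: a first-order transition table learned from call traces, walked greedily to predict the next k calls. Note that an actual PCA encodes calling structure and ensuing relations that this Markov-style sketch does not capture; the code below is an illustrative assumption, not the paper's representation.

    ```python
    from collections import Counter, defaultdict

    def train(traces):
        """Count caller -> next-call transitions from observed call traces."""
        table = defaultdict(Counter)
        for trace in traces:
            for cur, nxt in zip(trace, trace[1:]):
                table[cur][nxt] += 1
        return table

    def predict_sequence(table, start, k):
        """Greedily follow the most probable transition for up to k steps."""
        seq, cur = [], start
        for _ in range(k):
            if not table[cur]:
                break  # no observed successor: stop predicting
            cur = table[cur].most_common(1)[0][0]
            seq.append(cur)
        return seq
    ```

    The contrast with the paper's approach is the point: a flat transition table loses the nesting of calls, which is exactly what the automaton-based representation is designed to keep.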

    COLO: A Contrastive Learning based Re-ranking Framework for One-Stage Summarization

    Traditional training paradigms for extractive and abstractive summarization systems typically use only token-level or sentence-level training objectives. However, the output summary is evaluated at the summary level, which leads to an inconsistency between training and evaluation. In this paper, we propose a Contrastive Learning based re-ranking framework for one-stage summarization called COLO. By modeling a contrastive objective, we show that the summarization model is able to directly generate summaries according to the summary-level score without additional modules and parameters. Extensive experiments demonstrate that COLO boosts the extractive and abstractive results of one-stage systems on the CNN/DailyMail benchmark to 44.58 and 46.33 ROUGE-1 score while preserving parameter efficiency and inference efficiency. Compared with state-of-the-art multi-stage systems, we save more than 100 GPU training hours and obtain a 3-8x speed-up during inference while maintaining comparable results. Comment: Accepted by COLING 202
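    A summary-level contrastive objective can be sketched, under assumptions, as a pairwise margin loss that pushes the model to score higher-quality candidate summaries above lower-quality ones. The scores below are plain floats standing in for model-assigned summary scores, and the margin formulation is an illustrative choice, not COLO's exact loss.

    ```python
    def contrastive_ranking_loss(scores, margin=0.1):
        """Pairwise margin loss over candidate summaries sorted best-first.

        scores[i] is the model's score for the i-th best candidate;
        a better candidate should outscore a worse one by a margin that
        grows with the gap in their quality ranks.
        """
        loss = 0.0
        for i in range(len(scores)):
            for j in range(i + 1, len(scores)):
                # want scores[i] - scores[j] >= margin * (j - i)
                loss += max(0.0, margin * (j - i) - (scores[i] - scores[j]))
        return loss
    ```

    When candidate scores already respect the quality ordering with sufficient separation, the loss is zero; misordered candidates contribute a penalty proportional to how badly they are misranked.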

    A Micro EIT Sensor for Real-time and Non-destructive 3-D Cultivated Cell Imaging


    Immune checkpoint inhibitors in colorectal cancer: limitation and challenges

    Colorectal cancer (CRC) exhibits a notable prevalence and propensity for metastasis, but current therapeutic interventions for metastatic colorectal cancer have yielded suboptimal results. Immune checkpoint inhibitors (ICIs) can curb tumor development by preventing the tumor's immune evasion, presenting cancer patients with a new treatment alternative. However, the increased use of ICIs in CRC has raised several issues. In particular, ICIs have demonstrated significant clinical effectiveness in patients with microsatellite instability-high (MSI-H) CRC, whereas their efficacy is limited in microsatellite-stable (MSS) disease. Acquired resistance can still occur even in patients with an initially positive response to ICIs. This paper describes the efficacy of ICIs in the current clinical treatment of CRC; discusses the mechanisms by which acquired resistance occurs, primarily the loss and impaired presentation of tumor antigens, a reduced IFN-λ response, and cytokine or metabolic dysregulation; and summarizes the incidence of adverse effects. We posit that the future of ICIs hinges upon the advancement of precise predictive biomarkers and the implementation of combination therapies. This study aims to elucidate the constraints associated with ICIs in CRC and foster targeted problem-solving approaches, thereby enhancing the potential benefits for more patients.

    Rethinking Label Smoothing on Multi-hop Question Answering

    Multi-Hop Question Answering (MHQA) is a significant area in question answering, requiring multiple reasoning components, including document retrieval, supporting sentence prediction, and answer span extraction. In this work, we analyze the primary factors limiting the performance of multi-hop reasoning and introduce label smoothing into the MHQA task. This aims to enhance the generalization capabilities of MHQA systems and mitigate overfitting of answer spans and reasoning paths in the training set. We propose a novel label smoothing technique, F1 Smoothing, which incorporates uncertainty into the learning process and is specifically tailored for Machine Reading Comprehension (MRC) tasks. Inspired by the principles of curriculum learning, we introduce the Linear Decay Label Smoothing Algorithm (LDLA), which progressively reduces uncertainty throughout the training process. Experiments on the HotpotQA dataset demonstrate the effectiveness of our methods in enhancing performance and generalizability in multi-hop reasoning, achieving new state-of-the-art results on the leaderboard. Comment: 13 pages, 8 figures, accepted by CCL202
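    The linear decay idea behind LDLA can be illustrated with a short sketch: the smoothing factor shrinks linearly as training progresses, and each smoothed target mixes the one-hot label with a uniform distribution. The decay shape follows the abstract's description; the initial value and function names are illustrative assumptions.

    ```python
    def epsilon_at(step, total_steps, eps0=0.1):
        """Linearly decay the smoothing factor from eps0 to 0 over training."""
        return eps0 * (1 - step / total_steps)

    def smoothed_targets(gold_index, num_classes, eps):
        """Mix a one-hot label with the uniform distribution.

        The gold class keeps probability 1 - eps plus its uniform share;
        every other class receives eps / num_classes.
        """
        uniform = eps / num_classes
        targets = [uniform] * num_classes
        targets[gold_index] += 1.0 - eps
        return targets
    ```

    Early in training the targets carry more uncertainty, which discourages overconfident answer spans; by the final steps the targets approach plain one-hot labels, matching the curriculum-learning intuition the abstract cites.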