
    The Science of Detecting LLM-Generated Texts

    The emergence of large language models (LLMs) has resulted in the production of LLM-generated texts that are highly sophisticated and almost indistinguishable from texts written by humans. However, this has also sparked concerns about the potential misuse of such texts, such as spreading misinformation and causing disruptions in the education system. Although many detection approaches have been proposed, a comprehensive understanding of the achievements and challenges is still lacking. This survey aims to provide an overview of existing LLM-generated text detection techniques and to enhance the control and regulation of language generation models. Furthermore, we emphasize crucial considerations for future research, including the development of comprehensive evaluation metrics and the threat posed by open-source LLMs, to drive progress in the area of LLM-generated text detection.
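    A common baseline family covered by such surveys is likelihood-based detection, where a scoring language model flags text that it finds unusually predictable. The sketch below is a minimal illustration of that idea, not a method from the survey; the gpt2 scoring model and the threshold value are assumptions for demonstration.

```python
# Minimal likelihood-based detection sketch (illustrative only): text with
# unusually low perplexity under a scoring LM is flagged as possibly
# machine-generated. The model name and threshold are assumed, not taken
# from the survey.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Per-token perplexity of `text` under the scoring model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
    return float(torch.exp(loss))

def looks_llm_generated(text: str, threshold: float = 20.0) -> bool:
    # Lower perplexity than the (hypothetical) threshold -> more LLM-like.
    return perplexity(text) < threshold
```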

    SPeC: A Soft Prompt-Based Calibration on Mitigating Performance Variability in Clinical Notes Summarization

    Electronic health records (EHRs) store an extensive array of patient information, encompassing medical histories, diagnoses, treatments, and test outcomes. These records are crucial for enabling healthcare providers to make well-informed decisions regarding patient care. Summarizing clinical notes further assists healthcare professionals in pinpointing potential health risks and making better-informed decisions. This process contributes to reducing errors and enhancing patient outcomes by ensuring providers have access to the most pertinent and current patient data. Recent research has shown that incorporating prompts with large language models (LLMs) substantially boosts the efficacy of summarization tasks. However, we show that this approach also leads to increased output variance, resulting in notably divergent outputs even when prompts share similar meanings. To tackle this challenge, we introduce a model-agnostic Soft Prompt-Based Calibration (SPeC) pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization. Experimental findings on multiple clinical note tasks and LLMs indicate that our method not only bolsters performance but also effectively curbs variance for various LLMs, providing a more uniform and dependable solution for summarizing vital medical information.
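    The abstract describes SPeC only at a high level. As a rough illustration of the underlying soft-prompt mechanism (a generic sketch, not the authors' pipeline), a small block of trainable embeddings can be prepended to the token embeddings of a frozen LLM and tuned on the summarization task:

```python
# Generic soft-prompt mechanism (illustrative; not the SPeC implementation):
# trainable prompt embeddings are prepended to the frozen model's input
# embeddings and optimized on the downstream clinical-summarization task.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, n_prompt_tokens: int, embed_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, embed_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, embed_dim)
        batch = token_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)

# Usage sketch (frozen_lm is a hypothetical HuggingFace-style model):
#   embeds = frozen_lm.get_input_embeddings()(input_ids)
#   outputs = frozen_lm(inputs_embeds=SoftPrompt(20, embeds.size(-1))(embeds))
```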

    On determination of the geometric cosmological constant from the OPERA experiment of superluminal neutrinos

    The recent OPERA experiment on superluminal neutrinos has deep consequences for cosmology. A fundamental constant in cosmology is the cosmological constant. From observations one can estimate the effective cosmological constant Λ_eff, which is the sum of the quantum zero-point energy Λ_{dark energy} and the geometric cosmological constant Λ. The OPERA experiment can be applied to determine the geometric cosmological constant Λ. This is the first time the contributions of Λ and Λ_{dark energy} have been distinguished from each other by experiment. The determination is based on an explanation of the OPERA experiment in the framework of Special Relativity with de Sitter space-time symmetry.
    Comment: 7 pages, no figures
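    The decomposition referred to above can be written compactly as follows (a restatement of the relation stated in the abstract, nothing more):

```latex
\Lambda_{\mathrm{eff}} \;=\; \Lambda \;+\; \Lambda_{\mathrm{dark\ energy}}
```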

    Efficient XAI Techniques: A Taxonomic Survey

    Recently, there has been a growing demand for the deployment of Explainable Artificial Intelligence (XAI) algorithms in real-world applications. However, traditional XAI methods typically suffer from high computational complexity, which discourages their deployment in real-time systems that must meet the time-demanding requirements of real-world scenarios. Although many approaches have been proposed to improve the efficiency of XAI methods, a comprehensive understanding of the achievements and challenges is still needed. To this end, in this paper we provide a review of efficient XAI. Specifically, we categorize existing techniques of XAI acceleration into efficient non-amortized and efficient amortized methods. The efficient non-amortized methods focus on data-centric or model-centric acceleration for each individual instance. In contrast, amortized methods focus on learning a unified distribution of model explanations, following predictive, generative, or reinforcement frameworks, to rapidly derive multiple model explanations. We also analyze the limitations of an efficient XAI pipeline from the perspectives of the training phase, the deployment phase, and the use scenarios. Finally, we summarize the challenges of deploying XAI acceleration methods in real-world scenarios, of overcoming the trade-off between faithfulness and efficiency, and of selecting among different acceleration methods.
    Comment: 15 pages, 3 figures
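    As a rough illustration of the amortized category described above (a sketch, not a method from the survey), an explainer network can be trained to imitate attributions precomputed by an expensive explainer, so that deployment-time explanations cost a single forward pass:

```python
# Illustrative amortized explainer: learn to reproduce attributions that were
# precomputed offline by a slow explainer (e.g., SHAP), amortizing the cost
# across instances. Architecture and sizes are assumptions for demonstration.
import torch
import torch.nn as nn

class AmortizedExplainer(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features),  # one attribution score per feature
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train_step(explainer, optimizer, x, target_attr):
    # target_attr: attributions produced offline by the expensive explainer.
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(explainer(x), target_attr)
    loss.backward()
    optimizer.step()
    return loss.item()
```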

    Prevalence of Pediculus Capitis Infestation Among School Children of Chinese Refugees Residing in Mountainous Areas of Northern Thailand

    An epidemiologic survey of Pediculus capitis infestation among Akka aboriginal and Han children of Chinese refugees living in mountainous areas at elevations of 1,100 to 1,400 m in Chiang-Rai Province of northern Thailand was conducted during January 2003. Of the 303 children examined, 43 (14.2%) had P. capitis infestation. The overall infestation rate for P. capitis in Akka children (29.3%, 12/41) was significantly higher than that in Han children (11.8%, 31/262; χ² = 8.161, p = 0.002). The prevalence in Akka (52.2%, 12/23) and Han girls (19.7%, 31/157) was higher than that in Akka (0%) and Han boys (0%), respectively (p < 0.001), and the prevalence was higher in Akka girls than in Han girls (χ² = 10.978, p = 0.001). The high prevalence of P. capitis infestation among these girls was possibly due to poor environmental hygiene and the unavailability of sufficient water.
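    The group comparisons above are chi-square tests on 2×2 contingency tables. The sketch below shows how such a comparison is typically computed from the Akka/Han counts quoted in the abstract; the exact statistic depends on the test variant and continuity correction used, so it is not claimed to reproduce the reported χ² = 8.161.

```python
# Generic 2x2 chi-square comparison of infestation rates (illustrative only;
# the value depends on the test variant / continuity correction used).
from scipy.stats import chi2_contingency

#            infested  not infested
table = [[12, 41 - 12],    # Akka children (12/41 infested)
         [31, 262 - 31]]   # Han children  (31/262 infested)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}")
```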

    Towards Assumption-free Bias Mitigation

    Despite their impressive predictive ability, machine learning models can show discrimination towards certain demographics and suffer from unfair prediction behaviors. To alleviate this discrimination, extensive studies focus on eliminating the unequal distribution of sensitive attributes via multiple approaches. However, due to privacy concerns, sensitive attributes are often either unavailable or missing in real-world scenarios. Therefore, several existing works alleviate bias without sensitive attributes. These studies face challenges: they either rely on inaccurate predictions of sensitive attributes or need to mitigate the unequal distribution of manually defined non-sensitive attributes that are related to bias. The latter requires strong assumptions about the correlation between sensitive and non-sensitive attributes. As data distributions and task goals vary, the strong assumption on non-sensitive attributes may not hold and requires domain expertise. In this work, we propose an assumption-free framework that detects the related attributes automatically by modeling feature interaction for bias mitigation. The proposed framework aims to mitigate the unfair impact of identified biased feature interactions. Experimental results on four real-world datasets demonstrate that our proposed framework can significantly alleviate unfair prediction behaviors by considering biased feature interactions.
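    The abstract does not spell out how feature interactions are modeled. As a loose illustration of pairwise interaction modeling in general (an assumed sketch, not the paper's framework), a symmetric weight matrix can score every feature pair, and the strongest learned interactions can then be audited for unfair impact:

```python
# Illustrative pairwise-interaction model (not the paper's framework): a
# symmetric weight matrix W scores each feature pair; pairs with large |W|
# are candidates for a downstream fairness audit.
import torch
import torch.nn as nn

class PairwiseInteractionModel(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.linear = nn.Linear(n_features, 1)
        self.W = nn.Parameter(torch.zeros(n_features, n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Interaction term sums W[i, j] * x_i * x_j over all feature pairs.
        interaction = torch.einsum("bi,ij,bj->b", x, self.W, x).unsqueeze(-1)
        return torch.sigmoid(self.linear(x) + interaction)

def top_interactions(model: PairwiseInteractionModel, k: int = 5):
    # Rank feature pairs (i < j) by learned interaction strength.
    w = model.W.detach().abs()
    flat = torch.triu(w + w.T, diagonal=1).flatten()
    idx = torch.topk(flat, k).indices
    n = w.size(0)
    return [(int(i) // n, int(i) % n, float(flat[i])) for i in idx]
```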

    A Survey of Deep Learning in Sports Applications: Perception, Comprehension, and Decision

    Deep learning has the potential to revolutionize sports performance, with applications ranging from perception and comprehension to decision. This paper presents a comprehensive survey of deep learning in sports performance, focusing on three main aspects: algorithms, datasets and virtual environments, and challenges. Firstly, we discuss the hierarchical structure of deep learning algorithms in sports performance, which includes perception, comprehension, and decision, while comparing their strengths and weaknesses. Secondly, we list widely used existing datasets in sports and highlight their characteristics and limitations. Finally, we summarize current challenges and point out future trends of deep learning in sports. Our survey provides valuable reference material for researchers interested in deep learning in sports applications.