
    Revisiting k-NN for Pre-trained Language Models

    Pre-trained Language Models (PLMs), as parametric eager learners, have become the de facto choice for current paradigms of Natural Language Processing (NLP). In contrast, k-Nearest-Neighbor (k-NN) classifiers, as a lazy learning paradigm, tend to mitigate over-fitting and isolated noise. In this paper, we revisit k-NN classifiers for augmenting PLM-based classifiers. At the methodological level, we propose to adopt k-NN with textual representations of PLMs in two steps: (1) utilize k-NN as prior knowledge to calibrate the training process; (2) linearly interpolate the probability distribution predicted by k-NN with that of the PLM classifier. At the heart of our approach is k-NN-calibrated training, which treats the predicted results as indicators of easy versus hard examples during training. To cover diverse application scenarios, we conduct extensive experiments on both the fine-tuning and prompt-tuning paradigms under zero-shot, few-shot, and fully-supervised settings across eight diverse end-tasks. We hope our exploration will encourage the community to revisit the power of classical methods for efficient NLP. Code and datasets are available at https://github.com/zjunlp/Revisit-KNN. Comment: Work in progress.
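    The interpolation step described above can be pictured with a short sketch. The snippet below is an illustrative reading of the two-step recipe, not the authors' released code: the datastore layout, distance metric, softmax temperature, and the interpolation weight `lam` are all assumptions.

```python
# Minimal sketch of step (2): interpolating a k-NN class distribution with the
# PLM classifier's distribution. Datastore layout, distance metric, temperature,
# and `lam` are assumptions, not the paper's exact recipe.
import torch
import torch.nn.functional as F

def knn_distribution(query, keys, labels, num_classes, k=8, temperature=10.0):
    """Build a class distribution from the k nearest datastore entries.

    query:  (d,) representation of the test example from the PLM encoder
    keys:   (n, d) stored training representations
    labels: (n,) integer class labels for the stored representations
    """
    dists = torch.cdist(query.unsqueeze(0), keys).squeeze(0)   # (n,) distances
    knn_dists, knn_idx = dists.topk(k, largest=False)          # nearest k entries
    weights = F.softmax(-knn_dists / temperature, dim=0)       # closer => heavier
    probs = torch.zeros(num_classes)
    probs.scatter_add_(0, labels[knn_idx], weights)            # sum weight per class
    return probs

def interpolate(plm_logits, knn_probs, lam=0.3):
    """Linearly interpolate the two distributions (lam is a tunable weight)."""
    plm_probs = F.softmax(plm_logits, dim=-1)
    return lam * knn_probs + (1.0 - lam) * plm_probs
```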

    Editing Language Model-based Knowledge Graph Embeddings

    Recent decades have witnessed the empirical success of framing Knowledge Graph (KG) embeddings via language models. However, language-model-based KG embeddings are usually deployed as static artifacts, which are challenging to modify after deployment without re-training. To address this issue, we propose a new task of editing language-model-based KG embeddings. The proposed task aims to enable data-efficient and fast updates to KG embeddings without degrading the performance of the rest. We build four new datasets: E-FB15k237, A-FB15k237, E-WN18RR, and A-WN18RR, and evaluate several knowledge editing baselines, demonstrating the limited ability of previous models to handle this challenging task. We further propose a simple yet strong baseline dubbed KGEditor, which utilizes additional parametric layers of a hypernetwork to edit or add facts. Comprehensive experimental results demonstrate that KGEditor performs better when updating specific facts without affecting the rest, and does so with low training resources. Code and datasets will be available at https://github.com/zjunlp/PromptKG/tree/main/deltaKG. Comment: Work in progress; the project website is https://zjunlp.github.io/project/KGE_Editing.
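    As a rough illustration of the hypernetwork idea mentioned above, the sketch below conditions a small network on a representation of the fact to be edited and emits an additive delta for one frozen weight matrix. The shapes, the single target layer, and the conditioning signal are assumptions chosen for illustration, not KGEditor's actual architecture.

```python
# Illustrative-only sketch of hypernetwork-style editing: a small network
# produces an additive delta for one frozen weight matrix, conditioned on the
# fact being edited. Not KGEditor's actual design.
import torch
import torch.nn as nn

class EditHyperNetwork(nn.Module):
    def __init__(self, fact_dim, target_rows, target_cols, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(fact_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, target_rows * target_cols),
        )
        self.shape = (target_rows, target_cols)

    def forward(self, fact_embedding):
        # Map the edited fact's representation to a weight update.
        return self.net(fact_embedding).view(self.shape)

# Usage: add the generated delta to a frozen layer when scoring the edited fact.
frozen_weight = torch.randn(64, 64, requires_grad=False)
hyper = EditHyperNetwork(fact_dim=32, target_rows=64, target_cols=64)
delta = hyper(torch.randn(32))
edited_weight = frozen_weight + delta
```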

    Editing Large Language Models: Problems, Methods, and Opportunities

    Despite the ability to train capable LLMs, the methodology for maintaining their relevancy and rectifying errors remains elusive. To this end, the past few years have witnessed a surge in techniques for editing LLMs, the objective of which is to efficiently alter the behavior of LLMs within a specific domain without negatively impacting performance across other inputs. This paper embarks on a deep exploration of the problems, methods, and opportunities related to model editing for LLMs. In particular, we provide an exhaustive overview of the task definition and the challenges associated with model editing, along with an in-depth empirical analysis of the most advanced methods currently at our disposal. We also build a new benchmark dataset to facilitate a more robust evaluation and to pinpoint enduring issues intrinsic to existing techniques. Our objective is to provide valuable insights into the effectiveness and feasibility of each editing technique, thereby assisting the community in making informed decisions about the most appropriate method for a specific task or context. Code and datasets are available at https://github.com/zjunlp/EasyEdit. Comment: EMNLP 2023; updated with new experiments.
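    Editing methods of the kind surveyed here are typically compared on how reliably an edit takes hold, how well it generalizes to paraphrases, and how little it disturbs unrelated inputs. The schematic below illustrates those three scores for a single edit case; the field names and the `generate_*` callables are placeholders, not the paper's evaluation code.

```python
# Schematic of three scores commonly reported for model editing:
# reliability (edit prompt), generalization (paraphrases), and locality
# (unrelated prompts unchanged). `generate_before`/`generate_after` stand in
# for decoding with the original and edited model, respectively.
def exact_match(pred, target):
    return float(pred.strip().lower() == target.strip().lower())

def evaluate_edit(generate_before, generate_after, case):
    reliability = exact_match(generate_after(case["prompt"]), case["target_new"])
    generalization = sum(
        exact_match(generate_after(p), case["target_new"])
        for p in case["paraphrases"]
    ) / max(len(case["paraphrases"]), 1)
    locality = sum(
        float(generate_after(p) == generate_before(p))
        for p in case["unrelated_prompts"]
    ) / max(len(case["unrelated_prompts"]), 1)
    return {"reliability": reliability,
            "generalization": generalization,
            "locality": locality}
```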

    MIKE: A New Benchmark for Fine-grained Multimodal Entity Knowledge Editing

    Multimodal knowledge editing represents a critical advancement in enhancing the capabilities of Multimodal Large Language Models (MLLMs). Despite its potential, current benchmarks predominantly focus on coarse-grained knowledge, leaving the intricacies of fine-grained (FG) multimodal entity knowledge largely unexplored. This gap presents a notable challenge, as FG entity recognition is pivotal for the practical deployment and effectiveness of MLLMs in diverse real-world scenarios. To bridge this gap, we introduce MIKE, a comprehensive benchmark and dataset specifically designed for FG multimodal entity knowledge editing. MIKE encompasses a suite of tasks tailored to assess different perspectives, including Vanilla Name Answering, Entity-Level Caption, and Complex-Scenario Recognition. In addition, a new form of knowledge editing, Multi-step Editing, is introduced to evaluate editing efficiency. Through extensive evaluations, we demonstrate that current state-of-the-art methods face significant challenges in tackling our proposed benchmark, underscoring the complexity of FG knowledge editing in MLLMs. Our findings spotlight the urgent need for novel approaches in this domain, setting a clear agenda for future research and development efforts within the community. Comment: 8 pages.

    EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models

    Large Language Models (LLMs) usually suffer from knowledge cutoff or fallacy issues, meaning they are unaware of unseen events or generate text with incorrect facts owing to outdated or noisy data. To this end, many knowledge editing approaches for LLMs have emerged, aiming to subtly inject or edit updated knowledge or adjust undesired behavior while minimizing the impact on unrelated inputs. Nevertheless, due to significant differences among knowledge editing methods and variations in task setups, there is no standard implementation framework available to the community, which hinders practitioners from applying knowledge editing in applications. To address these issues, we propose EasyEdit, an easy-to-use knowledge editing framework for LLMs. It supports various cutting-edge knowledge editing approaches and can be readily applied to many well-known LLMs such as T5, GPT-J, and LLaMA. Empirically, we report knowledge editing results on LLaMA-2 with EasyEdit, demonstrating that knowledge editing surpasses traditional fine-tuning in terms of reliability and generalization. We have released the source code on GitHub at https://github.com/zjunlp/EasyEdit, along with Google Colab tutorials and comprehensive documentation for beginners. Besides, we present an online system for real-time knowledge editing and a demo video at http://knowlm.zjukg.cn/easyedit.mp4. Comment: The project website is https://github.com/zjunlp/EasyEdit.
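    For orientation, the snippet below sketches the typical usage pattern of the framework as recalled from the project README: load hyperparameters for an editing method, build an editor, and apply a single factual edit. The class names, the hyperparameter file path, argument names, and the example fact are assumptions and should be verified against https://github.com/zjunlp/EasyEdit before use.

```python
# Sketch of an EasyEdit-style workflow (ROME editor on a LLaMA model).
# All identifiers and paths below are recalled from the README, not verified here.
from easyeditor import BaseEditor, ROMEHyperParams

hparams = ROMEHyperParams.from_hparams('./hparams/ROME/llama-7b.yaml')  # path is an assumption
editor = BaseEditor.from_hparams(hparams)

metrics, edited_model, _ = editor.edit(
    prompts=['The capital of France is'],
    ground_truth=['Paris'],        # model's answer before the edit
    target_new=['Lyon'],           # hypothetical new answer to inject
    subject=['France'],            # subject span the editor locates in the prompt
)
print(metrics)  # reliability / generalization / locality style scores
```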

    A Comprehensive Study of Knowledge Editing for Large Language Models

    Large Language Models (LLMs) have shown extraordinary capabilities in understanding and generating text that closely mirrors human communication. However, a primary limitation lies in their significant computational demands during training, arising from their extensive parameterization. This challenge is further intensified by the dynamic nature of the world, which necessitates frequent updates to LLMs to correct outdated information or integrate new knowledge, thereby ensuring their continued relevance. Many applications also demand continual model adjustments after training to address deficiencies or undesirable behaviors, so there is increasing interest in efficient, lightweight methods for on-the-fly model modification. To this end, recent years have seen a burgeoning of knowledge editing techniques for LLMs, which aim to efficiently modify LLMs' behavior within specific domains while preserving overall performance across other inputs. In this paper, we first define the knowledge editing problem and then provide a comprehensive review of cutting-edge approaches. Drawing inspiration from educational and cognitive research theories, we propose a unified categorization criterion that classifies knowledge editing methods into three groups: resorting to external knowledge, merging knowledge into the model, and editing intrinsic knowledge. Furthermore, we introduce a new benchmark, KnowEdit, for a comprehensive empirical evaluation of representative knowledge editing approaches. Additionally, we provide an in-depth analysis of knowledge location, which gives a deeper understanding of the knowledge structures inherent within LLMs. Finally, we discuss several potential applications of knowledge editing, outlining its broad and impactful implications. Comment: Ongoing work; 52 pages, 282 citations. The benchmark is available at https://huggingface.co/datasets/zjunlp/KnowEdit, code at https://github.com/zjunlp/EasyEdit, and a paper list at https://github.com/zjunlp/KnowledgeEditingPaper.
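    Since the benchmark is hosted on the Hugging Face Hub, a minimal way to obtain it is to pull the dataset repository directly, as sketched below. Downloading the raw repository avoids assuming any particular dataset configuration; the exact file layout and splits should be read off the dataset card.

```python
# Minimal sketch of fetching the KnowEdit benchmark files from the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="zjunlp/KnowEdit", repo_type="dataset")
print(local_dir)  # inspect the downloaded files for the individual editing splits
```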

    The Jiao Tong University Spectroscopic Telescope Project

    The Jiao Tong University Spectroscopic Telescope (JUST) is a 4.4-meter f/6.0 segmented-mirror telescope dedicated to spectroscopic observations. The JUST primary mirror is composed of 18 hexagonal segments, each with a diameter of 1.1 m. JUST provides two Nasmyth platforms for science instruments: one Nasmyth focus has a field of view of 10 arcmin, and the other has an extended field of view of 1.2 deg with correction optics. A tertiary mirror is used to switch between the two Nasmyth foci. JUST will be installed at a site at Lenghu in Qinghai Province, China, and will conduct spectroscopic observations with three types of instruments to explore the dark universe, trace the dynamic universe, and search for exoplanets: (1) a multi-fiber (2000 fibers) medium-resolution spectrometer (R = 4000-5000) to spectroscopically map galaxies and large-scale structure; (2) an integral field unit (IFU) array of 500 optical fibers and/or a long-slit spectrograph dedicated to fast follow-up of transient sources for multi-messenger astronomy; (3) a high-resolution spectrometer (R ~ 100000) designed to identify Jupiter analogs and Earth-like planets, with the capability to characterize the atmospheres of hot exoplanets. Comment: 28 pages, 6 figures.
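    As a reader's sanity check on the quoted optical specifications (not figures taken from the paper), the effective focal length and approximate plate scale implied by a 4.4 m aperture at f/6.0 work out as follows.

```python
# Back-of-the-envelope numbers implied by the quoted specifications.
aperture_m = 4.4
focal_ratio = 6.0

focal_length_m = aperture_m * focal_ratio            # ~26.4 m
plate_scale = 206265.0 / (focal_length_m * 1000.0)   # arcsec per mm at the focal plane

print(f"focal length ≈ {focal_length_m:.1f} m")
print(f"plate scale  ≈ {plate_scale:.2f} arcsec/mm")  # ≈ 7.81 arcsec/mm
```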

    The AST3-NIR Camera for the Kunlun Infrared Sky Survey

    AST3-NIR is a new infrared camera for deployment with the AST3-3 wide-field survey telescope to Dome A on the Antarctic plateau. The project is designed to take advantage of the low Antarctic infrared sky thermal background (particularly within the Kdark near-infrared atmospheric window at 2.4 μm) and the long Antarctic nights to provide high-sensitivity temporal data on astronomical sources. The data collected for the Kunlun Infrared Sky Survey (KISS) will be used to pursue a range of astronomical science cases, including the study of supernovae, exoplanets, variable stars, and the cosmic infrared background.

    Exoplanets in the Antarctic Sky I. The first data release of AST3-II (CHESPA) and new found variables within the southern CVZ of TESS

    Located at Dome A, the highest point of the Antarctic plateau, the Chinese Kunlun station is considered one of the best ground-based photometric sites because of its extremely cold, dry, and stable atmosphere. A target can be monitored from there for over 40 days without diurnal interruption during a polar winter, which makes Kunlun station a perfect site to search for short-period transiting exoplanets. An observatory has existed at Kunlun station since 2008, and three telescopes operate there. Using these telescopes, the AST3 project has been carried out over the last six years, with a search for transiting exoplanets (CHESPA) as one of its key programs. In the austral winters of 2016 and 2017, a set of target fields in the southern continuous viewing zone (CVZ) of TESS was monitored by the AST3-II telescope. In this paper, we introduce CHESPA and present the first data release, containing photometry of 26,578 bright stars (m_i ≤ 15). The best photometric precision at the optimum magnitude for the survey is around 2 mmag. To demonstrate the data quality, we also present a catalog of 221 variables with a brightness variation greater than 5 mmag from the 2016 data. Among these variables, 179 are newly identified periodic variables not listed in the AAVSO database (https://www.aavso.org/), and 67 are listed in the Candidate Target List. These variables will require careful attention to avoid false-positive signals when searching for transiting exoplanets. Dozens of new transiting exoplanet candidates will be released in a subsequent paper.
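    For readers who want to experiment with light curves of this kind, the sketch below shows one common way to flag periodic variables: a Lomb-Scargle periodogram from astropy applied to unevenly sampled photometry. The thresholds, column conventions, and synthetic example are assumptions, not the survey's actual detection pipeline.

```python
# Illustrative periodic-variable search on a single light curve using astropy's
# Lomb-Scargle periodogram. Not the CHESPA pipeline.
import numpy as np
from astropy.timeseries import LombScargle

def find_period(time_days, mag, mag_err):
    """Return the best-fit period (days) and its false-alarm probability."""
    ls = LombScargle(time_days, mag, mag_err)
    frequency, power = ls.autopower()                  # frequency in cycles per day
    best = np.argmax(power)
    period = 1.0 / frequency[best]
    fap = ls.false_alarm_probability(power[best])
    return period, fap

# Synthetic example: a 0.5-day sinusoid with 5 mmag amplitude over ~40 days.
t = np.sort(np.random.uniform(0, 40, 2000))
y = 15.0 + 0.005 * np.sin(2 * np.pi * t / 0.5) + np.random.normal(0, 0.002, t.size)
period, fap = find_period(t, y, np.full_like(t, 0.002))
print(f"period ≈ {period:.3f} d, false-alarm probability ≈ {fap:.2e}")
```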
