
    A Semi-Supervised Two-Stage Approach to Learning from Noisy Labels

    The recent success of deep neural networks is powered in part by large-scale, well-labeled training data. However, it is a daunting task to laboriously annotate an ImageNet-scale dataset. In contrast, it is fairly convenient, fast, and cheap to collect training images from the Web along with their noisy labels. This signifies the need for alternative approaches to training deep neural networks with such noisy labels. Existing methods tackling this problem either try to identify and correct the wrong labels or reweight the data terms in the loss function according to the inferred noise rates. Both strategies inevitably incur errors for some of the data points. In this paper, we contend that it is actually better to ignore the labels of some data points than to keep them if the labels are incorrect, especially when the noise rate is high. After all, wrong labels can mislead a neural network to a bad local optimum. We propose a two-stage framework for learning from noisy labels. In the first stage, we identify a small portion of images from the noisy training set whose labels are correct with high probability; the noisy labels of the other images are ignored. In the second stage, we train a deep neural network in a semi-supervised manner. This framework effectively takes advantage of the whole training set while using only the portion of its labels that are most likely correct. Experiments on three datasets verify the effectiveness of our approach, especially when the noise rate is high.
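    A minimal sketch of the two-stage idea described above, assuming a PyTorch classifier, a hypothetical confidence threshold tau for the label-selection stage, and a simple entropy-minimization term standing in for the semi-supervised stage; the paper's actual selection criterion and semi-supervised method may differ.

```python
# Hypothetical sketch of a two-stage pipeline: (1) keep only samples whose noisy
# labels look reliable, (2) train semi-supervised on retained labels + unlabeled data.
# Names and thresholds here are illustrative, not the paper's exact method.
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_clean_subset(model, loader, tau=0.9, device="cuda"):
    """Stage 1: keep samples whose predicted class agrees with the noisy label
    at high confidence; everything else is treated as unlabeled."""
    clean_idx, noisy_idx = [], []
    model.eval()
    for idx, (x, y_noisy) in loader:  # loader yields (index tensor, (images, noisy labels))
        probs = F.softmax(model(x.to(device)), dim=1)
        conf, pred = probs.max(dim=1)
        agree = (pred.cpu() == y_noisy) & (conf.cpu() > tau)
        clean_idx += idx[agree].tolist()
        noisy_idx += idx[~agree].tolist()
    return clean_idx, noisy_idx

def semi_supervised_step(model, labeled_batch, unlabeled_batch, opt, lam=1.0):
    """Stage 2: supervised loss on the retained labels plus a simple
    entropy-minimization term on the label-free images."""
    (x_l, y_l), x_u = labeled_batch, unlabeled_batch
    sup = F.cross_entropy(model(x_l), y_l)
    p_u = F.softmax(model(x_u), dim=1)
    unsup = -(p_u * p_u.clamp_min(1e-8).log()).sum(dim=1).mean()
    loss = sup + lam * unsup
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```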

    HetSeq: Distributed GPU Training on Heterogeneous Infrastructure

    Modern deep learning systems like PyTorch and TensorFlow are able to train enormous models with billions (or trillions) of parameters on distributed infrastructure. These systems require that the participating nodes have the same memory capacity and compute performance. Unfortunately, most organizations, especially universities, take a piecemeal approach to purchasing computer systems, resulting in heterogeneous infrastructure that cannot be used to train large models. The present work describes HetSeq, a software package adapted from the popular PyTorch package that provides the capability to train large neural network models on heterogeneous infrastructure. Experiments with transformer translation and the BERT language model show that HetSeq scales over heterogeneous systems. HetSeq can be easily extended to other models such as image classification. The package with supporting documentation is publicly available at https://github.com/yifding/hetseq. Comment: 7 pages, 3 tables, 2 figures.
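    The abstract does not describe HetSeq's interface, so the sketch below only illustrates the generic PyTorch distributed-data-parallel setup that such a package builds on; it is not HetSeq's actual API.

```python
# Generic PyTorch distributed-data-parallel worker setup of the kind a
# heterogeneous-cluster wrapper coordinates across nodes with unequal GPU counts.
# This is NOT HetSeq's API, just the standard torch.distributed pattern.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_worker(model_fn):
    # RANK / WORLD_SIZE / MASTER_ADDR / MASTER_PORT are set per process by the launcher.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)
    model = model_fn().cuda(local_rank)
    # DDP averages gradients across all workers, regardless of how many
    # GPUs each physical node contributes.
    return DDP(model, device_ids=[local_rank])
```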

    ChatEL: Entity Linking with Chatbots

    Entity Linking (EL) is an essential and challenging task in natural language processing that seeks to link text representing an entity within a document or sentence with its corresponding entry in a dictionary or knowledge base. Most existing approaches focus on creating elaborate contextual models that look for clues in the words surrounding the entity text to help solve the linking problem. Although these fine-tuned language models tend to work, they can be unwieldy, difficult to train, and do not transfer well to other domains. Fortunately, Large Language Models (LLMs) like GPT provide a highly advanced solution to the problems inherent in EL models, but naive prompts to LLMs do not work well. In the present work, we define ChatEL, a three-step framework that prompts LLMs to return accurate results. Overall, the ChatEL framework improves the average F1 performance across 10 datasets by more than 2%. Finally, a thorough error analysis shows that in many instances the ground-truth labels were actually incorrect, while the labels predicted by ChatEL were correct. This indicates that the quantitative results presented in this paper may be a conservative estimate of the actual performance. All data and code are available as an open-source package on GitHub at https://github.com/yifding/In_Context_EL.
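    The abstract does not spell out ChatEL's three steps, so the snippet below is only a generic illustration of prompting an LLM to pick an entity from a candidate list with the openai client; the prompt wording, model name, and candidate source are all hypothetical and not taken from the paper.

```python
# Hypothetical illustration of LLM-based entity linking: given a mention in
# context and a list of knowledge-base candidates, ask the model to choose one.
# This is NOT the ChatEL pipeline; the prompt and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def link_entity(mention: str, context: str, candidates: list[str]) -> str:
    numbered = "\n".join(f"{i}. {c}" for i, c in enumerate(candidates))
    prompt = (
        f"Sentence: {context}\n"
        f"Mention: {mention}\n"
        f"Candidate entities:\n{numbered}\n"
        "Answer with only the number of the candidate the mention refers to."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    choice = resp.choices[0].message.content.strip()
    return candidates[int(choice)] if choice.isdigit() else candidates[0]
```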

    Multi-modal Domain Adaptation for REG via Relation Transfer

    Domain adaptation, which aims to transfer knowledge between domains, has been well studied in many areas such as image classification and object detection. For multi-modal tasks, however, conventional approaches rely on large-scale pre-training, which is often impractical because multi-modal data are difficult to acquire. Therefore, domain adaptation, which can efficiently utilize knowledge from different datasets (domains), is crucial for multi-modal tasks. In this paper, we focus on the Referring Expression Grounding (REG) task, which is to localize an image region described by a natural language expression. Specifically, we propose a novel, relation-tailored approach to effectively transfer multi-modal knowledge for the REG problem. Our approach tackles the multi-modal domain adaptation problem by simultaneously enriching inter-domain relations and transferring relations between domains. Experiments show that our proposed approach significantly improves the transferability of multi-modal domains and enhances adaptation performance on the REG problem.

    An Efficient Siphon-Based Deadlock Prevention Policy for a Class of Generalized Petri Nets

    We propose a new deadlock prevention policy for an important class of resource allocation systems (RASs) that appear in the modeling of flexible manufacturing systems (FMSs). This class is modeled in terms of generalized Petri nets, namely S4PR. On the basis of recent structural analysis results related to elementary siphons in generalized Petri nets on the one hand, and an efficient deadlock avoidance policy proposed for the class of conjunctive/disjunctive (C/D) RASs on the other, we show how one can generate monitors to be added to a net system such that all its strict minimal siphons are max′-controlled and no insufficiently marked siphon is generated. Thereby, a new, simple, and more permissive liveness-enforcing supervisor synthesis method for S4PR is established.
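    As a small aside on the terminology, a siphon in a Petri net is a set of places S whose input transitions are all also output transitions of S (•S ⊆ S•). The sketch below is a minimal illustration of that basic definition only, not of the paper's elementary-siphon and max′-controllability machinery for S4PR nets; the dictionary-based net encoding is an assumption for the example.

```python
# Minimal check of the siphon property for a Petri net given as two dicts:
# pre[t] = set of input places of transition t, post[t] = set of output places.
# A place set S is a siphon iff every transition that puts tokens into S also
# consumes at least one token from S (i.e. the preset of S lies in its postset).
def is_siphon(S, pre, post):
    S = set(S)
    for t in set(pre) | set(post):
        feeds_S = bool(post.get(t, set()) & S)       # t outputs into S
        takes_from_S = bool(pre.get(t, set()) & S)   # t consumes from S
        if feeds_S and not takes_from_S:
            return False
    return True

# Tiny example: places p1, p2; transition t1 moves a token from p1 to p2.
pre = {"t1": {"p1"}}
post = {"t1": {"p2"}}
print(is_siphon({"p1", "p2"}, pre, post))  # True: t1 feeds p2 and takes from p1
print(is_siphon({"p2"}, pre, post))        # False: t1 feeds p2 but takes nothing from {p2}
```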

    The role of ferroptosis in neurodegenerative diseases

    Ferroptosis represents an iron- and lipid peroxidation (LPO)-mediated form of regulated cell death (RCD). Recent evidence strongly suggests the involvement of ferroptosis in various neurodegenerative diseases (NDs), particularly Alzheimer’s disease (AD), Parkinson’s disease (PD), Huntington’s disease (HD), multiple sclerosis (MS), and amyotrophic lateral sclerosis (ALS), among others. Targeting ferroptosis poses both opportunities and challenges in the context of NDs. This review provides a comprehensive overview of the characteristic features, induction, and inhibition of ferroptosis, highlighting ferroptosis inhibitors and the underlying mechanisms responsible for its occurrence. Moreover, the review explores how these mechanisms contribute to the pathogenesis and progression of major neurodegenerative disorders. Additionally, it presents novel insights into the role of ferroptosis in NDs and summarizes recent advancements in the development of therapeutic approaches targeting ferroptosis. These insights and advancements hold potential to guide future strategies aimed at effectively managing these debilitating medical conditions.

    Differences and common ground in the frameworks of health-related quality of life in traditional Chinese medicine and modern medicine: a systematic review

    Purpose: This systematic review aims to explore the conceptualization of health-related quality of life (HRQoL) in China. With HRQoL influenced by both modern medicine (MM) and traditional Chinese medicine (TCM), the study seeks to identify differences and common ground between the frameworks of MM and TCM as defined in the literature. Method: A systematic literature search was conducted across three Chinese databases and four English databases. The extracted data included title, author(s), publication year, region, aim, method, category, and result. When sorting the data, we broke down the HRQoL frameworks into concepts, domains, and facets, with a focus on facets that overlap between the MM and TCM frameworks. Results: A total of 31 studies were included. From the perspective of TCM, HRQoL is centered around three key 'concepts': (1) 'xingshentongyi' (unity of body and spirit), (2) 'tianrenheyi' (harmony between man and nature), and (3) 'qiqing' (the seven emotional forms). In contrast, the MM framework comprises 'physical,' 'mental,' 'social,' and 'environment' domains. Of the 59 unique facets identified, 28 are common to both TCM and MM, 9 are specific to TCM, and 22 are specific to MM. 'Appetite,' 'sleep,' and 'energy' are the most frequently mentioned facets in both frameworks. Conclusion: The concept of HRQoL in China encompasses frameworks rooted in both TCM and MM. While TCM and MM take distinct healthcare approaches, they share overlapping domains when measuring HRQoL through questionnaires. Furthermore, TCM and MM demonstrate considerable convergence in terms of HRQoL facets, showing the potential for utilizing HRQoL instruments across different cultural settings.