17 research outputs found

    Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data

    Full text link
    Backdoor attacks pose a serious security threat to neural network training, as they surreptitiously introduce hidden functionalities into a model. Such backdoors remain silent during inference on clean inputs, evading detection due to inconspicuous behavior. However, once a specific trigger pattern appears in the input data, the backdoor activates, causing the model to execute its concealed function. Detecting such poisoned samples within vast datasets is virtually impossible through manual inspection. To address this challenge, we propose a novel approach that enables model training on potentially poisoned datasets by utilizing the power of recent diffusion models. Specifically, we create synthetic variations of all training samples, leveraging the inherent resilience of diffusion models to potential trigger patterns in the data. By combining this generative approach with knowledge distillation, we produce student models that maintain their general performance on the task while exhibiting robust resistance to backdoor triggers. (Comment: 11 pages, 3 tables, 2 figures)
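
    As a rough sketch of the proposed pipeline (not the authors' code), one could purify each training sample with an off-the-shelf img2img diffusion model and then distill a teacher into a student on the purified data; the model name, strength and loss weights below are illustrative assumptions.

```python
# Rough sketch of the proposed defense (not the authors' code): regenerate
# each training image with an img2img diffusion pipeline so that pixel-level
# trigger patterns are unlikely to survive, then distill a teacher into a
# student on the purified data.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def purify(image, strength=0.4):
    # Re-synthesize the image: moderate strength preserves semantics while
    # perturbing fine-grained patterns such as backdoor triggers.
    return pipe(prompt="", image=image, strength=strength).images[0]

def distill_step(student, teacher, x, y, T=2.0, alpha=0.5):
    # Standard knowledge distillation: task loss plus a KL term that pulls
    # the student's softened logits towards the teacher's.
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    kd = F.kl_div(
        F.log_softmax(s_logits / T, dim=-1),
        F.softmax(t_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T
    return alpha * kd + (1 - alpha) * F.cross_entropy(s_logits, y)
```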

    Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning

    Full text link
    We introduce Adapters, an open-source library that unifies parameter-efficient and modular transfer learning in large language models. By integrating 10 diverse adapter methods into a unified interface, Adapters offers ease of use and flexible configuration. Our library allows researchers and practitioners to leverage adapter modularity through composition blocks, enabling the design of complex adapter setups. We demonstrate the library's efficacy by evaluating its performance against full fine-tuning on various NLP tasks. Adapters provides a powerful tool for addressing the challenges of conventional fine-tuning paradigms and promoting more efficient and modular transfer learning. The library is available via https://adapterhub.ml/adapters. (Comment: EMNLP 2023 Systems Demonstration)
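
    For context, a minimal usage sketch of the library's documented interface (the task, adapter and model names below are illustrative):

```python
# Minimal usage sketch of the Adapters library; names are illustrative,
# see https://adapterhub.ml/adapters for the full documentation.
from adapters import AutoAdapterModel
from adapters.composition import Stack

model = AutoAdapterModel.from_pretrained("roberta-base")

# Add a bottleneck adapter plus a matching classification head for one task.
model.add_adapter("sentiment", config="seq_bn")
model.add_classification_head("sentiment", num_labels=2)

# Freeze the backbone so that only the adapter parameters are trained.
model.train_adapter("sentiment")

# Composition block: stack a (hypothetical) language adapter under the task
# adapter so both are active in the forward pass.
model.add_adapter("lang_en", config="seq_bn")
model.set_active_adapters(Stack("lang_en", "sentiment"))
```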

    Reduction in saturated fat intake for cardiovascular disease

    Get PDF
    BACKGROUND: Reducing saturated fat reduces serum cholesterol, but effects on other intermediate outcomes may be less clear. Additionally, it is unclear whether the energy from saturated fats eliminated from the diet is more helpfully replaced by polyunsaturated fats, monounsaturated fats, carbohydrate or protein.

    OBJECTIVES: To assess the effect of reducing saturated fat intake and replacing it with carbohydrate (CHO), polyunsaturated fat (PUFA), monounsaturated fat (MUFA) and/or protein on mortality and cardiovascular morbidity, using all available randomised clinical trials.

    SEARCH METHODS: We updated our searches of the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (Ovid) and Embase (Ovid) on 15 October 2019, and searched ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform (ICTRP) on 17 October 2019.

    SELECTION CRITERIA: Included trials fulfilled the following criteria: 1) randomised; 2) intention to reduce saturated fat intake OR intention to alter dietary fats and achieving a reduction in saturated fat; 3) compared with higher saturated fat intake or usual diet; 4) not multifactorial; 5) in adult humans with or without cardiovascular disease (but not acutely ill, pregnant or breastfeeding); 6) intervention duration at least 24 months; 7) mortality or cardiovascular morbidity data available.

    DATA COLLECTION AND ANALYSIS: Two review authors independently assessed inclusion, extracted study data and assessed risk of bias. We performed random-effects meta-analyses, meta-regression, subgrouping, sensitivity analyses, funnel plots and GRADE assessment.

    MAIN RESULTS: We included 15 randomised controlled trials (RCTs) (16 comparisons, ~59,000 participants) that used a variety of interventions, from providing all food to advice on reducing saturated fat. The included long-term trials suggested that reducing dietary saturated fat reduced the risk of combined cardiovascular events by 21% (risk ratio (RR) 0.79; 95% confidence interval (CI) 0.66 to 0.93; 11 trials, 53,300 participants of whom 8% had a cardiovascular event; I² = 65%; GRADE moderate-quality evidence). Meta-regression suggested that greater reductions in saturated fat (reflected in greater reductions in serum cholesterol) resulted in greater reductions in risk of CVD events, explaining most heterogeneity between trials. The number needed to treat for an additional beneficial outcome (NNTB) was 56 in primary prevention trials, so 56 people need to reduce their saturated fat intake for ~4 years for one person to avoid experiencing a CVD event. In secondary prevention trials, the NNTB was 32. Subgrouping did not suggest significant differences between replacement of saturated fat calories with polyunsaturated fat or carbohydrate, and data on replacement with monounsaturated fat and protein were very limited. We found little or no effect of reducing saturated fat on all-cause mortality (RR 0.96; 95% CI 0.90 to 1.03; 11 trials, 55,858 participants) or cardiovascular mortality (RR 0.95; 95% CI 0.80 to 1.12; 10 trials, 53,421 participants), both with GRADE moderate-quality evidence. There was little or no effect of reducing saturated fats on non-fatal myocardial infarction (RR 0.97; 95% CI 0.87 to 1.07) or CHD mortality (RR 0.97; 95% CI 0.82 to 1.16; both low-quality evidence), but effects on total (fatal or non-fatal) myocardial infarction, stroke and CHD events (fatal or non-fatal) were all unclear, as the evidence was of very low quality. There was little or no effect on cancer mortality, cancer diagnoses, diabetes diagnoses, HDL cholesterol, serum triglycerides or blood pressure, but there were small reductions in weight, serum total cholesterol, LDL cholesterol and BMI. There was no evidence of harmful effects of reducing saturated fat intakes.

    AUTHORS' CONCLUSIONS: The findings of this updated review suggest that reducing saturated fat intake for at least two years causes a potentially important reduction in combined cardiovascular events. Replacing the energy from saturated fat with polyunsaturated fat or carbohydrate appears to be a useful strategy, while the effects of replacement with monounsaturated fat are unclear. The reduction in combined cardiovascular events resulting from reducing saturated fat did not vary by study duration, sex or baseline level of cardiovascular risk, but greater reductions in saturated fat caused greater reductions in cardiovascular events.
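
    For readers unfamiliar with the NNTB, the arithmetic behind such a figure is straightforward; the snippet below reproduces it approximately from the review's rounded summary numbers (illustrative only, not the review's pooled calculation):

```python
# Illustrative arithmetic behind an NNTB figure, using the review's rounded
# summary numbers (the review's own NNTBs of 56 and 32 come from pooled
# trial-level data, so this back-of-the-envelope value differs slightly).
baseline_risk = 0.08            # ~8% of participants had a cardiovascular event
rr = 0.79                       # risk ratio for combined cardiovascular events
arr = baseline_risk * (1 - rr)  # absolute risk reduction: 0.0168
nntb = 1 / arr                  # number needed to treat to benefit: ~60
print(f"ARR = {arr:.4f}, NNTB ~= {nntb:.0f}")
```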

    What to Pre-Train on? Efficient Intermediate Task Selection

    No full text

    AdapterHub: A Framework for Adapting Transformers

    No full text
    The current modus operandi in NLP involves downloading and fine-tuning pre-trained models consisting of hundreds of millions, or even billions, of parameters. Storing and sharing such large trained models is expensive, slow, and time-consuming, which impedes progress towards more general and versatile NLP methods that learn from and for many tasks. Adapters, small learnt bottleneck layers inserted within each layer of a pre-trained model, ameliorate this issue by avoiding full fine-tuning of the entire model. However, sharing and integrating adapter layers is not straightforward. We propose AdapterHub, a framework that allows dynamic "stitching-in" of pre-trained adapters for different tasks and languages. The framework, built on top of the popular HuggingFace Transformers library, enables extremely easy and quick adaptations of state-of-the-art pre-trained models (e.g., BERT, RoBERTa, XLM-R) across tasks and languages. Downloading, sharing, and training adapters is as seamless as possible, requiring minimal changes to the training scripts and a specialized infrastructure. Our framework enables scalable and easy access to sharing of task-specific models, particularly in low-resource scenarios. AdapterHub includes all recent adapter architectures and can be found at AdapterHub.ml.
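
    As an illustration of the "stitching-in" workflow, the sketch below loads a pre-trained task adapter into a frozen backbone using the successor `adapters` library; the Hub identifier is an assumption for illustration, not taken from the abstract.

```python
# Sketch of the "stitching-in" workflow using the successor `adapters`
# library; the Hub identifier below is an illustrative assumption.
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("bert-base-uncased")

# Download a pre-trained task adapter and activate it; the backbone weights
# stay untouched, only the small bottleneck layers are stitched in.
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-sst2")
model.set_active_adapters(adapter_name)
```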

    UKP-SQUARE: An Online Platform for Question Answering Research

    No full text
    Recent advances in NLP and information retrieval have given rise to a diverse set of question answering tasks that come in different formats (e.g., extractive, abstractive) and require different model architectures (e.g., generative, discriminative) and setups (e.g., with or without retrieval). Despite the large number of powerful, specialized QA pipelines (which we refer to as Skills) that consider a single domain, model or setup, there exists no framework where users can easily explore and compare such pipelines and extend them according to their needs. To address this issue, we present UKP-SQUARE, an extensible online QA platform for researchers which allows users to query and analyze a large collection of modern Skills via a user-friendly web interface and integrated behavioural tests. In addition, QA researchers can develop, manage, and share their custom Skills using our microservices that support a wide range of models (Transformers, Adapters, ONNX), datastores and retrieval techniques (e.g., sparse and dense). UKP-SQUARE is available at https://square.ukp-lab.de. (Comment: Accepted at ACL 2022 Demo Track)
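
    The abstract does not show UKP-SQUARE's Skill API; as a rough illustration of the kind of extractive QA pipeline a Skill might wrap, here is a plain Hugging Face `transformers` example (not the platform's actual interface, and the checkpoint is an illustrative choice):

```python
# Rough illustration of the kind of extractive QA model a UKP-SQUARE "Skill"
# wraps, using plain Hugging Face transformers; this is NOT the platform's
# actual API.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
result = qa(
    question="What does UKP-SQUARE let researchers compare?",
    context=(
        "UKP-SQUARE is an online platform where researchers can query and "
        "analyze a large collection of modern QA pipelines, called Skills."
    ),
)
print(result["answer"], result["score"])
```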