12 research outputs found

    Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning

    We introduce Adapters, an open-source library that unifies parameter-efficient and modular transfer learning in large language models. By integrating 10 diverse adapter methods into a unified interface, Adapters offers ease of use and flexible configuration. Our library allows researchers and practitioners to leverage adapter modularity through composition blocks, enabling the design of complex adapter setups. We demonstrate the library's efficacy by evaluating its performance against full fine-tuning on various NLP tasks. Adapters provides a powerful tool for addressing the challenges of conventional fine-tuning paradigms and promoting more efficient and modular transfer learning. The library is available via https://adapterhub.ml/adapters. (EMNLP 2023: Systems Demonstration)
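    A minimal usage sketch, assuming the pip-installable adapters Python package described above; the checkpoint name, adapter names, and configuration strings are illustrative choices, not taken from the abstract:

        from adapters import AutoAdapterModel
        import adapters.composition as ac

        # Load a pre-trained Transformer with adapter support.
        model = AutoAdapterModel.from_pretrained("roberta-base")

        # Register two adapters through the unified interface, each using a different method.
        model.add_adapter("task_a", config="seq_bn")  # bottleneck adapter
        model.add_adapter("task_b", config="lora")    # low-rank adaptation

        # Combine them with a composition block (here: stacking one on top of the other).
        stack = ac.Stack("task_a", "task_b")
        model.active_adapters = stack

        # Train only the adapter parameters; the base model weights stay frozen.
        model.train_adapter(stack)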

    Clinically Valuable Quality Control for PET/MRI Systems: Consensus Recommendation From the HYBRID Consortium

    Quality control (QC) of medical imaging devices is essential to ensure their proper function and to obtain accurate, quantitative results. Therefore, several international bodies have published QC guidelines and recommendations for a wide range of imaging modalities to ensure adequate performance of the systems. Hybrid imaging systems such as positron emission tomography/computed tomography (PET/CT) or PET/magnetic resonance imaging (PET/MRI), in particular, present additional challenges caused by differences between the combined modalities. However, despite the increasing use of PET/MRI in recent years, there are no dedicated QC recommendations for this hybrid modality. This work therefore aims to collect information on QC procedures across a European PET/MRI network, present quality assurance procedures implemented by PET/MRI vendors, and achieve a consensus on PET/MRI QC procedures across imaging centers. Users of PET/MRI systems at partner sites involved in the HYBRID consortium were surveyed about the local frequencies of QC procedures for PET/MRI. Although all sites indicated that they perform vendor-specific daily QC procedures, significant variations across the centers were observed for other QC tests and testing frequencies. Likewise, variations were found in the available recommendations and guidelines and in the QC procedures implemented by vendors. Based on the available information and our clinical expertise within this consortium, we propose a minimum set of PET/MRI QC recommendations, including daily QC, cross-calibration tests, and an image quality (IQ) assessment for PET, as well as coil checks and image quality tests for MRI. Together with regular checks of the PET-MRI alignment, these measures ensure proper PET/MRI performance.

    Expra Dorrough Gruppe 3

    Replication study of Study 2 from the article "I May Not Agree With You, but I Trust You: Caring About Social Issues Signals Integrity" by Zlatev (2019), examining the relationship between a person's perceived trustworthiness and their interest in social issues.

    AdapterHub: A Framework for Adapting Transformers

    The current modus operandi in NLP involves downloading and fine-tuning pre-trained models consisting of hundreds of millions, or even billions, of parameters. Storing and sharing such large trained models is expensive, slow, and time-consuming, which impedes progress towards more general and versatile NLP methods that learn from and for many tasks. Adapters, small learnt bottleneck layers inserted within each layer of a pre-trained model, ameliorate this issue by avoiding full fine-tuning of the entire model. However, sharing and integrating adapter layers is not straightforward. We propose AdapterHub, a framework that allows dynamic "stitching-in" of pre-trained adapters for different tasks and languages. The framework, built on top of the popular HuggingFace Transformers library, enables extremely easy and quick adaptations of state-of-the-art pre-trained models (e.g., BERT, RoBERTa, XLM-R) across tasks and languages. Downloading, sharing, and training adapters is as seamless as possible, requiring only minimal changes to the training scripts and a specialized infrastructure. Our framework enables scalable and easy access to, and sharing of, task-specific models, particularly in low-resource scenarios. AdapterHub includes all recent adapter architectures and can be found at AdapterHub.ml.
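    A rough sketch of the workflow described above, loading and activating a pre-trained adapter from the Hub; it is written against the successor adapters package interface rather than the original codebase, and the checkpoint and adapter identifiers are assumed examples:

        from adapters import AutoAdapterModel

        # Load a pre-trained Transformer and attach adapter support.
        model = AutoAdapterModel.from_pretrained("bert-base-uncased")

        # Download a pre-trained task adapter and "stitch" it into the model;
        # the adapter identifier below is an assumed example.
        adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-sst2")

        # Activate the adapter so it is used during the forward pass.
        model.set_active_adapters(adapter_name)

        # The adapted model can now be used for inference or further training
        # without touching the full set of pre-trained weights.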
