7 research outputs found

    Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning

    We introduce Adapters, an open-source library that unifies parameter-efficient and modular transfer learning in large language models. By integrating 10 diverse adapter methods into a unified interface, Adapters offers ease of use and flexible configuration. Our library allows researchers and practitioners to leverage adapter modularity through composition blocks, enabling the design of complex adapter setups. We demonstrate the library's efficacy by evaluating its performance against full fine-tuning on various NLP tasks. Adapters provides a powerful tool for addressing the challenges of conventional fine-tuning paradigms and promoting more efficient and modular transfer learning. The library is available via https://adapterhub.ml/adapters.
    Comment: EMNLP 2023: Systems Demonstration
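    As a rough illustration of the workflow the abstract describes, the sketch below loads a pretrained model, attaches a bottleneck adapter for training, and combines two adapters with a composition block. The model name, adapter names, and the "seq_bn" configuration are illustrative assumptions, not details taken from the paper; see https://adapterhub.ml/adapters for the authoritative interface.

    ```python
    # Minimal sketch of an adapter workflow (assumed usage of the Adapters library).
    from adapters import AutoAdapterModel
    import adapters.composition as ac

    # Load a pretrained transformer with adapter support (model choice is illustrative).
    model = AutoAdapterModel.from_pretrained("roberta-base")

    # Add a parameter-efficient bottleneck adapter and freeze the base model,
    # so only the adapter weights are updated during fine-tuning.
    model.add_adapter("task_adapter", config="seq_bn")
    model.train_adapter("task_adapter")

    # Composition blocks let adapters be combined, e.g. stacking a (hypothetical)
    # language adapter underneath the task adapter.
    model.add_adapter("lang_adapter", config="seq_bn")
    model.active_adapters = ac.Stack("lang_adapter", "task_adapter")
    ```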

    Influence of Support Material on the Structural Evolution of Copper during Electrochemical CO2 Reduction

    The copper-catalyzed electrochemical CO2 reduction reaction represents an elegant pathway to reduce CO2 emissions while producing a wide range of valuable hydrocarbons. The selectivity for these products depends strongly on the structure and morphology of the copper catalyst. However, continued deactivation during catalysis alters the obtained product spectrum. In this work, we report on the stabilizing effect of three different carbon supports with unique pore structures. The influence of pore structure on stability and selectivity was examined by high-angle annular dark-field scanning transmission electron microscopy and gas chromatography measurements in a micro-flow cell. Supporting the particles in confined pore space was found to increase the barrier to particle agglomeration during 20 h of chronopotentiometry measurements at 100 mA cm−2, resembling long-term CO2 reduction conditions. We propose a catalyst design that prevents coalescence and agglomeration under harsh electrochemical reaction conditions, demonstrated here for the electrocatalytic CO2 reduction. With this work, we provide important insights into the design of stable CO2 electrocatalysts that can potentially be applied to a wide range of applications.

    Tilting or Twisting? Using Anisotropic Solvent Diffusion in Polymeric Thermo-Responsive LLC Phases for the Distinction of Spatial Reorientation versus Inversion of the Helical Polymer Backbone

    We have recently investigated lyotropic liquid crystalline (LLC) phases from the class of helically chiral polypeptides, which exhibit temperature-dependent changes in their LLC behavior. Both poly-γ-p-biphenylmethyl-l-glutamate (PBPMLG) and poly-β-(benzyl)-co-(β-phenethyl)-l-aspartate (PBLA-co-PPLA) exhibit a transition at which the polymer rods rearrange from a perpendicular orientation to a parallel orientation with respect to the magnetic field. A characteristic jump of the quadrupolar splittings ΔνQ of the solvent's 2H signal is observed in the deuterium spectra. In this work, we present NMR diffusometry in three spatial dimensions as a promising method to obtain information about the spatial arrangement of polymers in LLC phases by making use of the anisotropy of solvent translational diffusion. By correlating ΔνQ with the diffusion coefficients measured parallel (D∥) and perpendicular (D⊥) to the magnetic field, we find a significant change of the D∥/D⊥ ratio at the transition temperature of the LLC phases. This confirms previous results from experiments on deuterated polymers, which showed that a flipping of the alignment director with respect to the magnetic field is the process responsible for the observed transition.
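    To make the anisotropy measure concrete, the sketch below extracts diffusion coefficients from pulsed-field-gradient echo attenuations recorded with gradients along and perpendicular to the magnetic field and forms the D∥/D⊥ ratio. The Stejskal-Tanner attenuation model and all numerical values are generic assumptions about NMR diffusometry, not data or methods taken from this study.

    ```python
    # Rough sketch: obtain D_parallel and D_perp from pulsed-field-gradient NMR
    # echo attenuations and form the anisotropy ratio D_par / D_perp.
    # Assumes the standard Stejskal-Tanner relation S/S0 = exp(-b * D);
    # the b-values and signals below are placeholders, not data from the paper.
    import numpy as np

    def fit_diffusion(b_values, signals):
        """Linear fit of ln(S/S0) vs b; the slope magnitude is the diffusion coefficient."""
        log_atten = np.log(signals / signals[0])
        slope, _ = np.polyfit(b_values, log_atten, 1)
        return -slope

    # b-values in s/m^2 (placeholder gradient settings)
    b = np.array([0.0, 1e8, 2e8, 3e8, 4e8])

    # Simulated attenuations for gradients parallel / perpendicular to B0 (illustrative D values in m^2/s)
    D_par_true, D_perp_true = 1.2e-9, 0.8e-9
    S_par = np.exp(-b * D_par_true)
    S_perp = np.exp(-b * D_perp_true)

    D_par = fit_diffusion(b, S_par)
    D_perp = fit_diffusion(b, S_perp)
    print(f"D_par/D_perp = {D_par / D_perp:.2f}")  # ratio tracked across the LLC transition
    ```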
