331 research outputs found

    Colorimetric Detection of Copper Ion Based on Click Chemistry

    Two colorimetric assays, a lateral flow biosensor (LFB) and a hemin/G-quadruplex DNAzyme-based colorimetric assay, were developed for the detection of copper ion based on click chemistry. Two single-stranded DNA (ssDNA) fragments, modified with azide and alkyne groups at the 3′ and 5′ ends respectively, can be ligated by Cu+-catalyzed click chemistry. In the hemin/G-quadruplex DNAzyme-based assay, the two ssDNA fragments joined by the click reaction form a complete G-rich sequence that serves as a horseradish-peroxidase-mimicking DNAzyme. In the presence of hemin and K+, this G-rich sequence catalyzes the oxidation of the colorless substrate tetramethylbenzidine (TMB) into a colored product, and the Cu2+ concentration can then be quantified by measuring the color density. In the LFB assay, the two ligated ssDNA fragments form a sandwich complex between an ssDNA probe immobilized on gold nanoparticles and another ssDNA probe immobilized on the test zone of the biosensor. The biosensor enables visual detection of copper ion with excellent specificity. Compared with conventional methods, the present assays are simpler to operate and more cost-effective, and thus have great potential in point-of-care diagnosis and environmental monitoring.
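    A minimal sketch of the quantification step described above: converting a measured TMB color density into a Cu2+ concentration through a linear calibration curve. The absorbance values and the linear model are illustrative assumptions, not data or methods from the paper.

```python
# Minimal sketch: quantifying Cu2+ from TMB color density via a linear
# calibration curve. The values below are hypothetical and only illustrate
# the workflow described in the abstract, not the paper's data.
import numpy as np

# Hypothetical calibration standards: Cu2+ concentration (nM) vs. measured
# color density (e.g. absorbance of the oxidized TMB product).
conc_nM = np.array([0, 50, 100, 200, 400, 800], dtype=float)
absorbance = np.array([0.05, 0.11, 0.18, 0.33, 0.61, 1.15])

# Fit a straight line A = slope * c + intercept over the linear range.
slope, intercept = np.polyfit(conc_nM, absorbance, 1)

def cu_concentration(sample_absorbance: float) -> float:
    """Invert the calibration curve to estimate Cu2+ concentration (nM)."""
    return (sample_absorbance - intercept) / slope

print(f"Estimated Cu2+ for A=0.45: {cu_concentration(0.45):.1f} nM")
```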

    A Supervised STDP-based Training Algorithm for Living Neural Networks

    Neural networks have shown great potential in many applications, such as speech recognition, drug discovery, image classification, and object detection. Neural network models are inspired by biological neural networks, but they are optimized to perform machine learning tasks on digital computers. The proposed work explores the possibility of using living neural networks in vitro as basic computational elements for machine learning applications. A new supervised STDP-based learning algorithm is proposed in this work, which takes neuron engineering constraints into account. A 74.7% accuracy is achieved on the MNIST benchmark for handwritten digit recognition.
    Comment: 5 pages, 3 figures, Accepted by ICASSP 201
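    A minimal sketch of a supervised, pair-based STDP weight update of the general kind described above, in which a teacher signal supplies the desired output spike times. The constants and the teacher-forcing scheme are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of a supervised, pair-based STDP update rule. All constants
# and the teacher-forcing scheme are illustrative assumptions.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # STDP time constants (ms)

def stdp_delta_w(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post -> potentiate
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:        # post fires before pre -> depress
        return -A_MINUS * np.exp(dt / TAU_MINUS)

def supervised_update(w, pre_spikes, post_spikes, teacher_spikes):
    """Apply STDP, but let a teacher signal supply the desired output spikes
    so that plasticity is steered toward the target label."""
    target_spikes = teacher_spikes if teacher_spikes else post_spikes
    for t_pre in pre_spikes:
        for t_post in target_spikes:
            w += stdp_delta_w(t_pre, t_post)
    return float(np.clip(w, 0.0, 1.0))  # keep the weight in a plausible range

w = supervised_update(0.5, pre_spikes=[5.0, 30.0], post_spikes=[],
                      teacher_spikes=[12.0, 36.0])
print(w)
```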

    Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning

    The popularity of LLaMA (Touvron et al., 2023a;b) and other recently emerged moderate-sized large language models (LLMs) highlights the potential of building smaller yet powerful LLMs. Regardless, the cost of training such models from scratch on trillions of tokens remains high. In this work, we study structured pruning as an effective means to develop smaller LLMs from pre-trained, larger models. Our approach employs two key techniques: (1) targeted structured pruning, which prunes a larger model to a specified target shape by removing layers, heads, and intermediate and hidden dimensions in an end-to-end manner, and (2) dynamic batch loading, which dynamically updates the composition of sampled data in each training batch based on varying losses across different domains. We demonstrate the efficacy of our approach by presenting the Sheared-LLaMA series, pruning the LLaMA2-7B model down to 1.3B and 2.7B parameters. Sheared-LLaMA models outperform state-of-the-art open-source models of equivalent sizes, such as Pythia, INCITE, and OpenLLaMA models, on a wide range of downstream and instruction tuning evaluations, while requiring only 3% of compute compared to training such models from scratch. This work provides compelling evidence that leveraging existing LLMs with structured pruning is a far more cost-effective approach for building smaller LLMs.
    Comment: The code and models are available at https://github.com/princeton-nlp/LLM-Shearin
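    A minimal sketch of the dynamic batch loading idea: domains whose current loss exceeds a reference loss are sampled more often in subsequent batches. The exponential reweighting rule and the numbers below are assumptions for illustration; the released code defines the exact procedure.

```python
# Minimal sketch of dynamic batch loading: domains whose current loss exceeds
# a reference loss get sampled more. The exponential reweighting and the
# example numbers are assumptions for illustration.
import numpy as np

def update_domain_weights(weights, current_loss, reference_loss, alpha=1.0):
    """Re-weight sampling proportions by each domain's excess loss."""
    excess = np.maximum(current_loss - reference_loss, 0.0)
    new_w = weights * np.exp(alpha * excess)
    return new_w / new_w.sum()  # renormalize to a probability distribution

domains = ["web", "code", "books", "wiki"]
weights = np.full(4, 0.25)                     # start from uniform sampling
current = np.array([2.9, 1.8, 2.4, 2.2])       # losses on the pruned model
reference = np.array([2.7, 1.9, 2.3, 2.2])     # target/reference losses

weights = update_domain_weights(weights, current, reference)
print(dict(zip(domains, weights.round(3))))    # "web" gets upweighted
```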

    Emergent Modularity in Pre-trained Transformers

    This work examines the presence of modularity in pre-trained Transformers, a feature commonly found in human brains and thought to be vital for general intelligence. In analogy to human brains, we consider two main characteristics of modularity: (1) functional specialization of neurons: we evaluate whether each neuron is mainly specialized in a certain function, and find that the answer is yes; (2) function-based neuron grouping: we explore finding a structure that groups neurons into modules by function, such that each module works for its corresponding function. Given the enormous number of possible structures, we focus on Mixture-of-Experts as a promising candidate, which partitions neurons into experts and usually activates different experts for different inputs. Experimental results show that there are functional experts, in which neurons specialized in a certain function are clustered. Moreover, perturbing the activations of functional experts significantly affects the corresponding function. Finally, we study how modularity emerges during pre-training, and find that the modular structure is stabilized at an early stage, faster than neuron stabilization. This suggests that Transformers first construct the modular structure and then learn fine-grained neuron functions. Our code and data are available at https://github.com/THUNLP/modularity-analysis.
    Comment: Findings of ACL 202
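    A minimal sketch of the kind of perturbation analysis described above: partition FFN neurons into experts and measure how zeroing one expert's activations changes a task score. The fixed partition and the placeholder score are illustrative assumptions, not the paper's procedure.

```python
# Minimal sketch: group FFN neurons into "experts" and test whether perturbing
# one expert's activations selectively changes a score. The partition and the
# placeholder metric are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_experts = 3072, 8
activations = rng.normal(size=(1000, n_neurons))   # stand-in FFN activations

# Assign neurons to experts (a simple fixed partition; the paper would use a
# learned or clustered structure).
expert_of = np.arange(n_neurons) % n_experts

def perturb_expert(acts: np.ndarray, expert_id: int) -> np.ndarray:
    """Zero out the activations of every neuron assigned to one expert."""
    acts = acts.copy()
    acts[:, expert_of == expert_id] = 0.0
    return acts

def score(acts: np.ndarray) -> float:
    """Placeholder for a task-specific (function-specific) metric."""
    return float(np.abs(acts).mean())

base = score(activations)
drops = [base - score(perturb_expert(activations, e)) for e in range(n_experts)]
print("per-expert score drop:", np.round(drops, 4))
```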

    Experimental Tests of DC SFCL under Low Impedance and High Impedance Fault Conditions

    DC system protection is more challenging than AC system protection due to the rapid rate of rise of the fault current and the absence of a natural current zero-crossing in DC systems. The superconducting fault current limiter (SFCL) is a promising technology for DC systems: it reduces both the fault current level and the rate of rise of the fault current, while presenting no resistance during normal operation. In this paper, the behavior of an SFCL coil is investigated under both low-impedance and high-impedance fault conditions in DC systems. Under the low-impedance fault condition, the SFCL coil effectively limits the fault current level for different prospective fault current levels. The application of SFCLs with limited inductance in DC systems can therefore be a potential solution for suppressing fault currents under low-impedance short-circuit faults. Under the high-impedance fault condition, the SFCL coil can only limit the prospective fault current when it is much higher than the critical current of the coil.
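    A minimal sketch of why an SFCL coil limits a DC fault: below the critical current it adds only inductance, reducing di/dt, and once the current exceeds the critical current it quenches and inserts resistance. All circuit values here are assumed for illustration and are not the test-rig parameters.

```python
# Minimal sketch of a series R-L DC fault loop with and without an SFCL coil.
# All parameter values are illustrative assumptions.
import numpy as np

V = 400.0          # DC source voltage (V)
R_fault = 0.05     # low-impedance fault resistance (ohm)
L_line = 1e-3      # line inductance (H)
L_sfcl = 5e-3      # SFCL coil inductance (H)
I_c = 600.0        # coil critical current (A)
R_quench = 0.5     # resistance inserted after quench (ohm)

def simulate(with_sfcl: bool, t_end: float = 0.05, dt: float = 1e-5) -> np.ndarray:
    """Euler integration of di/dt = (V - R*i) / L for the fault loop."""
    i, trace = 0.0, []
    L = L_line + (L_sfcl if with_sfcl else 0.0)
    for _ in np.arange(0.0, t_end, dt):
        R = R_fault + (R_quench if with_sfcl and i > I_c else 0.0)
        i += (V - R * i) / L * dt
        trace.append(i)
    return np.array(trace)

print("peak fault current without SFCL:", round(simulate(False).max(), 1), "A")
print("peak fault current with SFCL:   ", round(simulate(True).max(), 1), "A")
```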

    Discovery of multiple lead compounds as M2 inhibitors through the focused screening of a small primary amine library.

    The discovery of new anti-influenza drugs is urgent, particularly considering the recent threat of swine flu. In this study, the influenza virus M2 protein was expressed in HEK293 cells and shown to have selective ion channel activity for monovalent ions. The anti-influenza virus drug amantadine hydrochloride significantly attenuated the inward current induced by hyperpolarization of HEK293 cell membranes. Although adamantane derivatives are the only M2 drugs for influenza A virus, their use is limited in the US due to drug resistance. Here we report the discovery of multiple M2 inhibitor lead compounds that were rapidly generated through focused screening of a small primary amine library. The screen was designed using a scaffold-hopping strategy based on amantadine. This study suggests that an antiviral compound directed against a conserved motif may be more useful than amantadine in inhibiting viral replication.
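    A minimal sketch of a focused screen in the spirit described above: keep only primary amines from a candidate library and rank them by fingerprint similarity to the amantadine scaffold. The library entries are hypothetical and the snippet assumes RDKit is available; it is not the paper's screening protocol.

```python
# Minimal sketch of a focused-screening filter: select primary amines and rank
# them by similarity to amantadine. Library SMILES are hypothetical examples.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

amantadine = Chem.MolFromSmiles("C1C2CC3CC1CC(C2)(C3)N")  # adamantan-1-amine
primary_amine = Chem.MolFromSmarts("[NX3;H2;!$(NC=O)]")   # primary amine, not amide

library = {  # hypothetical screening library
    "cyclohexylamine": "NC1CCCCC1",
    "benzylamine": "NCc1ccccc1",
    "acetamide": "CC(=O)N",  # should be rejected (amide, not a primary amine)
    "dimethyl-adamantanamine": "CC12CC3CC(C)(C1)CC(N)(C3)C2",
}

ref_fp = AllChem.GetMorganFingerprintAsBitVect(amantadine, 2, nBits=2048)
hits = []
for name, smi in library.items():
    mol = Chem.MolFromSmiles(smi)
    if mol is None or not mol.HasSubstructMatch(primary_amine):
        continue  # focused screen: keep primary amines only
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    hits.append((name, DataStructs.TanimotoSimilarity(ref_fp, fp)))

for name, sim in sorted(hits, key=lambda x: -x[1]):
    print(f"{name}: similarity to amantadine = {sim:.2f}")
```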

    Plug-and-Play Knowledge Injection for Pre-trained Language Models

    Injecting external knowledge can improve the performance of pre-trained language models (PLMs) on various downstream NLP tasks. However, massive retraining is required to deploy new knowledge injection methods or knowledge bases for downstream tasks. In this work, we are the first to study how to improve the flexibility and efficiency of knowledge injection by reusing existing downstream models. To this end, we explore a new paradigm, plug-and-play knowledge injection, in which knowledge bases are injected into frozen existing downstream models through a knowledge plugin. Correspondingly, we propose map-tuning, a plug-and-play injection method that trains a mapping of knowledge embeddings to enrich model inputs with mapped embeddings while keeping the model parameters frozen. Experimental results on three knowledge-driven NLP tasks show that existing injection methods are not suitable for the new paradigm, while map-tuning effectively improves the performance of downstream models. Moreover, we show that a frozen downstream model can be well adapted to different domains with different mapping networks of domain knowledge. Our code and models are available at https://github.com/THUNLP/Knowledge-Plugin.
    Comment: ACL 202
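    A minimal sketch of the map-tuning idea: a small trainable mapping projects knowledge-base embeddings into the frozen model's input-embedding space, and the mapped vectors are appended to the token embeddings. Dimensions, the mapping architecture, and the insertion scheme are assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch of map-tuning: only the mapping network is trainable; the
# downstream model stays frozen. Shapes and architecture are assumptions.
import torch
import torch.nn as nn

class KnowledgeMapper(nn.Module):
    """Maps knowledge-base entity embeddings into the PLM embedding space."""
    def __init__(self, kb_dim: int, plm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(kb_dim, plm_dim), nn.Tanh(),
                                  nn.Linear(plm_dim, plm_dim))

    def forward(self, kb_embeds: torch.Tensor) -> torch.Tensor:
        return self.proj(kb_embeds)

plm_dim, kb_dim = 768, 100
mapper = KnowledgeMapper(kb_dim, plm_dim)   # only these parameters are trained

# Frozen downstream model (stand-in for an existing fine-tuned PLM layer).
frozen_plm = nn.TransformerEncoderLayer(d_model=plm_dim, nhead=12, batch_first=True)
for p in frozen_plm.parameters():
    p.requires_grad_(False)

token_embeds = torch.randn(2, 16, plm_dim)   # token embeddings from the PLM
entity_embeds = torch.randn(2, 3, kb_dim)    # KB embeddings of linked entities

# Enrich the input: append mapped knowledge vectors to the token sequence.
enriched = torch.cat([token_embeds, mapper(entity_embeds)], dim=1)
out = frozen_plm(enriched)
print(out.shape)  # torch.Size([2, 19, 768])
```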