
    Recent Advances on Sorting Methods of High-Throughput Droplet-Based Microfluidics in Enzyme Directed Evolution

    Droplet-based microfluidics has been widely applied in enzyme directed evolution (DE), in either cell or cell-free systems, due to its low cost and high throughput. Because the isolation principles are based on labeled or label-free characteristics of the droplets, the sorting method contributes most to the efficiency of the whole system. Fluorescence-activated droplet sorting (FADS) is the most widely applied labeled method but faces challenges in the scope of target enzymes it can address. Label-free sorting methods show potential to greatly broaden the range of microfluidic applications. Here, we review the development of droplet sorting methods through a comprehensive literature survey, including labeled detections [FADS and absorbance-activated droplet sorting (AADS)] and label-free detections [electrochemical-based droplet sorting (ECDS), mass-activated droplet sorting (MADS), Raman-activated droplet sorting (RADS), and nuclear magnetic resonance-based droplet sorting (NMR-DS)]. We highlight recent cases from the last five years in which novel enzymes or highly efficient variants were generated by microfluidic DE. In addition, the advantages and challenges of the different sorting methods are briefly discussed to provide an outlook for future applications in enzyme DE.

    EGTSyn: Edge-based Graph Transformer for Anti-Cancer Drug Combination Synergy Prediction

    Combination therapy with multiple drugs is a potent treatment strategy for complex diseases such as cancer, due to its therapeutic efficacy and potential for reducing side effects. However, the extensive search space of drug combinations makes it challenging to screen all combinations experimentally. To address this issue, computational methods have been developed to identify prioritized drug combinations. Recently, deep learning methods based on Convolutional Neural Networks have shown great potential in this field. Although significant progress has been achieved by existing computational models, they have overlooked important high-level semantic information and significant chemical-bond features of drugs. It is worth noting that such information is rich and can be represented by the edges of graphs in drug combination prediction. In this work, we propose a novel Edge-based Graph Transformer, named EGTSyn, for effective anti-cancer drug combination synergy prediction. In EGTSyn, a special Edge-based Graph Neural Network (EGNN) is designed to capture the global structural information of chemicals and the important information of chemical bonds, which have been neglected by most previous studies. Furthermore, we design a Graph Transformer for drugs (GTD) that combines the EGNN module with a Transformer-architecture encoder to extract high-level semantic information of drugs. Comment: 15 pages, 4 figures, 6 tables
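    The core idea of an edge-based GNN layer as described above can be sketched in a few lines: each message passed between atoms concatenates the neighbour's node features with the features of the connecting bond, so edge (bond) information directly shapes the node update. The sketch below is a hypothetical, minimal numpy illustration of that general pattern, not the authors' EGNN implementation; all dimensions, weight matrices, and names are illustrative assumptions.

    ```python
    import numpy as np

    # Hypothetical sketch of edge-featured message passing (NOT the paper's EGNN).
    # All shapes and weights below are illustrative assumptions.
    rng = np.random.default_rng(0)

    n_nodes, node_dim, edge_dim, out_dim = 4, 8, 3, 8
    H = rng.normal(size=(n_nodes, node_dim))            # atom (node) features
    bonds = [(0, 1), (1, 2), (2, 3), (3, 0)]            # undirected bonds
    E = {b: rng.normal(size=edge_dim) for b in bonds}   # bond (edge) features

    W_self = 0.1 * rng.normal(size=(node_dim, out_dim))
    W_msg = 0.1 * rng.normal(size=(node_dim + edge_dim, out_dim))

    def edge_mp_layer(H, bonds, E):
        """One message-passing step: each message concatenates the neighbour's
        node features with the connecting bond's edge features."""
        H_new = H @ W_self
        for (i, j) in bonds:
            e = E[(i, j)]
            # messages flow both ways along an undirected bond
            H_new[i] = H_new[i] + np.concatenate([H[j], e]) @ W_msg
            H_new[j] = H_new[j] + np.concatenate([H[i], e]) @ W_msg
        return np.maximum(H_new, 0.0)  # ReLU

    H1 = edge_mp_layer(H, bonds, E)
    print(H1.shape)  # (4, 8)
    ```

    Stacking such layers and pooling the node states would give a whole-molecule embedding that a Transformer-style encoder could then consume, which is the division of labour the abstract attributes to EGNN and GTD.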

    READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises

    For many real-world applications, user-generated inputs usually contain various noises due to speech recognition errors caused by linguistic variations or typographical errors (typos). Thus, it is crucial to test model performance on data with realistic input noises to ensure robustness and fairness. However, little work has been done to construct such benchmarks for Chinese, where various language-specific input noises occur in the real world. To fill this important gap, we construct READIN: a Chinese multi-task benchmark with REalistic And Diverse Input Noises. READIN contains four diverse tasks and asks annotators to re-enter the original test data with two commonly used Chinese input methods: Pinyin input and speech input. We designed our annotation pipeline to maximize diversity, for example by instructing the annotators to use diverse input method editors (IMEs) for keyboard noises and by recruiting speakers from diverse dialect groups for speech noises. We experiment with a series of strong pretrained language models as well as robust training methods, and find that these models often suffer significant performance drops on READIN even with robustness methods such as data augmentation. As the first large-scale attempt at creating a benchmark with noises geared towards user-generated inputs, we believe that READIN serves as an important complement to existing Chinese NLP benchmarks. The source code and dataset can be obtained from https://github.com/thunlp/READIN. Comment: Preprint
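    The data-augmentation baseline mentioned above typically injects synthetic noise into training text. The sketch below is a deliberately simple, hypothetical character-perturbation function in that spirit; READIN's actual noise comes from human annotators re-entering text via IMEs and speech, so this stand-in illustrates only the general augmentation idea, and the function name and rates are assumptions.

    ```python
    import random

    # Hypothetical synthetic-typo augmenter (NOT READIN's noise process, which
    # is produced by human annotators using Pinyin IMEs and speech input).
    def perturb(text: str, rate: float = 0.1, seed: int = 0) -> str:
        """Randomly drop or duplicate characters to simulate keyboard typos."""
        rng = random.Random(seed)
        out = []
        for ch in text:
            r = rng.random()
            if r < rate / 2:
                continue            # simulate a dropped character
            out.append(ch)
            if r > 1 - rate / 2:
                out.append(ch)      # simulate a duplicated character
        return "".join(out)

    print(perturb("robustness to noisy user input", rate=0.2, seed=1))
    ```

    Training on a mix of clean and perturbed text is the usual augmentation recipe; the abstract's finding is that even such methods leave a significant gap on realistic, human-generated noise.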