
    Cross-layer similarity knowledge distillation for speech enhancement

    Speech enhancement (SE) algorithms based on deep neural networks (DNNs) often face limited hardware resources or strict latency requirements when deployed in real-world scenarios. However, a strong enhancement effect typically requires a large DNN. In this paper, a knowledge distillation framework for SE is proposed to compress the DNN model. We study the strategy of cross-layer connection paths, which fuses multi-level information from the teacher and transfers it to the student. To adapt to the SE task, we propose a frame-level similarity distillation loss. We apply this method to the deep complex convolution recurrent network (DCCRN) and make targeted adjustments. Experimental results show that the proposed method considerably improves the enhancement effect of the compressed DNN and outperforms other distillation methods.
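The abstract does not specify the exact form of the frame-level similarity distillation loss; a minimal numpy sketch of one plausible form, matching the teacher's and student's frame-to-frame cosine-similarity structure, might look like this (function names and the MSE matching criterion are assumptions, not the paper's definitions):

```python
import numpy as np

def frame_similarity_matrix(feats: np.ndarray) -> np.ndarray:
    """Cosine similarity between every pair of frames.

    feats: (T, D) array of per-frame features.
    Returns a (T, T) similarity matrix.
    """
    norms = np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8
    unit = feats / norms
    return unit @ unit.T

def similarity_distillation_loss(teacher: np.ndarray, student: np.ndarray) -> float:
    """MSE between teacher and student frame-similarity matrices.

    Matching pairwise similarities (rather than raw features) lets the
    student have a different feature dimension from the teacher.
    """
    s_teacher = frame_similarity_matrix(teacher)
    s_student = frame_similarity_matrix(student)
    return float(np.mean((s_teacher - s_student) ** 2))
```

Because only the (T, T) similarity structure is matched, the teacher and student feature dimensions D need not agree, which is convenient when the student is a compressed model.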

    Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias

    Large language models (LLMs) have recently been leveraged as training data generators for various natural language processing (NLP) tasks. While previous research has explored different approaches to training models using generated data, these approaches generally rely on simple class-conditional prompts, which may limit the diversity of the generated data and inherit the systematic biases of the LLM. Thus, we investigate training data generation with diversely attributed prompts (e.g., specifying attributes like length and style), which have the potential to yield diverse and attributed generated data. Our investigation focuses on datasets with high cardinality and diverse domains, wherein we demonstrate that attributed prompts outperform simple class-conditional prompts in terms of the resulting model's performance. Additionally, we present a comprehensive empirical study on data generation encompassing vital aspects like bias, diversity, and efficiency, and highlight three key observations: firstly, synthetic datasets generated by simple prompts exhibit significant biases, such as regional bias; secondly, attribute diversity plays a pivotal role in enhancing model performance; lastly, attributed prompts achieve the performance of simple class-conditional prompts while utilizing only 5% of the querying cost of ChatGPT associated with the latter. We release the generated dataset and used prompts to facilitate future research. The data and code are available at https://github.com/yueyu1030/AttrPrompt.
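The core idea of attributed prompts can be illustrated with a small sketch: instead of one fixed class-conditional template, enumerate (or sample) combinations of attribute values and render one prompt per combination. The attribute pools and template wording below are illustrative assumptions, not the paper's actual prompt set:

```python
import itertools

# Hypothetical attribute pools; the paper's actual attributes and wording differ.
ATTRIBUTES = {
    "length": ["short", "long"],
    "style": ["formal", "casual"],
    "location": ["Europe", "Asia", "South America"],
}

def attributed_prompts(class_label: str) -> list:
    """Render one generation prompt per combination of attribute values."""
    keys = sorted(ATTRIBUTES)
    prompts = []
    for combo in itertools.product(*(ATTRIBUTES[k] for k in keys)):
        attrs = ", ".join(f"{k}: {v}" for k, v in zip(keys, combo))
        prompts.append(
            f"Write a {class_label} news article with the following attributes: {attrs}."
        )
    return prompts
```

A simple class-conditional baseline would be the single prompt `f"Write a {class_label} news article."`; the attributed variant spreads queries across attribute combinations, which is where the diversity gain comes from.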

    Explanation-aware Soft Ensemble Empowers Large Language Model In-context Learning

    Large language models (LLMs) have shown remarkable capabilities in various natural language understanding tasks. With only a few demonstration examples, these LLMs can quickly adapt to target tasks without expensive gradient updates. Common strategies to boost such 'in-context' learning ability are to ensemble multiple model-decoded results and to require the model to generate an explanation along with the prediction. However, these models often treat different class predictions equally and neglect the potential discrepancy between the explanations and predictions. To fully unleash the power of explanations, we propose EASE, an Explanation-Aware Soft Ensemble framework to empower in-context learning with LLMs. We design two techniques, explanation-guided ensemble and soft probability aggregation, to mitigate the effect of unreliable explanations and improve the consistency between explanations and final predictions. Experiments on seven natural language understanding tasks and four varying-size LLMs demonstrate the effectiveness of our proposed framework.
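The abstract names soft probability aggregation but does not define it; one plausible reading, sketched below under assumptions (the reliability weighting and the function names are hypothetical, not EASE's actual formulation), is a reliability-weighted average of per-decode class probabilities rather than a hard majority vote:

```python
import numpy as np

def soft_aggregate(prob_vectors: np.ndarray, reliability: np.ndarray) -> np.ndarray:
    """Aggregate class probabilities from multiple decoded samples.

    prob_vectors: (n_samples, n_classes) class probabilities, one row per
                  sampled (explanation, prediction) pair.
    reliability:  (n_samples,) nonnegative scores for each explanation;
                  unreliable explanations are down-weighted instead of
                  counting equally, as in a hard majority vote.
    """
    weights = reliability / (reliability.sum() + 1e-12)
    return weights @ prob_vectors  # (n_classes,) weighted average

def predict(prob_vectors, reliability) -> int:
    """Final label: argmax of the softly aggregated distribution."""
    return int(np.argmax(soft_aggregate(np.asarray(prob_vectors),
                                        np.asarray(reliability))))
```

The contrast with plain self-consistency voting is that both the confidence of each prediction and the judged quality of its explanation influence the final label.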

    Research Progress on Survival Mechanism and Control Measures of Salmonella enterica Serovar Enteritidis in Egg White

    Salmonella is one of the most common pathogens causing foodborne diseases, and eggs and egg products are important food vehicles for its transmission. Among the many serotypes of Salmonella, S. enterica serovar Enteritidis has a unique advantage in surviving in egg white because of its resistance to the antibacterial molecules in egg white, which can lead to food poisoning. In recent years, the survival strategies of S. enterica serovar Enteritidis in egg white have been explored using molecular biological techniques such as transposon mutagenesis, in vivo expression technology, high-throughput sequencing and omics, and some key metabolic pathways and stress resistance-related genes/proteins have been discovered. However, the functions of stress resistance-related genes have not been fully revealed, and a comprehensive summary of the existing research is lacking. Therefore, the current situation and transmission routes of Salmonella-contaminated eggs are briefly introduced in this review. Furthermore, the latest progress in research on the survival mechanism of S. enterica serovar Enteritidis in egg white is summarized from the perspectives of nutrient availability, membrane stress response, deoxyribonucleic acid (DNA) damage repair, alkaline pH adaptation, osmotic stress response and energy metabolism. Finally, biological control methods for Salmonella, including vaccines, bacteriophages and probiotics, are summarized, and future research directions are discussed. This article will provide an important reference for the effective control of Salmonella in eggs and egg products.

    LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model

    How to efficiently transform large language models (LLMs) into instruction followers has recently become a popular research direction, while training LLMs for multi-modal reasoning remains less explored. Although the recent LLaMA-Adapter demonstrates the potential to handle visual inputs with LLMs, it still cannot generalize well to open-ended visual instructions and lags behind GPT-4. In this paper, we present LLaMA-Adapter V2, a parameter-efficient visual instruction model. Specifically, we first augment LLaMA-Adapter by unlocking more learnable parameters (e.g., norm, bias and scale), which distribute the instruction-following ability across the entire LLaMA model besides the adapters. Secondly, we propose an early fusion strategy to feed visual tokens only into the early LLM layers, contributing to better visual knowledge incorporation. Thirdly, a joint training paradigm of image-text pairs and instruction-following data is introduced by optimizing disjoint groups of learnable parameters. This strategy effectively alleviates the interference between the two tasks of image-text alignment and instruction following and achieves strong multi-modal reasoning with only a small-scale image-text and instruction dataset. During inference, we incorporate additional expert models (e.g., captioning/OCR systems) into LLaMA-Adapter to further enhance its image understanding capability without incurring training costs. Compared to the original LLaMA-Adapter, our LLaMA-Adapter V2 can perform open-ended multi-modal instructions by merely introducing 14M parameters over LLaMA. The newly designed framework also exhibits stronger language-only instruction-following capabilities and even excels in chat interactions. Our code and models are available at https://github.com/ZrrSkywalker/LLaMA-Adapter.

    Topography and structural diversity regulate ecosystem multifunctionality in a subtropical evergreen broad-leaved forest

    Forest functionality is generally considered a byproduct of forest diversity. Perhaps unsurprisingly, many researchers associate increasing multifunctionality with increasing diversity. Diversity, however, is an often-overused word that may describe a host of features, including the diversity of species, functional traits and structure. Furthermore, variable environmental features (such as topography) influence the interaction between forest plants and their function. Incorporating complex topography (like that associated with tropical and subtropical forests) into estimates of forest functionality is challenging and highly uncertain. In this paper, we applied structural equation models (SEMs) to disentangle the relative importance of topography and of different components of what might be considered “plant diversity” to forest multifunctionality, using repeated censuses of a 20-ha subtropical forest plot. We found that multifunctionality was principally influenced by structural diversity, more so than by either species composition or functional trait diversity. In our SEM approach, we observed that variations in topography could account for about 30% of the variation in multifunctionality. Furthermore, variations in topography could indirectly influence forest multifunctionality by changing species composition, functional trait diversity, and structural diversity. Our work highlights the importance of topography and forest structure in regulating subtropical forest multifunctionality at the local scale, suggesting that future subtropical forest management should focus on regulating forest structure. Namely, our results suggest land managers must take topography (and the complex interaction between topography and plant diversity) into account in order to build robust and multifunctional forests.

    On What Basis? Predicting Text Preference Via Structured Comparative Reasoning

    Comparative reasoning plays a crucial role in text preference prediction; however, large language models (LLMs) often demonstrate inconsistencies in their reasoning. While approaches like Chain-of-Thought improve accuracy in many other settings, they struggle to consistently distinguish the similarities and differences of complex texts. We introduce SC, a prompting approach that predicts text preferences by generating structured intermediate comparisons. SC begins by proposing aspects of comparison, followed by generating textual comparisons under each aspect. We select consistent comparisons with a pairwise consistency comparator that ensures each aspect's comparisons clearly distinguish differences between texts, significantly reducing hallucination and improving consistency. Our comprehensive evaluations across various NLP tasks, including summarization, retrieval, and automatic rating, demonstrate that SC equips LLMs to achieve state-of-the-art performance in text preference prediction.
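One plausible reading of the pairwise consistency check described above, sketched under assumptions (the dict-based interface and the "same verdict in both presentation orders" criterion are illustrative, not the paper's exact comparator), is to query each aspect with the two texts in both orders and keep only aspects whose verdict is stable:

```python
def consistent_aspects(fwd: dict, rev: dict) -> list:
    """Keep aspects whose preference verdict is stable under swapping text order.

    fwd maps aspect -> preferred text ("A" or "B") when texts are shown as (A, B);
    rev maps aspect -> preferred text, relabelled back to A/B, when the same
    texts are shown as (B, A).
    An aspect's comparison is kept only if both orders prefer the same text;
    order-sensitive verdicts are treated as unreliable and dropped.
    """
    return [aspect for aspect in fwd if aspect in rev and fwd[aspect] == rev[aspect]]
```

Filtering out order-sensitive comparisons is one way to operationalize "clearly distinguishing differences between texts": an aspect that flips its verdict when the texts swap positions is likely reflecting position bias rather than a real difference.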

    Current Status and Prospects of Polymer Powder 3D Printing Technologies

    3D printing technology, which greatly simplifies the manufacturing of complex parts through a two-dimensional layer-upon-layer process, has flourished in recent years. As one of the most advanced of these technologies, polymer powder 3D printing has many advantages, such as a high material utilization rate, no need for support structures, great design freedom, and a wide range of available materials, and it has shown great potential and promise in various industrial applications. With the launch of the Multi Jet Fusion system from HP, polymer powder 3D printing has been attracting more attention from industry and researchers. In this work, a comprehensive review of the main polymer powder-based 3D printing methods, including binder jetting, selective laser sintering, and high-speed sintering, is carried out. Their forming mechanisms, advantages and drawbacks, materials, and developments are presented, compared, and discussed respectively. In addition, this paper gives suggestions on process selection by comparing the typical equipment parameters and features of each technology.