
    Shear Flow Induced Alignment of Carbon Nanotubes in Natural Rubber

    A new procedure for fabricating natural rubber composites with aligned carbon nanotubes is presented in this study. The two-step approach consists of (i) preparing a latex mixture of natural rubber, multiwalled carbon nanotubes, and other components, and (ii) orienting the carbon nanotubes with a flow field. Rubber composite sheets filled with varying volume fractions of aligned carbon nanotubes were fabricated, and the alignment was confirmed by transmission electron microscopy and Raman spectroscopy. A clear increase in thermal conductivity was obtained after alignment of the carbon nanotubes. Dynamic mechanical analysis was carried out in a tear mode for the composite

    Meteorological applications of precipitable water vapor measurements retrieved by the national GNSS network of China

    In this study, the Global Navigation Satellite System (GNSS) network of China, which can be used to monitor atmospheric precipitable water vapor (PWV), is discussed. By the end of 2013, the network had 952 GNSS sites: 260 belonging to the Crustal Movement Observation Network of China (CMONOC) and 692 to the China Meteorological Administration GNSS network (CMAGN). GNSS observation collection and data processing procedures are presented, and PWV data quality control methods are investigated. PWV levels determined by GNSS and by radiosonde are compared; the results show that the GNSS estimates are generally in good agreement with radiosonde and water vapor radiometer (WVR) measurements. The PWV retrieved by the national GNSS network is used in weather forecasting, data assimilation into numerical weather prediction models, validation of radiosonde PWV estimates, and plum rain (Meiyu) monitoring. The network is also used to monitor the total ionospheric electron content.
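
    The abstract does not spell out the retrieval formulas; a minimal sketch of the standard conversion chain used in GNSS meteorology (Saastamoinen hydrostatic delay, then a Bevis-style factor converting zenith wet delay to PWV) is given below. The station values in the example are illustrative, not data from the paper.

    import math

    def zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_m):
        # Saastamoinen zenith hydrostatic delay in metres from surface pressure (hPa).
        f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 0.00000028 * height_m
        return 0.0022768 * pressure_hpa / f

    def pwv_from_ztd(ztd_m, pressure_hpa, lat_deg, height_m, tm_kelvin):
        # Convert a zenith total delay (m) to precipitable water vapor (mm).
        zwd = ztd_m - zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_m)
        k2_prime = 0.221            # K / Pa   (= 22.1 K/hPa)
        k3 = 3.739e3                # K^2 / Pa (= 3.739e5 K^2/hPa)
        rho_w, r_v = 1000.0, 461.5  # density of liquid water (kg/m^3), gas constant of vapor (J/(kg K))
        pi_factor = 1.0e6 / (rho_w * r_v * (k3 / tm_kelvin + k2_prime))
        return pi_factor * zwd * 1000.0  # metres of liquid water -> millimetres

    # Example: ZTD of 2.45 m at a low-altitude mid-latitude site, weighted mean temperature 273 K
    print(round(pwv_from_ztd(2.45, 1005.0, 30.0, 50.0, 273.0), 1))  # PWV in mm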

    Does the time-oriented tendency embedded in language affect corporate income smoothing? Cross-country evidence

    We examine whether and how the time-oriented tendency embedded in languages influences income smoothing. Separating languages into weak- versus strong-future-time-reference (FTR) groups, we find that firms in weak-FTR countries tend to smooth earnings more. We also find that relationships with major stakeholders (i.e., debtholders, suppliers, and employees) amplify the effect of language FTR on income smoothing. Additional analyses suggest that income smoothing driven by language FTR enhances earnings informativeness. These findings provide new insights into the role language plays in financial reporting decisions and into how relationships with major stakeholders shape the relation between this feature of language and corporate income smoothing behavior.
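
    The smoothing proxy is not spelled out in the abstract; a common cross-country measure in this literature is the volatility of operating income relative to the volatility of operating cash flows, with lower values indicating more smoothing. The sketch below assumes a hypothetical firm-year panel with the column names shown; it illustrates this style of proxy, not necessarily the paper's exact specification.

    import pandas as pd

    def smoothing_ratio(panel: pd.DataFrame) -> pd.Series:
        # Per-firm std(operating income) / std(operating cash flow), both deflated
        # by lagged total assets; a lower ratio means smoother reported earnings.
        df = panel.sort_values(["firm", "year"]).copy()
        df["lag_assets"] = df.groupby("firm")["total_assets"].shift(1)
        df["oi_scaled"] = df["operating_income"] / df["lag_assets"]
        df["cfo_scaled"] = df["cfo"] / df["lag_assets"]
        grouped = df.groupby("firm")
        return grouped["oi_scaled"].std() / grouped["cfo_scaled"].std()

    # Toy usage: two hypothetical firms, five years each (numbers are made up)
    panel = pd.DataFrame({
        "firm": ["A"] * 5 + ["B"] * 5,
        "year": list(range(2010, 2015)) * 2,
        "operating_income": [10, 11, 10, 12, 11, 5, 15, 2, 18, 7],
        "cfo": [9, 13, 8, 14, 10, 8, 12, 6, 16, 9],
        "total_assets": [100, 105, 110, 115, 120, 80, 85, 90, 95, 100],
    })
    print(smoothing_ratio(panel))  # firm A comes out smoother (lower ratio) than firm B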

    Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation

    Continual relation extraction (CRE) aims to continually learn new relations from a class-incremental data stream. CRE models usually suffer from the catastrophic forgetting problem, i.e., performance on old relations degrades seriously when the model learns new relations. Most previous work attributes catastrophic forgetting to corruption of the learned representations as new relations arrive, with an implicit assumption that the CRE models have adequately learned the old relations. In this paper, we argue through empirical studies that this assumption may not hold, and that an important reason for catastrophic forgetting is that the learned representations are not robust to the appearance of analogous relations in the subsequent learning process. To address this issue, we encourage the model to learn more precise and robust representations through a simple yet effective adversarial class augmentation mechanism (ACA), which is easy to implement and model-agnostic. Experimental results show that ACA consistently improves the performance of state-of-the-art CRE models on two popular benchmarks. Comment: Accepted by EMNLP 2022
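
    The abstract does not detail the augmentation mechanism. A model-agnostic sketch of the general idea, synthesizing extra "analogous" classes from existing relation instances (here by swapping the head and tail entity markers) so the encoder must separate near-duplicate relations during training, might look like the following; the marker scheme, naming, and augmentation rate are illustrative assumptions, not the paper's exact ACA recipe.

    import random
    from dataclasses import dataclass

    @dataclass
    class Example:
        tokens: list   # sentence tokens with [E1]/[E2]-style entity markers already inserted
        label: str

    def reverse_entities(ex: Example) -> Example:
        # Build an "analogous" instance by swapping the two entity marker pairs,
        # and give it its own augmented class label.
        swap = {"[E1]": "[E2]", "[/E1]": "[/E2]", "[E2]": "[E1]", "[/E2]": "[/E1]"}
        return Example([swap.get(t, t) for t in ex.tokens], ex.label + "__reversed")

    def augment(batch, rate=0.5, seed=0):
        # Return the batch plus adversarial instances under augmented class labels;
        # the augmented classes exist only while training on the current task.
        rng = random.Random(seed)
        extra = [reverse_entities(ex) for ex in batch if rng.random() < rate]
        return batch + extra

    # Toy usage
    batch = [Example("[E1] Paris [/E1] is the capital of [E2] France [/E2]".split(), "capital_of")]
    for ex in augment(batch, rate=1.0):
        print(ex.label, " ".join(ex.tokens))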

    Bi-Drop: Enhancing Fine-tuning Generalization via Synchronous sub-net Estimation and Optimization

    Pretrained language models have achieved remarkable success in natural language understanding. However, fine-tuning pretrained models on limited training data tends to overfit and thus diminish performance. This paper presents Bi-Drop, a fine-tuning strategy that selectively updates model parameters using gradients from various sub-nets dynamically generated by dropout. Bi-Drop estimates the sub-nets in an in-batch manner, which avoids the hysteresis in sub-net updating that affects previous methods relying on asynchronous sub-net estimation. In addition, Bi-Drop needs only one mini-batch to estimate a sub-net, so it makes more efficient use of the training data. Experiments on the GLUE benchmark demonstrate that Bi-Drop consistently outperforms previous fine-tuning methods. Empirical results further show that Bi-Drop exhibits excellent generalization ability and robustness under domain transfer, data imbalance, and low-resource scenarios. Comment: EMNLP 2023 Findings; camera-ready version; co-first authors with equal contribution
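
    A hedged sketch of the in-batch idea described above: run several forward/backward passes on the same mini-batch with dropout active (each pass defines a sub-net), then update only the parameters whose gradients are consistent across sub-nets. The sign-agreement selection rule and its threshold are illustrative stand-ins, not necessarily the paper's exact criterion.

    import torch

    def bidrop_style_step(model, loss_fn, batch, optimizer, n_subnets=4, agree_frac=0.75):
        per_pass_grads = []
        for _ in range(n_subnets):
            model.train()                     # dropout stays active: each pass samples a new sub-net
            optimizer.zero_grad()
            loss = loss_fn(model(batch["x"]), batch["y"])
            loss.backward()
            per_pass_grads.append([p.grad.detach().clone() if p.grad is not None else None
                                   for p in model.parameters()])
        optimizer.zero_grad()
        for i, p in enumerate(model.parameters()):
            grads = [g[i] for g in per_pass_grads if g[i] is not None]
            if not grads:
                continue
            stacked = torch.stack(grads)      # (n_subnets, *param_shape)
            mean_grad = stacked.mean(dim=0)
            # keep only coordinates whose gradient sign is stable across sub-nets
            agree = (torch.sign(stacked) == torch.sign(mean_grad)).float().mean(dim=0)
            p.grad = mean_grad * (agree >= agree_frac).float()
        optimizer.step()

    # Usage sketch: a tiny classifier with dropout and one synthetic mini-batch
    model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                                torch.nn.Dropout(0.1), torch.nn.Linear(32, 2))
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    batch = {"x": torch.randn(8, 16), "y": torch.randint(0, 2, (8,))}
    bidrop_style_step(model, torch.nn.functional.cross_entropy, batch, opt)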

    Making Large Language Models Better Reasoners with Alignment

    Reasoning is a cognitive process of using evidence to reach a sound conclusion. Reasoning capability is essential for large language models (LLMs) to serve as the brain of an artificial general intelligence agent. Recent studies reveal that fine-tuning LLMs on data with chain-of-thought (COT) reasoning processes can significantly enhance their reasoning capabilities. However, we find that the fine-tuned LLMs suffer from an \textit{Assessment Misalignment} problem, i.e., they frequently assign higher scores to subpar COTs, which limits their reasoning abilities. To address this problem, we introduce an \textit{Alignment Fine-Tuning (AFT)} paradigm with three steps: 1) fine-tuning LLMs with COT training data; 2) generating multiple COT responses for each question and categorizing them into positive and negative ones according to whether they reach the correct answer; and 3) calibrating the scores of positive and negative responses given by the LLMs with a novel constraint alignment loss. Specifically, the constraint alignment loss has two objectives: a) Alignment, which guarantees that positive scores surpass negative scores to encourage answers with high-quality COTs; and b) Constraint, which keeps the negative scores confined to a reasonable range to prevent model degradation. Beyond binary positive and negative feedback, the constraint alignment loss can be seamlessly adapted to ranking situations when ranking feedback is available. Furthermore, we delve into recent ranking-based alignment methods, such as DPO, RRHF, and PRO, and find that the constraint, which these approaches overlook, is also crucial for their performance. Extensive experiments on four reasoning benchmarks with both binary and ranking feedback demonstrate the effectiveness of AFT. Comment: Large Language Models; Reasoning; Alignment
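
    A hedged sketch of a constraint-alignment-style loss over model-assigned response scores: an alignment term pushes every positive COT's score above every negative one's, and a constraint term keeps negative scores from collapsing far below the positives. The margins, the boundary definition, and the use of (length-normalized) log-probabilities as scores are assumptions for illustration, not the paper's exact formulation.

    import torch

    def constraint_alignment_loss(pos_scores, neg_scores, margin=0.5, floor_gap=2.0):
        # pos_scores / neg_scores: 1-D tensors of (length-normalized) log-probabilities
        # that the model assigns to positive / negative COT responses for one question.
        # Alignment: hinge over every positive-negative pair so positives score higher.
        diff = pos_scores.unsqueeze(1) - neg_scores.unsqueeze(0)   # shape (P, N)
        align = torch.clamp(margin - diff, min=0.0).mean()
        # Constraint: do not let negatives fall more than floor_gap below the worst
        # positive, keeping their scores in a reasonable range instead of collapsing.
        floor = pos_scores.min().detach() - floor_gap
        constraint = torch.clamp(floor - neg_scores, min=0.0).mean()
        return align + constraint

    # Toy usage with made-up scores for 2 positive and 3 negative COT responses
    pos = torch.tensor([-0.9, -1.1], requires_grad=True)
    neg = torch.tensor([-1.0, -2.5, -6.0], requires_grad=True)
    loss = constraint_alignment_loss(pos, neg)
    loss.backward()
    print(float(loss))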