Distinct Surface and Bulk Thermal Behaviors of LiNi0.6Mn0.2Co0.2O2 Cathode Materials as a Function of State of Charge.
Understanding how structural and chemical transformations take place in particles under thermal conditions can inform the design of thermally robust electrode materials. Such a study requires diagnostic techniques capable of probing the transformations at multiple length scales and at different states of charge (SOC). In this study, the thermal behavior of LiNi0.6Mn0.2Co0.2O2 (NMC-622) was examined as a function of SOC using an array of bulk- and surface-sensitive techniques. In general, thermal stability decreases as lithium content is lowered, and conversion in the bulk to progressively reduced metal oxides (spinels, rock salt) occurs as the temperature is raised. Hard X-ray absorption spectroscopy (XAS) and X-ray Raman spectroscopy (XRS) experiments, which probe the bulk, reveal that Ni and Co are eventually reduced when partially delithiated samples (regardless of SOC) are heated, whereas Mn is not. Surface-sensitive synchrotron techniques such as soft XAS and transmission X-ray microscopy (TXM), however, reveal that for 50% delithiated samples, apparent oxidation of nickel occurs at particle surfaces under some circumstances. This is partially compensated by reduction of cobalt but may also be a consequence of redistribution of lithium ions upon heating. TXM results indicate movement of reduced nickel ions into particle interiors, of oxidized nickel ions to the surface, or both. These experiments illustrate the complexity of the thermal behavior of NMC cathode materials. The study also underscores the importance of investigating surface and bulk differences as a function of SOC when studying the thermal behavior of battery materials.
Improving Translation Faithfulness of Large Language Models via Augmenting Instructions
Large Language Models (LLMs) present strong general capabilities, and a
current compelling challenge is stimulating their specialized capabilities,
such as machine translation, through low-cost instruction tuning. The standard
instruction-following data is sequentially organized as the concatenation of an
instruction, an input, and a response. Because the attention mechanism of LLMs is biased toward local context, LLMs tend to focus on nearby words or sentences at each position, which creates a high risk of instruction forgetting during decoding. To alleviate these issues, we propose SWIE
(Segment-Weighted Instruction Embedding) and an instruction-following dataset
OVERMISS. SWIE improves the model instruction understanding by adding a global
instruction representation on the following input and response representations.
OVERMISS improves model faithfulness by comparing over-translation and
miss-translation results with the correct translation. We apply our methods to
two mainstream open-source LLMs, BLOOM and LLaMA. The experimental results
demonstrate significant improvements in translation performance with SWIE based
on BLOOMZ-3b, particularly in zero-shot and long text translations due to
reduced instruction-forgetting risk. Additionally, OVERMISS outperforms the baseline in translation performance (e.g., BLEU score increases ranging from 0.69 to 3.12 and an average COMET score improvement of 0.48 points for LLaMA-7b), with further gains when OVERMISS and SWIE are combined (e.g., BLEU score increases of up to 0.56 from English to German across three different backbones). Both methods also improve a faithfulness metric based on word alignment.

Comment: Our code and datasets are released on GitHub:
https://github.com/pppa2019/swie_overmiss_llm4m
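The abstract describes SWIE as adding a global instruction representation onto the input and response representations so that the instruction stays visible far from the prompt during decoding. Below is a minimal sketch of that idea in PyTorch; the function name swie_embed, the mean-pooling, and the per-segment weights are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of segment-weighted instruction embedding: pool the
# instruction tokens into one global vector and add it, scaled per segment,
# onto the input and response token embeddings.
# NOTE: pooling choice and weights are assumptions for illustration only.
import torch

def swie_embed(token_emb: torch.Tensor, instr_len: int, input_len: int,
               w_input: float = 1.0, w_resp: float = 0.5) -> torch.Tensor:
    """token_emb: (seq_len, d_model) embeddings of [instruction; input; response]."""
    instr_repr = token_emb[:instr_len].mean(dim=0)   # global instruction vector
    out = token_emb.clone()
    # Inject the pooled instruction into the later segments so distant
    # positions still "see" the instruction despite locally-biased attention.
    out[instr_len:instr_len + input_len] += w_input * instr_repr
    out[instr_len + input_len:] += w_resp * instr_repr
    return out

# Toy usage: 8 instruction tokens, 16 input tokens, 8 response tokens.
emb = torch.randn(32, 512)
augmented = swie_embed(emb, instr_len=8, input_len=16)
assert augmented.shape == emb.shape
```

In this sketch the response segment gets a smaller weight than the input segment; whether and how the weights differ per segment is an assumption here, and a real implementation would likely learn or tune them.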
Faster Depth-Adaptive Transformers
Depth-adaptive neural networks can dynamically adjust depths according to the
hardness of input words, and thus improve efficiency. The main challenge is how
to measure such hardness and decide the required depth (i.e., the number of layers) to execute. Previous works generally build a halting unit to decide whether the
computation should continue or stop at each layer. As there is no specific
supervision of depth selection, the halting unit may be under-optimized and
inaccurate, which results in suboptimal and unstable performance when modeling
sentences. In this paper, we get rid of the halting unit and estimate the
required depths in advance, which yields a faster depth-adaptive model.
Specifically, two approaches are proposed to explicitly measure the hardness of input words and estimate the corresponding adaptive depths, namely 1) mutual information (MI)-based estimation and 2) reconstruction-loss-based estimation.
We conduct experiments on the text classification task with 24 datasets of
various sizes and domains. Results confirm that our approaches can speed up the
vanilla Transformer (up to 7x) while preserving high accuracy. Moreover,
efficiency and robustness are significantly improved when compared with other
depth-adaptive approaches.

Comment: AAAI-2021. Code will appear at:
https://github.com/Adaxry/Adaptive-Transforme
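The abstract's key move is deciding each token's depth before running the network, from a hardness score, rather than with a learned halting unit. The sketch below illustrates that selection logic under stated assumptions: the hardness scores are taken as given (in the paper's spirit, e.g., mutual information between a word and the class labels, precomputed on the training set), the quantile bucketing is an illustrative choice, and for clarity the loop still runs every layer and merely masks updates for exited tokens, whereas a real implementation would skip their computation to obtain the speedup.

```python
# Toy sketch of depth-adaptive encoding with per-token depths fixed in
# advance from hardness scores. Bucketing and masking choices here are
# illustrative assumptions, not the paper's exact method.
import torch
import torch.nn as nn

class EarlyExitEncoder(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4, max_depth: int = 6):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(max_depth)
        )
        self.max_depth = max_depth

    def assign_depths(self, hardness: torch.Tensor) -> torch.Tensor:
        # Rank-normalize hardness to [0, 1], then bucket into depths
        # 1..max_depth so that harder tokens pass through more layers.
        ranks = hardness.argsort().argsort().float() / max(hardness.numel() - 1, 1)
        return (ranks * (self.max_depth - 1)).round().long() + 1

    def forward(self, x: torch.Tensor, hardness: torch.Tensor) -> torch.Tensor:
        # x: (1, seq_len, d_model); hardness: (seq_len,)
        depths = self.assign_depths(hardness)
        for i, layer in enumerate(self.layers, start=1):
            y = layer(x)
            active = (depths >= i).view(1, -1, 1)  # tokens still needing layer i
            # For clarity this runs every layer and masks the update; an
            # actual speedup requires skipping computation for exited tokens.
            x = torch.where(active, y, x)
        return x

# Toy usage: one 10-token sentence with random hardness scores.
enc = EarlyExitEncoder()
out = enc(torch.randn(1, 10, 256), torch.rand(10))
```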