Observation of interlayer phonon modes in van der Waals heterostructures
We have investigated the vibrational properties of van der Waals
heterostructures of monolayer transition metal dichalcogenides (TMDs),
specifically MoS2/WSe2 and MoSe2/MoS2 heterobilayers as well as twisted MoS2
bilayers, by means of ultralow-frequency Raman spectroscopy. We discovered
Raman features (at 30-40 cm^-1) that arise from the layer-breathing mode (LBM)
vibrations between the two incommensurate TMD monolayers in these structures.
The LBM Raman intensity correlates strongly with the suppression of
photoluminescence that arises from interlayer charge transfer. The LBM is
generated only in bilayer areas with direct layer-layer contact and an atomically
clean interface. Its frequency also evolves systematically with the relative
orientation between the two layers. Our research demonstrates that the LBM can
serve as a sensitive probe of the interface environment and interlayer
interactions in van der Waals materials.
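For orientation, the reported 30-40 cm^-1 LBM range can be translated into frequency and energy units using standard constants. This is a generic unit-conversion sketch, not part of the paper's analysis; the function names are illustrative:

```python
# Convert a Raman shift (wavenumber, cm^-1) to frequency (THz) and energy (meV).
C_CM_PER_S = 2.99792458e10   # speed of light in cm/s
MEV_PER_INV_CM = 0.1239841984  # CODATA conversion factor, meV per cm^-1

def wavenumber_to_thz(k_inv_cm: float) -> float:
    """Frequency in THz for a wavenumber in cm^-1: nu = c * k."""
    return C_CM_PER_S * k_inv_cm / 1e12

def wavenumber_to_mev(k_inv_cm: float) -> float:
    """Photon/phonon energy in meV for a wavenumber in cm^-1."""
    return k_inv_cm * MEV_PER_INV_CM

# The 30-40 cm^-1 LBM band corresponds to roughly 0.9-1.2 THz, or ~3.7-5.0 meV.
print(wavenumber_to_thz(30), wavenumber_to_mev(30))
```

Frequencies near 1 THz are well below typical intralayer optical phonons, which is why ultralow-frequency Raman spectroscopy is needed to resolve them.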
Molecular genetic characterization of a cluster in A. terreus for biosynthesis of the meroterpenoid terretonin
Meroterpenoids are natural products produced from polyketide and terpenoid precursors. A gene targeting system for A. terreus NIH2624 was developed, and a gene cluster for terretonin biosynthesis was characterized. Intermediates and shunt products were isolated from the mutant strains, and a pathway for terretonin biosynthesis is proposed. Analysis of the two meroterpenoid pathways, corresponding to terretonin in A. terreus and austinol in A. nidulans, reveals that they are closely related evolutionarily.
Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
Deploying large language models (LLMs) is challenging because they are memory
inefficient and compute-intensive for practical applications. In response,
researchers train smaller task-specific models by either finetuning with human
labels or distilling using LLM-generated labels. However, finetuning and
distillation require large amounts of training data to achieve comparable
performance to LLMs. We introduce Distilling step-by-step, a new mechanism that
(a) trains smaller models that outperform LLMs, and (b) does so while using
less training data than finetuning or distillation requires. Our method
extracts LLM rationales as additional supervision for training small models
within a multi-task framework. We present three findings across 4 NLP
benchmarks: First, compared to both finetuning and distillation, our mechanism
achieves better performance with far fewer labeled/unlabeled training
examples. Second, compared to few-shot prompted LLMs, we achieve better
performance using substantially smaller model sizes. Third, we reduce both the
model size and the amount of data required to outperform LLMs; our finetuned
770M T5 model outperforms the few-shot prompted 540B PaLM model using only 80%
of available data on a benchmark, whereas standard finetuning of the same T5
model struggles to match even when using 100% of the dataset. We release the
code at: https://github.com/google-research/distilling-step-by-step
Comment: Accepted to Findings of ACL 2023
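The multi-task framework described above trains the small model on two objectives at once: predicting the label and generating the LLM's rationale. A minimal sketch of such a combined loss follows; the function names, the token-level averaging, and the rationale weight `lam` are illustrative assumptions, not the paper's exact formulation:

```python
import math

def nll(probs, target_idx):
    """Negative log-likelihood of the target index under a distribution."""
    return -math.log(probs[target_idx])

def distill_step_by_step_loss(label_probs, label_idx,
                              rationale_token_probs, rationale_idxs,
                              lam=1.0):
    """Toy multi-task loss: label prediction plus rationale generation.

    label_probs: the small model's distribution over candidate labels.
    rationale_token_probs: per-token distributions for the LLM-extracted
    rationale the small model is trained to reproduce.
    lam: weight on the rationale task (an assumed hyperparameter).
    """
    label_loss = nll(label_probs, label_idx)
    rationale_loss = sum(
        nll(p, t) for p, t in zip(rationale_token_probs, rationale_idxs)
    ) / len(rationale_idxs)
    return label_loss + lam * rationale_loss

# Toy distributions standing in for a small model's outputs.
loss = distill_step_by_step_loss(
    label_probs=[0.1, 0.8, 0.1], label_idx=1,
    rationale_token_probs=[[0.7, 0.3], [0.4, 0.6]], rationale_idxs=[0, 1],
    lam=0.5,
)
print(loss)
```

Because the rationale term supervises intermediate reasoning rather than only the final answer, it supplies extra signal per example, which is consistent with the paper's finding that fewer training examples are needed.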