159 research outputs found
Transient expression of CHIKV VLP in large stirred tank bioreactors
Research on the Transport and Deposition of Nanoparticles in a Rotating Curved Pipe
A finite-volume code and the SIMPLE scheme are used to study the transport and deposition of nanoparticles in a rotating curved pipe for different angular velocities, Dean numbers, and Schmidt numbers. The results show that when the Schmidt number is small, the nanoparticle distribution is mostly determined by the axial velocity. When the Schmidt number is many orders of magnitude larger than 1, the secondary flow dominates the nanoparticle distribution. When the pipe corotates, the distribution of nanoparticle mass fraction is similar to that for the stationary case, and a "hot spot" deposition region appears near the outside edge of the bend. When the pipe counter-rotates, the Coriolis force pushes the region of high nanoparticle mass fraction toward the inside edge of the bend, and the hot-spot deposition region appears near the inside edge. Particle deposition over the whole edge of the bend becomes more uniform as the Dean number increases. Corotation of the pipe reduces the particle deposition efficiency, while strong counter-rotation only slightly affects it. When the two kinds of secondary flow coexist, the relative deposition efficiency is larger than that for the stationary case.
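The dimensionless groups named in this abstract have standard definitions that make the parameter study concrete. A minimal sketch, assuming the common conventions Re = u·d/ν, De = Re·√(a/R) (a = pipe radius, R = radius of curvature of the bend), and Sc = ν/D; the function names and the particular Dean-number convention are illustrative assumptions, not taken from the paper:

```python
def reynolds(u, d, nu):
    """Reynolds number from mean velocity u, pipe diameter d, kinematic viscosity nu."""
    return u * d / nu

def dean(re, a, R):
    """Dean number: Reynolds number scaled by the curvature ratio a/R."""
    return re * (a / R) ** 0.5

def schmidt(nu, D):
    """Schmidt number: momentum diffusivity over mass (Brownian) diffusivity."""
    return nu / D

# Example: water-like fluid, nanoparticles with small Brownian diffusivity
re = reynolds(u=0.1, d=0.02, nu=1e-6)   # Re = 2000
de = dean(re, a=0.01, R=0.1)            # De = 2000 * sqrt(0.1)
sc = schmidt(nu=1e-6, D=1e-9)           # Sc = 1000, "many orders of magnitude larger than 1"
```

For nanoparticles, D is the Brownian diffusivity, which shrinks with particle size, so Sc grows rapidly as particles get larger and the secondary flow takes over transport, as the abstract describes.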
All-electrical measurement of spin injection in a magnetic p-n junction diode
Magnetic p-n junction diodes are fabricated to investigate spin-polarized
electron transport. The injection of spin-polarized electrons into a
semiconductor is achieved by driving a current from a ferromagnetic injector
(Fe) into a bulk semiconductor (n-GaAs) via a Schottky contact. For detection,
a dilute magnetic semiconductor (p-GaMnAs) layer is used. Clear
magnetoresistance was observed only when a high forward bias was applied across
the p-n junction.
Comment: 4 pages, 4 figures
PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales
Neural language models (LMs) have achieved impressive results on various
language-based reasoning tasks by utilizing latent knowledge encoded in their
own pretrained parameters. To make this reasoning process more explicit, recent
works retrieve a rationalizing LM's internal knowledge by training or prompting
it to generate free-text rationales, which can be used to guide task
predictions made by either the same LM or a separate reasoning LM. However,
rationalizing LMs require expensive rationale annotation and/or computation,
without any assurance that their generated rationales improve LM task
performance or faithfully reflect LM decision-making. In this paper, we propose
PINTO, an LM pipeline that rationalizes via prompt-based learning, and learns
to faithfully reason over rationales via counterfactual regularization. First,
PINTO maps out a suitable reasoning process for the task input by prompting a
frozen rationalizing LM to generate a free-text rationale. Second, PINTO's
reasoning LM is fine-tuned to solve the task using the generated rationale as
context, while regularized to output less confident predictions when the
rationale is perturbed. Across four datasets, we show that PINTO significantly
improves the generalization ability of the reasoning LM, yielding higher
performance on both in-distribution and out-of-distribution test sets. Also, we
find that PINTO's rationales are more faithful to its task predictions than
those generated by competitive baselines.
Comment: 19 pages, 6 figures, preprint
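The counterfactual regularization described above pairs a standard task loss on the clean rationale with a term that pushes predictions under a perturbed rationale toward low confidence. A minimal sketch of one plausible form of such an objective; the λ weight, the KL-to-uniform formulation, and all function names here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(p, y):
    """Negative log-likelihood of the gold label y under distribution p."""
    return -np.log(p[y])

def kl_to_uniform(p):
    """KL divergence from p to the uniform distribution; 0 when p is uniform."""
    u = np.full_like(p, 1.0 / len(p))
    return float(np.sum(p * np.log(p / u)))

def counterfactual_loss(logits_clean, logits_perturbed, y, lam=1.0):
    """Task loss on the clean rationale, plus a penalty that grows when the
    model stays confident even after the rationale is perturbed."""
    p_clean = softmax(logits_clean)
    p_perturbed = softmax(logits_perturbed)
    return cross_entropy(p_clean, y) + lam * kl_to_uniform(p_perturbed)
```

The intent matches the abstract: if the reasoning LM's prediction barely changes when the rationale is corrupted, the second term is large, so minimizing the loss forces the model to actually depend on the rationale rather than ignore it.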
- …