Role of thermal friction in relaxation of turbulent Bose-Einstein condensates
In recent experiments, the relaxation dynamics of highly oblate, turbulent
Bose-Einstein condensates (BECs) was investigated by measuring the vortex decay
rates in various sample conditions [Phys. Rev. A, 063627 (2014)] and,
separately, the thermal friction coefficient for vortex motion was
measured from the long-time evolution of a corotating vortex pair in a BEC
[Phys. Rev. A, 051601(R) (2015)]. We present a comparative analysis of
the experimental results, and find that the vortex decay rate is
almost linearly proportional to the thermal friction coefficient. We perform numerical simulations of
the time evolution of a turbulent BEC using a point-vortex model equipped with
longitudinal friction and vortex-antivortex pair annihilation, and observe that
the linear dependence of the decay rate on the friction coefficient is quantitatively accounted for
in the dissipative point-vortex model. The numerical simulations reveal that
thermal friction in the experiment was too strong to allow for the emergence of
a vortex-clustered state out of decaying turbulence.
Comment: 7 pages, 5 figures
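The dissipative point-vortex model described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the friction form (a drift term proportional to the friction coefficient, directed transverse to each vortex's induced velocity, which pulls opposite-sign vortices together), the unit circulation, and the annihilation cutoff are assumptions for the sketch.

```python
import numpy as np

def induced_velocities(pos, signs, kappa=1.0):
    """2D point-vortex velocities: each vortex is advected by all others."""
    n = len(signs)
    vel = np.zeros((n, 2))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[i] - pos[j]
            r2 = d @ d
            # z-hat cross d = (-d_y, d_x); circulation strength kappa/(2 pi r^2)
            vel[i] += signs[j] * kappa / (2 * np.pi * r2) * np.array([-d[1], d[0]])
    return vel

def step(pos, signs, alpha, dt, r_annihilate=0.05):
    """One Euler step with thermal friction and pair annihilation."""
    v = induced_velocities(pos, signs)
    # friction drift: -alpha * s_i * (z-hat x v_i); for a vortex-antivortex
    # pair this points the two vortices toward each other, driving decay
    zxv = np.stack([-v[:, 1], v[:, 0]], axis=1)
    pos = pos + dt * (v - alpha * signs[:, None] * zxv)
    # remove vortex-antivortex pairs that come within the cutoff distance
    keep = np.ones(len(signs), dtype=bool)
    for i in range(len(signs)):
        for j in range(i + 1, len(signs)):
            if keep[i] and keep[j] and signs[i] != signs[j]:
                if np.linalg.norm(pos[i] - pos[j]) < r_annihilate:
                    keep[i] = keep[j] = False
    return pos[keep], signs[keep]
```

With alpha = 0 a vortex-antivortex pair simply translates at fixed separation; with alpha > 0 the friction drift shrinks the separation until the pair annihilates, which is the mechanism behind the decay-rate scaling discussed above.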
Prompt Tuning of Deep Neural Networks for Speaker-adaptive Visual Speech Recognition
Visual Speech Recognition (VSR) aims to transcribe speech into text from
lip movements alone. As it relies on visual information to model the speech,
its performance is inherently sensitive to personal lip appearances and
movements, so VSR models show degraded performance when they
are applied to unseen speakers. In this paper, to remedy the performance
degradation of the VSR model on unseen speakers, we propose prompt tuning
methods of Deep Neural Networks (DNNs) for speaker-adaptive VSR. Specifically,
motivated by recent advances in Natural Language Processing (NLP), we finetune
prompts on adaptation data of target speakers instead of modifying the
pre-trained model parameters. Unlike previous prompt tuning methods,
which are mainly limited to Transformer-variant architectures, we explore
different types of prompts, namely the addition, padding, and concatenation
forms, which can be applied to VSR models composed of both CNN and Transformer
modules. With the proposed prompt tuning, we show that the performance of the
pre-trained VSR model on unseen speakers can be largely improved by using a
small amount of adaptation data (e.g., less than 5 minutes), even if the
pre-trained model is already developed with large speaker variations. Moreover,
by analyzing the performance and parameters of different types of prompts, we
investigate when prompt tuning is preferable to full fine-tuning.
The effectiveness of the proposed method is evaluated on both word- and
sentence-level VSR databases, LRW-ID and GRID.
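The core idea of the addition-form prompt can be illustrated with a toy example: freeze a pre-trained model and learn only a small tensor added to its input features using a few adaptation samples. Everything below is a hedged sketch, not the paper's implementation; the linear "backbone" stands in for the CNN+Transformer VSR model, and the speaker shift, shapes, and learning rate are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pre-trained" backbone: a fixed, well-conditioned linear map
# standing in for the CNN+Transformer VSR model. It is never updated.
W = np.eye(4) + 0.1 * rng.normal(size=(4, 4))

def backbone(x):
    return x @ W.T

# Addition-form prompt: the only trainable parameters during adaptation.
prompt = np.zeros(4)

# Toy adaptation data for an "unseen speaker": the speaker's inputs are the
# canonical inputs shifted by an unknown appearance offset.
shift = np.array([0.5, -0.3, 0.2, 0.1])
X = rng.normal(size=(64, 4))
Y = backbone(X)          # outputs the backbone produces for canonical inputs
X_speaker = X - shift    # what we actually observe from the new speaker

lr = 0.05
for _ in range(500):
    pred = backbone(X_speaker + prompt)   # prompt is added to input features
    err = pred - Y
    # gradient of mean squared error w.r.t. the prompt only (W stays frozen)
    grad = 2 * (err @ W).mean(axis=0)
    prompt -= lr * grad

# after tuning, the prompt approximately cancels the speaker-specific shift
```

The design point this illustrates is why prompt tuning suits small adaptation sets (e.g., under 5 minutes of data): only a handful of parameters are updated, so the frozen model's knowledge is preserved and overfitting risk is far lower than with full fine-tuning.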