Diffusive versus displacive contact plasticity of nanoscale asperities: Temperature- and velocity-dependent strongest size
We predict a strongest size for the contact strength when asperity radii of
curvature decrease below ten nanometers. This strongest size is found to arise
from the competition between dislocation plasticity and surface diffusional
plasticity. The essential role of temperature is calculated and illustrated in
a comprehensive asperity size-strength-temperature map that takes the effect
of contact velocity into account. Such a map should be essential for various
phenomena related to nanoscale contacts, such as nanowire cold welding,
self-assembly of nanoparticles, and adhesive nano-pillar arrays, as well as
the electrical, thermal, and mechanical properties of macroscopic interfaces.
Raman fingerprint of semi-metal WTe2 from bulk to monolayer
Tungsten ditelluride (WTe2), a layered transition-metal dichalcogenide (TMD),
has recently demonstrated an extremely large magnetoresistance effect, which is
unique among TMDs. This fascinating feature seems to be correlated with its
special electronic structure. Here, we report the observation of 6 Raman peaks
corresponding to the A_2^4, A_1^9, A_1^8, A_1^6, A_1^5 and A_1^2 phonons, from
the 33 Raman-active modes predicted for WTe2. This provides direct evidence to
distinguish the space group of WTe2 from that of other TMDs. Moreover, the
Raman evolution of WTe2 from bulk to monolayer is clearly revealed. It is
interesting to find that the A_2^4 mode, centered at ~109.8 cm^-1, is forbidden
in a monolayer, which may be attributable to the transition of the point group
from C2v (bulk) to C2h (monolayer). Our work characterizes all observed Raman
peaks in the bulk and few-layer samples and provides a route to study the
physical properties of two-dimensional WTe2. Comment: 19 pages, 4 figures, and
2 tables.
Mandarin Singing Voice Synthesis Based on Harmonic Plus Noise Model and Singing Expression Analysis
The purpose of this study is to investigate how humans interpret musical
scores expressively, and then design machines that sing like humans. We
consider six factors that have a strong influence on the expression of human
singing. The factors are related to the acoustic, phonetic, and musical
features of a real singing signal. Given real singing voices recorded following
the MIDI scores and lyrics, our analysis module can extract the expression
parameters from the real singing signals semi-automatically. The expression
parameters are used to control the singing voice synthesis (SVS) system for
Mandarin Chinese, which is based on the harmonic plus noise model (HNM). The
results of perceptual experiments show that integrating the expression factors
into the SVS system yields a notable improvement in perceptual naturalness,
clearness, and expressiveness. By one-to-one mapping of the real singing signal
and expression controls to the synthesizer, our SVS system can simulate the
interpretation of a real singer with the timbre of a speaker. Comment: 8 pages, technical report.
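
At the core of such a synthesizer, the HNM represents each frame of voiced sound as a sum of sinusoids at multiples of the fundamental frequency plus a filtered noise residual. The sketch below is a minimal illustration of that decomposition, assuming a fixed F0, hand-picked harmonic amplitudes, and a crude moving-average noise filter; none of these names or values come from the paper, whose system estimates the parameters from real singing and drives them with the extracted expression controls.

```python
# Minimal harmonic-plus-noise synthesis sketch. Function and parameter names
# are hypothetical; the described system estimates such quantities per frame
# from recorded singing and modulates them with expression parameters.
import numpy as np

def hnm_synthesize(f0, amps, noise_gain, sr=16000, duration=0.5):
    """Render a sustained tone as a sum of harmonics plus filtered noise."""
    t = np.arange(int(sr * duration)) / sr
    # Harmonic part: sinusoids at integer multiples of the fundamental F0.
    harmonic = sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t)
                   for k, a in enumerate(amps))
    # Noise part: white noise smoothed by a short moving average (crude low-pass).
    noise = np.convolve(np.random.randn(len(t)), np.ones(32) / 32, mode="same")
    return harmonic + noise_gain * noise

# Example: a 220 Hz tone with decaying harmonic amplitudes.
y = hnm_synthesize(f0=220.0, amps=[0.5, 0.3, 0.2, 0.1], noise_gain=0.05)
```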
Affective Music Information Retrieval
Much of the appeal of music lies in its power to convey emotions/moods and to
evoke them in listeners. In consequence, the past decade witnessed a growing
interest in modeling emotions from musical signals in the music information
retrieval (MIR) community. In this article, we present a novel generative
approach to music emotion modeling, with a specific focus on the
valence-arousal (VA) dimension model of emotion. The presented generative
model, called \emph{acoustic emotion Gaussians} (AEG), better accounts for the
subjectivity of emotion perception by the use of probability distributions.
Specifically, it learns a Gaussian mixture model in the VA space from the
emotion annotations of multiple subjects, with prior constraints on the
corresponding acoustic features of the training music pieces. Such a
computational framework is technically sound, capable of learning in an online
fashion, and thus applicable to a variety of applications, including
user-independent (general) and user-dependent (personalized) emotion
recognition and emotion-based music retrieval. We report evaluations of the
aforementioned applications of AEG on a larger-scale emotion-annotated corpus,
AMG1608, to demonstrate the effectiveness of AEG and to showcase how
evaluations are conducted for research on emotion-based MIR. Directions of
future work are also discussed. Comment: 40 pages, 18 figures, 5 tables, author version.
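
To make the idea concrete, the toy sketch below fits a Gaussian mixture to synthetic per-subject valence-arousal annotations and evaluates the learned density at a query point, illustrating how a distribution over the VA plane can capture the subjectivity of emotion perception. It uses scikit-learn's GaussianMixture on made-up data and omits the coupling between mixture components and acoustic features that AEG actually learns.

```python
# Toy illustration (not the AEG algorithm itself): model per-subject VA
# annotations of one hypothetical clip as a Gaussian mixture, so perceived
# emotion is a distribution over the valence-arousal plane, not a single point.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic (valence, arousal) ratings from two groups of listeners, in [-1, 1].
annotations = np.vstack([
    rng.normal(loc=[0.6, 0.4], scale=0.15, size=(20, 2)),
    rng.normal(loc=[-0.5, -0.3], scale=0.15, size=(20, 2)),
])

# Fit a two-component mixture over the VA plane.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(annotations)

# Likelihood of a query VA point under the learned emotion distribution.
print(np.exp(gmm.score_samples([[0.5, 0.5]])))
```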