Validation of the South Korean Version of the Beliefs about Emotions Scale
Background
Beliefs about the unacceptability of experiencing or expressing negative emotions can contribute to diverse psychological symptoms and are associated with poor treatment outcomes and low rates of treatment-seeking. The Beliefs about Emotions Scale (BES) was developed to assess such beliefs based on cognitive-behavioral models; however, no study has reported on the psychometric properties of the BES in Korea. The present study aimed to cross-culturally adapt and validate the BES for the Korean population (BES-K).
Methods
The BES-K was administered to 592 Korean adults (323 men and 269 women) aged 20–59 years. Exploratory and confirmatory factor analyses were used to assess the factor structure of the scale. Pearson correlation coefficients were used to evaluate the relationships between the BES-K and other psychological measures.
Results
The results supported a two-factor model of the BES-K, with Factor 1 capturing interpersonal and Factor 2 capturing intrapersonal aspects of beliefs about emotions. The scale had significant yet moderately low correlations with measures of depression, anxiety, and difficulties in emotion regulation.
Conclusion
The BES-K is a useful instrument for evaluating beliefs about emotions in the Korean population.
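The validation workflow described in the abstract (a two-factor solution plus Pearson correlations with other measures) can be sketched as follows. Everything here is synthetic and illustrative: the item count, loading pattern, and the depression proxy are assumptions for demonstration, not the actual BES-K materials or results.

```python
# Hedged sketch of the BES-K validation analyses: (1) fit a two-factor
# model, (2) correlate the scale total with another measure.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 592  # sample size reported in the abstract

# Two latent factors (interpersonal / intrapersonal beliefs), 6 items each
# (illustrative item count, not the real BES-K item set).
latent = rng.normal(size=(n, 2))
loadings = np.zeros((2, 12))
loadings[0, :6] = 0.8   # items 1-6 load on Factor 1
loadings[1, 6:] = 0.8   # items 7-12 load on Factor 2
items = latent @ loadings + rng.normal(scale=0.5, size=(n, 12))

# Exploratory two-factor solution.
fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
print("loading matrix shape:", fa.components_.shape)  # (2, 12)

# Convergent validity: Pearson r between the scale total and a
# depression-like score that partly shares the same latent variance.
bes_total = items.sum(axis=1)
depression = 0.4 * latent[:, 0] + rng.normal(size=n)
r = np.corrcoef(bes_total, depression)[0, 1]
print(f"Pearson r = {r:.2f}")
```

By construction the sketch yields a significant but moderately low correlation, mirroring the pattern the abstract reports, not its actual estimates.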
Contextual Linear Bandits under Noisy Features: Towards Bayesian Oracles
We study contextual linear bandit problems under feature uncertainty: the
features are noisy and may have missing entries. To address the challenges from the noise,
we analyze Bayesian oracles given observed noisy features. Our Bayesian
analysis finds that the optimal hypothesis can be far from the underlying
realizability function, depending on noise characteristics, which is highly
non-intuitive and does not occur for classical noiseless setups. This implies
that classical approaches cannot guarantee a non-trivial regret bound. We thus
propose an algorithm aiming at the Bayesian oracle from observed information
under this model, achieving a regret bound with respect to the feature
dimension and time horizon. We demonstrate the proposed algorithm on
synthetic and real-world datasets. Comment: 30 pages
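For concreteness, the classical approach the abstract argues against can be sketched as a LinUCB-style learner run on noisy, partially missing features. This is a minimal illustrative baseline with assumed constants (noise scale, missingness rate, exploration width), not the paper's Bayesian-oracle algorithm.

```python
# LinUCB baseline on synthetic noisy contexts with missing entries.
import numpy as np

rng = np.random.default_rng(1)
d, K, T = 5, 4, 2000          # feature dim, arms, horizon (assumed values)
theta = rng.normal(size=d)    # unknown reward parameter
theta /= np.linalg.norm(theta)

A = np.eye(d)                 # ridge-regularized Gram matrix
b = np.zeros(d)
alpha = 1.0                   # exploration width
regret = 0.0

for t in range(T):
    true_x = rng.normal(size=(K, d))                 # true contexts
    obs_x = true_x + 0.3 * rng.normal(size=(K, d))   # noisy observations
    obs_x[rng.random((K, d)) < 0.1] = 0.0            # missing entries zeroed

    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    # UCB score: estimated reward + confidence width per arm.
    ucb = obs_x @ theta_hat + alpha * np.sqrt(
        np.einsum("kd,de,ke->k", obs_x, A_inv, obs_x))
    a = int(np.argmax(ucb))

    reward = true_x[a] @ theta + 0.1 * rng.normal()
    regret += (true_x @ theta).max() - true_x[a] @ theta

    A += np.outer(obs_x[a], obs_x[a])    # update with observed features only
    b += reward * obs_x[a]

print(f"cumulative regret over {T} rounds: {regret:.1f}")
```

Because the learner fits its estimate to the observed (noisy) features while rewards come from the true ones, this baseline's regret grows roughly linearly, consistent with the abstract's claim that classical approaches cannot guarantee a non-trivial regret bound here.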
Emotion regulation from a virtue perspective
Background
The ability to regulate one’s emotional state underlies behaviors such as reframing a challenging situation to reduce anger or anxiety, concealing visible signs of sadness or fear, or focusing on reasons to feel happy or calm. This capacity is referred to as emotion regulation. Deficits in this ability can adversely affect adaptive coping and are thus associated with a variety of psychopathological symptoms, including but not limited to depression, borderline personality disorder, substance use disorders, eating disorders, and somatoform disorders.
Methods
The present study examined emotion regulation in relation to the virtue-based psychosocial adaptation model (V-PAM). A total of 595 participants were clustered based on their Difficulties in Emotion Regulation Scale (DERS) scores, producing two clusters (high functioning vs. low functioning). Emotion regulation group membership was then discriminated using five V-PAM virtue constructs: courage, integrity, practical wisdom, committed action, and emotional transcendence.
Results
Results show that all five virtues contribute to differentiating group membership. Practical wisdom was the strongest contributor, followed by integrity, emotional transcendence, committed action, and courage. A predictive discriminant analysis correctly classified 71% of cases. The relationship between emotion regulation and virtues is discussed.
Conclusion
The concept of virtue is important for understanding an individual’s capacity to regulate their emotions and merits future study.
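The two-step analysis described in the abstract, clustering participants on a DERS-like score and then discriminating cluster membership from five virtue scores, can be sketched as follows. All data are synthetic; the effect sizes and the choice of k-means plus linear discriminant analysis are illustrative assumptions, not the study's exact procedure or estimates.

```python
# Hedged sketch: cluster on emotion-regulation difficulty, then run a
# predictive discriminant analysis from five virtue constructs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n = 595  # sample size reported in the abstract

# A latent regulation ability drives both the DERS score and the virtues
# (assumed data-generating process for illustration only).
ability = rng.normal(size=n)
ders = -2.0 * ability + rng.normal(size=n)   # higher DERS = more difficulty
virtues = np.column_stack([
    w * ability + rng.normal(size=n)
    for w in (0.9, 0.7, 0.6, 0.5, 0.4)       # five virtue constructs
])

# Step 1: two clusters (high vs. low functioning) from the DERS score alone.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    ders.reshape(-1, 1))

# Step 2: predictive discriminant analysis from the five virtue scores.
lda = LinearDiscriminantAnalysis().fit(virtues, groups)
hit_rate = lda.score(virtues, groups)
print(f"correctly classified: {hit_rate:.0%}")
```

The study reports a 71% hit rate; the synthetic figure here depends entirely on the assumed effect sizes and only illustrates the shape of the analysis.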
Lip Reading for Low-resource Languages by Learning and Combining General Speech Knowledge and Language-specific Knowledge
This paper proposes a novel lip reading framework, especially for
low-resource languages, a setting that has not been well addressed in the
previous literature. Since low-resource languages lack enough video-text
paired data to train a model powerful enough to capture both lip movements
and language, developing lip reading models for them is regarded as
challenging. To mitigate this challenge, we try to learn
general speech knowledge, the ability to model lip movements, from a
high-resource language through the prediction of speech units. It is known that
different languages partially share common phonemes, thus general speech
knowledge learned from one language can be extended to other languages. Then,
we try to learn language-specific knowledge, the ability to model language, by
proposing Language-specific Memory-augmented Decoder (LMDecoder). LMDecoder
saves language-specific audio features into memory banks and can be trained on
audio-text paired data which is more easily accessible than video-text paired
data. Therefore, with LMDecoder, we can transform the input speech units into
language-specific audio features and translate them into texts by utilizing the
learned rich language knowledge. Finally, by combining general speech knowledge
and language-specific knowledge, we can efficiently develop lip reading models
even for low-resource languages. Through extensive experiments on five
languages (English, Spanish, French, Italian, and Portuguese), we evaluate
the effectiveness of the proposed method. Comment: Accepted at ICCV 202
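The memory-bank idea behind LMDecoder, storing language-specific audio features per discrete speech unit during audio-text training and retrieving them for units predicted from lip movements, can be illustrated with a toy lookup. Shapes, unit counts, and the per-unit averaging rule are assumptions for demonstration, not the paper's design.

```python
# Toy sketch of a language-specific memory keyed by discrete speech units.
import numpy as np

rng = np.random.default_rng(3)
n_units, feat_dim = 8, 16     # assumed unit vocabulary and feature size

# "Training" on audio-text data: collect per-unit average audio features
# into a memory bank (one slot per speech unit).
audio_feats = rng.normal(size=(500, feat_dim))
unit_ids = rng.integers(0, n_units, size=500)
memory = np.stack([audio_feats[unit_ids == u].mean(axis=0)
                   for u in range(n_units)])        # (n_units, feat_dim)

# "Inference" from video: speech units predicted from lip movements are
# mapped to language-specific audio features by memory lookup.
predicted_units = np.array([3, 1, 4, 1, 5])
decoded = memory[predicted_units]                   # (5, feat_dim)
print("retrieved feature sequence shape:", decoded.shape)
```

The point of the design, as described in the abstract, is that the memory can be trained from audio-text pairs alone, which are far easier to collect than video-text pairs for low-resource languages.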
AKVSR: Audio Knowledge Empowered Visual Speech Recognition by Compressing Audio Knowledge of a Pretrained Model
Visual Speech Recognition (VSR) is the task of predicting spoken words from
silent lip movements. VSR is regarded as a challenging task because of the
insufficient information on lip movements. In this paper, we propose an Audio
Knowledge empowered Visual Speech Recognition framework (AKVSR) to complement
the insufficient speech information of visual modality by using audio modality.
Different from previous methods, the proposed AKVSR 1) utilizes rich audio
knowledge encoded by a large-scale pretrained audio model, 2) saves the
linguistic information of this audio knowledge in a compact audio memory by
discarding non-linguistic information through quantization, and 3) includes
an Audio Bridging Module that finds the best-matched audio features in the
compact audio memory, which makes training possible without audio inputs
once the compact audio memory is composed. We validate the
effectiveness of the proposed method through extensive experiments, and achieve
new state-of-the-art performances on the widely-used LRS2 and LRS3 datasets.
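The two mechanisms the abstract names, quantizing audio features against a small codebook to discard non-linguistic detail, and a bridging step that finds the best-matched memory entry for a query, can be sketched as follows. The codebook size, dimensions, and the cosine-similarity matching rule are illustrative assumptions, not the paper's exact design.

```python
# Toy sketch of a compact quantized audio memory with best-match retrieval.
import numpy as np

rng = np.random.default_rng(4)
dim, codebook_size = 32, 10

codebook = rng.normal(size=(codebook_size, dim))     # compact audio memory

def quantize(x):
    """Map a feature vector to its nearest codebook entry (L2 distance)."""
    idx = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))
    return idx, codebook[idx]

def bridge(query):
    """Find the best-matched memory entry for a query (cosine similarity)."""
    sims = codebook @ query / (
        np.linalg.norm(codebook, axis=1) * np.linalg.norm(query))
    return int(np.argmax(sims))

audio_feat = rng.normal(size=dim)
idx, code = quantize(audio_feat)   # done once, when the memory is composed
best = bridge(audio_feat)          # done at training time, without raw audio
print("quantized to entry", idx, "| bridged to entry", best)
```

This mirrors the training property the abstract highlights: once the compact memory is composed, matching visual queries against it requires no audio inputs.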