12,256 research outputs found
Towards a style-specific basis for computational beat tracking
This paper outlines a number of sources of evidence, drawn from psychology, ethnomusicology and engineering, suggesting that current approaches to computational beat tracking are incomplete. It is contended that the degree to which cultural knowledge (that is, the specifics of style and the associated learnt representational schemata) underlies the human faculty of beat tracking has been severely underestimated. Difficulties in building general beat-tracking solutions that can provide both period and phase locking across a large corpus of styles are highlighted. It is probable that no universal beat-tracking model exists which does not utilise a switching model to recognise style and context prior to application.
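The switching architecture the abstract argues for can be sketched as a two-stage dispatcher: classify the style first, then apply a style-specific tracking model. The sketch below is purely illustrative and not from the paper; the style names, prototype tempos, and nearest-tempo classifier are all hypothetical stand-ins for a real style recogniser and real per-style trackers.

```python
# Illustrative only: recognise style, then dispatch to a style-specific model.
PROTOTYPE_TEMPOS = {"waltz": 90.0, "techno": 130.0, "drum_and_bass": 170.0}

def classify_style(observed_tempo):
    """Toy style classifier: pick the style whose prototype tempo
    is nearest to the observed tempo."""
    return min(PROTOTYPE_TEMPOS,
               key=lambda s: abs(PROTOTYPE_TEMPOS[s] - observed_tempo))

def track_beats(observed_tempo, duration_s):
    """Stage two: once the style is known, lay down a beat grid
    using that style's period (a stand-in for a real per-style tracker)."""
    style = classify_style(observed_tempo)
    period = 60.0 / PROTOTYPE_TEMPOS[style]
    beats = [i * period for i in range(int(duration_s / period) + 1)]
    return style, beats
```

A real system would replace both stages with learned models, but the control flow, style recognition gating the choice of tracker, is the point of the paper's argument.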
Audio Inpainting
(c) 2012 IEEE. Personal use of this material is permitted; permission from IEEE must be obtained for all other uses, including reprinting/republishing for advertising or promotional purposes, creating new collective works for resale or redistribution, or reuse of any copyrighted component of this work in other works. Published version: IEEE Transactions on Audio, Speech and Language Processing 20(3): 922-932, Mar 2012. DOI: 10.1109/TASL.2011.2168211
Neurophysiological Assessment of Affective Experience
In the field of affective computing, the affective experience (AX) of the user during interaction with computers is of great interest, and the automatic recognition of the user's affective state, or emotion, is one of the major challenges. In this proposal I focus on affect recognition via physiological and neurophysiological signals. Long-standing evidence from psychophysiological research, and more recently from research in affective neuroscience, suggests that both body and brain physiology can indicate the current affective state of a subject. However, several questions regarding the classification of AX remain unanswered. The basic feasibility of AX classification has been shown repeatedly, but its generalisation across different task contexts, eliciting stimulus modalities, subjects or time is seldom addressed. In this proposal I discuss a possible agenda for the further exploration of physiological and neurophysiological correlates of AX across different elicitation modalities and task contexts.
Identification of cover songs using information theoretic measures of similarity
13 pages, 5 figures, 4 tables. v3: Accepted version
The Appreciative Heart: The Psychophysiology of Positive Emotions and Optimal Functioning
This monograph is an overview of the Institute of HeartMath's research on the physiological correlates of positive emotions and the science underlying two core HeartMath techniques that support Heart-Based Living. The heart's connection with love and other positive emotions has survived throughout millennia and across many diverse cultures, and new empirical research is providing scientific validation for this age-old association. This 21-page monograph offers a comprehensive understanding of the Institute's research exploring the heart's central role in emotional experience. Described in detail is physiological coherence, a distinct mode of physiological functioning that is generated during sustained positive emotions and linked with beneficial health- and performance-related outcomes. The monograph also presents the steps and applications of two HeartMath techniques, Freeze-Frame(R) and Heart Lock-In(R), which engage the heart to help transform stress and produce sustained states of coherence. Data from outcome studies are presented which suggest that these techniques facilitate a beneficial repatterning process at the mental, emotional and physiological levels.
The role of HG in the analysis of temporal iteration and interaural correlation
Affect-matching music improves cognitive performance in adults and young children for both positive and negative emotions
Three experiments assessed the hypothesis that cognitive benefits associated with exposure to music occur only when the perceived emotion expressed by the music and the participant's affective state match. Experiment 1 revealed an affect-matching pattern modulated by gender when assessing high-arousal states of opposite valence (happy/angry) in an adult sample (n=94), in which mood classification was based on self-report and affective valence in music was differentiated by mode and other expressive cues whilst keeping tempo constant (139 BPM). The affect-matching hypothesis was then tested in two experiments with children using a mood-induction procedure: Experiment 2 tested happy/angry emotions with, respectively, 3-5- (n=40) and 6-9-year-old (n=40) children, and Experiment 3 compared happy/sad emotions (i.e., states differing in both valence and arousal profiles) with 3-5-year-old children (n=40), using music pieces also differentiated by fast vs. slow tempo. While young children failed to discriminate systematically between fast-tempo music conveying different emotions, they did display cognitive benefits from exposure to affect-matching music when both valence (e.g., mode) and arousal level (e.g., tempo) differentiated the musical excerpts, with no gender effects.
Improving music genre classification using automatically induced harmony rules
We present a new genre classification framework using both low-level signal-based features and high-level harmony features. A state-of-the-art statistical genre classifier based on timbral features is extended using a first-order random forest containing, for each genre, rules derived from harmony or chord sequences. This random forest was automatically induced, using the first-order logic induction algorithm TILDE, from a dataset covering the classical, jazz and pop genre classes in which the degree and chord category of each chord are identified. The audio descriptor-based genre classifier contains 206 features covering spectral, temporal, energy and pitch characteristics of the audio signal. The fusion of the harmony-based classifier with the extracted feature vectors is tested on three-genre subsets of the GTZAN and ISMIR04 datasets, which contain 300 and 448 recordings, respectively. Machine learning classifiers were tested using 5 × 5-fold cross-validation and feature selection. Results indicate that the proposed harmony-based rules, combined with the timbral descriptor-based genre classification system, lead to improved genre classification rates.
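The fusion step described above, combining a harmony-rule classifier with a timbral-feature classifier over the three genres, can be illustrated as a simple late-fusion scheme. The sketch below is an assumption on my part, not the paper's actual fusion method: it takes each classifier's per-genre probability vector, forms a weighted average, and picks the argmax. The function name, the weight `w`, and the example probabilities are all hypothetical.

```python
# Hypothetical late-fusion sketch: weighted average of two classifiers'
# per-genre probability vectors, then argmax over the three genres.
GENRES = ["classical", "jazz", "pop"]

def fuse_predictions(p_timbral, p_harmony, w=0.7):
    """Combine the timbral classifier's probabilities (weight w) with the
    harmony-rule classifier's probabilities (weight 1-w) and return the
    genre with the highest fused score."""
    fused = [w * t + (1.0 - w) * h for t, h in zip(p_timbral, p_harmony)]
    return GENRES[fused.index(max(fused))]

# Both models lean towards jazz, so the fused decision is jazz.
label = fuse_predictions([0.2, 0.6, 0.2], [0.1, 0.7, 0.2])
```

In practice the weight would be tuned on held-out data (e.g. inside the 5 × 5-fold cross-validation loop), and richer fusion schemes such as stacking are equally possible; the sketch only shows the basic shape of combining the two sources of evidence.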