
    Online Detection of Tonal Pop-Out in Modulating Contexts

    We investigated the spontaneous detection of wrong notes in a melody that modulated continuously through all 24 major and minor keys. Three variations of the melody were composed, each of which had distributed within it 96 test tones of the same pitch, for example, A2. Thus, the test tones would blend into some keys and pop out in others. Participants were not asked to detect or judge specific test tones; rather, they were asked to make a response whenever they heard a note that they thought sounded wrong or out of place. This task enabled us to obtain subjective measures of key membership in a listening situation that approximated a natural musical context. The frequency of observed wrong-note responses across keys matched previous tonal hierarchy results obtained using judgments about discrete probes following short contexts. When the test tones were nondiatonic notes in the present context they elicited a response, whereas when the test tones occupied a prominent position in the tonal hierarchy they were not detected. Our findings could also be explained by the relative salience of the test pitch chroma in short-term memory, such that when the test tone belonged to a locally improbable pitch chroma it was more likely to elicit a response. Regardless of whether the local musical context is shaped primarily by bottom-up or top-down influences, our findings establish a method for estimating the relative salience of individual test events in a continuous melody.
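The per-key response tally described above reduces to counting responses per presentation. A minimal sketch, assuming each test-tone presentation is logged as a (key, responded) pair; the data layout and key names are illustrative assumptions, not the authors' actual pipeline:

```python
from collections import defaultdict

def response_rates(trials):
    """Estimate per-key wrong-note response rates.

    `trials` is an iterable of (key_name, responded) pairs, one per
    test-tone presentation; the returned dict maps each key to the
    fraction of its test tones that drew a wrong-note response.
    """
    counts = defaultdict(lambda: [0, 0])  # key -> [responses, presentations]
    for key, responded in trials:
        counts[key][0] += int(responded)
        counts[key][1] += 1
    return {k: r / n for k, (r, n) in counts.items()}

# A test tone of fixed chroma (e.g. A) is diatonic in A major but
# nondiatonic in Eb major, so it should pop out far more often there.
trials = [("A major", False)] * 9 + [("A major", True)] * 1 \
       + [("Eb major", True)] * 8 + [("Eb major", False)] * 2
rates = response_rates(trials)
print(rates["A major"], rates["Eb major"])  # 0.1 0.8
```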

    Exploring agency and self-other processing: an fMRI study of dynamic cooperation using an adaptively paced finger tapping task with variable auditory feedback

    Ensemble musicians must be flexible and learn to adapt their performance to that of their partners and do so appropriately based on available sensory information streams. To further describe this type of dynamic joint action, we present a functional MRI study of sensorimotor synchronization with an adaptive “virtual partner” (VP). In particular, we investigate the behavioural and neural effects of variable auditory feedback (off, different to or same as pacing tones) associated with finger tapping performance across varying degrees of VP adaptivity (α). We predict that auditory feedback will bias the system to either i) integrate or ii) segregate information regarding “self” or the VP (“other”). We acquired subjective ratings of sense of oneness (self-other merging) and influence (self-other distinction), which we expected to vary with the availability and reliability (ambiguous-same vs. distinct-different) of pertinent auditory information. Behavioural data show a significant interaction effect of auditory feedback on α and synchronization such that distinctive self-other auditory information results in improvements in synchronization, especially at higher levels of α. Furthermore, auditory feedback was seen to have a significant effect on perceived oneness. Specifically, improved synchronization correlates with both ratings of oneness and activation of the precuneus and posterior cingulate, areas thought to integrate external and self-generated information. By contrast, a comparison of neural activation in different and same auditory conditions reveals SMA and cerebellum activity. Identification of these structures may be related to greater sensitivity to prediction error as well as self-other distinction necessary for agency processing.
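Adaptive pacing of this kind is commonly modeled with linear phase correction: after each event, the virtual partner shifts its next onset by a fraction α of the last tap-tone asynchrony. The update rule and parameter values below are illustrative assumptions, not the study's exact implementation:

```python
def vp_onsets(tap_times, period=0.5, alpha=0.25, start=0.0):
    """Simulate an adaptive virtual partner via linear phase correction.

    After each event, the VP shifts its next scheduled onset by a
    fraction `alpha` of the last tap-tone asynchrony (tap - tone),
    so alpha = 0 is a fixed metronome and higher alpha adapts more.
    """
    onsets, t = [], start
    for tap in tap_times:
        onsets.append(t)
        asynchrony = tap - t
        t += period + alpha * asynchrony
    return onsets

# A tapper running consistently 30 ms late pulls an adaptive VP
# later each cycle; a non-adaptive VP (alpha = 0) stays fixed.
taps = [0.03 + 0.5 * i for i in range(6)]
adaptive = vp_onsets(taps, alpha=0.25)
fixed = vp_onsets(taps, alpha=0.0)
```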

    The reaction $np \to pp\pi^{-}$ from threshold up to 570 MeV

    The reaction $np \to pp\pi^{-}$ has been studied in a kinematically complete measurement with a large-acceptance time-of-flight spectrometer for incident neutron energies between threshold and 570 MeV. The proton-proton invariant mass distributions show a strong enhancement due to the $pp(^{1}S_{0})$ final state interaction. A large anisotropy was found in the pion angular distributions, in contrast to the reaction $pp \to pp\pi^{0}$. At small energies, a large forward/backward asymmetry has been observed. From the measured integrated cross section $\sigma(np \to pp\pi^{-})$, the isoscalar cross section $\sigma_{01}$ has been extracted. Its energy dependence indicates that mainly partial waves with Sp final states contribute. Note: Due to a coding error, the differential cross sections $d\sigma/dM_{pp}$ as shown in Fig. 9 are too small by a factor of two, and in Table 3 the differential cross sections $d\sigma/d\Omega_{\pi}^{*}$ are too large by a factor of $10/2\pi$. The integrated cross sections and all conclusions remain unchanged. A corresponding erratum has been submitted and accepted by the European Physical Journal. Comment: 18 pages, 16 figures
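The erratum's corrections amount to two constant rescalings. A reader re-using the published numbers could apply them as follows; the input values are placeholders, not data from the paper:

```python
import math

def correct_dsigma_dMpp(values):
    """Undo the Fig. 9 coding error: published values are a factor of 2 too small."""
    return [v * 2 for v in values]

def correct_dsigma_dOmega(values):
    """Undo the Table 3 error: published values are too large by 10/(2*pi)."""
    factor = 10 / (2 * math.pi)
    return [v / factor for v in values]

# Placeholder numbers, purely to show the direction of each correction:
print(correct_dsigma_dMpp([1.5, 2.0]))  # [3.0, 4.0]
```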

    Analysing powers for the reaction $\vec{n}p \to pp\pi^{-}$ and for np elastic scattering from 270 to 570 MeV

    The analysing power of the reaction $np \to pp\pi^{-}$ for neutron energies between threshold and 570 MeV has been determined using a transversely polarised neutron beam at PSI. The reaction has been studied in a kinematically complete measurement using a time-of-flight spectrometer with large acceptance. Analysing powers have been determined as a function of the c.m. pion angle in different regions of the proton-proton invariant mass. They are compared to other data from the reactions $np \to pp\pi^{-}$ and $pp \to pp\pi^{0}$. The np elastic scattering analysing power was determined as a by-product of the measurements. Comment: 12 pages, 6 figures, submitted to EPJ-

    Decrypt the groove: Audio features of groove and their importance for auditory-motor interactions

    When we listen to music we often experience a state that can be described as 'in the groove'. This state is characterized by the wish or even the urge to move our body to the musical pulse (Janata et al., 2012; Madison, 2006). A previous study showed that high-groove music modulates the excitability of the motor system, whereas no effect of low-groove music was found (Stupacher et al., 2013). But which musical qualities contribute to the feeling of groove? To answer this question, we extracted audio features of 80 song clips with similar instrumentation and correlated them with subjective groove ratings. Song clips and groove ratings of 19 participants were taken from Janata et al. (2012). The following features were extracted with Matlab's MIR toolbox (Lartillot & Toiviainen, 2007): RMS energy, spectral flux, sub-band flux, pulse clarity ('MaxAutocor' and 'Attack'), and event density. Additionally, we used the Genesis Loudness toolbox to compute measures of loudness using the loudness model of Glasberg and Moore (2002). Results showed that groove ratings correlated positively (all ps < .01) with the following audio features: RMS energy (r = .37), RMS variability (r = .57), pulse clarity 'Attack' (r = .38), spectral flux (r = .34), sub-band flux of band 1 (0-50 Hz, r = .29), and band 2 (50-100 Hz, r = .29). Additionally, groove ratings correlated positively (all ps < .05) with band 3 (100-200 Hz, r = .23), band 5 (400-800 Hz, r = .23), and band 6 (800-1600 Hz, r = .24). The mean loudness of song clips did not affect groove ratings. Since energy in low frequency bands (Burger et al., 2012; Van Dyck et al., 2013), percussiveness (similar to pulse clarity 'Attack'), and spectral flux (Burger et al., 2012) have previously been shown to affect motor movements, our results indicate that the experience of groove is a phenomenon predominantly based on auditory-motor interactions (cf. Janata et al., 2012; Stupacher et al., 2013).
    References
    Burger, B., Thompson, M. R., Luck, G., Saarikallio, S., & Toiviainen, P. (2012). Music moves us: Beat-related musical features influence regularity of music-induced movement. In Proceedings of the 12th International Conference on Music Perception and Cognition and the 8th Triennial Conference of the European Society for the Cognitive Sciences of Music, Thessaloniki, Greece.
    Glasberg, B. R., & Moore, B. C. J. (2002). A model of loudness applicable to time-varying sounds. Journal of the Audio Engineering Society, 50, 331–342.
    Janata, P., Tomic, S. T., & Haberman, J. M. (2012). Sensorimotor coupling in music and the psychology of the groove. Journal of Experimental Psychology: General, 141, 54–75.
    Lartillot, O., & Toiviainen, P. (2007). A Matlab toolbox for musical feature extraction from audio. In Proc. of the 10th Int. Conference on Digital Audio Effects (DAFx-07), Bordeaux, France.
    Madison, G. (2006). Experiencing groove induced by music: Consistency and phenomenology. Music Perception, 24, 201–208.
    Stupacher, J., Hove, M. J., Novembre, G., Schütz-Bosbach, S., & Keller, P. E. (2013). Musical groove modulates motor cortex excitability: A TMS investigation. Brain and Cognition, 82, 127–136.
    Van Dyck, E., Moelants, D., Demey, M., Deweppe, A., Coussement, P., & Leman, M. (2013). The impact of the bass drum on human dance movement. Music Perception, 30, 349–359.
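Two of the features with the strongest correlations, RMS energy and spectral flux, are straightforward to compute from a waveform. The NumPy sketch below uses synthetic signals and hypothetical groove ratings rather than the MIR toolbox and stimuli used in the study:

```python
import numpy as np

def rms_energy(x, frame=1024, hop=512):
    """Frame-wise RMS energy of a mono signal `x`."""
    frames = [x[i:i + frame] for i in range(0, len(x) - frame + 1, hop)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

def spectral_flux(x, frame=1024, hop=512):
    """Summed positive change in magnitude spectrum between adjacent frames."""
    frames = [x[i:i + frame] for i in range(0, len(x) - frame + 1, hop)]
    mags = [np.abs(np.fft.rfft(f * np.hanning(frame))) for f in frames]
    return np.array([np.maximum(b - a, 0.0).sum() for a, b in zip(mags, mags[1:])])

# Correlating feature summaries with ratings (all data here is synthetic):
rng = np.random.default_rng(0)
clips = [rng.normal(size=22050) * g for g in np.linspace(0.2, 1.0, 8)]
ratings = np.linspace(1, 7, 8)  # hypothetical groove ratings, 1-7 scale
r_rms = np.corrcoef([rms_energy(c).mean() for c in clips], ratings)[0, 1]
r_flux = np.corrcoef([spectral_flux(c).mean() for c in clips], ratings)[0, 1]
```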

    A linear oscillator model predicts dynamic temporal attention and pupillary entrainment to rhythmic patterns

    Rhythm is a ubiquitous feature of music that induces specific neural modes of processing. In this paper, we assess the potential of a stimulus-driven linear oscillator model (Tomic & Janata, 2008) to predict dynamic attention to complex musical rhythms on an instant-by-instant basis. We use perceptual thresholds and pupillometry as attentional indices against which to test our model predictions. During a deviance detection task, participants listened to continuously looping, multi-instrument, rhythmic patterns, while being eye-tracked. Their task was to respond anytime they heard an increase in intensity (dB SPL). An adaptive thresholding algorithm adjusted deviant intensity at multiple probed temporal locations throughout each rhythmic stimulus. The oscillator model predicted participants' perceptual thresholds for detecting deviants at probed locations, with a low temporal salience prediction corresponding to a high perceptual threshold and vice versa. A pupil dilation response was observed for all deviants. Notably, the pupil dilated even when participants did not report hearing a deviant. Maximum pupil size and resonator model output were significant predictors of whether a deviant was detected or missed on any given trial. Besides the evoked pupillary response to deviants, we also assessed the continuous pupillary signal to the rhythmic patterns. The pupil exhibited entrainment at prominent periodicities present in the stimuli and followed each of the different rhythmic patterns in a unique way. Overall, these results replicate previous studies using the linear oscillator model to predict dynamic attention to complex auditory scenes and extend the utility of the model to the prediction of neurophysiological signals, in this case the pupillary time course; however, we note that the amplitude envelope of the acoustic patterns may serve as a similarly useful predictor. To our knowledge, this is the first paper to show entrainment of pupil dynamics by demonstrating a phase relationship between musical stimuli and the pupillary signal.
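A minimal version of such a stimulus-driven resonator, here a two-pole "reson" filter standing in for the Tomic & Janata filterbank, shows why output amplitude tracks the periodicities in an onset pattern. The frequencies, damping, and impulse-train stimulus are illustrative assumptions:

```python
import numpy as np

def reson(x, freq, sr, r=0.99):
    """Two-pole resonator (a damped linear oscillator) driven by signal x."""
    theta = 2 * np.pi * freq / sr
    b1, b2 = 2 * r * np.cos(theta), -r * r
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + b1 * (y[n - 1] if n >= 1 else 0.0) \
                    + b2 * (y[n - 2] if n >= 2 else 0.0)
    return y

# An onset train with a 0.5 s period (2 Hz) drives a 2 Hz resonator
# harder than mistuned ones -- a salience peak at the stimulus pulse.
sr = 100                                 # control-rate samples per second
x = np.zeros(sr * 10)
x[::sr // 2] = 1.0                       # onsets every 0.5 s
energies = {f: np.mean(reson(x, f, sr) ** 2) for f in (1.5, 2.0, 2.7)}
best = max(energies, key=energies.get)   # 2.0
```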

    A compact statistical model of the song syntax in the Bengalese finch

    Songs of many songbird species consist of variable sequences of a finite number of syllables. A common approach for characterizing the syntax of these complex syllable sequences is to use transition probabilities between the syllables. This is equivalent to the Markov model, in which each syllable is associated with one state, and the transition probabilities between the states do not depend on the state transition history. Here we analyze the song syntax in a Bengalese finch. We show that the Markov model fails to capture the statistical properties of the syllable sequences. Instead, a state transition model that accurately describes the statistics of the syllable sequences includes adaptation of the self-transition probabilities when states are repeatedly revisited, and allows associations of more than one state to the same syllable. Such a model does not increase the model complexity significantly. Mathematically, the model is a partially observable Markov model with adaptation (POMMA). The success of the POMMA supports the branching chain network hypothesis of how syntax is controlled within the premotor song nucleus HVC, and suggests that adaptation and many-to-one mapping from neural substrates to syllables are important features of the neural control of complex song syntax.
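The baseline Markov description that the paper tests can be sketched directly as maximum-likelihood transition-probability estimation; the toy syllable strings below are illustrative, not real finch data:

```python
from collections import Counter, defaultdict

def transition_probs(sequences):
    """First-order Markov estimate: P(next | current) from syllable strings."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

# Under a plain Markov model, a self-transition probability p implies
# geometrically distributed repeat lengths, P(k repeats) = p**(k-1) * (1 - p),
# which Bengalese finch song violates; the POMMA instead lets the
# self-transition probability decay each time the state is revisited.
songs = ["abbbc", "abbc", "abbbbc"]
P = transition_probs(songs)
print(P["b"]["b"], P["b"]["c"])
```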