10 research outputs found

    Musical stimuli for imagery and listening tasks.

    No full text
    Note. Recording details: Carmen, Habanera, by St. Mark’s Philharmonic Orchestra. On the Beautiful Blue Danube, by the Orchester der Wiener Staatsoper and Anton Paulik (Brilliant Classics). The Planets, Op. 32, Jupiter, the Bringer of Jollity, by the Philadelphia Orchestra and Eugene Ormandy (RCA Victor). * This stimulus was used for practice trials.

    Group means and standard deviations for measures of imagined loudness and covariates.

    No full text
    Note. Mean OSPAN and OMSI scores differ between stimuli because different samples of participants met the inclusion criteria for each. a Working memory covariate.

    Measures for evaluating imagined and listening loudness profiles.

    No full text

    Time series analysis of real-time music perception: approaches to the assessment of individual and expertise differences in perception of expressed affect

    No full text
    We use time series analysis methods to detect differences between individuals and expertise groups in continuous perceptions of the arousal expressed by Wishart's electroacoustic piece Red Bird. The study is part of a project in which we characterise dynamic perception of the structure and affective expression of music. We find that individual series of perceptions of expressed arousal often show considerable periods of stasis. This may challenge conventional time series methodologies, so we test their validity with a generalized linear autoregressive moving average (GLARMA) approach, which supports their use. Acoustic intensity is a dominant predictor of perceived arousal in this piece. We show that responses are time-variant and that animate sounds influence the conditional variance of perceived arousal. Using vector autoregression and cross-sectional time series analysis (which preserves the integrity of each individual response series), we find differences between musical expertise groups (non-musicians, musicians, and electroacoustic musicians). Individual differences within each group are greater than those between expertise groups. The companion paper applies the developed methods to all four pieces in our overall project (Dean, R. T., F. Bailes, and W. T. M. Dunsmuir. 2014. "Shared and Distinct Mechanisms of Individual and Expertise-Group Perception of Expressed Arousal in Four Works." Journal of Mathematics and Music 8 (3): 207–223). An Online Supplement is available at http://dx.doi.org/10.1080/17459737.2014.928752.
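
    For concreteness, a minimal sketch of the vector-autoregression step the abstract names, written with Python's statsmodels. This is an assumption for illustration only, not the authors' pipeline: the series below are simulated stand-ins for acoustic intensity and one listener's perceived-arousal ratings.

        # Hypothetical sketch, not the authors' code: fit a VAR to a pair of
        # series (acoustic intensity, perceived arousal) and test whether
        # intensity's past improves prediction of arousal.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(0)

        # Simulated stand-ins: 600 samples of intensity, with arousal
        # depending on intensity three samples earlier, plus noise.
        intensity = rng.normal(0.0, 1.0, 600)
        lagged = np.concatenate([np.zeros(3), intensity[:-3]])
        arousal = 0.6 * lagged + rng.normal(0.0, 1.0, 600)
        df = pd.DataFrame({"intensity": intensity, "arousal": arousal})

        results = VAR(df).fit(maxlags=15, ic="aic")  # lag order chosen by AIC
        print(results.summary())

        # Granger-causality test: does intensity help predict arousal
        # beyond arousal's own history?
        print(results.test_causality("arousal", ["intensity"], kind="f").summary())

    The Granger-causality test at the end mirrors the abstract's claim that acoustic intensity is a dominant predictor of perceived arousal.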

    Correlations between measures of imagined loudness and covariates.

    No full text
    Note. See Table 2 (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0056052#pone-0056052-t002) for definitions and descriptions of measure functions. Image-listening similarity and recall ability measures have been subjected to a logarithmic transformation. * p < 0.05. ** p < 0.001.

    Average ERPs for consonant, dissonant and microtonal intervals for musicians and non-musicians.

    No full text
    Musician (red line) and non-musician (blue line) group average ERPs at four midline sites (FZ, CZ, PZ, and OZ) for consonant, dissonant, and microtonal intervals.

    Mean perceived roughness and liking ratings of consonant, dissonant, and microtonal intervals by musicians and non-musicians.

    No full text
    For roughness, a rating of ‘1’ represents ‘very rough’ and ‘5’ represents ‘very smooth’. For liking, a rating of ‘1’ represents ‘really dislike’ and ‘5’ represents ‘really like’. Asterisks indicate a significant result.

    Equal Tempered and Microtonal Stimulus Sets.

    No full text
    All of the two-note chords had a lowest pitch of middle C. Microtonal counterparts to the equal tempered intervals were sharpened by one quarter tone.

    Mean ERP amplitude for musicians and non-musicians for consonant, dissonant, and microtonal intervals between 404 and 500 ms.

    No full text
    Mean amplitude and standard error for the three music-theoretic stimulus categories for musicians and non-musicians between 404 and 500 ms.
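
    A hedged sketch of how a windowed mean-amplitude measure like this one can be computed. The sampling rate, epoch onset, and simulated data below are assumptions for illustration, not details taken from the study.

        # Illustrative only: mean amplitude and standard error in a
        # 404-500 ms post-stimulus window, for one channel and condition.
        import numpy as np

        SFREQ = 250         # assumed sampling rate (Hz)
        EPOCH_ONSET = -0.2  # assumed epoch start relative to stimulus (s)

        def window_mean_amplitude(epochs, tmin=0.404, tmax=0.500):
            """epochs: (n_trials, n_samples) voltages for one channel."""
            i0 = int(round((tmin - EPOCH_ONSET) * SFREQ))
            i1 = int(round((tmax - EPOCH_ONSET) * SFREQ))
            per_trial = epochs[:, i0:i1].mean(axis=1)  # mean amplitude per trial
            sem = per_trial.std(ddof=1) / np.sqrt(len(per_trial))
            return per_trial.mean(), sem

        # Example with simulated data: 40 trials of 1.0 s epochs.
        rng = np.random.default_rng(1)
        epochs = rng.normal(0.0, 5.0, (40, int(1.0 * SFREQ)))
        print(window_mean_amplitude(epochs))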