    Selection of Learning Algorithm for Musical Tone Stimulated Wavelet De-Noised EEG Signal Classification

    Classifying EEG signals poses the challenge of selecting the learning algorithm that provides the highest classification accuracy. In this study, five well-known learning algorithms used in data mining were utilized. The task is to classify musical-tone-stimulated, wavelet de-noised EEG signals. Classification tasks include whether the EEG signal is tone stimulated or not, and whether the EEG signal is stimulated by the C, F, or G tone. Results show higher correctly classified instances (CCI) percentages and accuracies in the first classification task using the J48 decision tree as the learning algorithm. For the second classification task, the k-NN learning algorithm outperformed the other classifiers but gave low accuracy and a low correct classification percentage. The possibility of increasing performance was explored by increasing k (the number of neighbors). Accuracy and the correct classification percentage increased proportionally with k up to a certain value; beyond that value, larger k reduced both.
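    The k-NN behavior the abstract describes (majority vote among the k nearest training points, with accuracy depending on the choice of k) can be sketched in a few lines. The feature vectors and tone labels below are made-up stand-ins; the paper's actual wavelet de-noised EEG features are not shown in the abstract.

    ```python
    # Minimal k-NN sketch. The 2-D points and C/F/G labels are hypothetical
    # placeholders for the paper's wavelet de-noised EEG feature vectors.
    from collections import Counter
    import math

    def knn_predict(train, query, k):
        """Classify `query` by majority vote among its k nearest training points.
        `train` is a list of (feature_vector, label) pairs."""
        neighbours = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
        votes = Counter(label for _, label in neighbours)
        return votes.most_common(1)[0][0]

    # Toy training set: two examples per stimulating tone
    train = [((0.10, 0.20), "C"), ((0.20, 0.10), "C"),
             ((0.90, 0.80), "F"), ((0.80, 0.90), "F"),
             ((0.50, 0.50), "G"), ((0.45, 0.55), "G")]

    print(knn_predict(train, (0.15, 0.15), k=3))  # two "C" neighbours outvote one "G"
    ```

    Varying `k` here mirrors the paper's observation: a moderate `k` smooths out noisy neighbours, while a `k` approaching the training-set size drags in points from other classes and degrades accuracy.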

    Chord spacing and quality: Lessons from timbre research

    Although computational research often represents chords by their pitch-class (chroma) content, chord spacing is frequently the more salient feature. This paper addresses this disparity between models and cognition by extending the discrete Fourier transform (DFT) theory of chord quality from pitch-classes to pitches. In doing so, we note a structural similarity between music theory's chord quality and audio engineering's timbral cepstrum: both are DFTs, performed in the pitch or frequency domains, respectively. We thus treat chord spacing as a hybrid of pitch-class and timbre. To investigate the potential benefits of the DFT on pitch space (P-DFT), we perform two computational experiments. The first explores the P-DFT model theoretically by correlating chord distances calculated with a pitch-class model against those calculated with spacing. The second compares P-DFT estimations of chord distances against listener responses (Kuusi, 2005). Our results show that spacing is a salient feature of chords, and that it can be productively described by timbre-influenced methods.
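    The pitch-class DFT that the paper extends to pitch space can be sketched directly: take the 12-bin indicator vector of a chord's pitch classes and compute the magnitudes of its Fourier coefficients. The C-major example below is illustrative and not taken from the paper.

    ```python
    # Sketch of the pitch-class DFT ("chord quality") that the P-DFT model
    # generalizes. The chord choice is illustrative, not from the paper.
    import cmath

    def pc_dft_magnitudes(pcs, n=12):
        """Magnitudes of DFT components 0..n/2 of a pitch-class indicator vector."""
        vec = [1 if p in pcs else 0 for p in range(n)]
        return [abs(sum(v * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t, v in enumerate(vec)))
                for k in range(n // 2 + 1)]

    c_major = {0, 4, 7}          # C-E-G as pitch classes
    mags = pc_dft_magnitudes(c_major)
    ```

    Because transposition is a circular shift of the indicator vector, it only rotates each coefficient's phase: the magnitudes, and hence the "quality" profile, are identical for C major and D major, which is exactly the invariance that makes the DFT attractive as a chord-quality model.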

    High-resolution 7-Tesla fMRI data on the perception of musical genres – an extension to the studyforrest dataset

    Here we present an extension to the studyforrest dataset – a versatile resource for studying the behavior of the human brain in situations of real-life complexity (http://studyforrest.org). This release adds more high-resolution, ultra high-field (7 Tesla) functional magnetic resonance imaging (fMRI) data from the same individuals. The twenty participants were repeatedly stimulated with a total of 25 music clips, with and without speech content, from five different genres using a slow event-related paradigm. The data release includes raw fMRI data, as well as precomputed structural alignments for within-subject and group analysis. In addition to fMRI, simultaneously recorded cardiac and respiratory traces, as well as the complete implementation of the stimulation paradigm, including stimuli, are provided. An initial quality control analysis reveals distinguishable patterns of response to individual genres throughout a large expanse of areas known to be involved in auditory and speech processing. The present data can be used to, for example, generate encoding models for music perception that can be validated against the previously released fMRI data from stimulation with the “Forrest Gump” audio-movie and its rich musical content. In order to facilitate replicative and derived works, only free and open-source software was utilized.

    Population codes representing musical timbre for high-level fMRI categorization of music genres

    We present experimental evidence in support of distributed neural codes for timbre that are implicated in discrimination of musical styles. We used functional magnetic resonance imaging (fMRI) in humans and multivariate pattern analysis (MVPA) to identify activation patterns that encode the perception of rich music audio stimuli from five different musical styles. We show that musical styles can be automatically classified from population codes in bilateral superior temporal sulcus (STS). To investigate the possible link between the acoustic features of the auditory stimuli and neural population codes in STS, we conducted a representational similarity analysis and a multivariate regression-retrieval task. We found that the similarity structure of timbral features of our stimuli resembled the similarity structure of the STS more than any other type of acoustic feature. We also found that a regression model trained on timbral features outperformed models trained on other types of audio features. Our results show that human brain responses to complex, natural music can be differentiated by timbral audio features, emphasizing the importance of timbre in auditory perception.
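    The representational similarity analysis mentioned above reduces, at its core, to correlating two dissimilarity matrices: one built from stimulus features (here, timbre) and one from neural response patterns. A minimal sketch, using made-up 3×3 matrices in place of the paper's stimulus set:

    ```python
    # Sketch of representational similarity analysis (RSA): correlate the
    # upper triangles of a feature-based and a neural dissimilarity matrix.
    # The matrices below are fabricated stand-ins, not the paper's data.
    import math

    def upper_triangle(m):
        n = len(m)
        return [m[i][j] for i in range(n) for j in range(i + 1, n)]

    def pearson(x, y):
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        var = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
        return cov / var

    timbre_rdm = [[0, 1, 4], [1, 0, 3], [4, 3, 0]]   # stimulus dissimilarities
    neural_rdm = [[0, 2, 5], [2, 0, 3], [5, 3, 0]]   # STS pattern dissimilarities
    r = pearson(upper_triangle(timbre_rdm), upper_triangle(neural_rdm))
    ```

    Only the upper triangles are compared because each matrix is symmetric with a zero diagonal; the paper's claim is that this correlation is higher for timbral features than for any other acoustic feature type.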

    Perception and Processing of Pitch and Timbre in Human Cortex

    University of Minnesota Ph.D. dissertation, 2018. Major: Psychology. Advisor: Andrew Oxenham. 1 computer file (PDF); 157 pages.
    Pitch and timbre are integral components of auditory perception, yet our understanding of how they interact with one another and how they are processed cortically remains incomplete. Through a series of behavioral studies, neuroimaging, and computational modeling, we investigated these attributes. First, we looked at how variations in one dimension affect our perception of the other. Next, we explored how pitch and timbre are processed in the human cortex, in both a passive listening context and in the presence of attention, using univariate and multivariate analyses. Lastly, we used encoding models to predict cortical responses to timbre using natural orchestral sounds. We found that pitch and timbre interact with each other perceptually, and that musicians and non-musicians are similarly affected by these interactions. Our fMRI studies revealed that, in both passive and active listening conditions, pitch and timbre are processed in largely overlapping regions. However, their patterns of activation are separable, suggesting their underlying circuitry within these regions is unique. Finally, we found that a five-feature, subjectively derived encoding model could predict a significant portion of the variance in the cortical responses to timbre, suggesting our processing of timbral dimensions may align with our perceptual categorizations of them. Taken together, these findings help clarify aspects of both our perception and processing of pitch and timbre.
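    A linear encoding model of the kind described above fits weights mapping stimulus features to a measured response. A minimal sketch with two made-up features (the dissertation's five subjectively derived features are not listed in the abstract) and a synthetic response, solved by ordinary least squares via the normal equations:

    ```python
    # Sketch of a linear encoding model: fit weights from stimulus features to a
    # response by ordinary least squares. Features and responses are synthetic
    # stand-ins; the dissertation's five timbral features are not shown here.

    def ols_fit(X, y):
        """Solve the normal equations (X^T X) w = X^T y for two features,
        using the explicit 2x2 matrix inverse."""
        a = sum(r[0] * r[0] for r in X)          # X^T X entries
        b = sum(r[0] * r[1] for r in X)
        d = sum(r[1] * r[1] for r in X)
        g0 = sum(r[0] * t for r, t in zip(X, y))  # X^T y entries
        g1 = sum(r[1] * t for r, t in zip(X, y))
        det = a * d - b * b
        return ((d * g0 - b * g1) / det, (a * g1 - b * g0) / det)

    # Hypothetical per-stimulus features, e.g. (brightness, attack sharpness)
    X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
    # Synthetic noiseless "cortical response": 2*feature1 + 0.5*feature2
    y = [2.0 * f1 + 0.5 * f2 for f1, f2 in X]
    w = ols_fit(X, y)   # recovers the generating weights (2.0, 0.5)
    ```

    In an actual encoding analysis the fitted weights would be estimated per voxel and evaluated by how much response variance they predict on held-out stimuli, which is the quantity the dissertation reports for its five-feature model.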

    The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE)
