
    Hilbert-Huang versus Morlet wavelet transformation on mismatch negativity of children in uninterrupted sound paradigm

    Background. Compared to waveform or spectrum analysis of event-related potentials (ERPs), time-frequency representation (TFR) has the advantage of revealing the time- and frequency-domain information of ERPs simultaneously. Because the human brain can be modeled as a complicated nonlinear system, it is of interest to compare the performance of nonlinear and linear time-frequency representation methods for ERP research. In this study, the Hilbert-Huang transformation (HHT) and the Morlet wavelet transformation (MWT) were applied to the mismatch negativity (MMN) of children. Participants were 102 children aged 8–16 years. MMN was elicited in a passive oddball paradigm with duration deviants. The stimuli consisted of an uninterrupted sound comprising two alternating 100 ms tones (600 and 800 Hz) with infrequent 50 ms or 30 ms 600 Hz deviant tones. In theory, the larger deviant should elicit a larger MMN; this theoretical expectation was used as a criterion to compare the two TFR methods. For statistical analysis, the MMN support-to-absence ratio (SAR) was used to quantify the TFR of the MMN. Results. Compared to MWT, the TFR of MMN with HHT was much sharper, sparser, and clearer. Statistically, SAR differed significantly between the MMNs elicited by the two deviants with HHT but not with MWT, and the larger deviant elicited an MMN with a larger SAR. Conclusion. The support-to-absence ratio of the Hilbert-Huang transformation on mismatch negativity meets the theoretical expectation, i.e., the more deviant stimulus elicits a larger MMN, whereas the Morlet wavelet transformation does not reveal this. Thus, HHT appears more appropriate for analyzing event-related potentials in the time-frequency domain: it seems to evaluate ERPs more accurately and to provide theoretically valid information about the brain responses.
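    As a rough illustration of the two analysis families discussed above (a toy sketch, not the authors' pipeline), the code below computes a complex-Morlet TFR by direct convolution and a Hilbert amplitude envelope, which is the building block HHT applies to each intrinsic mode function after empirical mode decomposition. The synthetic signal, sampling rate, and wavelet parameters are all assumptions.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs = 500.0                       # sampling rate in Hz (assumption)
    t = np.arange(0, 1, 1 / fs)
    # Toy "ERP": a 5 Hz burst with a Gaussian envelope, a crude stand-in
    # for an MMN-like deflection (not real data).
    sig = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.02)

    def morlet_tfr(x, fs, freqs, n_cycles=6):
        """Linear TFR: convolve x with unit-energy complex Morlet wavelets."""
        out = np.empty((len(freqs), len(x)))
        for i, f in enumerate(freqs):
            sigma = n_cycles / (2 * np.pi * f)        # temporal width of the Gaussian
            tw = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
            wavelet = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma**2))
            wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
            out[i] = np.abs(np.convolve(x, wavelet, mode="same"))
        return out

    freqs = np.arange(2, 20, 1.0)
    tfr = morlet_tfr(sig, fs, freqs)

    # Hilbert amplitude envelope: in a full HHT this would be applied to each
    # IMF produced by empirical mode decomposition (e.g. via the PyEMD package),
    # yielding instantaneous amplitude and frequency per mode.
    env = np.abs(hilbert(sig))
    ```

    Because the Morlet wavelet has a fixed time-frequency trade-off, its TFR smears energy around 5 Hz, while the Hilbert envelope tracks the burst's amplitude sample by sample; this is the linear-versus-adaptive contrast the study evaluates.
    
    
    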

    Learn-merge invariance of priors: A characterization of the Dirichlet distributions and processes

    Learn-merge invariance is a property of prior distributions (related to postulates introduced by the philosophers W. E. Johnson and R. Carnap) which is defined and discussed within the Bayesian learning model. It is shown that this property in its strong formulation characterizes the Dirichlet distributions and processes. Generalizations towards weaker formulations are outlined.
    Keywords: prior; Dirichlet distribution; multinomial situation; symmetric measures; inductive learning
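    The invariance can be illustrated with a small numerical sketch, assuming the standard Dirichlet-multinomial update (posterior parameters are prior parameters plus counts) and the Dirichlet aggregation property (merging categories sums their parameters). The specific alphas, counts, and grouping below are made-up toy values.

    ```python
    import numpy as np

    # Toy multinomial situation: four categories with a Dirichlet prior.
    alpha = np.array([1.0, 2.0, 3.0, 4.0])   # prior Dirichlet parameters (assumed)
    n     = np.array([5,   0,   2,   7  ])   # observed category counts (assumed)

    def learn(a, counts):
        """Bayesian updating: Dirichlet(a) prior + counts -> Dirichlet(a + counts)."""
        return a + counts

    def merge(a, groups):
        """Merge categories: Dirichlet parameters of lumped categories add."""
        return np.array([a[g].sum() for g in groups])

    groups = [[0, 1], [2, 3]]   # lump categories {0,1} and {2,3} together

    # Learn-merge invariance: learning then merging agrees with
    # merging first and then learning on the merged counts.
    lhs = merge(learn(alpha, n), groups)
    rhs = learn(merge(alpha, groups),
                np.array([n[g].sum() for g in groups]))
    assert np.allclose(lhs, rhs)   # both equal [8., 16.]
    ```

    The abstract's characterization result says this commutation, required in its strong form for all data and all mergings, singles out exactly the Dirichlet family among priors.
    
    
    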
