
    Characterising evoked potential signals using wavelet transform singularity detection

    This research set out to develop a novel technique to decompose the electroencephalograph (EEG) signal into sets of constituent peaks in order to better describe the underlying nature of these signals. It began with the question: can a localised, single stimulation of sensory nervous tissue in the body be detected in the brain? Flash Visual Evoked Potential (VEP) tests were carried out on 3 participants by presenting a flash and recording the response in the occipital region of the cortex. By focussing on analysis techniques that retain a perspective across different domains - temporal (time), spectral (frequency/scale) and epoch (multiple events) - useful information was detected across multiple domains, which is not possible with single-domain transform techniques. A comprehensive set of algorithms to decompose evoked potential data into sets of peaks was developed and tested using wavelet transform singularity detection methods. The set of extracted peaks then forms the basis for a subsequent clustering analysis, which identifies sets of localised peaks that contribute the most towards the standard evoked response. The technique is novel in that no closely similar work has been identified in the literature. New and valuable insights into the nature of an evoked potential signal have been identified. Although the number of stimuli required to calculate an evoked potential response has not been reduced, the amount of data contributing to this response has been effectively reduced by 75%, so closer examination of a small subset of the evoked potential data is possible. Furthermore, the response has been meaningfully decomposed into a small number (circa 20) of constituent peak sets that are defined in terms of peak shape (time location, peak width and peak height) and the number of peaks within the peak set. The question of why some evoked potential components appear more strongly than others is probed by this technique. Delineation between individual peak sizes and how often they occur is possible for the first time, and this representation helps to provide an understanding of how particular evoked potential components are made up. A major advantage of this technique is that there are no pre-conditions, constraints or limitations. These techniques are highly relevant to all evoked potential modalities and to other brain signal response applications, such as brain-computer interfaces. Overall, a novel evoked potential technique has been described and tested. The results provide new insights into the nature of evoked potential peaks, with potential application across various evoked potential modalities.
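
    The core step described above - decomposing an epoch into wavelet-domain peaks - can be illustrated with a short sketch. The code below is a minimal, hypothetical illustration of wavelet-based peak (singularity) detection on a single evoked-potential epoch; the wavelet choice, scale range, threshold and the detect_peaks_cwt helper are assumptions made for illustration, not the thesis' actual algorithms.

```python
# Minimal sketch of wavelet-based peak detection on one evoked-potential epoch.
# All parameter values (scales, wavelet, threshold) are illustrative assumptions.
import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_peaks_cwt(epoch, fs, scales=None, wavelet="mexh", threshold=2.0):
    """Return candidate peaks as (time_s, width_proxy_s, height) tuples."""
    if scales is None:
        scales = np.arange(1, 64)                    # range of analysis scales
    coeffs, _ = pywt.cwt(epoch, scales, wavelet)     # shape: (n_scales, n_samples)
    modulus = np.abs(coeffs)

    # Local maxima of the wavelet modulus at a mid scale give rough peak locations.
    mid = len(scales) // 2
    row = modulus[mid]
    locs, props = find_peaks(row, height=threshold * row.std())

    peaks = []
    for loc, height in zip(locs, props["peak_heights"]):
        # Crude persistence check: the maximum should also stand out at the coarsest scale.
        if modulus[-1, loc] > modulus[-1].mean():
            # Scale of strongest response, converted to seconds, as a rough width proxy.
            best_scale = scales[np.argmax(modulus[:, loc])]
            peaks.append((loc / fs, best_scale / fs, float(height)))
    return peaks

# Usage with a synthetic "evoked potential": two Gaussian peaks in noise.
fs = 1000.0
t = np.arange(0, 0.5, 1 / fs)
epoch = (5 * np.exp(-((t - 0.1) ** 2) / (2 * 0.01 ** 2))
         - 3 * np.exp(-((t - 0.2) ** 2) / (2 * 0.02 ** 2))
         + np.random.randn(t.size))
print(detect_peaks_cwt(epoch, fs))
```

    The extracted (time, width, height) tuples are the kind of peak descriptions that a subsequent clustering stage could group into peak sets.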

    Analysis of the structure of time-frequency information in electromagnetic brain signals

    This thesis encompasses methodological developments and experimental work aimed at revealing information contained in time, frequency, and time–frequency representations of electromagnetic, specifically magnetoencephalographic, brain signals. The work can be divided into six endeavors. First, it was shown that sound slopes increasing in intensity from undetectable to audible elicit event-related responses (ERRs) that predict behavioral sound detection. This provides an opportunity to use non-invasive brain measures in hearing assessment. Second, the actively debated generation mechanism of ERRs was examined using novel analysis techniques, which showed that auditory stimulation did not result in phase reorganization of ongoing neural oscillations, and that processes additive to the oscillations accounted for the generation of ERRs. Third, the prerequisites for the use of the continuous wavelet transform in the interrogation of event-related brain processes were established. Subsequently, it was found that auditory stimulation resulted in an intermittent dampening of ongoing oscillations. Fourth, information on the time–frequency structure of ERRs was used to reveal that, depending on measurement condition, amplitude differences in averaged ERRs were due to changes in temporal alignment or in amplitudes of the single-trial ERRs. Fifth, a method that exploits mutual information of spectral estimates obtained with several window lengths was introduced. It allows the removal of frequency-dependent noise slopes and the accentuation of spectral peaks. Finally, a two-dimensional statistical data representation was developed, wherein all frequency components of a signal are made directly comparable according to the spectral distribution of their envelope modulations by using the fractal property of the wavelet transform. This representation reveals noise-buried processes and describes their envelope behavior. These examinations support two general conjectures. The stability of structures, or the level of stationarity, in a signal determines the appropriate analysis method and can be used as a measure to reveal processes that may not be observable with other available analysis approaches. The results also indicate that transient neural activity, reflected in ERRs, is a viable means of representing information in the human brain.
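
    As a hedged illustration of the kind of time–frequency analysis referred to above (the thesis' own pipeline is not reproduced here), the sketch below computes a complex-Morlet wavelet transform of single trials together with the inter-trial phase-locking value, a standard quantity used when asking whether averaged responses arise from phase reorganization of ongoing oscillations or from additive components. The function name trial_tfr_and_plv, the wavelet parameters and the simulated data are illustrative assumptions.

```python
# Sketch: single-trial time-frequency transform and inter-trial phase-locking value (PLV).
# Parameters and names are illustrative assumptions, not the thesis' own pipeline.
import numpy as np
import pywt

def trial_tfr_and_plv(trials, fs, freqs):
    """trials: array (n_trials, n_samples). Returns (mean power, PLV), each (n_freqs, n_samples)."""
    wavelet = "cmor1.5-1.0"                                   # complex Morlet wavelet
    # pywt expresses frequency via the wavelet's centre frequency and the scale.
    scales = pywt.central_frequency(wavelet) * fs / np.asarray(freqs, dtype=float)
    tfrs = []
    for trial in trials:
        coefs, _ = pywt.cwt(trial, scales, wavelet, sampling_period=1 / fs)
        tfrs.append(coefs)                                    # complex, (n_freqs, n_samples)
    tfrs = np.array(tfrs)
    power = np.mean(np.abs(tfrs) ** 2, axis=0)                # trial-averaged power
    plv = np.abs(np.mean(tfrs / (np.abs(tfrs) + 1e-12), axis=0))  # phase consistency in [0, 1]
    return power, plv

# Usage with simulated trials: an additive 10 Hz evoked component on top of noise.
fs, n_trials, n_samples = 600.0, 50, 600
t = np.arange(n_samples) / fs
trials = np.random.randn(n_trials, n_samples)
trials += np.sin(2 * np.pi * 10 * t) * np.exp(-((t - 0.3) ** 2) / 0.005)
power, plv = trial_tfr_and_plv(trials, fs, freqs=np.arange(4, 40, 2))
print(power.shape, plv.max())
```

    High PLV confined to the response latency, with unchanged ongoing power, is the pattern consistent with an additive component rather than phase reorganization.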

    Real-time spectral modelling of audio for creative sound transformation

    EThOS - Electronic Theses Online Service, United Kingdom

    Convolutional Methods for Music Analysis


    Models and Analysis of Vocal Emissions for Biomedical Applications

    The Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) workshop was established in 1999 out of a strongly felt need to share know-how, objectives and results between fields that had until then seemed quite distinct, such as bioengineering, medicine and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from the neonate to the adult and elderly. Over the years the initial topics have grown and spread into other areas of research, such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years in Firenze, Italy.

    Glottal-synchronous speech processing

    Glottal-synchronous speech processing is a field of speech science in which the pseudoperiodicity of voiced speech is exploited. Traditionally, speech processing involves segmenting and processing short speech frames of predefined length; this may fail to exploit the inherent periodic structure of voiced speech, which glottal-synchronous speech frames have the potential to harness. Glottal-synchronous frames are often derived from the glottal closure instants (GCIs) and glottal opening instants (GOIs). The SIGMA algorithm was developed for the detection of GCIs and GOIs from the electroglottograph signal with a measured accuracy of up to 99.59%. For GCI and GOI detection from speech signals, the YAGA algorithm provides a measured accuracy of up to 99.84%. Multichannel speech-based approaches are shown to be more robust to reverberation than single-channel algorithms. The GCIs are applied to real-world applications including speech dereverberation, where SNR is improved by up to 5 dB, and prosodic manipulation, where the importance of voicing detection in glottal-synchronous algorithms is demonstrated by subjective testing. The GCIs are further exploited in a new area of data-driven speech modelling, providing new insights into speech production and a set of tools to aid deployment in real-world applications. The technique is shown to be applicable in areas of speech coding, identification and artificial bandwidth extension of telephone speech.
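
    To make the framing idea above concrete, the sketch below cuts speech into glottal-synchronous frames given a list of estimated glottal closure instants, in the style of pitch-synchronous (PSOLA-like) processing. GCI detection itself (e.g. the SIGMA and YAGA algorithms mentioned in the abstract) is not implemented here; the gcis input and the glottal_synchronous_frames helper are illustrative assumptions.

```python
# Sketch of glottal-synchronous framing given externally estimated GCIs.
# The GCI detector itself is assumed; values below are illustrative only.
import numpy as np

def glottal_synchronous_frames(speech, gcis):
    """Return Hann-windowed frames spanning the two pitch periods around each GCI."""
    frames = []
    for prev_gci, gci, next_gci in zip(gcis[:-2], gcis[1:-1], gcis[2:]):
        frame = speech[prev_gci:next_gci].copy()   # two-period segment around the GCI
        frame *= np.hanning(len(frame))            # taper to avoid edge discontinuities
        frames.append((gci, frame))                # keep the anchor GCI for later resynthesis
    return frames

# Usage with a synthetic voiced segment: 100 Hz excitation at fs = 16 kHz.
fs = 16000
gcis = np.arange(200, 16000, fs // 100)            # one GCI per 10 ms pitch period
speech = np.random.randn(16000) * 0.01
speech[gcis] += 1.0                                # crude excitation pulses
frames = glottal_synchronous_frames(speech, list(gcis))
print(len(frames), len(frames[0][1]))
```

    Because each frame is anchored to a GCI, overlap-add resynthesis after modifying frame spacing or content is what enables the prosodic manipulation and dereverberation applications described above.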

    Connected Attribute Filtering Based on Contour Smoothness
