Glottal-synchronous speech processing
Glottal-synchronous speech processing is a field of speech science that exploits the pseudoperiodicity
of voiced speech. Traditionally, speech processing involves segmenting the signal into
short frames of predefined length; this can fail to exploit the inherent periodic structure
of voiced speech, which glottal-synchronous speech frames have the potential to harness.
Glottal-synchronous frames are often derived from the glottal closure instants (GCIs) and
glottal opening instants (GOIs).
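The framing idea can be illustrated with a minimal sketch: given GCI sample indices (assumed already detected, e.g. by an algorithm such as SIGMA or YAGA), each frame spans one pitch period between consecutive GCIs rather than a fixed window. The function name and toy signal below are illustrative, not from the thesis.

```python
import numpy as np

def gci_synchronous_frames(signal, gcis):
    """Segment a speech signal into glottal-synchronous frames.

    Each frame spans two consecutive glottal closure instants (GCIs),
    i.e. one pitch period, instead of a fixed-length analysis window.
    `gcis` is a list of sample indices, assumed already detected.
    """
    return [signal[start:end] for start, end in zip(gcis[:-1], gcis[1:])]

# Toy example: a 100 Hz voiced signal at 8 kHz has one GCI every 80 samples.
fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 100 * t)
gcis = list(range(0, fs, 80))               # hypothetical detected GCIs
frames = gci_synchronous_frames(speech, gcis)
print(len(frames), len(frames[0]))          # → 99 80
```

With real speech the pitch period varies, so the frames have unequal lengths; that variable-length structure is exactly what fixed-frame processing discards.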
The SIGMA algorithm was developed for the detection of GCIs and GOIs from
the electroglottograph (EGG) signal, with a measured accuracy of up to 99.59%. For GCI and
GOI detection from speech signals, the YAGA algorithm achieves a measured accuracy
of up to 99.84%. Multichannel speech-based approaches are shown to be more robust to
reverberation than single-channel algorithms.
The GCIs are applied to real-world applications including speech dereverberation,
where SNR is improved by up to 5 dB, and to prosodic manipulation where the importance
of voicing detection in glottal-synchronous algorithms is demonstrated by subjective
testing. The GCIs are further exploited in a new area of data-driven speech modelling,
providing new insights into speech production and a set of tools to aid deployment into
real-world applications. The technique is shown to be applicable in areas of speech coding,
identification and artificial bandwidth extension of telephone speech.
Speech assessment and characterization for law enforcement applications
Speech signals acquired, transmitted or stored in non-ideal conditions are often degraded by
one or more effects including, for example, additive noise. These degradations alter the signal
properties in a manner that deteriorates the intelligibility or quality of the speech signal. In
the law enforcement context, such degradations are commonplace due to limitations in
the audio collection methodology, which is often required to be covert. In severe degradation
conditions, the acquired signal may become unintelligible, losing its value in an investigation;
in less severe conditions, a loss in signal quality may be encountered, which can lead to
higher transcription time and cost.
This thesis proposes a non-intrusive speech assessment framework from which algorithms for
speech quality and intelligibility assessment are derived, to guide the collection and transcription
of law enforcement audio. These methods are trained on a large database labelled using
intrusive techniques (whose performance is verified with subjective scores) and shown to perform
favorably when compared with existing non-intrusive techniques. Additionally, a non-intrusive
CODEC identification and verification algorithm is developed which can identify a CODEC with
an accuracy of 96.8% and detect the presence of a CODEC with an accuracy higher than 97%
in the presence of additive noise.
Finally, the speech description taxonomy framework is developed, with the aim of characterizing
various aspects of a degraded speech signal, including the mechanism that results in a signal
with particular characteristics, the vocabulary that can be used to describe those degradations
and the measurable signal properties that can characterize the degradations. The taxonomy is
implemented as a relational database that facilitates the modeling of the relationships between
various attributes of a signal and promises to be a useful tool for training and guiding audio
analysts.
Analysis of very low quality speech for mask-based enhancement
The complexity of the speech enhancement problem has motivated many different solutions. However, most techniques address situations in which the target speech is fully intelligible and the background noise energy is low in comparison with that of the speech. Thus, while current enhancement algorithms can improve the perceived quality, the intelligibility of the speech is not significantly increased and may even be reduced.
Recent research shows that intelligibility of very noisy speech can be improved by the use of a binary mask, in which a binary weight is applied to each time-frequency bin of the input spectrogram. There are several alternative goals for the binary mask estimator, based either on the Signal-to-Noise Ratio (SNR) of each time-frequency bin or on the speech signal characteristics alone. Our approach to the binary mask estimation problem aims to preserve the important speech cues independently of the noise present by identifying time-frequency regions that contain significant speech energy.
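The SNR-oriented variant of the binary mask can be illustrated with an oracle sketch in which the clean speech and noise are known, so the per-bin SNR is exact. This is for illustration only (with a sine standing in for speech); it is not the estimator developed in the thesis, which must work from the noisy signal alone.

```python
import numpy as np
from scipy.signal import stft, istft

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(2 * fs) / fs
clean = np.sin(2 * np.pi * 440 * t)           # stand-in for a speech signal
noise = rng.normal(scale=0.5, size=clean.shape)
noisy = clean + noise

# Oracle local SNR in each time-frequency bin of the spectrogram.
f, tt, S = stft(clean, fs=fs, nperseg=256)
_, _, N = stft(noise, fs=fs, nperseg=256)
_, _, Y = stft(noisy, fs=fs, nperseg=256)

snr = np.abs(S) ** 2 / (np.abs(N) ** 2 + 1e-12)
mask = snr > 1.0                              # keep bins where speech dominates

# Apply the binary mask to the noisy spectrogram and resynthesise.
_, enhanced = istft(mask * Y, fs=fs, nperseg=256)
```

Bins dominated by noise are zeroed outright, which is why a well-estimated binary mask can raise intelligibility even when the residual signal sounds unnatural.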
The speech power spectrum varies greatly for different types of speech sound. The energy of voiced speech sounds is concentrated in the harmonics of the fundamental frequency while that of unvoiced sounds is, in contrast, distributed across a broad range of frequencies.

To identify the presence of speech energy in a noisy speech signal we have therefore developed two detection algorithms. The first is a robust algorithm that identifies voiced speech segments and estimates their fundamental frequency. The second detects the presence of sibilants and estimates their energy distribution. In addition, we have developed a robust algorithm to estimate the active level of the speech.

The outputs of these algorithms are combined with other features estimated from the noisy speech to form the input to a classifier which estimates a mask that accurately reflects the time-frequency distribution of speech energy even at low SNR levels. We evaluate a mask-based speech enhancer on a range of speech and noise signals and demonstrate a consistent increase in an objective intelligibility measure with respect to noisy speech.
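The thesis's voiced-segment detector is not specified in the abstract, but the underlying idea of fundamental-frequency estimation can be conveyed with a textbook autocorrelation sketch. This assumes a clean, strongly periodic frame and is not the robust algorithm described above.

```python
import numpy as np

def autocorr_f0(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate the fundamental frequency of a voiced frame from the
    location of the autocorrelation peak within a plausible pitch range.
    A textbook sketch, not the thesis algorithm."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(fs / fmax)                      # shortest admissible pitch period
    hi = int(fs / fmin)                      # longest admissible pitch period
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

fs = 8000
t = np.arange(int(0.04 * fs)) / fs           # one 40 ms analysis frame
frame = np.sin(2 * np.pi * 125 * t) + 0.3 * np.sin(2 * np.pi * 250 * t)
f0 = autocorr_f0(frame, fs)                  # ≈ 125 Hz
```

Restricting the lag search to the [fmin, fmax] range is what keeps the estimator from locking onto harmonics or subharmonics; robustness to noise, the focus of the thesis, requires considerably more than this.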