Unsupervised Voice Activity Detection by Modeling Source and System Information using Zero Frequency Filtering
Voice activity detection (VAD) is an important pre-processing step for speech
technology applications. The task consists of deriving segment boundaries of
audio signals which contain voicing information. In recent years, it has been
shown that voice source and vocal tract system information can be extracted
using zero-frequency filtering (ZFF) without making any explicit model
assumptions about the speech signal. This paper investigates the potential of
zero-frequency filtering for jointly modeling voice source and vocal tract
system information, and proposes two approaches for VAD. The first approach
demarcates voiced regions using a composite signal composed of different
zero-frequency filtered signals. The second approach feeds the composite signal
as input to the rVAD algorithm. These approaches are compared with other
supervised and unsupervised VAD methods in the literature, and are evaluated on
the Aurora-2 database, across a range of SNRs (20 to -5 dB). Our studies show
that the proposed ZFF-based methods perform comparably to state-of-the-art VAD
methods and are more robust to added degradation and different channel
characteristics. Comment: Accepted at Interspeech 202
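The zero-frequency filtering operation at the heart of this approach can be sketched as follows. This is a simplified, single-resonator variant, not the authors' implementation: the trend-removal window, number of trend-removal passes, frame length, and energy threshold are all illustrative assumptions.

```python
import numpy as np

def zero_frequency_filter(speech, fs, pitch_ms=10.0, passes=2):
    """Simplified zero-frequency filtering sketch.

    Differences the signal to remove any DC bias, integrates it twice
    (one ideal 0-Hz resonator), then removes the resulting polynomial
    trend by repeatedly subtracting a moving average whose length is
    roughly one average pitch period.
    """
    x = np.diff(speech, prepend=speech[0])        # remove DC bias
    y = np.cumsum(np.cumsum(x))                   # one zero-frequency resonator
    win = max(3, int(fs * pitch_ms / 1000) | 1)   # odd trend-removal window
    kernel = np.ones(win) / win
    for _ in range(passes):                       # iterative trend removal
        y = y - np.convolve(y, kernel, mode="same")
    return y

def voiced_mask(zff, fs, frame_ms=20.0, threshold_ratio=0.1):
    """Mark frames whose ZFF energy exceeds a fraction of the peak energy."""
    frame = int(fs * frame_ms / 1000)
    n = len(zff) // frame
    energy = np.array([np.mean(zff[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    return energy > threshold_ratio * energy.max()
```

On a toy signal (silence, then a 120 Hz tone, then silence) the mask flags only the tonal segment; real speech would of course need a more careful threshold than this fixed ratio.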
Green Cellular Networks: A Survey, Some Research Issues and Challenges
Energy efficiency in cellular networks is a growing concern for cellular
operators, not only to maintain profitability but also to reduce the overall
environmental impact. This emerging trend of achieving energy efficiency in
cellular networks is motivating the standardization authorities and network
operators to continuously explore future technologies in order to bring
improvements in the entire network infrastructure. In this article, we present
a brief survey of methods to improve the power efficiency of cellular networks,
explore some research issues and challenges and suggest some techniques to
enable an energy efficient or "green" cellular network. Since base stations
consume the largest share of the total energy used in a cellular system, we
will first provide a comprehensive survey on techniques to obtain energy
savings in base stations. Next, we discuss how heterogeneous network deployment
based on micro, pico and femto-cells can be used to achieve this goal. Since
cognitive radio and cooperative relaying are undisputed future technologies in
this regard, we propose a research vision to make these technologies more
energy efficient. Lastly, we explore some broader perspectives in realizing a
"green" cellular network technologyComment: 16 pages, 5 figures, 2 table
A discrete wavelet transform-based voice activity detection and noise classification with sub-band selection
A real-time discrete wavelet transform-based adaptive voice activity detector and sub-band selection for feature extraction are proposed for noise classification, which can be used in a speech processing pipeline. The voice activity detection and sub-band selection rely on wavelet energy features, and the feature extraction process involves the extraction of mel-frequency cepstral coefficients from selected wavelet sub-bands and mean absolute values of all sub-bands. The method, combined with a feedforward neural network with two hidden layers, could be added to speech enhancement systems and deployed in hearing devices such as cochlear implants. In comparison to the conventional short-time Fourier transform-based technique, it has higher F1 scores and classification accuracies (with a mean of 0.916 and 90.1%, respectively) across five different noise types (babble, factory, pink, Volvo (car) and white noise), a significantly smaller feature set with 21 features, reduced memory requirement, faster training convergence and about half the computational cost.
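As a rough illustration of a wavelet-energy front end like the one described, the sketch below computes per-sub-band energies with a plain Haar DWT and selects the highest-energy bands. The wavelet family, decomposition depth, and selection rule here are assumptions and may differ from the paper's configuration.

```python
import numpy as np

def haar_dwt_energies(signal, levels=4):
    """Per-sub-band energies from a Haar DWT (illustrative only).

    Returns one energy per detail level plus one for the final
    approximation band, ordered from finest detail to approximation.
    """
    x = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        if len(x) % 2:                            # pad to even length
            x = np.append(x, 0.0)
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass half-band
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass half-band
        energies.append(float(np.sum(detail ** 2)))
        x = approx
    energies.append(float(np.sum(x ** 2)))         # final approximation band
    return np.array(energies)

def select_subbands(energies, k=2):
    """Indices of the k highest-energy sub-bands (a crude selection rule)."""
    return np.argsort(energies)[::-1][:k]
```

Because the Haar transform is orthonormal, the sub-band energies sum to the signal energy, and for a low-frequency tone the approximation band (last index) dominates.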
Recognizing Voice Over IP: A Robust Front-End for Speech Recognition on the World Wide Web
The Internet Protocol (IP) environment poses two relevant sources of distortion to the speech recognition problem: lossy speech coding and packet loss. In this paper, we propose a new front-end for speech recognition over IP networks. Specifically, we suggest extracting the recognition feature vectors directly from the encoded speech (i.e., the bit stream) instead of decoding it and subsequently extracting the feature vectors. This approach offers two significant benefits. First, the recognition system is only affected by the quantization distortion of the spectral envelope. Thus, we are avoiding the influence of other sources of distortion due to the encoding-decoding process. Second, when packet loss occurs, our front-end becomes more effective since it is not constrained to the error handling mechanism of the codec. We have considered the ITU G.723.1 standard codec, which is one of the most prevalent coding algorithms in voice over IP (VoIP), and compared the proposed front-end with the conventional approach in two automatic speech recognition (ASR) tasks, namely, speaker-independent isolated digit recognition and speaker-independent continuous speech recognition. In general, our approach outperforms the conventional procedure for a variety of simulated packet loss rates. Furthermore, the improvement is larger as network conditions worsen.
Development of Auditory Selective Attention: Why Children Struggle to Hear in Noisy Environments
Children’s hearing deteriorates markedly in the presence of unpredictable noise. To explore why, 187 school-age children (4–11 years) and 15 adults performed a tone-in-noise detection task, in which the masking noise varied randomly between every presentation. Selective attention was evaluated by measuring the degree to which listeners were influenced by (i.e., gave weight to) each spectral region of the stimulus. Psychometric fits were also used to estimate levels of internal noise and bias. Levels of masking were found to decrease with age, becoming adult-like by 9–11 years. This change was explained by improvements in selective attention alone, with older listeners better able to ignore noise similar in frequency to the target. Consistent with this, age-related differences in masking were abolished when the noise was made more distant in frequency from the target. This work offers novel evidence that improvements in selective attention are critical for the normal development of auditory judgments.
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al, 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) that objects may be stored in and retrieved from a pre-attentional store during this task.
Distinct Cortical Pathways for Music and Speech Revealed by Hypothesis-Free Voxel Decomposition
The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels. To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles ("components") whose weighted combinations explained voxel responses throughout auditory cortex. This analysis revealed six components, each with interpretable response characteristics despite being unconstrained by prior functional hypotheses. Four components embodied selectivity for particular acoustic features (frequency, spectrotemporal modulation, pitch). Two others exhibited pronounced selectivity for music and speech, respectively, and were not explainable by standard acoustic features. Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex. However, music selectivity was weak in raw voxel responses, and its detection required a decomposition method. Voxel decomposition identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech. National Eye Institute (Grant EY13455)
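The core idea, expressing each voxel's responses as a weighted sum of a few canonical component response profiles, can be illustrated with a truncated SVD. The paper's actual decomposition algorithm differs (it imposes additional statistical constraints on the voxel weights), so this is only a structural sketch; the matrix shapes and the choice of `k` are assumptions.

```python
import numpy as np

def decompose_responses(D, k):
    """Factor a (voxels x sounds) response matrix D into voxel weights W
    (voxels x k) and component response profiles R (k x sounds), so that
    D is approximated by W @ R. Truncated SVD is a simplified stand-in
    for the paper's decomposition method.
    """
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    W = U[:, :k] * s[:k]   # voxel weights, scaled by singular values
    R = Vt[:k]             # component response profiles across sounds
    return W, R
```

If the data truly have k underlying components, the rank-k reconstruction `W @ R` recovers the response matrix; with real fMRI noise one would instead inspect how much variance each added component explains.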
Egocentric Auditory Attention Localization in Conversations
In a noisy conversation environment such as a dinner party, people often
exhibit selective auditory attention, or the ability to focus on a particular
speaker while tuning out others. Recognizing who somebody is listening to in a
conversation is essential for developing technologies that can understand
social behavior and devices that can augment human hearing by amplifying
particular sound sources. The computer vision and audio research communities
have made great strides towards recognizing sound sources and speakers in
scenes. In this work, we take a step further by focusing on the problem of
localizing auditory attention targets in egocentric video, i.e., detecting whom
in the camera wearer's field of view the wearer is listening to. To tackle the new and
challenging Selective Auditory Attention Localization problem, we propose an
end-to-end deep learning approach that uses egocentric video and multichannel
audio to predict the heatmap of the camera wearer's auditory attention. Our
approach leverages spatiotemporal audiovisual features and holistic reasoning
about the scene to make predictions, and outperforms a set of baselines on a
challenging multi-speaker conversation dataset. Project page:
https://fkryan.github.io/saa
Improving Quality of Life: Home Care for Chronically Ill and Elderly People
In this chapter, we propose a system created especially for elderly or chronically ill people with special needs and limited familiarity with technology. The system combines home monitoring of physiological and emotional states through a set of wearable sensors, user-controlled (automated) home devices, and a central control for integration of the data, in order to provide a safe and friendly environment suited to the limited capabilities of the users. The main objective is the easy, low-cost automation of a room or house to provide a friendly environment that enhances the psychological condition of immobilized users. In addition, the complete interaction of the components provides an overview of the physical and emotional state of the user, building a behavior pattern that can be supervised by the caregiving staff. This approach allows the integration of physiological signals with the patient’s environmental and social context to obtain a complete picture of the emotional state.