
    Speech/Music Discrimination using Entropy and Dynamism Features in a HMM Classification Framework

    In this paper, we present a new approach towards high-performance speech/music discrimination on realistic tasks related to the automatic transcription of broadcast news. In the approach presented here, the (local) Probability Density Function (PDF) estimators trained on clean microphone speech (as used in a standard large-vocabulary speech recognition system) are used as a channel model, at the output of which the entropy and "dynamism" are measured and integrated over time through a 2-state (speech and non-speech) hidden Markov model (HMM) with minimum duration constraints. Indeed, in the case of entropy, it is clear that, on average, the entropy at the output of the local PDF estimators will be larger for speech signals than for non-speech signals presented at their input. In our case, local probabilities are estimated by a multilayer perceptron (MLP), as used in hybrid HMM/MLP systems, thus guaranteeing the use of "real" probabilities in the estimation of the entropy. The 2-state speech/non-speech HMM thus takes these two-dimensional features (entropy and "dynamism"), whose distributions are modeled through (two-dimensional) multi-Gaussian densities or an MLP, whose parameters are trained through a Viterbi algorithm. Different experiments, including different speech and music styles as well as different (a priori) distributions of the speech and music signals (real data distribution, mostly speech, or mostly music), illustrate the robustness of the approach, which always achieves a correct segmentation performance higher than 90%. Finally, we show how a confidence measure can be used to further improve the segmentation results, and also discuss how this may be used to extend the technique to the case of speech/music mixtures.
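    As a rough illustration of the two features, here is a minimal sketch assuming phoneme posteriors are already available as an array of MLP outputs. The abstract does not define "dynamism" precisely, so the frame-to-frame posterior change used below is an assumption, not the paper's exact formula.

```python
import numpy as np

def entropy_dynamism(posteriors):
    """Compute entropy and dynamism features from a window of phoneme
    posterior vectors (shape T x K), e.g. MLP outputs in a hybrid HMM/MLP
    system. Dynamism here is the mean squared change between consecutive
    posterior vectors -- one plausible reading of the paper's feature.
    """
    p = np.clip(posteriors, 1e-12, 1.0)          # avoid log(0)
    entropy = -(p * np.log(p)).sum(axis=1)       # per-frame entropy
    diffs = np.diff(posteriors, axis=0)          # frame-to-frame change
    dynamism = (diffs ** 2).sum(axis=1).mean()   # scalar over the window
    return entropy.mean(), dynamism
```

    Flat (high-entropy) posteriors are typical of non-phonetic input only at the MLP output level; per the abstract, speech input yields the larger entropy at the estimator outputs, and both features feed the 2-state HMM as a two-dimensional observation.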

    Clustering And Segmenting Speakers And Their Locations In Meetings

    This paper presents a new approach toward automatic annotation of meetings in terms of speaker identities and their locations. This is achieved by segmenting the audio recordings using two independent sources of information: magnitude spectrum analysis and sound source localization. We combine the two in an appropriate HMM framework. There are three main advantages of this approach. First, it is completely unsupervised, i.e., speaker identities and the number of speakers and locations are automatically inferred. Second, it is threshold-free, i.e., the decisions are made without the need for a threshold value, which generally requires an additional development dataset. The third advantage is that the joint segmentation improves over the speaker segmentation derived using only acoustic features. Experiments on a series of meetings recorded in the IDIAP Smart Meeting Room demonstrate the effectiveness of this approach.

    Inhaled anticholinergic use and all-cause mortality among elderly Medicare beneficiaries with chronic obstructive pulmonary disease

    Background: The purpose of this study was to examine the association between use of inhaled anticholinergics and all-cause mortality among elderly individuals with chronic obstructive pulmonary disease (COPD), after controlling for demographic, socioeconomic, health, functional status, smoking, and obesity factors.
    Methods: We used a retrospective longitudinal panel data design. Data were extracted for multiple years (2002–2009) of the Medicare Current Beneficiary Survey (MCBS) linked with fee-for-service Medicare claims. Generic and brand names of inhaled anticholinergics were used to identify inhaled anticholinergic utilization from the self-reported prescription medication files. All-cause mortality was assessed using the vital status variable. Unadjusted group differences in mortality rates were tested using the chi-square statistic. Multivariable logistic regressions with independent variables entered in separate blocks were used to analyze the association between inhaled anticholinergic use and all-cause mortality. All analyses accounted for the complex design of the MCBS.
    Results: Overall, 19.4% of the elderly Medicare beneficiaries used inhaled anticholinergics. Inhaled anticholinergic use was significantly higher (28.5%) among those who reported poor health compared with those reporting excellent or very good health (12.7%). Bivariate analyses indicated that inhaled anticholinergic use was associated with significantly higher rates of all-cause mortality (18.7%) compared with nonusers (13.6%). However, multivariate analyses controlling for risk factors did not suggest an increased likelihood of all-cause mortality (adjusted odds ratio 1.26, 95% confidence interval 0.95–1.67).
    Conclusion: Use of inhaled anticholinergics among elderly individuals with COPD is potentially safe in terms of all-cause mortality when we adjust for baseline risk factors.
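    For intuition, the crude (unadjusted) odds ratio implied by the reported bivariate mortality rates can be computed directly. Note this is not the paper's adjusted estimate, which controls for the listed covariates:

```python
def odds_ratio(p_exposed, p_unexposed):
    """Unadjusted odds ratio from two event proportions."""
    return (p_exposed / (1 - p_exposed)) / (p_unexposed / (1 - p_unexposed))

# Crude all-cause mortality from the abstract: 18.7% (users) vs 13.6% (nonusers)
crude_or = odds_ratio(0.187, 0.136)  # roughly 1.46
```

    The gap between this crude figure (about 1.46) and the adjusted odds ratio of 1.26 with a confidence interval crossing 1 is exactly what the study's conclusion rests on: the apparent excess mortality attenuates once baseline risk factors are adjusted for.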

    Robust Audio Segmentation

    Audio segmentation, in general, is the task of segmenting a continuous audio stream into acoustically homogeneous regions, where the rule of homogeneity depends on the task. This thesis aims at developing and investigating efficient, robust, and unsupervised techniques for three important tasks related to audio segmentation, namely speech/music segmentation, speaker change detection, and speaker clustering. The speech/music segmentation technique proposed in this thesis is based on the functioning of an HMM/ANN hybrid ASR system, where an MLP estimates the posterior probabilities of different phonemes. These probabilities exhibit a particular pattern when the input is a speech signal. This pattern is captured in the form of feature vectors, which are then integrated in an HMM framework. The technique thus segments the audio data into "recognizable" and "non-recognizable" segments. The efficiency of the proposed technique is demonstrated by a number of experiments conducted on broadcast news data exhibiting real-life scenarios (different speech and music styles, overlapping speech and music, non-speech sounds other than music, etc.). A novel distance metric is proposed in this thesis for finding speaker segment boundaries (speaker change detection). The proposed metric can be seen as a special case of the Log Likelihood Ratio (LLR) or the Bayesian Information Criterion (BIC), where the number of parameters in the two models (or hypotheses) is forced to be equal. However, the advantage of the proposed metric over LLR, BIC, and other metric-based approaches is that it achieves comparable performance without requiring an adjustable threshold/penalty term, hence also eliminating the need for a development dataset. Speaker clustering is the task of unsupervised classification of the audio data in terms of speakers.
    For this purpose, a novel HMM-based agglomerative clustering algorithm is proposed where, starting from a large number of clusters, the "closest" clusters are merged in an iterative process. A novel merging criterion is proposed for this purpose which does not require an adjustable threshold value; hence the stopping criterion is also automatically met when there are no more clusters left to merge. The efficiency of the proposed algorithm is demonstrated by various experiments on broadcast news data, and it is shown that the proposed criterion outperforms the use of LLR, even when LLR is used with an optimal threshold value. These tasks obviously play an important role in the pre-processing stages of ASR. For example, correctly identifying "non-recognizable" segments in the audio stream and excluding them from recognition saves computation time in ASR and results in more meaningful transcriptions. Moreover, researchers have clearly shown the positive impact of further clustering of identified speech segments in terms of speakers (speaker clustering) on transcription accuracy. However, we note that this processing has various other interesting and practical applications. For example, it provides characteristic information about the data (metadata), which is useful for the indexing of audio documents. One such application is investigated in this thesis: the extracted metadata is combined with the ASR output, resulting in a Rich Transcription (RT) which is much easier for an end-user to understand. In a further application, speaker clustering was combined with the precise location information available in scenarios like smart meeting rooms to segment meeting recordings jointly in terms of speakers and their locations in a meeting room. This is useful for automatic meeting summarization, as it enables answering questions like "who is speaking, and where?"
    This could be used to access, for example, a specific presentation made by a particular speaker, or all the speech segments belonging to a particular speaker.
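    For context on the baseline the thesis improves upon, here is a sketch of the classical BIC speaker-change score between two adjacent segments, with its tunable penalty term. The thesis's own metric instead forces both hypotheses to have the same number of parameters, which is what removes the penalty; the exact construction is not given in the abstract, so only the standard baseline is shown here.

```python
import numpy as np

def gauss_loglik(x):
    """Max-likelihood log-likelihood of data x (n x d) under one full-
    covariance Gaussian: -n/2 * (d*log(2*pi) + log|Sigma_ML| + d)."""
    n, d = x.shape
    cov = np.cov(x, rowvar=False, bias=True) + 1e-6 * np.eye(d)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * n * (d * np.log(2 * np.pi) + logdet + d)

def bic_change_score(seg_a, seg_b, penalty=1.0):
    """Classical BIC change score: positive means two separate Gaussians
    fit better than one, i.e. a likely speaker change. `penalty` is the
    adjustable term the thesis's equal-parameter metric avoids."""
    x = np.vstack([seg_a, seg_b])
    n, d = x.shape
    delta_l = gauss_loglik(seg_a) + gauss_loglik(seg_b) - gauss_loglik(x)
    n_params = d + d * (d + 1) / 2      # extra Gaussian: mean + covariance
    return delta_l - 0.5 * penalty * n_params * np.log(n)
```

    The need for a development dataset in the classical approach comes precisely from having to tune `penalty` per acoustic condition.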

    Robust HMM-Based Speech/Music Segmentation

    In this paper we present a new approach towards high-performance speech/music segmentation on realistic tasks related to the automatic transcription of broadcast news. In the approach presented here, local probability density function (PDF) estimators trained on clean microphone speech are used as a channel model, at the output of which the entropy and "dynamism" are measured and integrated over time through a 2-state (speech and non-speech) hidden Markov model (HMM) with minimum duration constraints. The parameters of the HMM are trained using the EM algorithm in a completely unsupervised manner. Different experiments, including a variety of speech and music styles as well as different segment durations of speech and music signals (real data distribution, mostly speech, or mostly music), illustrate the robustness of the approach, which in each case achieves a frame-level accuracy greater than 94%.
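    A common way to realize the minimum duration constraint mentioned above is to expand each class into a left-to-right chain of tied states, so any visit lasts at least a fixed number of frames. A minimal sketch of such a transition matrix (the chain length and self-loop probability are illustrative values, not the paper's):

```python
import numpy as np

def min_duration_transitions(n_classes=2, min_frames=3, self_loop=0.9):
    """Build an HMM transition matrix where each class (e.g. speech /
    non-speech) is a chain of `min_frames` states. Only the last state of
    a chain may self-loop or jump to the first state of another class,
    so every visit to a class lasts at least `min_frames` frames.
    """
    n = n_classes * min_frames
    A = np.zeros((n, n))
    for c in range(n_classes):
        base = c * min_frames
        for i in range(min_frames - 1):
            A[base + i, base + i + 1] = 1.0        # forced forward move
        last = base + min_frames - 1
        A[last, last] = self_loop                  # remain in this class
        out = (1 - self_loop) / (n_classes - 1)
        for c2 in range(n_classes):
            if c2 != c:
                A[last, c2 * min_frames] = out     # switch to other class
    return A
```

    All states within a chain share the same emission model, so the expansion changes only the allowed durations, not the observation likelihoods.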

    An Online Audio Indexing System

    This paper presents an overview of an online audio indexing system, which creates a searchable index of the speech content embedded in digitized audio files. The system is based on our recently proposed offline audio segmentation techniques. As the data arrives continuously, the system first finds the boundaries of acoustically homogeneous segments. Next, each of these segments is classified into the speech, music, or "mixture" class, where mixtures are defined as regions where speech and other non-speech sounds are present simultaneously and noticeably. The speech segments are then clustered together to provide consistent speaker labels. The speech and mixture segments are converted to text via an ASR system. The resulting words are time-stamped together with other metadata information (speaker identity, speech confidence score) in an XML file to rapidly identify and access target segments. In this paper, we analyze the performance at each stage of this audio indexing system and also compare it with the performance of the corresponding offline modules.
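    To make the indexing output concrete, the following sketch builds one time-stamped index entry of the kind described. The tag and attribute names are assumptions for illustration, not the paper's actual XML schema:

```python
import xml.etree.ElementTree as ET

# One illustrative index entry: a speech segment with a speaker label and a
# confidence-scored, time-stamped word. Schema names are hypothetical.
seg = ET.Element("segment", {"class": "speech", "speaker": "spkr1",
                             "start": "12.48", "end": "15.10"})
word = ET.SubElement(seg, "word", {"start": "12.48", "conf": "0.91"})
word.text = "hello"
xml_entry = ET.tostring(seg, encoding="unicode")
```

    Because each word carries its own timestamp, a search hit can be mapped straight back to an audio offset, which is what makes the index "searchable" in the sense the paper describes.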

    Multi-Scale Simulation Modeling for Prevention and Public Health Management of Diabetes in Pregnancy and Sequelae

    Diabetes in pregnancy (DIP) is an increasing public health priority in the Australian Capital Territory, particularly due to its impact on the risk of developing Type 2 diabetes. While earlier diagnostic screening results in greater capacity for early detection and treatment, such benefits must be balanced against the greater demands this imposes on public health services. To address such planning challenges, a multi-scale hybrid simulation model of DIP was built to explore the interaction of risk factors and capture the dynamics underlying the development of DIP. The impact of interventions on health outcomes is measured at the physiological, health-service, and population levels. Of central significance in the model is a compartmental model representing the underlying physiological regulation of glycemic status based on beta-cell dynamics and insulin resistance. The model also simulates the dynamics of continuous BMI evolution, glycemic status change during pregnancy, and diabetes classification driven by the individual-level physiological model. We further modeled public health service pathways providing diagnosis and care for DIP to explore the optimization of resource use during service delivery. The model was extensively calibrated against empirical data.
    Comment: 10 pages, SBP-BRiMS 201

    Unknown-Multiple Speaker clustering using HMM

    An HMM-based speaker clustering framework is presented, where the number of speakers and the segmentation boundaries are unknown a priori. Ideally, the system aims to create one pure cluster for each speaker. The HMM is ergodic in nature, with a minimum duration topology. The final number of clusters is determined automatically by merging the closest clusters and retraining the new cluster, until a decrease in likelihood is observed. In the same framework, we also examine the effect of using only the features from highly voiced frames as a means of improving the robustness and reducing the computational complexity of the algorithm. The proposed system is assessed on the 1996 HUB-4 evaluation test set in terms of both cluster and speaker purity. It is shown that the number of clusters found often corresponds to the actual number of speakers.
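    The merge-until-likelihood-drops loop can be sketched as below. This is a simplified stand-in: the paper retrains an ergodic HMM after each merge, whereas here `loglik` is an arbitrary per-cluster scoring function supplied by the caller (any score where merging similar clusters helps, e.g. likelihood minus a model-complexity term):

```python
import numpy as np

def merge_clusters(clusters, loglik):
    """Greedy agglomerative merging: repeatedly merge the pair of clusters
    whose merge gains the most score, and stop as soon as the best merge
    would decrease the overall score. `clusters` is a list of (n_i x d)
    frame arrays; `loglik` scores a single cluster.
    """
    while len(clusters) > 1:
        best, best_gain = None, -np.inf
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                merged = np.vstack([clusters[i], clusters[j]])
                gain = loglik(merged) - loglik(clusters[i]) - loglik(clusters[j])
                if gain > best_gain:
                    best_gain, best = gain, (i, j)
        if best_gain < 0:                    # score would drop: stop merging
            break
        i, j = best
        clusters[i] = np.vstack([clusters[i], clusters[j]])
        del clusters[j]
    return clusters
```

    Note how the stopping criterion falls out of the merging criterion itself: no threshold is consulted, which is the threshold-free property emphasized in the abstract.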

    An HMM-Based Framework for Supporting Accurate Classification of Music Datasets

    In this paper, we use Hidden Markov Models (HMMs) and Mel-Frequency Cepstral Coefficients (MFCCs) to build statistical models of classical music composers directly from the music datasets. Several musical pieces are divided by instruments (String, Piano, Chorus, Orchestra), and, for each instrument, statistical models of the composers are computed. We selected 19 different composers spanning four centuries, using a total of 400 musical pieces. Each musical piece is classified as belonging to a composer if the corresponding HMM gives the highest likelihood for that piece. We show that the models so developed can be used to obtain useful information on the correlation between the composers. Moreover, by using the maximum-likelihood approach, we also classified the instrumentation used by the same composer. Besides serving as an analysis tool, the described approach has been used as a classifier. Overall, this yields an HMM-based framework for supporting accurate classification of music datasets. On a dataset of String Quartet movements, we obtained an average composer classification accuracy of more than 96%. As regards instrumentation classification, we obtained an average classification accuracy of slightly less than 100% for Piano, Orchestra, and String Quartet. The most significant results from our experimental assessment and analysis are reported and discussed in detail.
    Cuzzocrea, Alfredo; Mumolo, Enzo; Vercelli, Gianni
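    The maximum-likelihood decision rule described above can be sketched as follows. For self-containment, each per-composer model here is a diagonal Gaussian over MFCC frames rather than the paper's trained HMM; only the argmax-over-model-likelihoods rule is faithful to the abstract:

```python
import numpy as np

def classify(mfcc, models):
    """Assign an MFCC sequence (T x D) to the composer whose model gives
    the highest log-likelihood. `models` maps composer name to a
    (mean, var) diagonal Gaussian, a stand-in for a per-composer HMM.
    """
    def loglik(x, mean, var):
        # sum of independent Gaussian log-densities over frames and dims
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

    scores = {name: loglik(mfcc, m, v) for name, (m, v) in models.items()}
    return max(scores, key=scores.get)   # maximum-likelihood decision
```

    With real HMMs in place of the Gaussians, the same rule covers both tasks in the paper: composer classification and, symmetrically, instrumentation classification for a fixed composer.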