
    Fully Automated Translation of BoxTalk to Promela

    Telecommunication systems are structured to enable incremental growth, so that new telecommunication features can be added to the set of existing features. As more features are added, certain existing features may exhibit unpredictable behaviour. This is known as the feature interaction problem, and it is a very old problem in telecommunication systems. Jackson and Zave have proposed a technology, Distributed Feature Composition (DFC), to manage the feature interaction problem. DFC is a pipe-and-filter-like architecture in which features are "filters" and the communication channels connecting them are "pipes". DFC does not prescribe how features are specified or programmed. Instead, Zave and Jackson have developed BoxTalk, a call-abstraction, domain-specific, high-level programming language for programming features. BoxTalk is based on the DFC protocol and uses macros to combine common sequences of read and write actions, thus simplifying the DFC protocol details that appear in feature models. BoxTalk features must adhere to the DFC protocol in order to be plugged into a DFC architecture (i.e., features must be "DFC compliant"). We want to use model checking to verify whether a feature is DFC compliant. We express DFC compliance as a set of properties written as linear temporal logic formulas. To use the model checker SPIN, BoxTalk features must be translated into Promela. Our automatic verification process comprises three steps: (1) explicate BoxTalk features by expanding macros and introducing implicit details; (2) mechanically translate the explicated BoxTalk features into Promela models; (3) verify the Promela models of the features using the SPIN model checker. We present a case study of BoxTalk features, describing the original features, how they are explicated and translated into Promela by our software, and how they are proven to be DFC compliant.
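    The abstract does not reproduce the concrete compliance properties; purely as an illustration of the form such SPIN-checkable obligations take, a linear temporal logic property over hypothetical protocol propositions (setup_received, upack_sent and teardown_sent are placeholder names, not the paper's atomic propositions) could be written

```latex
\mathbf{G}\bigl(\mathit{setup\_received} \;\rightarrow\; \mathbf{F}\,(\mathit{upack\_sent} \lor \mathit{teardown\_sent})\bigr)
```

    i.e., whenever a feature box receives a setup signal it must eventually either acknowledge it or tear the call down.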

    Phone-based cepstral polynomial SVM system for speaker recognition

    We have been using a phone-based cepstral system with polynomial features in NIST evaluations for the past two years. This system uses three broad phone classes, three states per class, and third-order polynomial features obtained from MFCC features. In this paper, we present a complete analysis of the system. We start from a simpler system that does not use phones or states and show that the addition of phones gives a significant improvement. We show that adding state information does not provide an improvement on its own but provides a significant improvement when used with phone classes. We complete the system by applying nuisance attribute projection (NAP) and score normalization. We show that splitting features after a joint NAP over all phone classes results in a significant improvement. Overall, we obtain about a 25% performance improvement with polynomial features based on phones and states, and obtain a system with performance comparable to a state-of-the-art SVM system.
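    As a minimal sketch of the kind of polynomial feature expansion described (the MFCC dimensionality, the per-phone-class averaging and the absence of any scaling are assumptions for illustration, not the authors' exact pipeline):

```python
import itertools
import numpy as np

def poly_expand(frame, order=3):
    """Monomials of one cepstral frame up to the given order (with repetition)."""
    terms = [1.0]
    for k in range(1, order + 1):
        for combo in itertools.combinations_with_replacement(frame, k):
            terms.append(float(np.prod(combo)))
    return np.array(terms)

def phone_class_supervector(frames, order=3):
    """Average the expanded frames assigned to one broad phone class,
    giving a fixed-length vector that an SVM can consume."""
    return np.mean([poly_expand(f, order) for f in frames], axis=0)

# Hypothetical usage: 200 twelve-dimensional MFCC frames for one phone class.
mfcc_frames = np.random.randn(200, 12)
vec = phone_class_supervector(mfcc_frames)
print(vec.shape)  # one third-order polynomial supervector
```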

    Multi-task Learning for Speaker Verification and Voice Trigger Detection

    Automatic speech transcription and speaker recognition are usually treated as separate tasks even though they are interdependent. In this study, we investigate training a single network to perform both tasks jointly. We train the network in a supervised multi-task learning setup, where the speech transcription branch of the network is trained to minimise a phonetic connectionist temporal classification (CTC) loss while the speaker recognition branch is trained to label the input sequence with the correct speaker label. We present a large-scale empirical study where the model is trained using several thousand hours of labelled training data for each task. We evaluate the speech transcription branch of the network on a voice trigger detection task, while the speaker recognition branch is evaluated on a speaker verification task. Results demonstrate that the network is able to encode both phonetic and speaker information in its learnt representations while yielding accuracies at least as good as the baseline models for each task, with the same number of parameters as the independent models.
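    A minimal sketch of such a multi-task setup in PyTorch; the encoder type, layer sizes, pooling and the loss weighting are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder with a phonetic (CTC) branch and a speaker branch."""
    def __init__(self, feat_dim=40, hidden=256, n_phones=50, n_speakers=1000):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.phone_head = nn.Linear(hidden, n_phones + 1)   # +1 for the CTC blank
        self.spk_head = nn.Linear(hidden, n_speakers)

    def forward(self, x):
        h, _ = self.encoder(x)                     # (batch, time, hidden)
        phone_logits = self.phone_head(h)          # per-frame phone scores
        spk_logits = self.spk_head(h.mean(dim=1))  # time-pooled speaker scores
        return phone_logits, spk_logits

ctc_loss = nn.CTCLoss(blank=0)
spk_loss = nn.CrossEntropyLoss()

def joint_loss(phone_logits, spk_logits, phone_targets,
               input_lens, target_lens, spk_labels, w=0.5):
    """Weighted sum of the CTC loss and the speaker classification loss."""
    log_probs = phone_logits.log_softmax(-1).transpose(0, 1)  # CTC wants (T, N, C)
    return (w * ctc_loss(log_probs, phone_targets, input_lens, target_lens)
            + (1 - w) * spk_loss(spk_logits, spk_labels))
```

    Here the shared LSTM encoder plays the role of the jointly learnt representation, with the speaker branch pooling over time to obtain an utterance-level embedding.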

    IONS (ANURADHA): Ionization states of low energy cosmic rays

    IONS (ANURADHA), the experimental payload designed specifically to determine the ionization states, flux, composition, energy spectra and arrival directions of low energy (10 to 100 MeV/amu) anomalous cosmic ray ions from helium to iron in near-Earth space, had a highly successful flight and operation on the Spacelab-3 mission. The experiment combines the accuracy of a highly sensitive CR-39 nuclear track detector with active components included in the payload to achieve the experimental objectives. Post-flight analysis of detector calibration pieces placed within the payload indicated no measurable changes in detector response due to exposure in the Spacelab environment. Nuclear tracks produced by alpha-particles, oxygen-group ions and Fe ions in low energy anomalous cosmic rays were identified. It is calculated that the main detector has recorded high-quality events of about 10,000 alpha-particles and a similar number of oxygen-group and heavier ions of low energy cosmic rays.

    Combining Prosodic, Lexical and Cepstral Systems for Deceptive Speech Detection

    We report on machine learning experiments to distinguish deceptive from non-deceptive speech in the Columbia-SRI-Colorado (CSC) corpus. Specifically, we propose a system combination approach using different models and features for deception detection. Scores from an SVM system based on prosodic/lexical features are combined with scores from a Gaussian mixture model system based on acoustic features, resulting in improved accuracy over the individual systems. Finally, we compare results from the prosodic-only SVM system using features derived either from recognized words or from human transcriptions.
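    A minimal sketch of score-level combination of this kind, assuming per-trial scores from the two subsystems are already available; the normalisation and the fusion weight are assumptions, not the reported configuration:

```python
import numpy as np

def zscore(scores):
    """Normalise one system's scores so the fusion weight is meaningful."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def fuse(svm_scores, gmm_scores, w=0.5):
    """Weighted sum of the normalised prosodic/lexical SVM scores
    and the acoustic GMM scores."""
    return w * zscore(svm_scores) + (1 - w) * zscore(gmm_scores)

# Hypothetical per-trial scores from the two deception-detection subsystems.
combined = fuse([1.2, -0.3, 0.8], [0.4, -1.1, 0.9])
decisions = combined > 0.0  # the threshold would be tuned on development data
```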

    Combination strategies for a factor analysis phone-conditioned speaker verification system

    This work aims to take advantage of recent developments in joint factor analysis (JFA) in the context of a phonetically conditioned GMM speaker verification system. Previous work has shown performance advantages through phonetic conditioning, but this has not been shown to date within the JFA framework. Our focus is particularly on strategies for combining the phone-conditioned systems. We show that classic score-level fusion is suboptimal when using multiple GMM systems. We investigate several combination strategies in the model space, and demonstrate improvement over score-level combination as well as over a non-phonetic baseline system. This work was conducted during the 2008 CLSP Workshop at Johns Hopkins University.
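    To make the contrast concrete, a hedged sketch of score-level fusion next to one simple combination that operates before trial scores are formed (pooling frame-level log-likelihood ratios across the phone-conditioned models); the interfaces and weights are assumptions and this is a simplification, not the authors' JFA-based model-space strategies:

```python
import numpy as np

def score_level(phone_scores, weights=None):
    """Score-level fusion: combine the per-phone-class verification
    scores of one trial into a single trial score."""
    s = np.asarray(phone_scores, dtype=float)
    w = np.ones_like(s) / len(s) if weights is None else np.asarray(weights)
    return float(np.dot(w, s))

def pooled_frame_level(frame_llrs_by_phone):
    """Pre-score combination: pool the frame log-likelihood ratios produced
    by each phone-conditioned model over its aligned frames, then average."""
    pooled = np.concatenate([np.asarray(llrs, dtype=float)
                             for llrs in frame_llrs_by_phone])
    return float(pooled.mean())

# Hypothetical trial with three phone-conditioned subsystems.
print(score_level([0.7, 1.1, -0.2]))
print(pooled_frame_level([[0.5, 0.9], [1.0, 1.3, 0.8], [-0.4]]))
```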