713 research outputs found

    Stimulated training for automatic speech recognition and keyword search in limited resource conditions

    © 2017 IEEE. Training neural network acoustic models on limited quantities of data is a challenging task. A number of techniques have been proposed to improve generalisation. This paper investigates one such technique called stimulated training. It enables standard criteria such as cross-entropy to enforce spatial constraints on activations originating from different units. Having different regions active depending on the input may help the network discriminate better and, as a consequence, yield lower error rates. This paper investigates stimulated training for automatic speech recognition in a number of languages representing different families, alphabets, phone sets and vocabulary sizes. In particular, it looks at ensembles of stimulated networks to ensure that the improved generalisation withstands system combination effects. In order to assess stimulated training beyond 1-best transcription accuracy, this paper uses keyword search as a proxy for assessing lattice quality. Experiments are conducted on IARPA Babel program languages, including the surprise language of the OpenKWS 2016 competition.
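
    The abstract describes the method only at a high level; as a rough, hedged sketch, the snippet below shows one way a stimulated-training-style spatial penalty could be combined with cross-entropy in PyTorch. The grid layout, the Gaussian class-dependent targets and the 0.1 weighting are illustrative assumptions, not the authors' recipe.

        # Sketch of a stimulated-training-style penalty (assumed grid layout,
        # Gaussian class-dependent targets and weighting; not the paper's exact method).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        GRID = 16                    # hidden units arranged on a 16x16 grid (assumption)
        NUM_CLASSES = 10             # placeholder number of targets

        class StimulatedMLP(nn.Module):
            def __init__(self, input_dim=40):
                super().__init__()
                self.fc1 = nn.Linear(input_dim, GRID * GRID)
                self.out = nn.Linear(GRID * GRID, NUM_CLASSES)

            def forward(self, x):
                h = torch.sigmoid(self.fc1(x))     # activations to be "stimulated"
                return self.out(h), h

        def class_target_pattern(labels):
            # Gaussian bump on the grid whose centre depends on the class label.
            ys, xs = torch.meshgrid(torch.arange(GRID), torch.arange(GRID), indexing="ij")
            coords = torch.stack([ys, xs], dim=-1).float()                  # (GRID, GRID, 2)
            centres = torch.stack([labels // 4, labels % 4], dim=-1).float() * (GRID / 4)
            d2 = ((coords[None] - centres[:, None, None]) ** 2).sum(-1)     # (B, GRID, GRID)
            return torch.exp(-d2 / (2 * 3.0 ** 2)).reshape(labels.size(0), -1)

        model = StimulatedMLP()
        x = torch.randn(32, 40)                    # dummy acoustic features
        y = torch.randint(0, NUM_CLASSES, (32,))   # dummy state targets

        logits, acts = model(x)
        ce = F.cross_entropy(logits, y)
        stim = F.mse_loss(acts, class_target_pattern(y))   # spatial activation constraint
        (ce + 0.1 * stim).backward()                       # 0.1 is an assumed weight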

    Improving Interpretability and Regularization in Deep Learning

    Deep learning approaches yield state-of-the-art performance in a range of tasks, including automatic speech recognition. However, the highly distributed representation in a deep neural network (DNN) or other network variations is difficult to analyse, making further parameter interpretation and regularisation challenging. This paper presents a regularisation scheme acting on the activation function output to improve network interpretability and regularisation. The proposed approach, referred to as activation regularisation, encourages activation function outputs to satisfy a target pattern. By defining appropriate target patterns, different learning concepts can be imposed on the network. This method can aid network interpretability and also has the potential to reduce over-fitting. The scheme is evaluated on several continuous speech recognition tasks: the Wall Street Journal continuous speech recognition task, eight conversational telephone speech tasks from the IARPA Babel program and a U.S. English broadcast news task. On all the tasks, activation regularisation achieved consistent performance gains over the standard DNN baselines.
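
    As a generic illustration of the idea (not the paper's exact formulation), the fragment below adds an L2 penalty that pulls activation outputs towards a chosen target pattern; swapping in a different target (here a sparsity-style all-zeros pattern) is how a different learning concept would be imposed. The shapes and the 0.05 weight are assumptions.

        import torch
        import torch.nn.functional as F

        def activation_regulariser(activations, target_pattern):
            # L2 penalty pulling activation outputs towards a target pattern.
            return ((activations - target_pattern) ** 2).mean()

        # Dummy layer output and task loss, only to keep the example self-contained.
        logits = torch.randn(32, 10, requires_grad=True)
        labels = torch.randint(0, 10, (32,))
        acts = torch.sigmoid(torch.randn(32, 256, requires_grad=True))

        # A sparsity-style target: choosing a different target pattern imposes a
        # different learning concept on the network (0.05 is an assumed weight).
        sparse_target = torch.zeros_like(acts)
        loss = F.cross_entropy(logits, labels) + 0.05 * activation_regulariser(acts, sparse_target)
        loss.backward()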

    Low Energy States of $^{81}_{31}$Ga$_{50}$: Elements on the Doubly-Magic Nature of $^{78}$Ni

    Excited levels in $^{81}_{31}$Ga$_{50}$ were identified for the first time; they are fed in the $\beta$-decay of its mother nucleus $^{81}$Zn, produced in the fission of $^{\mathrm{nat}}$U using the ISOL technique. We show that the structure of this nucleus is consistent with that of the less exotic proton-deficient N=50 isotones, within the assumption of strong proton Z=28 and neutron N=50 effective shell effects.
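
    For orientation only, the decay feeding the observed levels can be written in standard notation (not reproduced from the paper):

        ^{81}_{30}\mathrm{Zn}_{51} \;\longrightarrow\; ^{81}_{31}\mathrm{Ga}_{50} + e^{-} + \bar{\nu}_{e}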

    First Description of KPC-2-Producing Klebsiella oxytoca in Brazil

    The present work reports the detection of the first case of nosocomial Klebsiella oxytoca producing the class A carbapenemase KPC-2 in Brazil. The isolate KPN106 carried a 65-kb IncW-type plasmid that harbors the bla(KPC) gene and Tn4401b. Moreover, we detected the presence of a class 1 integron containing a new allele, arr-8, followed by a 5'-truncated dhfrIIIc gene. In view of these recent results, we emphasize the high variability of the bacterial and genetic hosts of this resistance determinant.

    Successive loadings of reactant in the hydrogen generation by hydrolysis of sodium borohydride in batch reactors

    In this paper, for the first time, an experimental investigation is presented of five successive loadings of a reactant alkaline solution of sodium borohydride (NaBH4) for hydrogen generation, using an improved nickel-based powder catalyst under uncontrolled ambient conditions. The experiments were performed in two batch reactors with internal volumes of 0.646 l and 0.369 l. The compressed hydrogen generated, at pressures below the critical pressure of hydrogen, highlights the importance of considering solubility effects during the reaction, which lead to storage of hydrogen in the liquid phase inside the reactor. The present work suggests that the sodium metaborate by-product formed by the alkaline hydrolysis of NaBH4, in a closed pressure vessel without temperature control, is NaBO2·xH2O with x ≥ 2. The data obtained in this work lend credence to x ≈ 2, as discussed on the basis of the XRD results, and this calls for increased caution in the definition of the hydrolysis reaction of NaBH4 at temperatures up to 333 K and pressures up to 0.13 MPa.
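
    To make the point about the hydrated by-product concrete, the overall alkaline hydrolysis discussed above can be written as the following reaction, with x ≥ 2 suggested by this work and x ≈ 2 favoured by the XRD data:

        \mathrm{NaBH_4} + (2 + x)\,\mathrm{H_2O} \longrightarrow \mathrm{NaBO_2}\cdot x\,\mathrm{H_2O} + 4\,\mathrm{H_2}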
