13 research outputs found

    Multitask Learning of Context-Dependent Targets in Deep Neural Network Acoustic Models

    Get PDF

    Context Aware Sensor Collaboration for Intelligent Wireless Communications: An AI Approach to Moving Sensor Management

    Get PDF
    Collaborative sensor data services are an emerging technology that benefits applications in robotics, healthcare, industry, and the military. Sensor collaboration helps address technical difficulties in the verification and validation of sensor data and reduces wireless sensor data transmission. However, typical approaches to sensor collaboration are less than satisfactory, in part because sensors are calibrated once before deployment with fixed settings, so their calibrations do not adapt to dynamic changes in the environment, and in part because their collaborations do not account for the dynamic movement of sensors and cope poorly with the abrupt presence or absence of sensors. This paper proposes a two-tier deep learning technique that enables sensor devices to be adaptive and moving sensors to be collaborative. The contribution of this paper is the intelligent identification of environment changes and the intelligent rearrangement of the wireless sensor network.
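
    The abstract does not spell out the two-tier architecture, so the sketch below is only an illustrative guess at its shape, written in PyTorch: a per-device "tier 1" model that flags environment changes from local readings, and a network-level "tier 2" model that re-scores which sensors should collaborate as devices move, appear, or drop out. All class names, feature choices, and dimensions are hypothetical, not the paper's design.

    import torch
    import torch.nn as nn

    # Tier 1 (per device): a small network that flags an environment change
    # from a window of the sensor's own readings.
    class ChangeDetector(nn.Module):
        def __init__(self, window=32, hidden=16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(window, hidden), nn.ReLU(),
                nn.Linear(hidden, 1), nn.Sigmoid())   # P(environment changed)

        def forward(self, readings):                  # readings: (sensors, window)
            return self.net(readings)

    # Tier 2 (network level): scores how strongly each sensor should take part
    # in the collaboration, given every sensor's position and its tier-1 output,
    # so the network can be rearranged when sensors move or disappear.
    class RearrangementScorer(nn.Module):
        def __init__(self, n_sensors=8, feat=3):      # feat = (x, y, change prob)
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_sensors * feat, 32), nn.ReLU(),
                nn.Linear(32, n_sensors))

        def forward(self, state):                     # state: (1, n_sensors * feat)
            return torch.softmax(self.net(state), dim=-1)

    # Dummy end-to-end pass; both tiers would be trained on real sensor logs.
    readings = torch.randn(8, 32)                     # 8 sensors, 32-sample windows
    change_prob = ChangeDetector()(readings)          # (8, 1)
    positions = torch.rand(8, 2)                      # hypothetical (x, y) locations
    state = torch.cat([positions, change_prob], dim=1).reshape(1, -1)
    weights = RearrangementScorer()(state)            # collaboration weight per sensor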

    Mispronunciation Detection and Diagnosis in Mandarin-Accented English Speech

    Get PDF
    This work presents the development, implementation, and evaluation of a Mispronunciation Detection and Diagnosis (MDD) system, with application to pronunciation evaluation of Mandarin-accented English speech. A comprehensive detection and diagnosis of errors in the Electromagnetic Articulography corpus of Mandarin-Accented English (EMA-MAE) was performed using the expert phonetic transcripts and an Automatic Speech Recognition (ASR) system. Articulatory features derived from the parallel kinematic data available in the EMA-MAE corpus were used to identify the most significant articulatory error patterns seen in L2 speakers during common mispronunciations. Using both acoustic and articulatory information, an ASR-based MDD system was built and evaluated across different feature combinations and Deep Neural Network (DNN) architectures. The MDD system captured mispronunciation errors with a detection accuracy of 82.4%, a diagnostic accuracy of 75.8%, and a false rejection rate of 17.2%. The results demonstrate the advantage of using articulatory features both in revealing the significant contributors to mispronunciation and in improving the performance of MDD systems.
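
    The abstract does not give the actual DNN configurations, so the following is only a hedged sketch, in PyTorch, of the general shape of such a system: a feed-forward network over concatenated acoustic and articulatory feature vectors, plus the detection accuracy and false rejection rate computed from the resulting binary decisions. Dimensions, layer sizes, and names are assumptions, not the authors' settings.

    import torch
    import torch.nn as nn

    # Hypothetical feed-forward classifier: a concatenated acoustic and
    # articulatory feature vector in, a per-phone correct/mispronounced decision out.
    class MddDnn(nn.Module):
        def __init__(self, acoustic_dim=40, artic_dim=12, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(acoustic_dim + artic_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 2))                 # 0 = correct, 1 = mispronounced

        def forward(self, acoustic, articulatory):
            return self.net(torch.cat([acoustic, articulatory], dim=-1))

    def detection_metrics(pred, truth):
        """Detection accuracy and false rejection rate from binary decisions."""
        tp = ((pred == 1) & (truth == 1)).sum()       # mispronunciations caught
        tn = ((pred == 0) & (truth == 0)).sum()       # correct phones accepted
        fr = ((pred == 1) & (truth == 0)).sum()       # correct phones wrongly flagged
        accuracy = (tp + tn) / len(truth)
        false_rejection = fr / (truth == 0).sum().clamp(min=1)
        return float(accuracy), float(false_rejection)

    # Dummy usage on random features; real inputs would come from the EMA-MAE corpus.
    model = MddDnn()
    logits = model(torch.randn(100, 40), torch.randn(100, 12))
    pred = logits.argmax(dim=-1)
    truth = torch.randint(0, 2, (100,))
    print(detection_metrics(pred, truth))

    A full MDD system would also identify which phone was actually produced (the diagnosis step reported as 75.8% accurate above); this sketch covers only the detection decision.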

    14th SC@RUG 2017 proceedings 2016-2017

    Get PDF
