26 research outputs found

    Using hand gestures to control mobile spoken dialogue systems

    Speech and hand gestures offer the most natural modalities for everyday human-to-human interaction. The availability of diverse spoken dialogue applications and the proliferation of accelerometers in consumer electronics allow the introduction of new interaction paradigms based on speech and gestures. Little attention has been paid, however, to controlling spoken dialogue systems (SDS) through gestures. Situationally induced impairments, as well as actual disabilities, are key motivations for this type of interaction. In this paper, six concise and intuitively meaningful gestures are proposed that can be used to trigger commands in any SDS. Using different machine learning techniques, a classification error of less than 5% is achieved for the gesture patterns, and the proposed set of gestures is compared to gestures proposed by users. An examination of the social acceptability of this interaction scheme shows high levels of acceptance for public use. An experiment comparing a button-enabled and a gesture-enabled interface shows that the latter imposes little additional mental and physical effort. Finally, results are reported after recruiting a male subject with spastic cerebral palsy, a blind female user, and an elderly female user.
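    The gesture classification step lends itself to a brief illustration. The sketch below is not the paper's pipeline: it assumes windowed 3-axis accelerometer recordings, a hand-rolled statistical feature set, and an off-the-shelf scikit-learn SVM, with cross-validation as a rough check against a sub-5% error target.

```python
# Minimal sketch of accelerometer-based gesture classification.
# The feature set, windowing, and classifier choice are illustrative
# assumptions, not the method described in the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def extract_features(window):
    """window: (n_samples, 3) array of x/y/z accelerometer readings."""
    return np.concatenate([
        window.mean(axis=0),                       # mean per axis
        window.std(axis=0),                        # variability per axis
        window.max(axis=0) - window.min(axis=0),   # range per axis
    ])

def train_gesture_classifier(windows, labels):
    """windows: list of (n_samples, 3) arrays; labels: one of six gesture ids."""
    X = np.vstack([extract_features(w) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    # Cross-validated accuracy gives a rough estimate of the classification error.
    scores = cross_val_score(clf, X, labels, cv=5)
    clf.fit(X, labels)
    return clf, 1.0 - scores.mean()   # fitted model and estimated error rate
```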

    An Open Platform That Allows Non-Expert Users to Build and Deploy Speech-Enabled Online CALL Courses (demo description)

    We demonstrate Open CALL-SLT, a framework that allows non-experts to design, implement, and deploy online speech-enabled CALL courses. The demo accompanies two long papers [1, 2], also appearing at the SLaTE 2015 workshop, which describe the platform in detail.

    Extracting Sentence Simplification Pairs from French Comparable Corpora Using a Two-Step Filtering Method

    Automatic Text Simplification (ATS) aims to simplify texts by reducing their linguistic complexity while retaining their meaning. Although it is an interesting task from both a societal and a computational perspective, the lack of monolingual parallel data hinders the development of ATS models, especially in languages less resource-rich than English. For these reasons, this paper investigates how to create a general-language parallel simplification dataset for French, using a method to extract complex-simple sentence pairs from comparable corpora such as Wikipedia and its simplified counterpart, Vikidia. Using a two-step automatic filtering process, we sequentially address the two primary conditions that must be satisfied for a simplified sentence to be considered valid: i) preservation of the original meaning, and ii) simplicity gain with respect to the source text. With this approach, we provide a dataset of parallel sentence simplifications (WiViCo) that can later be used for training French sequence-to-sequence, general-language ATS models.
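    As an illustration of the two-step filter, the sketch below assumes multilingual sentence embeddings for the meaning-preservation check and a crude length-ratio proxy for simplicity gain; the embedding model, thresholds, and proxy are assumptions for illustration, not the criteria actually used to build WiViCo.

```python
# Hedged sketch of a two-step complex/simple sentence-pair filter.
# Model choice, thresholds, and the simplicity proxy are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("distiluse-base-multilingual-cased-v2")

def meaning_preserved(complex_sent, simple_sent, threshold=0.75):
    """Step 1: keep pairs whose embeddings are close enough in meaning."""
    emb = model.encode([complex_sent, simple_sent], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item() >= threshold

def simplicity_gained(complex_sent, simple_sent, max_ratio=0.9):
    """Step 2: crude proxy -- the candidate should be noticeably shorter."""
    return len(simple_sent.split()) <= max_ratio * len(complex_sent.split())

def filter_pairs(candidate_pairs):
    """candidate_pairs: iterable of (wikipedia_sentence, vikidia_sentence)."""
    return [
        (c, s) for c, s in candidate_pairs
        if meaning_preserved(c, s) and simplicity_gained(c, s)
    ]
```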

    Examining the Effects of Rephrasing User Input on Two Mobile Spoken Language Systems

    In this work we investigate the effects of rephrasing the user's input in two mobile spoken dialogue systems. We argue that for certain kinds of applications it is important to confirm the system's understanding before producing the output. In this way the user can avoid misconceptions and problems in the dialogue flow, and their confidence in the system can increase. Nevertheless, this has an impact on the interaction: mental workload increases, and the user's behavior may adapt to the system's coverage. We focus on two applications that implement the notion of rephrasing the user's input in different ways. Our study involved 14 subjects who used both systems on a Nokia N810 Internet Tablet.
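    The confirm-before-executing idea can be sketched as a single dialogue turn. In the sketch below, recognize, rephrase, synthesize, and execute are hypothetical placeholders for ASR, paraphrase generation, TTS, and the back-end action; the two systems studied in the paper realize the rephrasing differently.

```python
# Minimal sketch of an explicit-confirmation turn in a spoken dialogue system.
# All callables are hypothetical placeholders, not APIs from the paper.
def confirmed_turn(recognize, rephrase, synthesize, execute, max_attempts=3):
    for _ in range(max_attempts):
        user_input = recognize()                    # ASR result for the user's turn
        paraphrase = rephrase(user_input)           # system's restatement of what it understood
        synthesize(f"Did you mean: {paraphrase}?")  # read the rephrasing back to the user
        if recognize().strip().lower() in {"yes", "yeah", "correct"}:
            return execute(user_input)              # act only once understanding is confirmed
        synthesize("Sorry, please say that again.")
    synthesize("Let's start over.")
    return None
```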