
    Programming Language Techniques for Natural Language Applications

    It is easy to imagine machines that can communicate in natural language. Constructing such machines is more difficult. The aim of this thesis is to demonstrate how declarative grammar formalisms that distinguish between abstract and concrete syntax make it easier to develop natural language applications. We describe how the type-theoretical grammar formalism Grammatical Framework (GF) can be used as a high-level language for natural language applications. By taking advantage of techniques from the field of programming language implementation, we can use GF grammars to perform portable and efficient parsing and linearization, generate speech recognition language models, implement multimodal fusion and fission, generate support code for abstract syntax transformations, generate dialogue managers, and implement speech translators and web-based syntax-aware editors. By generating application components from a declarative grammar, we can reduce duplicated work, ensure consistency, make it easier to build multilingual systems, improve linguistic quality, enable re-use across system domains, and make systems more portable.
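    The following Python sketch only illustrates the abstract/concrete syntax split mentioned in the abstract; it is not GF code, and the example tree, lexicon, and linearization functions are invented for illustration.

```python
# Minimal sketch of the abstract/concrete syntax split (illustrative only,
# not GF syntax): one abstract tree, several concrete linearizations.
from dataclasses import dataclass

# Abstract syntax: a language-neutral tree, e.g. "the light is on".
@dataclass
class Pred:
    device: str
    state: str

# Concrete syntax: one linearization function per language.
def lin_english(t: Pred) -> str:
    return f"the {t.device} is {t.state}"

def lin_swedish(t: Pred) -> str:
    # Hypothetical lexicon; real GF grammars also handle inflection and agreement.
    lexicon = {"light": "lampan", "on": "tänd", "off": "släckt"}
    return f"{lexicon[t.device]} är {lexicon[t.state]}"

if __name__ == "__main__":
    tree = Pred(device="light", state="on")
    print(lin_english(tree))   # the light is on
    print(lin_swedish(tree))   # lampan är tänd
```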

    From Metamodeling to Automatic Generation of Multimodal Interfaces for Ambient Computing

    This paper presents our approach to designing multichannel and multimodal applications as part of ambient intelligence. Computers are increasingly present in our environments, whether at work (computers, photocopiers), at home (video player, hi-fi, microwave), in our cars, etc. They are becoming more adaptable and context-sensitive (e.g., the car radio that lowers the volume when the mobile phone rings). Unfortunately, while they could provide smart services by combining their capabilities, they are not yet designed to communicate with one another. Our results, based mainly on the use of a software bus and a workflow, show that different devices (such as a Wiimote, a multi-touch screen, a telephone, etc.) can be coordinated in order to activate real objects (such as a lamp, fan, robot, webcam, etc.). A smart digital home case study illustrates how our approach can be used to design parts of the ambient system with ease and to redesign them at runtime.
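    As a rough sketch of the "software bus" idea described in this abstract (the topic names, event format, and class are invented for illustration, not taken from the paper), a minimal publish/subscribe bus in Python might look like this:

```python
# Hypothetical software bus: input devices publish events, actuators subscribe.
from collections import defaultdict
from typing import Callable

class SoftwareBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

bus = SoftwareBus()

# An actuator (e.g. a lamp) reacts to events coming from any input device.
bus.subscribe("lamp/toggle", lambda e: print(f"lamp switched by {e['source']}"))

# Input devices (Wiimote, touch screen, phone) simply publish on the bus.
bus.publish("lamp/toggle", {"source": "wiimote"})
bus.publish("lamp/toggle", {"source": "touchscreen"})
```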

    Flexible context aware interface for ambient assisted living

    We present a Multi-Agent System that provides a cared-for person, the subject, with assistance and support during the day through an Ambient Assisted Living Flexible Interface (AALFI), complementing the night-time assistance offered by NOCTURNAL with feedback assistance. The system is tailored to the subject's requirements profile and takes into account factors associated with the time of day; it thereby attempts to overcome shortcomings of current Ambient Assisted Living systems. The subject is provided with feedback that highlights important criteria such as quality of sleep during the night and possible breaches of safety during the day, which may help the subject carry out corrective measures and/or seek further assistance. AALFI provides tailored interaction that is either visual or auditory so that the subject can understand the interactions, and this process is driven by the Multi-Agent System. User feedback gathered from a relevant user group through a workshop validated the ideas underpinning the research, the Multi-Agent System and the adaptable interface.
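    A minimal sketch of the kind of modality choice described above (visual versus auditory feedback based on a subject profile and the time of day); the profile fields, thresholds, and function are hypothetical, not AALFI's actual logic:

```python
# Illustrative only: pick a feedback modality from a subject profile and the time of day.
from datetime import time

def choose_modality(profile: dict, now: time) -> str:
    """Return 'auditory' or 'visual' for a feedback message."""
    night = now >= time(22, 0) or now < time(7, 0)
    if profile.get("visually_impaired"):
        return "auditory"
    if night and profile.get("prefers_quiet_at_night", True):
        return "visual"          # avoid waking the subject with sound
    return profile.get("preferred_modality", "visual")

# Hypothetical subject profile.
subject = {"visually_impaired": False, "preferred_modality": "auditory"}
print(choose_modality(subject, time(23, 30)))  # visual
print(choose_modality(subject, time(10, 0)))   # auditory
```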

    Framework and Implementation for Dialog Based Arabic Speech Recognition


    Proceedings of the 2nd EICS Workshop on Engineering Interactive Computer Systems with SCXML


    A multimodal emotion detection system during human-robot interaction

    In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human-robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modalities are used to detect emotions: voice analysis and facial expression analysis. In order to analyze the user's voice, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), written in the ChucK language. For emotion detection in facial expressions, another system, Gender and Emotion Facial Analysis (GEFA), has also been developed. This latter system integrates two third-party solutions: the Sophisticated High-speed Object Recognition Engine (SHORE) and the Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied to combine the information given by both of them. The result of this rule, the detected emotion, is passed to the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so that it can adapt its strategy and achieve a greater degree of satisfaction during the human-robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the ROS (Robot Operating System) robotic control platform. Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving on the results given by the two information channels (audio and visual) separately.

    The authors gratefully acknowledge the funds provided by the Spanish MICINN (Ministry of Science and Innovation) through the project "Aplicaciones de los robots sociales", DPI2011-26980, from the Spanish Ministry of Economy and Competitiveness. Moreover, the research leading to these results has received funding from the RoboCity2030-II-CM project (S2009/DPI-1559), funded by Programas de Actividades I+D en la Comunidad de Madrid and co-funded by Structural Funds of the EU.
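    The paper's final decision rule was set experimentally and is not given in the abstract; the sketch below is only a generic confidence-weighted fusion of two per-emotion score dictionaries, with all scores, weights, and names invented for illustration.

```python
# Generic sketch of combining two emotion classifiers' outputs; the paper's
# actual decision rule was tuned experimentally and may differ.
def fuse_emotions(voice: dict[str, float], face: dict[str, float],
                  w_voice: float = 0.4, w_face: float = 0.6) -> str:
    """Confidence-weighted fusion of per-emotion scores from both channels."""
    emotions = set(voice) | set(face)
    scores = {e: w_voice * voice.get(e, 0.0) + w_face * face.get(e, 0.0)
              for e in emotions}
    return max(scores, key=scores.get)

# Hypothetical outputs of the audio (GEVA-like) and visual (GEFA-like) channels.
voice_scores = {"happy": 0.2, "neutral": 0.5, "sad": 0.3}
face_scores = {"happy": 0.7, "neutral": 0.2, "sad": 0.1}
print(fuse_emotions(voice_scores, face_scores))  # happy
```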