
    Bridging the Gap between a Behavioural Formal Description Technique and a User Interface Description Language: Enhancing ICO with a Graphical User Interface Markup Language

    In recent years, User Interface Description Languages (UIDLs) have emerged as a suitable solution for developing interactive systems. In order to implement reliable and efficient applications, we propose to employ a formal description technique called ICO (Interactive Cooperative Object), which has been developed to cope with the complex behaviours of interactive systems, including event-based and multimodal interactions. So far, ICO is able to describe most parts of an interactive system, from functional core concerns to fine-grain interaction techniques, but even though it addresses parts of the rendering, it still lacks the means to describe the effective rendering of such an interactive system. This paper presents a solution to overcome this gap using markup languages. A first technique is based on JavaFX, a Java technology, and a second technique is based on the emerging UsiXML language for describing user interface components for multi-target platforms. The proposed approach offers a bridge between markup-language-based descriptions of user interface components and a robust technique for describing behaviour using ICO modelling. Furthermore, this paper highlights how it is possible to take advantage of both behavioural and markup-language description techniques to propose a new model-based approach for prototyping interactive systems. The proposed approach is fully illustrated by a case study using an interactive application embedded in interactive aircraft cockpits.
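    The bridge the abstract describes, in which markup defines the widgets while a separate behavioural model defines their reactions, can be sketched as follows. This is an illustrative Python sketch, not the paper's actual ICO or UsiXML tooling; the markup fragment, class names and event names are invented for the example.

```python
import xml.etree.ElementTree as ET

# Hypothetical UsiXML-like fragment: the markup describes *what* the
# widget is; the behaviour model below decides *how* it reacts to events.
MARKUP = """
<ui>
  <button id="engage" label="Engage autopilot"/>
</ui>
"""

class BehaviourModel:
    """Stand-in for an ICO-style behavioural description: it maps
    widget events to state changes, independently of rendering."""
    def __init__(self):
        self.state = "idle"
        self.handlers = {}

    def on(self, widget_id, event, action):
        self.handlers[(widget_id, event)] = action

    def dispatch(self, widget_id, event):
        action = self.handlers.get((widget_id, event))
        if action:
            self.state = action(self.state)

def bind(markup):
    """Bridge: walk the markup description and collect each widget's
    attributes into a dispatch table for the renderer."""
    return {node.get("id"): dict(node.attrib) for node in ET.fromstring(markup)}

model = BehaviourModel()
model.on("engage", "click", lambda s: "engaged" if s == "idle" else s)
widgets = bind(MARKUP)

model.dispatch("engage", "click")
print(widgets["engage"]["label"], "->", model.state)  # Engage autopilot -> engaged
```

    The point of the split is the one the paper argues for: the rendering description can be swapped (JavaFX, UsiXML) without touching the behavioural model.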

    Multimodal fusion for interaction systems

    Researchers in computer science and computer engineering devote a significant part of their efforts to communication and interaction between humans and machines. Indeed, with the advent of multimodal processing and real-time multimedia, the computer is no longer considered merely a calculating tool, but a machine for processing, communication, collection and control; a machine that accompanies, helps and supports many activities in daily life. A multimodal interface allows more flexible and natural interaction between human and machine, increasing the capacity of multimodal systems to better match human needs. In this type of interaction, a fusion engine is a fundamental component that interprets several communication sources, such as voice commands, gestures, stylus input, etc., making human-machine interaction richer and more efficient. Our research project will enable a better understanding of fusion and multimodal interaction through the construction of a fusion engine using Semantic Web technologies. The objective is to develop an expert system for multimodal human-machine interaction that will lead to the design of a monitoring tool for elderly people, in order to provide them with assistance and self-confidence, both at home and outside.
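    As a rough illustration of what a fusion engine does, the following Python sketch fuses a deictic speech command with a pointing gesture that falls inside a time window. It is a toy frame-based example; the event fields and window value are invented, and the project's Semantic Web machinery is not shown.

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str   # e.g. "speech" or "gesture"
    payload: str
    t: float        # timestamp in seconds

class FusionEngine:
    """Toy time-window fusion: a speech command carrying a deictic slot
    ("<there>") is completed by a gesture observed within the window."""
    def __init__(self, window=1.0):
        self.window = window
        self.pending = []

    def push(self, event):
        if event.modality == "gesture":
            self.pending.append(event)
            return None
        if event.modality == "speech" and "<there>" in event.payload:
            for g in self.pending:
                if abs(event.t - g.t) <= self.window:
                    # Fuse: resolve the deictic slot with the gesture target
                    return event.payload.replace("<there>", g.payload)
            return None
        return event.payload

engine = FusionEngine(window=1.0)
engine.push(Event("gesture", "kitchen", t=0.4))
command = engine.push(Event("speech", "turn on the light <there>", t=1.0))
print(command)  # turn on the light kitchen
```

    Real engines add confidence scores, ambiguity resolution and richer temporal relations, but the core idea of combining complementary modalities within a temporal constraint is the same.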

    User Adaptive and Context-Aware Smart Home Using Pervasive and Semantic Technologies


    Designing Embodied Interactive Software Agents for E-Learning: Principles, Components, and Roles

    Embodied interactive software agents are complex autonomous, adaptive, and social software systems with a digital embodiment that enables them to act on and react to other entities (users, objects, and other agents) in their environment through bodily actions, which include the use of verbal and non-verbal communicative behaviors in face-to-face interactions with the user. These agents have been developed for various roles in different application domains, in which they perform tasks that have been assigned to them by their developers or delegated to them by their users or by other agents. In computer-assisted learning, embodied interactive pedagogical software agents have the general task of promoting human learning by working with students (and other agents) in computer-based learning environments, among them e-learning platforms based on Internet technologies, such as the Virtual Linguistics Campus (www.linguistics-online.com). In these environments, pedagogical agents provide contextualized, qualified, personalized, and timely assistance, cooperation, instruction, motivation, and services for both individual learners and groups of learners. This thesis develops a comprehensive, multidisciplinary, and user-oriented view of the design of embodied interactive pedagogical software agents, which integrates theoretical and practical insights from various academic and other fields. The research intends to contribute to the scientific understanding of issues, methods, theories, and technologies that are involved in the design, implementation, and evaluation of embodied interactive software agents for different roles in e-learning and other areas.
For developers, the thesis provides sixteen basic principles (Added Value, Perceptible Qualities, Balanced Design, Coherence, Consistency, Completeness, Comprehensibility, Individuality, Variability, Communicative Ability, Modularity, Teamwork, Participatory Design, Role Awareness, Cultural Awareness, and Relationship Building) plus a large number of specific guidelines for the design of embodied interactive software agents and their components. Furthermore, it offers critical reviews of theories, concepts, approaches, and technologies from different areas and disciplines that are relevant to agent design. Finally, it discusses three pedagogical agent roles (virtual native speaker, coach, and peer) in the scenario of the linguistic fieldwork classes on the Virtual Linguistics Campus and presents detailed considerations for the design of an agent for one of these roles (the virtual native speaker).

    Integrated multimodal interaction framework for virtual reality foot reflexology stress therapy

    Frameworks in interaction research have seen varying compositions from numerous researchers, and have been applied for either specific or general purposes in several domains. Previous studies have highlighted virtual reality (VR) in stress therapy and revealed the potential of foot reflexology therapy using VR technology. However, the interaction framework for foot reflexology through virtual reality requires further investigation. This study presents the design and evaluation of an integrated multimodal interaction framework for virtual reality foot reflexology stress therapy. The components of the proposed framework were identified from the literature review and previous research, and include design principles, technology, structural components, multimodal interaction architecture, and segment composition. The proposed framework was then validated using expert reviews. This was followed by prototype development, which explored the effectiveness of the virtual reality foot reflexology therapy application on relaxation and stress relief using the Smith Relaxation States Inventory (SRSI-3). A pre- and post-test quasi-experiment was employed for the evaluation. The findings revealed that Virtual Reality Foot Reflexology Stress Therapy (VR-FRST) effectively evokes the relaxation state categories of transcendence, mindfulness, positive energy, and basic relaxation, and also reduces users' stress state. This research provides a concise, organized, practical and validated integrated multimodal interaction framework for the design and development of foot reflexology therapy in a virtual environment. It contributes to the field of interaction design for virtual reality developers and of complementary therapy for alternative medicine practitioners.
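    The pre- and post-test design can be illustrated with a paired analysis sketch in Python. The scores below are made up for illustration only and are not the study's SRSI-3 data.

```python
from statistics import mean, stdev
from math import sqrt

# Illustrative (invented) pre/post stress scores for eight participants.
pre  = [28, 31, 25, 30, 27, 33, 29, 26]
post = [22, 25, 21, 26, 23, 27, 24, 22]

# Per-participant change; negative values mean stress was reduced.
diffs = [b - a for a, b in zip(pre, post)]
d_mean = mean(diffs)
d_sd = stdev(diffs)

# Paired t statistic: t = mean(d) / (sd(d) / sqrt(n))
t_stat = d_mean / (d_sd / sqrt(len(diffs)))
print(d_mean, round(t_stat, 2))
```

    A paired design like this controls for between-participant variability, which is why quasi-experiments with pre/post measurements analyse the differences rather than the raw group means.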

    Multimodal Content Delivery for Geo-services

    This thesis describes a body of work carried out over several research projects in the area of multimodal interaction for location-based services. Research in this area has progressed from using simulated mobile environments to demonstrate the visual modality, to the ubiquitous delivery of rich media using multimodal interfaces (geo-services). To deliver these services effectively, the research focused on innovative solutions to real-world problems in a number of disciplines, including geo-location, mobile spatial interaction, location-based services, rich media interfaces and auditory user interfaces. My original contributions to knowledge are made in the areas of multimodal interaction, underpinned by advances in geo-location technology and supported by the proliferation of mobile device technology into modern life. Accurate positioning is a known problem for location-based services; contributions in the area of mobile positioning demonstrate a hybrid positioning technology for mobile devices that uses terrestrial beacons to trilaterate position. Information overload is an active concern for location-based applications that struggle to manage large amounts of data; contributions in the area of egocentric visibility, which filter data based on field-of-view, demonstrate novel forms of multimodal input. One of the more pertinent characteristics of these applications is the delivery or output modality employed (auditory, visual or tactile). Further contributions are made in the area of multimodal content delivery, where multiple modalities are used to deliver information using graphical user interfaces, tactile interfaces and, more notably, auditory user interfaces. It is demonstrated how a combination of these interfaces can be used to synergistically deliver context-sensitive rich media to users, in a responsive way, based on usage scenarios that consider the affordance of the device, the geographical position and bearing of the device, and also the location of the device.
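    The beacon-based positioning mentioned above rests on trilateration: given ranges to three beacons at known positions, subtracting the circle equations pairwise yields a linear system for the unknown position. A minimal 2-D Python sketch, with beacon layout and ranges invented for the example:

```python
def trilaterate(b1, d1, b2, d2, b3, d3):
    """2-D trilateration from three beacon positions and measured ranges.
    Subtracting circle equation 1 from equations 2 and 3 linearises the
    problem into a 2x2 system, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero if the beacons are collinear
    x = (c1 * a22 - c2 * a12) / det
    y = (a11 * c2 - a21 * c1) / det
    return x, y

# Beacons at (0,0), (10,0), (0,10); ranges measured to a device at (3, 4)
pos = trilaterate((0, 0), 5.0, (10, 0), 65**0.5, (0, 10), 45**0.5)
print(pos)  # approximately (3.0, 4.0)
```

    Real range measurements are noisy, so practical systems use more than three beacons and a least-squares or filtering solution, but the geometry is the same.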

    Tools in and out of sight: an analysis informed by Cultural-Historical Activity Theory of audio-haptic activities involving people with visual impairments supported by technology

    The main purpose of this thesis is to present a Cultural-Historical Activity Theory (CHAT) based analysis of the activities conducted by and with visually impaired users supported by audio-haptic technology. The thesis covers several studies conducted in two projects. The studies evaluate the use of audio-haptic technologies to support and/or mediate the activities of people with visual impairment. The focus is on activities involving access to two-dimensional information, such as pictures or maps. People with visual impairments can use commercially available solutions to explore static information (raised-line maps and pictures, for example). Solutions for dynamic access, such as drawing a picture or using a map while moving around, are more scarce. Two distinct projects were initiated to remedy the scarcity of dynamic access solutions, each focusing on a separate activity. The first project, HaptiMap, focused on outdoor pedestrian navigation through audio feedback and gestures mediated by a GPS-equipped mobile phone. The second project, HIPP, focused on drawing and learning about 2D representations in a school setting with the help of haptic and audio feedback. In both cases, visual feedback was also present in the technology, enabling sighted people to take advantage of that modality too. The research questions addressed are: How can audio and haptic interaction mediate activities for people with visual impairment? Are there features of the programming that help or hinder this mediation? How can CHAT, and specifically the Activity Checklist, be used to shape the design process when designing audio-haptic technology together with persons with visual impairments? Results show the usefulness of the Activity Checklist as a tool in the design process, and provide practical application examples. A general conclusion emphasises the importance of modularity, standards, and libre software in rehabilitation technology, to support the development of the activities over time and to let the code evolve with them as a lifelong iterative development process. The research also provides specific design recommendations for the type of audio-haptic systems involved.

    KomBInoS - Model-driven development of multimodal dialogue interfaces for Smart Services

    This work is located in the context of the three research areas Smart Service World, Model-Driven Software Development, and Intelligent User Interfaces. The aim of the work was to develop a holistic approach for the efficient creation of multimodal dialogue interfaces for Smart Services. To achieve this goal, KomBInoS was developed as a comprehensive framework for the model-driven creation of such user interfaces. The framework consists of: (1) a metamodel architecture that allows both model-driven development and the composition of multimodal dialogue interfaces for Smart Services, (2) a methodical approach consisting of coordinated model transformations, possible composition steps and manual development activities, as well as (3) an integrated tool chain as an implementation of the method. Furthermore, a cloud-enabled runtime environment was developed for mobile use of the user interfaces created in this way. As a proof of concept, eight sample applications and demonstrators from five research projects are presented. In addition to the Smart Service World, KomBInoS was and is also used in the field of Industry 4.0.
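    The model-driven idea behind such a framework, mapping an abstract dialogue model to a concrete user interface through a transformation, can be sketched in a few lines of Python. The model format and widget names below are invented for illustration and are not KomBInoS's actual metamodels.

```python
# Abstract dialogue model: interaction steps, independent of any toolkit.
dialogue_model = {
    "task": "BookService",
    "steps": [
        {"type": "input",  "slot": "date",    "prompt": "When?"},
        {"type": "choice", "slot": "service", "options": ["cleaning", "repair"]},
        {"type": "confirm"},
    ],
}

def to_concrete_ui(model):
    """One model-to-text transformation step: each abstract interaction
    unit is mapped to a concrete widget specification."""
    widgets = []
    for step in model["steps"]:
        if step["type"] == "input":
            widgets.append(f'TextField(name="{step["slot"]}", label="{step["prompt"]}")')
        elif step["type"] == "choice":
            opts = ", ".join(f'"{o}"' for o in step["options"])
            widgets.append(f'Dropdown(name="{step["slot"]}", options=[{opts}])')
        elif step["type"] == "confirm":
            widgets.append('Button(label="Confirm")')
    return widgets

ui = to_concrete_ui(dialogue_model)
print("\n".join(ui))
```

    Chaining several such transformations, with manual refinement steps in between, is what a metamodel architecture of the kind described above coordinates; the same abstract model could be retargeted at a voice or graphical modality by swapping the final transformation.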