
    Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information but, through interaction with their users, can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of these different sources of information can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied to a variety of connected objects integrating HMI, a particular development is presented for a connected-car scenario in which drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes, an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.
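To illustrate the kind of data flow the abstract describes, the sketch below encodes a driver-generated observation as RDF triples using the W3C SOSA/SSN vocabulary that underpins the Semantic Sensor Web. This is only an illustrative sketch, not the paper's actual implementation; the `example.org` IRIs and the `observation_triples` helper are hypothetical placeholders.

```python
# Hedged sketch: serializing a human-generated observation as SOSA/SSN
# RDF triples in N-Triples syntax. IRIs under example.org are invented
# placeholders; SOSA terms (Observation, madeBySensor, hasSimpleResult,
# resultTime) are real W3C vocabulary.
from datetime import datetime, timezone

SOSA = "http://www.w3.org/ns/sosa/"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
XSD_DT = "http://www.w3.org/2001/XMLSchema#dateTime"

def observation_triples(obs_id: str, sensor_iri: str, result: str,
                        when: datetime) -> str:
    """Serialize one observation as N-Triples text."""
    obs = f"http://example.org/obs/{obs_id}"
    lines = [
        f'<{obs}> <{RDF}type> <{SOSA}Observation> .',
        f'<{obs}> <{SOSA}madeBySensor> <{sensor_iri}> .',
        f'<{obs}> <{SOSA}hasSimpleResult> "{result}" .',
        f'<{obs}> <{SOSA}resultTime> "{when.isoformat()}"^^<{XSD_DT}> .',
    ]
    return "\n".join(lines)

# A driver's spoken report, treated as a "human sensor" observation.
triples = observation_triples(
    "42", "http://example.org/car/hmi", "traffic jam ahead",
    datetime(2014, 5, 1, 12, 0, tzinfo=timezone.utc))
print(triples)
```

Serialized this way, a human observation becomes indistinguishable from a conventional sensor reading to any Semantic Sensor Web consumer, which is the core of the sharing argument.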

    Proceedings of the 1st EICS Workshop on Engineering Interactive Computer Systems with SCXML


    Proceedings of the 2nd EICS Workshop on Engineering Interactive Computer Systems with SCXML


    Improving Speech Interaction in Vehicles Using Context-Aware Information through A SCXML Framework

    Speech technologies can provide important benefits for the development of more usable and safe in-vehicle human-machine interactive systems (HMIs). However, mainly due to robustness issues, the use of spoken interaction can entail important distractions for the driver. In this challenging scenario, while speech technologies are evolving, further research is necessary to explore how they can be complemented with both other modalities (multimodality) and information from the increasing number of available sensors (context-awareness). The perceived quality of speech technologies can be significantly increased by implementing such policies, which simply try to make the best use of all the available resources; and the in-vehicle scenario is an excellent test-bed for this kind of initiative. In this contribution we propose an event-based HMI design framework that combines context modelling and multimodal interaction using a W3C XML language known as SCXML. SCXML provides a general process-control mechanism that is being considered by the W3C to improve both voice interaction (VoiceXML) and multimodal interaction (MMI). In our approach we try to anticipate and extend these initiatives, presenting a flexible SCXML-based approach for the design of a wide range of multimodal, context-aware in-vehicle HMI interfaces. The proposed framework for HMI design and specification has been implemented in an automotive OSGi service platform, and it is being used and tested in the Spanish research project MARTA for the development of several in-vehicle interactive applications.
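The event-based, context-aware adaptation described above can be sketched as a small SCXML fragment. This is a hypothetical illustration, not the paper's actual state machine; the state and event names (`normal_dialogue`, `context.driving_load.high`, etc.) are invented for the example.

```xml
<!-- Hedged sketch: an SCXML state machine that switches the HMI to a
     terse, voice-only dialogue style when the context model signals a
     high driving load. All state/event names here are illustrative. -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0"
       initial="normal_dialogue">
  <state id="normal_dialogue">
    <transition event="context.driving_load.high" target="minimal_dialogue"/>
  </state>
  <state id="minimal_dialogue">
    <onentry>
      <!-- Notify the rest of the HMI to suppress visual output -->
      <raise event="hmi.set_mode.voice_only"/>
    </onentry>
    <transition event="context.driving_load.low" target="normal_dialogue"/>
  </state>
</scxml>
```

Because both the context model and the interaction modalities communicate through events, this style of statechart can arbitrate between them without either side knowing about the other.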

    Modeling and Formal Verification of Smart Environments

    Smart Environments (SmE) are a growing combination of various computing frameworks (ubiquitous, pervasive, etc.), devices, control algorithms and a complex web of interactions. They are at the core of user facilitation in a number of industrial, domestic and public areas. Depending on their application areas, SmE may be critical in terms of correctness, reliability, safety and security. To achieve error-free and requirement-compliant implementations, these systems are designed using various modeling approaches, including ontologies and statecharts. This paper considers correctness, reliability, safety and security in the design process of SmE and their related components by proposing a design-time modeling and formal verification methodology. The proposed methodology covers various design features related to the modeling and formal verification of SmE (focusing on users, devices, the environment, control algorithms and their interactions) against a set of requirements through model checking. A realistic case study of a Bank Door Security Booth (BDSB) system is tested. The results show the successful verification of the properties related to the safety, security and desired reliable behavior of the BDSB.
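The kind of safety property verified for the BDSB case study can be illustrated with a tiny explicit-state reachability check. The transition model below is a simplified guess at a door-interlock booth, not the paper's actual model, and the exhaustive search stands in for a real model checker.

```python
# Hedged sketch: exhaustively exploring the reachable states of a
# simplified Bank Door Security Booth model to check the safety property
# "the outer and inner doors are never open at the same time".
# State: (outer_door_open, inner_door_open, person_in_booth)

def transitions(state):
    outer, inner, person = state
    succs = []
    # Interlock: a door may open only while the other door is closed.
    if not outer and not inner:
        succs.append((True, inner, person))   # open outer door
        succs.append((outer, True, person))   # open inner door
    if outer:
        succs.append((False, inner, True))    # person enters, outer closes
    if inner:
        succs.append((outer, False, False))   # person exits, inner closes
    return succs

def check_safety(initial=(False, False, False)):
    """Return True iff no reachable state has both doors open."""
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        if state[0] and state[1]:             # safety violation found
            return False
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

print(check_safety())  # True: the interlock model above is safe
```

A real model checker adds temporal-logic properties and counterexample traces on top of exactly this reachability core.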

    Welcome to EICS 2015


    Welcome to EICS 2016

    [Extract] The ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS) is a yearly international conference devoted to engineering usable and reliable interactive computing systems. Research presented at EICS revolves around methods, processes, techniques and tools that support specifying, designing, developing, deploying and verifying interactive systems. This 8th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS'16) took place in Brussels, Belgium (21-24 June 2016) – at the heart of Europe...

    Towards the integration of user interface prototyping and model-based development

    The main objective of this paper is to make a contribution to the automation of web application development, starting from prototypes of their graphical user interfaces. Due to the exponential increase in the use of internet-based services and applications, there is also an increasing demand for web designers and developers. At the same time, the proliferation of languages, frameworks and libraries illustrates the current immaturity of web development technologies. This state of affairs creates difficulties in the development and maintenance of web applications. In this paper, we argue that integrating concepts of model-based user interface development with the more traditional user-centred design approach can provide an answer to this situation. An approach is presented that allows designers to use prototyping tools, in this case Adobe XD, to design graphical interfaces and then automatically converts them to (Vue.js + Bootstrap) code, thus creating a first version of the implementation for further development. This is done through the interpretation of the SVG file that Adobe XD exports. FCT - Fundação para a Ciência e a Tecnologia (UIDB/50014/2020)
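The SVG-interpretation step can be sketched as follows. This is an illustrative simplification, not the paper's converter: the element mapping (`<text>` to a label, `<rect>` to a Bootstrap button) and the `svg_to_vue` function are invented for the example, and a real tool would use layout, naming conventions and grouping to infer widget types.

```python
# Hedged sketch: interpreting a (toy) exported SVG and emitting a
# first-cut Vue.js template with Bootstrap classes. The rect->button,
# text->label mapping is a deliberate oversimplification.
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def svg_to_vue(svg_source: str) -> str:
    """Walk the SVG tree and emit a skeleton Vue template."""
    root = ET.fromstring(svg_source)
    parts = ["<template>", "  <div>"]
    for el in root.iter():
        if el.tag == SVG_NS + "text":
            parts.append(f"    <label>{el.text}</label>")
        elif el.tag == SVG_NS + "rect":
            parts.append('    <button class="btn btn-primary"></button>')
    parts += ["  </div>", "</template>"]
    return "\n".join(parts)

svg = """<svg xmlns="http://www.w3.org/2000/svg">
  <text x="10" y="20">User name</text>
  <rect x="10" y="30" width="80" height="24"/>
</svg>"""
print(svg_to_vue(svg))
```

The generated template is only a starting point; the point of the approach is that developers refine this scaffold rather than transcribe the prototype by hand.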

    A Dynamic Platform for Developing 3D Facial Avatars in a Networked Virtual Environment

    Avatar facial expressions and animation in 3D collaborative virtual environment (CVE) systems are reconstructed through a complex manipulation of muscles, bones, and wrinkles in 3D space. The need for a fast and easy reconstruction approach has emerged in recent years due to its application in various domains: 3D disaster management, virtual shopping, and military training. In this work we propose a new script language based on atomic parametric actions to easily produce real-time facial animation. To minimize use of the game engine, we introduce a script-based component where the user provides simple short script fragments to feed the engine with new animations on the fly. During runtime, when an embedded animation is required, an XML file is created and injected into the game engine without stopping or restarting it. The resulting animation method preserves real-time performance because the modification occurs not through modification of the 3D code that describes the CVE and its objects, but rather through modification of the action scenario that rules when an animation happens or might happen in a specific situation.
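The runtime-injection idea can be sketched as below. The XML schema, the `AnimationEngine` class and the parameter names are all hypothetical, invented for illustration; the paper's actual script language is not reproduced here. What the sketch shows is the mechanism: new animation descriptions are parsed and registered while the engine keeps running.

```python
# Hedged sketch: hot-loading an XML fragment that describes an atomic
# parametric facial action into a running "engine" without a restart.
# The <action>/<param> schema and parameter ids are invented examples.
import xml.etree.ElementTree as ET

class AnimationEngine:
    def __init__(self):
        self.actions = {}  # action name -> {param id: value}

    def inject(self, xml_fragment: str):
        """Parse an injected action description and register it live."""
        action = ET.fromstring(xml_fragment)
        name = action.get("name")
        params = {p.get("id"): float(p.get("value"))
                  for p in action.findall("param")}
        self.actions[name] = params  # live update; engine never stops

engine = AnimationEngine()
engine.inject('<action name="smile">'
              '<param id="mouth_corner_raise" value="0.8"/>'
              '<param id="eye_squint" value="0.3"/>'
              '</action>')
print(engine.actions["smile"])
```

Because only the action registry changes, the 3D scene description itself is untouched, which is what lets the method keep its real-time performance.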