
    Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may be conceived not only as sources of sensor information but also, through interaction with their users, as producers of highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied to a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes, an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.
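
    The core step of the framework can be pictured as wrapping a driver's spoken report as a semantic observation for publication on the Semantic Sensor Web. The Python sketch below illustrates that step using rdflib and the W3C/OGC SOSA vocabulary; the EX namespace, the report_observation function and the choice of SOSA (the paper builds on OGC SWE plus a semantic extension) are assumptions for illustration, not the authors' implementation.

        # Sketch: turn a driver's utterance ("traffic jam ahead") into a
        # semantic observation that can be shared on the Semantic Sensor Web.
        from datetime import datetime, timezone
        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF, XSD

        SOSA = Namespace("http://www.w3.org/ns/sosa/")
        EX = Namespace("http://example.org/car/")  # hypothetical namespace

        def report_observation(utterance: str, feature: URIRef) -> Graph:
            """Wrap a human-generated observation as SOSA triples."""
            g = Graph()
            g.bind("sosa", SOSA)
            now = datetime.now(timezone.utc)
            obs = EX["obs-" + str(int(now.timestamp()))]
            g.add((obs, RDF.type, SOSA.Observation))
            g.add((obs, SOSA.madeBySensor, EX["driver-hmi"]))  # the HMI acts as a "sensor"
            g.add((obs, SOSA.hasFeatureOfInterest, feature))
            g.add((obs, SOSA.hasSimpleResult, Literal(utterance)))
            g.add((obs, SOSA.resultTime,
                   Literal(now.isoformat(), datatype=XSD.dateTime)))
            return g

        print(report_observation("traffic jam ahead",
                                 EX["road-segment-42"]).serialize(format="turtle"))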

    Proceedings of the 2nd EICS Workshop on Engineering Interactive Computer Systems with SCXML


    Improving Speech Interaction in Vehicles Using Context-Aware Information through A SCXML Framework

    Speech technologies can provide important benefits for the development of more usable and safe in-vehicle human-machine interactive systems (HMIs). However, mainly due to robustness issues, the use of spoken interaction can entail important distractions to the driver. In this challenging scenario, while speech technologies are evolving, further research is necessary to explore how they can be complemented both with other modalities (multimodality) and with information from the increasing number of available sensors (context-awareness). The perceived quality of speech technologies can be significantly increased by implementing such policies, which simply try to make the best use of all the available resources, and the in-vehicle scenario is an excellent test-bed for this kind of initiative. In this contribution we propose an event-based HMI design framework which combines context modelling and multimodal interaction using a W3C XML language known as SCXML. SCXML provides a general process control mechanism that is being considered by W3C to improve both voice interaction (VoiceXML) and multimodal interaction (MMI). In our approach we try to anticipate and extend these initiatives, presenting a flexible SCXML-based approach for the design of a wide range of multimodal context-aware in-vehicle interfaces. The proposed framework for HMI design and specification has been implemented in an automotive OSGi service platform, and it is being used and tested in the Spanish research project MARTA for the development of several in-vehicle interactive applications.
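
    As a rough illustration of the event-based control SCXML provides, the following self-contained Python sketch emulates a tiny state chart in which context events steer the interaction modality. The states, events and transitions are invented for illustration and are not taken from the MARTA project.

        from collections import deque

        class HmiStateChart:
            def __init__(self):
                self.state = "voice_interaction"
                self.queue = deque()
                # (state, event) -> next state, as SCXML <transition> elements would encode
                self.transitions = {
                    ("voice_interaction", "noise.high"): "visual_interaction",
                    ("visual_interaction", "noise.low"): "voice_interaction",
                    ("voice_interaction", "driver.overloaded"): "interaction_paused",
                    ("interaction_paused", "driver.ok"): "voice_interaction",
                }

            def send(self, event):
                self.queue.append(event)
                while self.queue:  # SCXML-style run-to-completion processing
                    ev = self.queue.popleft()
                    nxt = self.transitions.get((self.state, ev))
                    if nxt is not None:
                        print(self.state, "--" + ev + "-->", nxt)
                        self.state = nxt

        hmi = HmiStateChart()
        hmi.send("noise.high")  # context sensor reports a noisy cabin: go visual
        hmi.send("noise.low")   # conditions improve: back to speech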

    Using SCXML to integrate semantic sensor information into context-aware user interfaces

    This paper describes a novel architecture to introduce automatic annotation and processing of semantic sensor data within context-aware applications. Based on well-known state-chart technology, and represented using the W3C SCXML language combined with Semantic Web technologies, our architecture is able to provide enriched higher-level semantic representations of the user’s context. This capability to detect and model relevant user situations allows seamless modeling of the actual interaction situation, which can be exploited during the design of multimodal user interfaces (also based on SCXML) so that they are adequately adapted. The final result of this contribution can therefore be described as a flexible context-aware SCXML-based architecture, suitable both for designing a wide range of multimodal context-aware user interfaces and for implementing the automatic enrichment of sensor data, making it available to the entire Semantic Sensor Web.
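
    The enrichment step the abstract describes, lifting raw sensor samples to higher-level semantic situations a SCXML-based interface can react to, might look as follows in outline. The thresholds, field names and event names are assumptions for illustration; the actual architecture performs this mapping through semantic annotation rather than hard-coded rules.

        def enrich(readings):
            """Map raw sensor values to higher-level semantic situation events."""
            if readings.get("speed_kmh", 0) > 120:
                yield "situation.high_speed"
            if readings.get("rain_mm_h", 0) > 7.5:
                yield "situation.heavy_rain"
            if readings.get("cabin_noise_db", 0) > 70:
                yield "situation.noisy_cabin"

        sample = {"speed_kmh": 132.0, "rain_mm_h": 9.1, "cabin_noise_db": 64.0}
        for event in enrich(sample):
            print(event)  # such events could drive a SCXML interface's transitions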

    Multimodal Generic Framework for Multimedia Documents Adaptation

    Today, people are increasingly capable of creating and sharing documents (which are generally multimedia oriented) via the internet. These multimedia documents can be accessed at any time and anywhere (city, home, etc.) on a wide variety of devices, such as laptops, tablets and smartphones. The heterogeneity of devices and user preferences has raised a serious issue for multimedia content adaptation. Our research focuses on multimedia document adaptation with a strong focus on interaction with users and the exploration of multimodality. We propose a multimodal framework for adapting multimedia documents based on a distributed implementation of the W3C’s Multimodal Architecture and Interfaces applied to ubiquitous computing. The core of our proposed architecture is a smart interaction manager that accepts context-related information from sensors in the environment as well as from other sources, including information available on the web and multimodal user inputs. The interaction manager integrates and reasons over this information to predict the user’s situation and service use. A key to realizing this framework is the use of an ontology that undergirds communication and representation, and the use of the cloud to ensure service continuity on heterogeneous mobile devices. A smart city is assumed as the reference scenario.
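
    A minimal sketch of the interaction manager's decision step follows, assuming hard-coded rules in place of the ontology-driven reasoning the paper proposes; the field names, thresholds and adaptation choices are illustrative.

        from dataclasses import dataclass

        @dataclass
        class Context:
            device: str           # e.g. "smartphone", "tablet", "laptop"
            bandwidth_kbps: int
            ambient_noise_db: float

        def adapt(doc, ctx):
            """Return an adapted presentation plan for a multimedia document."""
            plan = dict(doc)
            if ctx.device == "smartphone":
                plan["layout"] = "single-column"
            if ctx.bandwidth_kbps < 500:
                plan["video"] = "audio-only"   # degrade video under low bandwidth
            if ctx.ambient_noise_db > 70:
                plan["audio"] = "subtitles"    # noisy environment: prefer text
            return plan

        print(adapt({"layout": "two-column", "video": "hd", "audio": "speech"},
                    Context("smartphone", 300, 75.0)))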

    From Metamodeling to Automatic Generation of Multimodal Interfaces for Ambient Computing

    This paper presents our approach to designing multichannel and multimodal applications as part of ambient intelligence. Computers are increasingly present in our environments, whether at work (computers, photocopiers), at home (video player, hi-fi, microwave), in our cars, etc. They are ever more adaptable and context-sensitive (e.g., the car radio that lowers the volume when the mobile phone rings). Unfortunately, while they should provide smart services by combining their skills, they are not yet designed to communicate with one another. Our results, mainly based on the use of a software bus and a workflow, show that different devices (such as a Wiimote, a multi-touch screen, a telephone, etc.) can be coordinated in order to activate real things (such as a lamp, fan, robot, webcam, etc.). A smart digital home case study illustrates how our approach can be used to easily design parts of the ambient system and to redesign them at runtime.
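
    In the spirit of the software bus the paper relies on, the toy publish/subscribe bus below shows how heterogeneous devices could be coordinated; the topics, devices and the SoftwareBus class itself are illustrative assumptions, not the authors' design.

        from collections import defaultdict

        class SoftwareBus:
            """Toy publish/subscribe bus coordinating heterogeneous devices."""
            def __init__(self):
                self._subscribers = defaultdict(list)  # topic -> [handler]

            def subscribe(self, topic, handler):
                self._subscribers[topic].append(handler)

            def publish(self, topic, message):
                for handler in self._subscribers[topic]:
                    handler(message)

        bus = SoftwareBus()
        # A lamp reacts to gestures captured by a Wiimote-like controller.
        bus.subscribe("gesture.shake", lambda msg: print("lamp toggled by", msg["source"]))
        bus.publish("gesture.shake", {"source": "wiimote-1"})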

    Ontology Alignment Architecture for Semantic Sensor Web Integration

    Sensor networks have become very popular for data acquisition and processing in applications across fields such as industry, medicine, home automation and environmental monitoring. Today, with the proliferation of small communication devices with sensors that collect environmental data, Semantic Web technologies are becoming closely related to sensor networks. The linking of elements from Semantic Web technologies with sensor networks has been called the Semantic Sensor Web, and it has among its main features the use of ontologies. One of the key challenges of using ontologies in sensor networks is to provide mechanisms to integrate and exchange knowledge from heterogeneous sources (that is, dealing with semantic heterogeneity). Ontology alignment is the process of bringing ontologies into mutual agreement by the automatic discovery of mappings between related concepts. This paper presents a system for ontology alignment in the Semantic Sensor Web which uses fuzzy logic techniques to combine similarity measures between entities of different ontologies. The proposed approach focuses on two key elements: terminological similarity, which takes into account the linguistic and semantic information in the context of entity names, and structural similarity, based on both the internal and the relational structure of the concepts. This work has been validated using sensor network ontologies and the Ontology Alignment Evaluation Initiative (OAEI) tests. The results show that the proposed techniques outperform previous approaches in terms of precision and recall.
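
    The combination of similarity measures can be sketched as follows; the concrete measures (a normalized string ratio and a Jaccard overlap of neighbouring concepts) and the weighted aggregation stand in for the paper's fuzzy-logic combination and are assumptions for illustration.

        from difflib import SequenceMatcher

        def terminological_sim(a, b):
            """String-level similarity between entity names, in [0, 1]."""
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def structural_sim(neigh_a, neigh_b):
            """Jaccard overlap of the concepts' relational neighbourhoods."""
            if not neigh_a or not neigh_b:
                return 0.0
            return len(neigh_a & neigh_b) / len(neigh_a | neigh_b)

        def combined_sim(a, b, neigh_a, neigh_b, w_term=0.6, w_struct=0.4):
            # stand-in for the fuzzy aggregation of the two measures
            return (w_term * terminological_sim(a, b)
                    + w_struct * structural_sim(neigh_a, neigh_b))

        score = combined_sim("TemperatureSensor", "TempSensor",
                             {"Sensor", "Observation"}, {"Sensor", "Measurement"})
        print("alignment confidence: %.2f" % score)  # align if above a chosen threshold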

    Proceedings of the 1st EICS Workshop on Engineering Interactive Computer Systems with SCXML


    PRESTK: situation-aware presentation of messages and infotainment content for drivers

    The number of in-car information systems has dramatically increased over the last few years. Because these systems are potentially mutually independent, presenting their information to the driver increases the risk of driver distraction. In a first step, orchestrating these information systems using techniques from scheduling and presentation planning avoids conflicts when they compete for scarce resources such as screen space. In a second step, the cognitive capacity of the driver, as another scarce resource, has to be considered. For the first step, an algorithm fulfilling the requirements of this situation is presented and evaluated. For the second step, I define the concept of System Situation Awareness (SSA) as an extension of Endsley’s Situation Awareness (SA) model. I claim that not only the driver needs to know what is happening in the environment, but also the system, e.g., the car. In order to achieve SSA, two paths of research have to be followed: (1) assessment of the cognitive load of the driver in an unobtrusive way, for which I propose to estimate this value using a model based on environmental data; and (2) development of a model of the cognitive complexity induced by messages presented by the system. Three experiments support the claims I make in my conceptual contribution to this field. A prototypical implementation of the situation-aware presentation management toolkit PRESTK is presented and shown in two demonstrators.
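
    The two steps of the abstract can be caricatured in a few lines of Python: a greedy priority schedule for scarce screen slots, gated by an estimate of the driver's cognitive load. The policy, the threshold and the Message fields are illustrative assumptions; PRESTK's actual scheduling and presentation planning are more elaborate.

        from dataclasses import dataclass

        @dataclass
        class Message:
            text: str
            priority: int  # higher = more urgent

        def schedule(pending, slots, driver_load):
            """Fill the available screen slots, most urgent first, but hold
            everything back while the driver's estimated load is high."""
            if driver_load > 0.8:  # step two: the driver as a scarce resource
                return []
            ranked = sorted(pending, key=lambda m: m.priority, reverse=True)
            return ranked[:slots]  # step one: resolve competition for screen space

        shown = schedule([Message("Low fuel", 3),
                          Message("New e-mail", 1),
                          Message("Collision warning", 9)],
                         slots=2, driver_load=0.4)
        print([m.text for m in shown])  # -> ['Collision warning', 'Low fuel']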