3,459 research outputs found
A multimodal restaurant finder for semantic web
Multimodal dialogue systems provide multiple modalities in the form of speech, mouse clicking, drawing or touch that can enhance human-computer interaction. However, one drawback of existing multimodal systems is that they are highly domain-specific and do not allow information to be shared across different providers. In this paper, we propose a semantic multimodal system, called Semantic Restaurant Finder, for the Semantic Web, in which restaurant information for different cities, countries and languages is constructed as ontologies so that the information can be shared. With the Semantic Restaurant Finder, users can make use of semantic restaurant knowledge distributed across different locations on the Internet to find the desired restaurants.
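The sharing idea in this abstract can be sketched in a few lines: if independent providers publish restaurant facts using a common vocabulary, their data can simply be merged and queried uniformly. The following Python sketch uses invented triples and predicate names purely for illustration; it is not the paper's implementation.

```python
# Illustrative sketch: restaurant facts from two independent providers,
# expressed as subject-predicate-object triples over a shared vocabulary
# (all identifiers here are hypothetical).
provider_a = [
    ("ex:SushiZen", "rdf:type", "ex:Restaurant"),
    ("ex:SushiZen", "ex:cuisine", "Japanese"),
    ("ex:SushiZen", "ex:city", "Tokyo"),
]
provider_b = [
    ("ex:TrattoriaRoma", "rdf:type", "ex:Restaurant"),
    ("ex:TrattoriaRoma", "ex:cuisine", "Italian"),
    ("ex:TrattoriaRoma", "ex:city", "Rome"),
]

# Because both providers use the same vocabulary, their graphs can be
# merged directly into one shared knowledge base.
graph = provider_a + provider_b

def find_restaurants(graph, **filters):
    """Return subjects typed ex:Restaurant that match all predicate filters."""
    subjects = {s for s, p, o in graph
                if p == "rdf:type" and o == "ex:Restaurant"}
    for pred, wanted in filters.items():
        subjects = {s for s in subjects
                    if (s, f"ex:{pred}", wanted) in graph}
    return sorted(subjects)

print(find_restaurants(graph, cuisine="Japanese"))  # ['ex:SushiZen']
```

The point of the sketch is the merge step: no per-provider adapter is needed once everyone commits to the shared ontology.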
Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web
Current "Internet of Things" concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information; through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of these different sources of information can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied to a variety of connected objects integrating HMI, a particular development is presented for a connected-car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes, an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.
Position paper on realizing smart products: challenges for Semantic Web technologies
In the rapidly developing space of novel technologies that combine sensing and semantic technologies, research on smart products has the potential to establish a research field in itself. In this paper, we synthesize existing work in this area in order to define and characterize smart products. We then reflect on a set of challenges that semantic technologies are likely to face in this domain. Finally, in order to initiate discussion in the workshop, we sketch an initial comparison of smart products and semantic sensor networks from the perspective of knowledge technologies.
Improving Knowledge Retrieval in Digital Libraries Applying Intelligent Techniques
Nowadays an enormous quantity of heterogeneous and distributed information is stored in the digital university. Exploring online collections to find knowledge relevant to a user's interests is a challenging task. Artificial intelligence and the Semantic Web provide a common framework that allows knowledge to be shared and reused in an efficient way. In this work we propose a comprehensive approach for discovering e-learning objects in large digital collections, based on analysis of the semantic metadata recorded in those objects and the application of expert-system technologies. We have used the Case-Based Reasoning methodology to develop a prototype for supporting efficient knowledge retrieval from online repositories. We suggest a conceptual architecture for a semantic search engine. OntoUS is a collaborative effort that proposes a new form of interaction between users and digital libraries, in which the latter are adapted to users and their surroundings.
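The Case-Based Reasoning retrieval step mentioned above can be illustrated with a minimal sketch: stored cases are compared to a new query by weighted attribute similarity, and the closest case's resource is returned. The cases, attributes and weights below are invented for illustration and do not come from the paper.

```python
# Hypothetical case base: past queries paired with the resource that
# satisfied them (all data made up for illustration).
cases = [
    {"topic": "semantic web", "level": "intro",    "resource": "lecture-01"},
    {"topic": "semantic web", "level": "advanced", "resource": "paper-12"},
    {"topic": "databases",    "level": "intro",    "resource": "lecture-07"},
]

# Illustrative attribute weights for the similarity measure.
WEIGHTS = {"topic": 0.7, "level": 0.3}

def similarity(case, query):
    # Weighted exact-match similarity over the query attributes.
    return sum(w for attr, w in WEIGHTS.items()
               if case.get(attr) == query.get(attr))

def retrieve(query):
    # CBR "retrieve" phase: return the resource of the most similar case.
    best = max(cases, key=lambda c: similarity(c, query))
    return best["resource"]

print(retrieve({"topic": "semantic web", "level": "intro"}))  # lecture-01
```

A full CBR cycle would also reuse, revise and retain cases; the sketch covers only retrieval, which is the phase the abstract emphasizes.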
Vocabularies for description of accessibility issues in multimodal user interfaces
In previous work, we proposed a unified approach for describing multimodal human-computer interaction and interaction constraints in terms of the sensual, motor, perceptual and cognitive functions of users. In this paper, we extend this work by providing formalised vocabularies that express the human functionalities and anatomical structures required by specific modalities. The central theme of our approach is to connect these modality representations with descriptions of user, device and environmental constraints that influence the interaction. These descriptions can then be used in a reasoning framework that exploits formal connections among interaction modalities and constraints. The focus of this paper is on specifying a comprehensive vocabulary of the necessary concepts. Within the context of an interaction framework, we describe a number of examples that use this formalised knowledge.
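The core reasoning pattern described here — linking each modality to the human functions it requires, then ruling out modalities whose required functions are constrained — can be sketched as a set computation. The modality names and function labels below are hypothetical stand-ins for the paper's formal vocabulary.

```python
# Illustrative mapping (invented labels): each interaction modality and
# the human functions it requires.
REQUIRES = {
    "speech_output":  {"hearing"},
    "visual_display": {"vision"},
    "touch_input":    {"vision", "fine_motor"},
    "voice_input":    {"speech_production"},
}

def usable_modalities(impaired):
    """Modalities whose required functions do not intersect the
    set of impaired (constrained) functions."""
    return sorted(m for m, req in REQUIRES.items() if not (req & impaired))

# A user who cannot rely on vision: visual and touch modalities drop out.
print(usable_modalities({"vision"}))  # ['speech_output', 'voice_input']
```

In the paper's setting the same check would be done by an ontology reasoner over formal descriptions rather than Python sets, but the inference pattern is the same.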
Using SCXML to integrate semantic sensor information into context-aware user interfaces
This paper describes a novel architecture that introduces automatic annotation and processing of semantic sensor data within context-aware applications. Based on well-known state-chart technologies, and represented using the W3C SCXML language combined with Semantic Web technologies, our architecture is able to provide enriched higher-level semantic representations of the user's context. This capability to detect and model relevant user situations allows seamless modeling of the actual interaction situation, which can be integrated during the design of multimodal user interfaces (also based on SCXML) so that they are adequately adapted. The final result of this contribution can therefore be described as a flexible context-aware SCXML-based architecture, suitable both for designing a wide range of multimodal context-aware user interfaces and for implementing the automatic enrichment of sensor data, making it available to the entire Semantic Sensor Web.
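The state-chart idea at the heart of this architecture can be illustrated with a toy transition table: raw sensor events drive transitions, and the current state is a higher-level semantic description of the user's situation that an interface could adapt to. State and event names below are invented; a real SCXML document would express the same machine in XML.

```python
# Illustrative transition table (state, event) -> next state; all names
# are hypothetical examples of low-level sensor events being lifted to
# higher-level user situations.
TRANSITIONS = {
    ("idle",    "motion_detected"): "walking",
    ("walking", "speed_high"):      "driving",
    ("driving", "vehicle_stopped"): "idle",
}

def run(events, state="idle"):
    """Feed a stream of sensor events through the state machine."""
    for event in events:
        # Events with no matching transition are ignored, as in SCXML.
        state = TRANSITIONS.get((state, event), state)
    return state

print(run(["motion_detected", "speed_high"]))  # driving
```

SCXML adds hierarchy, parallel states and executable content on top of this basic event-driven transition model.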
The AISB'08 Symposium on Multimodal Output Generation (MOG 2008)
Welcome to Aberdeen and to the Symposium on Multimodal Output Generation (MOG 2008)! This volume collects the papers presented at the MOG 2008 international symposium.
- …