
    Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information; through interaction with their users, they can also produce highly valuable, context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of these sources of information can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied to a variety of connected objects integrating HMI, a particular development is presented for a connected-car scenario where drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes, an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.
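    A minimal sketch of the kind of semantic observation such a framework might share, assuming rdflib is available and using the W3C SOSA/SSN vocabulary; the URIs for the driver, road segment and observed property are hypothetical placeholders, not identifiers from the paper.

```python
# Sketch: encode a driver's report ("heavy congestion ahead") as a SOSA/SSN
# observation so it can be shared on the Semantic Sensor Web.
# Hypothetical URIs; requires `pip install rdflib`.
from datetime import datetime, timezone

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/connected-car/")  # placeholder namespace

g = Graph()
g.bind("sosa", SOSA)
g.bind("ex", EX)

obs = EX["obs/0001"]
g.add((obs, RDF.type, SOSA.Observation))
# The driver, reporting through the in-car HMI, acts as the "sensor".
g.add((obs, SOSA.madeBySensor, EX["driver/alice"]))
g.add((obs, SOSA.hasFeatureOfInterest, EX["road/A-6/km12"]))
g.add((obs, SOSA.observedProperty, EX["property/trafficDensity"]))
g.add((obs, SOSA.hasSimpleResult, Literal("heavy congestion")))
g.add((obs, SOSA.resultTime,
       Literal(datetime.now(timezone.utc).isoformat(), datatype=XSD.dateTime)))

# Serialize to Turtle; in the paper's setting this payload would be pushed to
# a (semantically annotated) SWE service rather than printed.
print(g.serialize(format="turtle"))
```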

    Combining heterogeneous inputs for the development of adaptive and multimodal interaction systems

    In this paper we present a novel framework for the integration of visual sensor networks and speech-based interfaces. Our proposal follows the standard reference architecture for fusion systems (JDL) and combines techniques from Artificial Intelligence, Natural Language Processing and User Modeling to provide enhanced interaction with users. Firstly, the framework integrates a Cooperative Surveillance Multi-Agent System (CS-MAS), which includes several types of autonomous agents working in a coalition to track targets and make inferences about their positions. Secondly, enhanced conversational agents facilitate human-computer interaction by means of speech. Thirdly, a statistical methodology models the user's conversational behavior, which is learned from an initial corpus and improved with the knowledge acquired from successive interactions. A technique is proposed to fuse these information sources and to take the result into account when deciding the next system action. This work was supported in part by Projects MEyC TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, and CAM CONTEXTS S2009/TIC-1485.
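    As an illustration only (not the authors' implementation), the following sketch shows the flavour of fusing a visual-tracking estimate with a speech hypothesis while maintaining a simple statistical user model of dialogue acts updated over successive interactions; all names, weights and numbers are assumptions.

```python
# Illustrative sketch: confidence-weighted fusion of a visual-tracker event
# and a speech-recognition hypothesis, plus a frequency-based user model of
# dialogue acts updated after every interaction. Names and weights are invented.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Hypothesis:
    label: str         # e.g. "person_at_door" or the dialogue act "request_status"
    confidence: float  # in [0, 1]


class UserModel:
    """Keeps relative frequencies of the user's dialogue acts (a crude prior)."""

    def __init__(self) -> None:
        self.counts: Counter[str] = Counter()

    def update(self, dialogue_act: str) -> None:
        self.counts[dialogue_act] += 1

    def prior(self, dialogue_act: str) -> float:
        total = sum(self.counts.values())
        return self.counts[dialogue_act] / total if total else 0.0


def fuse(visual: Hypothesis, speech: Hypothesis, model: UserModel) -> str:
    """Pick the next system action from both modalities plus the user prior."""
    speech_score = speech.confidence * (0.5 + model.prior(speech.label))
    if visual.confidence > speech_score:
        return f"report:{visual.label}"
    return f"answer:{speech.label}"


model = UserModel()
model.update("request_status")
action = fuse(Hypothesis("person_at_door", 0.95),
              Hypothesis("request_status", 0.6), model)
print(action)  # the visual evidence wins with these toy numbers
```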

    On the Development of Adaptive and User-Centred Interactive Multimodal Interfaces

    Multimodal systems have attracted increasing attention in recent years, which has enabled important improvements in the technologies for the recognition, processing, and generation of multimodal information. However, many issues related to multimodality remain unclear, for example the principles that would make it possible to approach human-human multimodal communication. This chapter focuses on some of the most important challenges that researchers have recently envisioned for future multimodal interfaces. It also describes current efforts to develop intelligent, adaptive, proactive, portable and affective multimodal interfaces.

    Entertaining and Opinionated but Too Controlling: A Large-Scale User Study of an Open Domain Alexa Prize System

    Conversational systems typically focus on functional tasks such as scheduling appointments or creating to-do lists. Instead, we design and evaluate SlugBot (SB), one of 8 semifinalists in the 2018 Alexa Prize, whose goal is to support casual open-domain social interaction. This novel application requires both broad topic coverage and engaging interactive skills. We developed a new technical approach to meet this demanding situation by crowd-sourcing novel content and introducing playful conversational strategies based on storytelling and games. We collected over 10,000 conversations during August 2018 as part of the Alexa Prize competition. We also conducted an in-lab follow-up qualitative evaluation. Overall, users found SB moderately engaging; conversations averaged 3.6 minutes and involved 26 user turns. However, users reacted very differently to different conversation subtypes. Storytelling and games were evaluated positively: they were seen as entertaining, with a predictable interactive structure, and they led users to impute personality and intelligence to SB. In contrast, search and general chit-chat induced coverage problems; here users found it hard to infer which topics SB could understand, and these conversations were seen as too system-driven. The theoretical and design implications suggest a move away from conversational systems that simply provide factual information: future systems should have their own opinions and personal stories to share, and SB provides an example of how we might achieve this. To appear in the 1st International Conference on Conversational User Interfaces (CUI 2019).

    Towards Integration of Cognitive Models in Dialogue Management: Designing the Virtual Negotiation Coach Application

    This paper presents an approach to flexible and adaptive dialogue management driven by cognitive modelling of human dialogue behaviour. Intelligent artificial agents, based on the ACT-R cognitive architecture, participate together with human actors in a (meta)cognitive skills training within a negotiation scenario. The agent employs instance-based learning to decide on its own actions and to reflect on the behaviour of its opponent. We show that task-related actions can be handled by a cognitive agent that is a plausible dialogue partner. Separating task-related and dialogue control actions enables the application of sophisticated models within a flexible architecture in which various alternative modelling methods can be combined. We evaluated the proposed approach with users, assessing the relative contribution of various factors to the overall usability of a dialogue system. Subjective perceptions of effectiveness, efficiency and satisfaction were correlated with objective performance metrics, e.g. the number of (in)appropriate system responses, recovery strategies, and interaction pace. We observed that dialogue system usability is determined most strongly by the quality of the agreements reached in terms of estimated Pareto optimality, by the negotiation strategies selected by the user, and by the quality of system recognition, interpretation and responses. We compared human-human and human-agent performance with respect to the number and quality of agreements reached, the estimated cooperativeness level, and the frequency of accepted negative outcomes. Evaluation experiments showed promising, consistently positive results throughout the range of the relevant scales.
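    To make the idea of instance-based decision making concrete, here is a deliberately simplified sketch in plain Python, not the paper's ACT-R implementation and without activation noise or decay: past negotiation instances are stored as (situation, action, utility) tuples and the agent picks the action with the highest similarity-weighted utility; the data and the similarity measure are invented for illustration.

```python
# Simplified instance-based learning sketch (not the ACT-R model itself):
# choose the action whose stored instances, weighted by similarity to the
# current situation, yield the highest blended utility. Data are invented.
from collections import defaultdict

# Each instance: (situation features, action taken, utility obtained)
memory = [
    ({"opponent_offer": 0.3, "round": 1}, "counter_offer", 0.8),
    ({"opponent_offer": 0.3, "round": 4}, "accept", 0.4),
    ({"opponent_offer": 0.7, "round": 2}, "accept", 0.9),
]


def similarity(a: dict, b: dict) -> float:
    """Crude similarity: 1 minus the mean absolute feature distance."""
    keys = a.keys() & b.keys()
    if not keys:
        return 0.0
    dist = sum(abs(a[k] - b[k]) for k in keys) / len(keys)
    return max(0.0, 1.0 - dist)


def choose_action(situation: dict) -> str:
    """Blend utilities of similar past instances and pick the best action."""
    weighted = defaultdict(lambda: [0.0, 0.0])  # action -> [sum(w*u), sum(w)]
    for past_situation, action, utility in memory:
        w = similarity(situation, past_situation)
        weighted[action][0] += w * utility
        weighted[action][1] += w
    blended = {a: s / n for a, (s, n) in weighted.items() if n > 0}
    return max(blended, key=blended.get)


print(choose_action({"opponent_offer": 0.35, "round": 2}))  # -> "accept"
```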

    From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI

    This paper gives an overview of the ten-year development of the papers presented at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) from 2009 to 2018. We categorize the topics into two main groups, namely manual driving-related research and automated driving-related research. Within manual driving, we mainly focus on studies of user interfaces (UIs), driver states, augmented reality and head-up displays, and methodology; within automated driving, we discuss topics such as takeover, acceptance and trust, interacting with road users, UIs, and methodology. We also discuss the main challenges and future directions for AutoUI and offer a roadmap for research in this area.

    Using a Research Domain Ontology as a driver for Technology Commercialization

    The Operator 4.0 concept plays a key role in the kind of industry we find ourselves in today, Industry 4.0. In the course of the literature review, it became evident that there was no reference model to support the development of innovative concepts for Operator 4.0. This research therefore focused on developing one, in partnership with the Fraunhofer Portugal Research Center for Assistive Information and Communication Solutions - Fraunhofer AICOS. As a result, an ontology was created, with the Design Science Approach used to guide its development, followed by a first validation by Fraunhofer Portugal experts. A Focus Group session was then held, also with Fraunhofer Portugal experts, who carried out a second and final validation of the ontology as well as an evaluation of the competency questions. This study contributed to a better understanding of how knowledge organization (Frishammar, Lichtenthaler, & Rundquist, 2012) in a given technological domain can assist decision making when a new research project is proposed that may result in future intellectual property, which would later be licensed or otherwise exploited. Following the ontology validation, a workshop was held to demonstrate the second contribution of this dissertation: a proposal on how to use the ontology as a driver to start the technology process in the context of identifying opportunities for future commercialization of the technology. In the end, this study answered the research question and the related competency questions. It can therefore be said that this research effectively developed a reference model to support the construction of Operator 4.0 solutions for industry.

    KIDE4I: A Generic Semantics-Based Task-Oriented Dialogue System for Human-Machine Interaction in Industry 5.0

    In Industry 5.0, human workers and their wellbeing are placed at the centre of the production process. In this context, task-oriented dialogue systems allow workers to delegate simple tasks to industrial assets while working on other, more complex ones. The possibility of interacting naturally with these systems reduces the cognitive demand of using them and fosters acceptance. Most modern solutions, however, do not allow natural communication, and the modern techniques for building such systems require large amounts of training data, which are scarce in these scenarios. To overcome these challenges, this paper presents KIDE4I (Knowledge-drIven Dialogue framEwork for Industry), a semantics-based task-oriented dialogue system framework for industry that allows workers to interact naturally with industrial systems, is easy to adapt to new scenarios, and does not require large amounts of data to be constructed. This work also reports the process of adapting KIDE4I to new scenarios. To validate and evaluate KIDE4I, it was adapted to four use cases relevant to industrial scenarios following the described methodology, and two of them were evaluated through user studies. The system was considered accurate, useful, efficient, cognitively undemanding, flexible and fast. Furthermore, subjects view the system as a tool to improve their productivity and security while carrying out their tasks. This research was partially funded by the Basque Government's Elkartek research and innovation program, projects EKIN (grant no. KK-2020/00055) and DeepText (grant no. KK-2020/00088).
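    As a rough, hypothetical sketch of the kind of pipeline such a semantics-based framework implies, and not KIDE4I's actual components, the snippet below maps a recognized worker utterance onto a task for an industrial asset by looking the action and asset up in a small in-memory knowledge base; every asset, capability and lexicon name here is an assumption.

```python
# Hypothetical sketch of one semantics-driven task-oriented step: interpret a
# worker's command against a tiny "ontology" of assets and their capabilities
# and produce a task invocation. All asset/capability names are invented.
from typing import Optional

# Minimal knowledge base: asset -> capabilities it exposes
ONTOLOGY = {
    "conveyor_1": {"start", "stop", "report_status"},
    "robot_arm_2": {"pick", "place", "report_status"},
}

# Very small lexicon mapping surface verbs to capability names
LEXICON = {"start": "start", "stop": "stop", "halt": "stop", "status": "report_status"}


def interpret(utterance: str) -> Optional[dict]:
    """Map an utterance such as 'stop conveyor_1' to a task, or None if unknown."""
    tokens = utterance.lower().split()
    action = next((LEXICON[t] for t in tokens if t in LEXICON), None)
    asset = next((t for t in tokens if t in ONTOLOGY), None)
    if action and asset and action in ONTOLOGY[asset]:
        return {"asset": asset, "task": action}
    return None  # a dialogue manager would ask a clarification question here


print(interpret("please stop conveyor_1"))             # {'asset': 'conveyor_1', 'task': 'stop'}
print(interpret("what is the status of robot_arm_2"))  # {'asset': 'robot_arm_2', 'task': 'report_status'}
```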