
    Interfaces of the Agriculture 4.0

    The introduction of information technologies in the environmental field is impacting and changing even a traditional sector like agriculture. Nevertheless, Agriculture 4.0 and data-driven decisions should meet user needs and expectations. The paper presents a broad theoretical overview, discussing both the strategic role of design applied to Agri-tech and the issue of User Interface and Interaction as enabling tools in the field. In particular, the paper suggests rethinking the HCD approach, moving towards a Human-Decentered Design approach that brings together user, technology and environment, and highlights the role of calm technologies as a way to place the farmer not as a final target and passive spectator, but as an active part of the process, supporting mitigation and appropriation in the transition from traditional cultivation methods to Agriculture 4.0.

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    Improving Speech Interaction in Vehicles Using Context-Aware Information through A SCXML Framework

    Speech technologies can provide important benefits for the development of more usable and safe in-vehicle human-machine interactive systems (HMIs). However, mainly due to robustness issues, the use of spoken interaction can entail important distractions for the driver. In this challenging scenario, while speech technologies are evolving, further research is necessary to explore how they can be complemented both with other modalities (multimodality) and with information from the increasing number of available sensors (context-awareness). The perceived quality of speech technologies can be significantly increased by implementing such policies, which simply try to make the best use of all the available resources, and the in-vehicle scenario is an excellent test-bed for this kind of initiative. In this contribution we propose an event-based HMI design framework which combines context modelling and multimodal interaction using a W3C XML language known as SCXML. SCXML provides a general process control mechanism that is being considered by the W3C to improve both voice interaction (VoiceXML) and multimodal interaction (MMI). In our approach we try to anticipate and extend these initiatives, presenting a flexible SCXML-based approach for the design of a wide range of multimodal, context-aware in-vehicle HMI interfaces. The proposed framework for HMI design and specification has been implemented on an automotive OSGi service platform, and it is being used and tested in the Spanish research project MARTA for the development of several in-vehicle interactive applications.
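    The event-driven, state-machine style of dialog control that SCXML enables can be sketched in plain Python. This is a minimal illustration of the pattern only, not the MARTA implementation; the state names, events, and context flag are invented for the example, and a real system would execute an actual SCXML document through an interpreter:

```python
# Minimal sketch of an SCXML-style event-driven dialog controller with a
# context-aware output policy. All state/event names are illustrative.

class DialogStateMachine:
    # Transition table: (current state, event) -> next state,
    # analogous to SCXML <state>/<transition> elements.
    TRANSITIONS = {
        ("idle", "push_to_talk"): "listening",
        ("listening", "speech_recognized"): "confirming",
        ("listening", "timeout"): "idle",
        ("confirming", "confirmed"): "idle",
        ("confirming", "rejected"): "listening",
    }

    def __init__(self):
        self.state = "idle"
        # Context model, updated from vehicle sensors in a real system.
        self.context = {"driver_workload": "low"}

    def handle(self, event):
        """Advance the machine; unknown events leave the state unchanged."""
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

    def output_modality(self):
        """Context-aware policy: prefer spoken output when driver workload
        is high, so the driver's eyes can stay on the road."""
        return "speech" if self.context["driver_workload"] == "high" else "screen"
```

    The table-driven design mirrors how an SCXML document separates dialog flow (transitions) from modality decisions, which is what makes it easy to plug in additional context sources without rewriting the interaction logic.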

    Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.
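    The core idea of packaging a driver's HMI-reported remark as a shareable sensor observation can be sketched as follows. The field names loosely follow the OGC Observations & Measurements vocabulary (procedure, observed property, result, sampling time), but the specific structure and values here are assumptions for illustration, not the paper's actual data model:

```python
# Sketch: a human-generated observation packaged like a sensor reading,
# so it can be published alongside machine-generated Sensor Web data.
# Field names loosely follow OGC O&M; the structure is illustrative.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class HumanObservation:
    procedure: str          # the "sensor": here, the driver via the HMI
    observed_property: str  # what is being reported
    result: str             # the observation value
    sampling_time: str      # ISO 8601 timestamp
    location: tuple         # (latitude, longitude)

def make_traffic_report(lat, lon, condition):
    """Wrap a driver's spoken/tapped report as an observation record."""
    return asdict(HumanObservation(
        procedure="driver_via_hmi",
        observed_property="traffic_condition",
        result=condition,
        sampling_time=datetime.now(timezone.utc).isoformat(),
        location=(lat, lon),
    ))
```

    Treating the human-plus-HMI as just another observation procedure is what lets downstream Semantic Sensor Web consumers query human reports and hardware sensor readings uniformly.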

    On the Development of Adaptive and User-Centred Interactive Multimodal Interfaces

    Multimodal systems have attracted increased attention in recent years, which has made possible important improvements in the technologies for recognition, processing, and generation of multimodal information. However, there are still many open issues related to multimodality, for example, the principles that would make it possible to resemble human-human multimodal communication. This chapter focuses on some of the most important challenges that researchers have recently envisioned for future multimodal interfaces. It also describes current efforts to develop intelligent, adaptive, proactive, portable and affective multimodal interfaces.

    Smart tourist information points by combining agents, semantics and AI techniques

    The tourism sector in the province of Teruel (Aragon, Spain) is growing rapidly. Although the number of domestic and foreign tourists is continuously increasing, some tourist attractions spread over a wide geographical area are only visited by a few people at specific times of the year. Additionally, having human tourist guides everywhere, speaking different languages, is unfeasible. An integrated solution based on smart and interactive Embodied Conversational Agent (ECA) tourist guides combined with ontologies would overcome this problem. This paper presents a smart tourist information point approach which gathers tourism information about Teruel, structured according to a novel lightweight ontology built on OWL (Web Ontology Language), known as TITERIA (Touristic Information of TEruel for Intelligent Agents). Our proposal, which combines TITERIA with the Maxine platform, is capable of responding appropriately to users thanks to its Artificial Intelligence Modeling Language (AIML) database and the AI techniques added to Maxine. Preliminary results indicate that our prototype is able to inform users about interesting topics, as well as to propose other related information, allowing them to acquire complete information about any issue. Furthermore, users can talk directly with an artificial actor, making communication much more natural and personal.
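    The AIML-style pattern matching at the core of such a conversational agent can be sketched minimally in Python. The patterns and responses below are invented for the example; real AIML is an XML category format with richer wildcard and recursion (srai) semantics, and the TITERIA ontology lookup is not modeled here:

```python
# Minimal sketch of AIML-style pattern matching: categories map a
# normalized user pattern (with "*" wildcards) to a response template.
# Patterns and answers are invented for illustration.
import re

CATEGORIES = {
    "WHERE IS *": "You can find {0} on the interactive map of Teruel.",
    "TELL ME ABOUT *": "Here is some information about {0}.",
    "HELLO": "Welcome! Ask me about Teruel's tourist attractions.",
}

def respond(user_input):
    # Normalize the input the way AIML engines do: uppercase, no punctuation.
    text = user_input.strip().upper().rstrip("?!.")
    for pattern, template in CATEGORIES.items():
        # Turn the "*" wildcard into a capturing regex group.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        match = re.match(regex, text)
        if match:
            return template.format(*(g.lower() for g in match.groups()))
    return "Sorry, I have no information about that yet."
```

    In a full system, the captured wildcard text would be resolved against the ontology to fetch structured attraction data rather than being echoed back.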

    Proceedings of the 2nd EICS Workshop on Engineering Interactive Computer Systems with SCXML

