92 research outputs found
Empowering CH experts to produce IoT-enhanced visits
This demo presents EFESTO-5W, a platform for the definition of IoT-enhanced visits to Cultural-Heritage (CH) sites. Its main characteristic is an End-User Development paradigm applied to IoT technologies and customized for the CH domain, which allows different stakeholders to configure the behavior of smart objects to create more engaging visit experiences.
A Visual Paradigm for Defining Task Automation
In recent years, researchers have devoted considerable effort to improving the technological aspects of the Internet of Things (IoT), while little attention has been dedicated to its social and practical sides. The behavior of smart objects is programmed by professional developers. In addition, the functionality exposed by a single object is often not able, alone, to exhaustively support end users' tasks. The opportunities offered by the IoT can be amplified if new high-level abstractions and interaction paradigms also enable non-technical users to compose the behavior of multiple objects. To fulfill this goal, we present a model to express rules for smart object composition, which includes new operators for defining rules coupling multiple events and conditions exposed by smart objects, and for defining temporal and spatial constraints on rule activation. This model has been implemented in a Web application whose composition paradigm was designed during an elicitation study with 25 participants.
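As a rough sketch of the rule model described above, the following Python fragment couples multiple events and conditions from smart objects and attaches temporal and spatial constraints on rule activation. All class and field names are illustrative assumptions, not the paper's actual model or API.

```python
from dataclasses import dataclass
from datetime import time
from typing import Callable, List, Optional

@dataclass
class Event:
    source: str   # smart object emitting the event, e.g. "door_sensor"
    name: str     # e.g. "opened"

@dataclass
class Condition:
    source: str
    attribute: str
    test: Callable[[object], bool]   # predicate over the attribute's value

@dataclass
class Action:
    target: str
    command: str

@dataclass
class Rule:
    # The rule fires only when ALL coupled events occur and all conditions hold.
    events: List[Event]
    conditions: List[Condition]
    actions: List[Action]
    active_from: time = time(0, 0)    # temporal constraint on activation
    active_until: time = time(23, 59)
    area: Optional[str] = None        # spatial constraint, e.g. a room

rule = Rule(
    events=[Event("door_sensor", "opened"), Event("motion_sensor", "motion")],
    conditions=[Condition("light_sensor", "lux", lambda lux: lux < 50)],
    actions=[Action("hall_lamp", "turn_on")],
    active_from=time(18, 0),
    active_until=time(23, 0),
    area="hallway",
)
```

Coupling two events in one rule ("door opened AND motion detected") is what distinguishes this model from simple single-trigger IF-THEN platforms.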
End-user composition of interactive applications through actionable UI components
Developing interactive systems to access and manipulate data is a very tough task. In particular, the development of user interfaces (UIs) is one of the most time-consuming activities in the software lifecycle. This is even more demanding when data have to be flexibly retrieved from different online resources. Indeed, software development is moving more and more toward composite applications that aggregate on the fly specific Web services and APIs. In this article, we present a mashup model that describes the integration, at the presentation layer, of UI components. The goal is to allow non-technical end users to visualize and manipulate (i.e., to perform actions on) the data displayed by the components, which thus become actionable UI components. This article shows how the model has guided the development of a mashup platform through which non-technical end users can create component-based interactive workspaces via the aggregation and manipulation of data fetched from distributed online resources. Given the abundance of online data sources, facilitating the creation of such interactive workspaces addresses a very relevant need that emerges in different contexts. A utilization study was performed to assess the benefits of the proposed model and of the actionable UI components; participants were required to perform real tasks using the mashup platform. The study results are reported and discussed.
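The "actionable UI component" idea can be sketched as follows: a component wraps data fetched from an online resource and exposes the actions an end user can perform on the displayed items, and a workspace is simply an aggregation of such components. All names here are invented for illustration and do not reflect the platform's real API.

```python
class ActionableComponent:
    """A UI component whose displayed data can be acted upon by the user."""

    def __init__(self, name, fetch, actions):
        self.name = name
        self._fetch = fetch        # callable returning a list of items
        self._actions = actions    # action name -> callable(item) -> item

    def items(self):
        return self._fetch()

    def perform(self, action, item):
        return self._actions[action](item)

def compose(*components):
    """A workspace aggregates components at the presentation layer."""
    return {c.name: c for c in components}

# A toy 'movies' component backed by an in-memory source instead of a Web API.
movies = ActionableComponent(
    "movies",
    fetch=lambda: [{"title": "Metropolis", "starred": False}],
    actions={"star": lambda item: {**item, "starred": True}},
)

workspace = compose(movies)
first = workspace["movies"].items()[0]
starred = workspace["movies"].perform("star", first)
```

The key design point is that actions live alongside the data they operate on, so the end user manipulates what is displayed without writing any integration code.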
A Human–AI interaction paradigm and its application to rhinocytology
This article explores Human-Centered Artificial Intelligence (HCAI) in medical cytology, with a focus on enhancing the interaction with AI. It presents a Human–AI interaction paradigm that emphasizes explainability and user control of AI systems. It is an iterative negotiation process based on three interaction strategies aimed at (i) elaborating the system outcomes through iterative steps (Iterative Exploration), (ii) explaining the AI system's behavior or decisions (Clarification), and (iii) allowing non-expert users to trigger simple retraining of the AI model (Reconfiguration). This interaction paradigm is exploited in the redesign of an existing AI-based tool for microscopic analysis of the nasal mucosa. The resulting tool was tested with rhinocytologists. The article discusses the analysis of the results of the conducted evaluation and outlines lessons learned that are relevant for AI in medicine.
End-User Development for Artificial Intelligence: A Systematic Literature Review
In recent years, Artificial Intelligence has become more and more relevant in our society. Creating AI systems is almost always the prerogative of IT and AI experts. However, users may need to create intelligent solutions tailored to their specific needs. Thus, AI systems can be enhanced if new approaches are devised to allow non-technical users to be directly involved in the definition and personalization of AI technologies. End-User Development (EUD) can provide a solution to these problems, allowing people to create, customize, or adapt AI-based systems to their own needs. This paper presents a systematic literature review that aims to shed light on the current landscape of EUD for AI systems, i.e., how users, even without skills in AI and/or programming, can customize the AI behavior to their needs. This study also discusses the current challenges of EUD for AI, the potential benefits, and the future implications of integrating EUD into the overall AI development process.
Comment: This version did not undergo peer review. A corrected version is published by Springer Nature in the Proceedings of the 9th International Symposium on End-User Development (ISEUD 2023). DOI: https://doi.org/10.1007/978-3-031-34433-6_
User-defined semantics for the design of IoT systems enabling smart interactive experiences
© The Author(s) 2020. This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Automation in computing systems has always been considered a valuable solution to unburden the user. Internet of Things (IoT) technology best suits automation in different domains, such as home automation, retail, industry, and transportation, to name but a few. While these domains are strongly characterized by implicit user interaction, more recently automation has also been adopted for the provision of interactive and immersive experiences that actively involve the users. IoT technology thus becomes the key to Smart Interactive Experiences (SIEs), i.e., immersive automated experiences created by orchestrating different devices to enable smart environments to fluidly react to the final users' behavior. There are domains, e.g., cultural heritage, where these systems and the SIEs can support users and provide several benefits. However, experts in such domains, while intrigued by the opportunity to induce SIEs, face tough challenges in their everyday work activities when they are required to automate and orchestrate IoT devices without the necessary coding skills.
This paper presents a design approach that tries to overcome these difficulties through the adoption of ontologies for defining Event-Condition-Action (ECA) rules. More specifically, the approach enables domain experts to identify and specify properties of IoT devices through a user-defined semantics that, being closer to the domain experts' background, facilitates them in automating the IoT devices' behavior. We also present a study comparing three different interaction paradigms conceived to support the specification of user-defined semantics through a "transparent" use of ontologies. Based on the results of this study, we derive some lessons learned on how the proposed paradigms help domain experts express their semantics, which in turn facilitates the creation of interactive applications enabling SIEs.
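The user-defined-semantics idea above can be illustrated with a minimal sketch: a domain expert attaches her own terms to low-level device properties, and ECA rules are then written against those terms rather than against device attributes. The vocabulary entries and the rule structure are assumptions for illustration, not the paper's ontology or implementation.

```python
# User-defined vocabulary: an expert's term -> a low-level device binding.
# Event terms bind to a (device, property, predicate) triple; action terms
# bind to a (device, property, value) triple.
event_semantics = {
    "visitor nearby": ("proximity_sensor_3", "distance_cm", lambda d: d < 80),
}
action_semantics = {
    "soft light": ("spotlight_1", "brightness", 30),
}

def make_rule(event_term, action_term):
    """Build an ECA-style rule from the expert's own vocabulary."""
    device, prop, test = event_semantics[event_term]
    target, attr, value = action_semantics[action_term]
    return {
        "on": {"device": device, "property": prop, "test": test},
        "do": {"device": target, "set": {attr: value}},
    }

# The expert never sees device identifiers, only her own terms.
rule = make_rule("visitor nearby", "soft light")
```

The mapping layer is what an ontology would provide in the paper's approach; here a plain dictionary stands in for it to keep the sketch self-contained.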
Towards the Detection of UX Smells: The Support of Visualizations
Daily experiences in working with various types of computer systems show that, despite the offered functionalities, users have many difficulties, which affect their overall User eXperience (UX). The UX focus is on aesthetics, emotions, and social involvement, but usability has a great influence on UX. Usability evaluation is acknowledged as a fundamental activity of the entire development process in software practices. Research in Human-Computer Interaction has proposed methods and tools to support usability evaluation. However, when performing an evaluation study, novice evaluators still have difficulty identifying usability problems and understanding their causes: they would need easier-to-use and possibly automated tools. This article describes four visualization techniques whose aim is to support the work of evaluators when performing usability tests to evaluate websites. Specifically, they help detect "usability smells", i.e., hints on web pages that might present usability problems, by visualizing the paths followed by the test participants when navigating a website to perform a test task. A user study with 15 participants compared the four techniques and revealed that the proposed visualizations have the potential to be valuable tools for novice usability evaluators. These first results should push researchers toward the development of further tools that are capable of supporting the detection of other types of UX smells in the evaluation of computer systems and that can be translated into common industry practices.
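To make the notion of a "usability smell" over navigation paths concrete, here is a minimal sketch under assumed data: a participant's path through a website is a list of page identifiers, and repeated revisits to the same page within one task may hint at disorientation. This is an invented heuristic for illustration, not one of the article's four visualization techniques.

```python
from collections import Counter

def smell_candidates(path, threshold=2):
    """Return pages visited more than `threshold` times in one task,
    as candidate locations of a usability smell."""
    visits = Counter(path)
    return sorted(page for page, n in visits.items() if n > threshold)

# One participant's navigation path for a single test task.
path = ["home", "catalog", "item/42", "catalog", "item/7", "catalog",
        "home", "catalog", "checkout"]

candidates = smell_candidates(path)  # "catalog" is visited 4 times
```

A visualization tool would render such paths graphically so that an evaluator can spot the back-and-forth pattern at a glance instead of computing counts.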
HeyTAP: Bridging the Gaps Between Users' Needs and Technology in IF-THEN Rules via Conversation
In the Internet of Things era, users are willing to personalize the joint behavior of their connected entities, i.e., smart devices and online services, by means of IF-THEN rules. Unfortunately, how to make such personalization effective and appreciated is still largely unknown. On the one hand, contemporary platforms for composing IF-THEN rules adopt representation models that strongly depend on the exploited technologies, thus making end-user personalization a complex task. On the other hand, the usage of technology-independent rules envisioned by recent studies opens up new questions, and the identification of available connected entities able to execute abstract users' needs becomes crucial. To this end, we present HeyTAP, a conversational and semantic-powered trigger-action programming platform able to map abstract users' needs to executable IF-THEN rules. By interacting with a conversational agent, the user communicates her personalization intentions and preferences. The user's inputs, along with contextual and semantic information related to the available connected entities, are then used to recommend a set of IF-THEN rules that satisfies the user's needs. An exploratory study with 8 end users preliminarily confirms the effectiveness and the appreciation of the approach, and shows that HeyTAP can successfully guide users from their needs to specific rules.
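The mapping step described above, from a technology-independent rule to an executable one, can be sketched as a capability match against the entities actually available. The entity names and capability tags below are invented for illustration and are not HeyTAP's data model.

```python
# Connected entities available to this user, tagged with abstract capabilities.
entities = [
    {"name": "nest_thermostat", "capabilities": {"heating"}},
    {"name": "phone_gps", "capabilities": {"presence"}},
    {"name": "hue_lamp", "capabilities": {"lighting"}},
]

def instantiate(abstract_rule, entities):
    """Map an abstract IF-THEN need (expressed as capabilities) to an
    executable rule over concrete entities, or None if no entity fits."""
    def find(capability):
        for entity in entities:
            if capability in entity["capabilities"]:
                return entity["name"]
        return None

    trigger = find(abstract_rule["if"])
    action = find(abstract_rule["then"])
    if trigger is None or action is None:
        return None  # the user's need cannot be executed as stated
    return {"if": trigger, "then": action}

# "IF someone is at home THEN warm the house", technology-independent.
executable = instantiate({"if": "presence", "then": "heating"}, entities)
```

In the actual platform this matching is driven by semantic information and conversation rather than a flat capability lookup, but the output is the same kind of executable IF-THEN rule.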