
    Empowering End Users in Debugging Trigger-Action Rules

    End users can program trigger-action rules to personalize the joint behavior of their smart devices and online services. Trigger-action programming is, however, a complex task for non-programmers, and errors made during the composition of rules may lead to unpredictable behaviors and security issues, e.g., a lamp that is continuously flashing or a door that is unexpectedly unlocked. In this paper, we introduce EUDebug, a system that enables end users to debug trigger-action rules. With EUDebug, users compose rules in a web-based application similar to IFTTT. EUDebug highlights possible problems that the set of all defined rules may generate and allows their step-by-step simulation. Under the hood, a hybrid Semantic Colored Petri Net (SCPN) models, checks, and simulates trigger-action rules and their interactions. An exploratory study with 15 end users shows that EUDebug helps users identify and understand problems in trigger-action rules that are not easily discoverable in existing platforms.
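
    The kind of problem EUDebug flags can be made concrete with a small sketch. The snippet below is a hypothetical illustration, not EUDebug's SCPN-based checker, and the rule names and events are invented: trigger-action rules are represented as plain Python objects, and a naive reachability check finds chains in which one rule's action re-triggers a rule that already fired, e.g., the continuously flashing lamp mentioned above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """A simplified trigger-action rule: when `trigger` occurs, emit `action`."""
    name: str
    trigger: str   # event the rule reacts to
    action: str    # event the rule's action produces

def find_loops(rules):
    """Return rule chains in which an action eventually re-triggers a rule already fired.

    A naive reachability check over "action feeds trigger" edges; the paper uses a
    Semantic Colored Petri Net for this analysis instead.
    """
    loops = []

    def walk(chain, produced_event):
        for rule in rules:
            if rule.trigger == produced_event:
                if rule in chain:          # the chain came back to a rule it already fired
                    loops.append(chain + [rule])
                else:
                    walk(chain + [rule], rule.action)

    for rule in rules:
        walk([rule], rule.action)
    return loops

rules = [
    Rule("R1", trigger="motion_detected", action="lamp_on"),
    Rule("R2", trigger="lamp_on", action="lamp_off"),
    Rule("R3", trigger="lamp_off", action="lamp_on"),   # R2 and R3 keep the lamp flashing
]
for chain in find_loops(rules):
    print(" -> ".join(r.name for r in chain))
```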

    My IoT Puzzle: Debugging IF-THEN Rules Through the Jigsaw Metaphor

    End users can nowadays define applications in the form of IF-THEN rules to personalize their IoT devices and online services. Along with the possibility to compose such applications, however, comes the need to debug them, e.g., to avoid unpredictable and dangerous behaviors. In this context, different questions are still unexplored: which visual languages are more appropriate for debugging IF-THEN rules? Which information do end users need to understand, identify, and correct errors? To answer these questions, we first conducted a literature analysis by reviewing previous works on end-user debugging, with the aim of extracting design guidelines. Then, we developed My IoT Puzzle, a tool to compose and debug IF-THEN rules based on the Jigsaw metaphor. My IoT Puzzle interactively assists users in the debugging process with different real-time feedback, and it allows the resolution of conflicts by providing textual and graphical explanations. An exploratory study with 6 participants preliminarily confirms the effectiveness of our approach, showing that the usage of the Jigsaw metaphor, along with real-time feedback and explanations, helps users understand and fix conflicts among IF-THEN rules.
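
    As a rough illustration of one kind of conflict such a tool can explain, the sketch below (hypothetical, not My IoT Puzzle's implementation; the rules and wording are invented) detects pairs of IF-THEN rules that share a trigger but apply opposing actions to the same device, and produces a textual explanation of the conflict.

```python
# Pairs of actions considered contradictory on the same device.
OPPOSITES = {("lock", "unlock"), ("unlock", "lock"), ("turn on", "turn off"), ("turn off", "turn on")}

def explain_conflicts(rules):
    """Yield a human-readable explanation for each pair of rules that share a trigger
    but apply opposing actions to the same device."""
    for i, a in enumerate(rules):
        for b in rules[i + 1:]:
            if (a["trigger"] == b["trigger"] and a["device"] == b["device"]
                    and (a["action"], b["action"]) in OPPOSITES):
                yield (f'{a["name"]} and {b["name"]} both run when "{a["trigger"]}", '
                       f'but {a["name"]} will {a["action"]} the {a["device"]} '
                       f'while {b["name"]} will {b["action"]} it.')

rules = [
    {"name": "Rule 1", "trigger": "I leave home", "device": "front door", "action": "lock"},
    {"name": "Rule 2", "trigger": "I leave home", "device": "front door", "action": "unlock"},
]
for explanation in explain_conflicts(rules):
    print(explanation)
```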

    A Debugging Approach for Trigger-Action Programming

    Nowadays, end users can customize their technological devices and web applications by means of trigger-action rules, defined through End-User Development (EUD) tools. However, debugging capabilities are an important missing feature in these tools, which limits their large-scale adoption. Problems in trigger-action rules, in fact, can lead to unpredictable behaviors and security issues, e.g., a door that is unexpectedly unlocked. In this paper, we present a novel debugging approach for trigger-action programming. The goal is to assist end users during the composition of trigger-action rules by: a) highlighting possible problems that the rules may generate, and b) allowing their step-by-step simulation. The approach, based on Semantic Web technologies and Petri Nets, has been implemented in an EUD tool and preliminarily evaluated in a user study with 6 participants. Results provide evidence that the tool is usable and that it helps users understand and identify problems in trigger-action rules.
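
    A minimal sketch of the step-by-step simulation idea, assuming a toy event-based rule representation rather than the Petri-Net-based model used in the paper (events and rule names are invented):

```python
def simulate(rules, start_event, max_steps=10):
    """Print, step by step, which rules fire and which events their actions produce.

    `rules` maps a trigger event to a list of (rule_name, produced_event) pairs.
    """
    pending = [start_event]
    for step in range(1, max_steps + 1):
        fired = [(name, out) for event in pending for name, out in rules.get(event, [])]
        if not fired:
            break
        print(f"Step {step}: " + ", ".join(f"{name} fires -> {out}" for name, out in fired))
        pending = [out for _, out in fired]

rules = {
    "sunset": [("R1", "lamp_on")],
    "lamp_on": [("R2", "blinds_closed")],
}
simulate(rules, "sunset")
# Step 1: R1 fires -> lamp_on
# Step 2: R2 fires -> blinds_closed
```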

    Personalizing IoT Ecosystems via Voice

    Intelligent Personal Assistants (IPAs), embedded in smart speakers, allow users to set up some trigger-action rules, via their mobile apps, to personalize the IoT ecosystem in which they are located. Vocal capabilities might be involved in such rules as triggers or actions, but the actual rule composition and execution flow is totally segregated into the mobile app. This position paper reflects on the challenges and opportunities brought by IPAs if they played a more prominent and integrated role in this personalization scenario.

    Towards Vocally-Composed Personalization Rules in the IoT

    This paper presents a study aimed at understanding whether and how end users would converse with a conversational assistant to personalize their domestic Internet-of-Things ecosystem. The underlying hypothesis is that users are willing to create personalization rules vocally and that conversational assistants could facilitate the composition process, given their knowledge of the IoT ecosystem. The preliminary study was conducted as a semi-structured interview with 7 non-programmers and provided some evidence in support of this hypothesis.

    Pika: Empowering Non-Programmers to Author Executable Governance Policies in Online Communities

    Internet users have formed a wide array of online communities with nuanced and diverse community goals and norms. However, most online platforms only offer a limited set of governance models in their software infrastructure and leave little room for customization. Consequently, technical proficiency becomes a prerequisite for online communities to build governance policies in code, excluding non-programmers from participation in designing community governance. In this paper, we present Pika, a system that empowers non-programmers to author a wide range of executable governance policies. At its core, Pika incorporates a declarative language that decomposes governance policies into modular components, thereby facilitating expressive policy authoring through a user-friendly, form-based web interface. Our user studies with 17 participants show that Pika can empower non-programmers to author governance policies approximately 2.5 times faster than programmers who author in code. We also provide insights about Pika's expressivity in supporting diverse policies that online communities want.
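
    To make the notion of decomposing a policy into modular components more concrete, the sketch below is a hypothetical illustration only; it does not reflect Pika's actual declarative language or data model. A policy is split into a scope, a condition, and an action, and a small interpreter evaluates them in order.

```python
post = {"author_role": "member", "flag_count": 4}

policy = {
    "name": "Hide heavily flagged posts",
    "scope": lambda item: item["author_role"] == "member",     # which items the policy covers
    "condition": lambda item: item["flag_count"] >= 3,         # when it should act
    "action": lambda item: print("hide the post and notify the moderators"),
}

def execute(policy, item):
    """Run one policy against one item: check its scope, then its condition, then act."""
    if policy["scope"](item) and policy["condition"](item):
        policy["action"](item)

execute(policy, post)   # prints: hide the post and notify the moderators
```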

    User-defined semantics for the design of IoT systems enabling smart interactive experiences

    Automation in computing systems has always been considered a valuable solution to unburden the user. Internet of Things (IoT) technology best suits automation in different domains, such as home automation, retail, industry, and transportation, to name but a few. While these domains are strongly characterized by implicit user interaction, automation has more recently also been adopted for the provision of interactive and immersive experiences that actively involve the users. IoT technology thus becomes the key for Smart Interactive Experiences (SIEs), i.e., immersive automated experiences created by orchestrating different devices to enable smart environments to fluidly react to the final users’ behavior. There are domains, e.g., cultural heritage, where these systems and the SIEs they enable can provide several benefits. However, experts in such domains, while intrigued by the opportunity to create SIEs, face tough challenges in their everyday work when they are required to automate and orchestrate IoT devices without the necessary coding skills. This paper presents a design approach that tries to overcome these difficulties through the adoption of ontologies for defining Event-Condition-Action rules. More specifically, the approach enables domain experts to identify and specify properties of IoT devices through a user-defined semantics that, being closer to the domain experts’ background, facilitates them in automating the IoT devices’ behavior. We also present a study comparing three different interaction paradigms conceived to support the specification of user-defined semantics through a “transparent” use of ontologies. Based on the results of this study, we work out some lessons learned on how the proposed paradigms help domain experts express their semantics, which in turn facilitates the creation of interactive applications enabling SIEs.
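
    As a rough illustration of the user-defined-semantics idea, the sketch below is hypothetical (an invented cultural-heritage example, not the paper's ontology-based implementation): a domain expert's own phrase is mapped onto a raw device attribute, and an Event-Condition-Action rule is then written against that phrase rather than against the device's low-level data.

```python
# The domain expert's own vocabulary, mapped onto a raw device attribute.
semantics = {
    "a visitor is near the statue": lambda state: state["proximity_cm"] < 150,
}

rule = {
    "event": "proximity_update",
    "condition": "a visitor is near the statue",
    "action": lambda: print("start the statue's audio narration"),
}

def on_event(event_name, device_state):
    """Fire the rule when its event occurs and its user-defined condition holds."""
    if rule["event"] == event_name and semantics[rule["condition"]](device_state):
        rule["action"]()

on_event("proximity_update", {"proximity_cm": 90})   # starts the narration
```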

    Defining Configurable Virtual Reality Templates for End Users

    This paper proposes a solution for supporting end users in configuring Virtual Reality environments by exploiting reusable templates created by experts. We identify the roles participating in the environment development and the means for delegating part of the behaviour definition to the end users, focusing in particular on enabling end users to define the environment behaviour. The solution exploits a taxonomy defining common virtual objects with high-level actions for specifying Event-Condition-Action rules readable as natural-language sentences; end users exploit such actions to define the environment behaviour. We report on a proof-of-concept implementation of the proposed approach, on its validation through two different case studies (a virtual shop and a museum), and on an evaluation of the approach with expert users.

    XRSpotlight: Example-based Programming of XR Interactions using a Rule-based Approach

    Research on enabling novice AR/VR developers has emphasized the need to lower the technical barriers to entry. This is often achieved through new authoring tools that provide simpler means to implement XR interactions through abstraction. However, novices are then bound by the ceiling of each tool and may not form the correct mental model of how interactions are implemented. We present XRSpotlight, a system that supports novices by curating a list of the XR interactions defined in a Unity scene and presenting them as rules in natural language. Our approach is based on a model abstraction that unifies existing XR toolkit implementations. Using our model, XRSpotlight can find incomplete specifications of interactions, suggest similar interactions, and copy-paste interactions from examples using different toolkits. We assess the validity of our model with professional VR developers and demonstrate that XRSpotlight helps novices understand how XR interactions are implemented in examples and apply this knowledge to their projects.
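
    The sketch below is a hypothetical illustration of the rule-based abstraction idea, not XRSpotlight's actual Unity-based model; the toolkit name and fields are used only for illustration. An interaction found in a scene is normalized into a toolkit-independent rule, described in natural language, and checked for completeness.

```python
interaction = {
    "toolkit": "XR Interaction Toolkit",   # source toolkit, named here only for illustration
    "object": "Lever",
    "event": "grabbed",
    "action": "rotate around its hinge",
}

def describe(interaction):
    """Summarize a normalized interaction as a natural-language rule."""
    return f'When the {interaction["object"]} is {interaction["event"]}, it will {interaction["action"]}.'

def is_complete(interaction):
    """Flag incomplete specifications, e.g., an event with no corresponding action."""
    return bool(interaction.get("event")) and bool(interaction.get("action"))

print(describe(interaction))   # When the Lever is grabbed, it will rotate around its hinge.
print("complete" if is_complete(interaction) else "incomplete specification")
```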