
    Semi-Automated SVG Programming via Direct Manipulation

    Direct manipulation interfaces provide intuitive and interactive features to a broad range of users, but they often exhibit two limitations: the built-in features cannot possibly cover all use cases, and the internal representation of the content is not readily exposed. We believe that if direct manipulation interfaces were to (a) use general-purpose programs as the representation format, and (b) expose those programs to the user, then experts could customize these systems in powerful new ways and non-experts could enjoy some of the benefits of programmable systems. In recent work, we presented a prototype SVG editor called Sketch-n-Sketch that offered a step towards this vision. In that system, the user wrote a program in a general-purpose lambda calculus to generate a graphic design and could then directly manipulate the output to indirectly change design parameters (i.e., constant literals) in the program in real time during the manipulation. Unfortunately, the burden of programming the desired relationships rested entirely on the user. In this paper, we design and implement new features for Sketch-n-Sketch that assist in the programming process itself. Like typical direct manipulation systems, our extended Sketch-n-Sketch now provides GUI-based tools for drawing shapes, relating shapes to each other, and grouping shapes together. Unlike typical systems, however, each tool carries out the user's intention by transforming their general-purpose program. This novel, semi-automated programming workflow allows the user to rapidly create high-level, reusable abstractions in the program while at the same time retaining direct manipulation capabilities. In future work, our approach may be extended with more graphic design features or realized for other application domains. (In: 29th ACM User Interface Software and Technology Symposium, UIST 2016.)
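
    A minimal sketch of the workflow described above, assuming a Python analogue rather than Sketch-n-Sketch's actual lambda-calculus language (the function and parameter names are invented for illustration): the design is computed from plain constant literals, so a direct-manipulation editor can carry out a drag by rewriting one of those literals in the source and re-running the program.

        # Hypothetical analogue of a Sketch-n-Sketch-style program: the design is
        # generated from constant literals that an editor could rewrite when the
        # rendered output is dragged.
        def three_boxes(x0=50, y0=40, sep=110, w=80, h=60, fill="salmon"):
            """Generate an SVG string from a handful of design parameters."""
            rects = [
                f'<rect x="{x0 + i * sep}" y="{y0}" width="{w}" height="{h}" fill="{fill}"/>'
                for i in range(3)
            ]
            return ('<svg xmlns="http://www.w3.org/2000/svg" width="400" height="160">\n'
                    + "\n".join(rects) + "\n</svg>")

        if __name__ == "__main__":
            # Dragging the second box to the right would amount to the editor
            # rewriting the literal sep=110 to a larger value and re-running.
            print(three_boxes())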

    Software support for multitouch interaction: the end-user programming perspective

    Empowering users with tools for developing multitouch interaction is a promising step toward the realization of ubiquitous computing. This survey frames the state of the art of existing multitouch software development tools from an end-user programming perspective. This research has been partially funded by the EU FP7 project meSch (grant agreement 600851) and the CREAx grant (Spanish Ministry of Economy and Competitiveness, TIN2014-56534-R).

    Model-based engineering of widgets, user applications and servers compliant with ARINC 661 specification

    The purpose of the ARINC 661 specification [1] is to define the interfaces to a Cockpit Display System (CDS) used in all types of aircraft installations. ARINC 661 provides precise information about the communication protocol between applications (called User Applications) and user interface components (called widgets), as well as precise information about the widgets themselves. However, ARINC 661 gives no information about the behaviour of these widgets or about the behaviour of an application made up of a set of such widgets. This paper presents the results of applying a formal description technique to the various elements of the ARINC 661 specification within an industrial project. This formal description technique, called Interactive Cooperative Objects (ICO), defines in a precise and unambiguous way all the elements of the ARINC 661 specification. The application of the technique is illustrated on an interactive application called MPIA (Multi Purpose Interactive Application). Within this application, we present how ICOs are used for describing interactive widgets, User Applications and User Interface servers (in charge of interaction techniques). The emphasis is placed on the model-based management of the feel of the applications, allowing rapid prototyping of the external presentation and the interaction techniques. Lastly, we present the CASE (Computer Aided Software Engineering) tool supporting the formal description technique and its new extensions for dealing with large-scale applications such as those targeted by the ARINC 661 specification.
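
    As a rough illustration of the separation of concerns described in this abstract, the sketch below shows a User Application that never renders widgets itself and only exchanges parameter updates and events with the server owning them. All class, parameter and event names here are invented for the sketch; they are not taken from ARINC 661 or from the ICO models.

        from dataclasses import dataclass

        @dataclass
        class SetParameter:    # User Application -> UI server: change a widget parameter
            widget_id: int
            parameter: str
            value: object

        @dataclass
        class WidgetEvent:     # UI server -> User Application: report user interaction
            widget_id: int
            event: str

        class UserApplication:
            def __init__(self, send_to_server):
                self.send = send_to_server   # callback provided by the UI server

            def on_event(self, evt: WidgetEvent):
                # Application logic reacts to widget events with parameter updates;
                # rendering and interaction techniques stay on the server side.
                if evt.event == "selected":
                    self.send(SetParameter(evt.widget_id, "label", "acknowledged"))

        # Example wiring with a stand-in for the server-side send function.
        ua = UserApplication(send_to_server=print)
        ua.on_event(WidgetEvent(widget_id=42, event="selected"))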

    Providing end-user facilities to simplify ontology-driven web application authoring

    This is the author's version of a work accepted for publication in Interacting with Computers; the definitive version was subsequently published in Interacting with Computers, 17(4), 2007, DOI: 10.1016/j.intcom.2007.01.006.

    Generally speaking, emerging web-based technologies are mostly intended for professional developers. They pay little attention to users who have no programming skills but need to customize software applications. At some point, such needs force end-users to act as designers in various aspects of software authoring and development. Every day, more computing-related professionals attempt to create and modify existing applications in order to customize web-based artifacts that help them carry out their daily tasks. In general they are domain experts rather than skilled software designers, and new authoring mechanisms are needed so that they can accomplish their tasks properly. The work we present is an effort to supply end-users with easy mechanisms for authoring web-based applications. To complement this effort, we present a user study showing that a trade-off between expressiveness and ease of use can be achieved in order to provide end-users with suitable authoring facilities. The work reported in this paper is partially supported by the Spanish Ministry of Science and Technology (MCyT), projects TIN2005-06885 and TSI2005-08225-C07-06.

    Collaborative explicit plasticity framework: a conceptual scheme for the generation of plastic and group-aware user interfaces

    Advances in mobile computing have changed the way we do our daily work, even enabling us to perform collaborative activities. However, current groupware approaches do not offer an integrated and efficient solution that jointly tackles the flexibility and heterogeneity inherent to mobility as well as the awareness aspects intrinsic to collaborative environments. Issues related to the diversity of contexts of use are collected under the term plasticity. A great number of tools have emerged to address some of these issues, although always focused on individual scenarios. We are working on reusing and specializing some existing plasticity tools for groupware design. The aim is to offer the benefits of plasticity and awareness jointly, in order to reach real collaboration and a deeper understanding of multi-environment groupware scenarios. In particular, this paper presents a conceptual framework intended as a reference for the generation of plastic User Interfaces for collaborative environments in a systematic and comprehensive way. Starting from a previous conceptual framework for individual environments, inspired by the model-based approach, we introduce specific components and considerations related to groupware.

    m-WOnDA: The “Write Once ‘n’ Deliver Anywhere” Model for Mobile Users


    VB2: an architecture for interaction in synthetic worlds

    This paper describes the VB2 architecture for the construction of three-dimensional interactive applications. The system's state and behavior are uniformly represented as a network of interrelated objects. Dynamic components are modeled by active variables, while multi-way relations are modeled by hierarchical constraints. Daemons are used to sequence between system states in reaction to changes in variable values. The constraint network is efficiently maintained by an incremental constraint solver based on an enhancement of SkyBlue. Multiple devices are used to interact with the synthetic world through various interaction paradigms, including immersive environments with visual and audio feedback. Interaction techniques range from direct manipulation to gestural input and three-dimensional virtual tools. Adaptive pattern recognition is used to increase input device expressiveness by enhancing sensor data with classification information. Virtual tools, which are encapsulations of visual appearance and behavior, present a selective view of manipulated models' information and offer an interaction metaphor to control it. Since virtual tools are first-class objects, they can be assembled into more complex tools, much in the same way that simple tools are built on top of a modeling hierarchy. The architecture is currently being used to build a virtual reality animation system.
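
    The active-variable and daemon mechanism mentioned above can be pictured with a small sketch; this assumes nothing about VB2's actual implementation or its SkyBlue-based constraint solver, and the names are invented for illustration.

        class ActiveVariable:
            """A value that notifies registered daemons whenever it changes."""
            def __init__(self, value):
                self._value = value
                self._daemons = []

            def watch(self, daemon):
                self._daemons.append(daemon)

            @property
            def value(self):
                return self._value

            @value.setter
            def value(self, new):
                if new != self._value:
                    old, self._value = self._value, new
                    for daemon in self._daemons:
                        daemon(old, new)   # sequence the system into a new state

        # Example: a daemon reacting when a tracked sensor value crosses a threshold.
        grip = ActiveVariable(0.0)
        grip.watch(lambda old, new: print("start grab gesture") if new > 0.8 else None)
        grip.value = 0.9   # prints "start grab gesture"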

    A set of languages for context-aware adaptation

    The creation of service front ends able to adapt to the context of use involves a wide spectrum of aspects to be considered by developers and designers. An application enabled for context-aware adaptation must simultaneously manage very different functionalities, such as sensing the context, identifying the situations that arise, determining the appropriate reactions, and executing the adaptation effects. In this paper we describe an adaptation architecture for tackling this complexity, and we present a set of languages that address the definition of the various aspects of an adaptive application.
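
    The functionalities listed above (context sensing, situation identification, reaction selection and adaptation effects) can be pictured as a small pipeline; the sketch below is hypothetical, with invented situation and effect names, and is not drawn from the authors' architecture or languages.

        def sense_context():
            # A real system would query sensors, the platform and the user profile.
            return {"light": "low", "device": "phone", "moving": True}

        SITUATIONS = {
            "walking_in_the_dark": lambda c: c["moving"] and c["light"] == "low",
        }

        REACTIONS = {
            "walking_in_the_dark": ["increase_font_size", "switch_to_audio_feedback"],
        }

        def adapt(ui_state, effects):
            # Execute each adaptation effect; here we only record it on the UI state.
            ui_state["applied"].extend(effects)
            return ui_state

        context = sense_context()
        active = [name for name, holds in SITUATIONS.items() if holds(context)]
        ui = {"applied": []}
        for situation in active:
            ui = adapt(ui, REACTIONS[situation])
        print(ui)   # {'applied': ['increase_font_size', 'switch_to_audio_feedback']}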