1,212 research outputs found

    An empirical comparative evaluation of gestUI to include gesture-based interaction in user interfaces

    Full text link
    Currently, there are tools that support the customisation of users' gestures. In general, including new gestures implies writing new lines of code that strongly depend on the target platform where the system runs. In order to avoid this platform dependency, gestUI was proposed as a model-driven method that permits (i) the definition of custom touch-based gestures and (ii) the inclusion of gesture-based interaction in existing user interfaces on desktop computing platforms. The objective of this work is to compare gestUI (an MDD method to deal with gestures) with a code-centric method for including gesture-based interaction in user interfaces. To perform the comparison, we analyse usability through effectiveness, efficiency and satisfaction, where satisfaction is measured via the subjects' perceived ease of use, perceived usefulness and intention to use. The experiment was carried out by 21 subjects, all computer science M.Sc. and Ph.D. students. We used a crossover design in which each subject applied both methods: subjects performed tasks related to defining custom gestures and to modifying the source code of the user interface to include gesture-based interaction. The data were collected using questionnaires and analysed using non-parametric statistical tests. The results show that gestUI is more efficient and effective, and that it is perceived as easier to use than the code-centric method. According to these results, gestUI is a promising method to define custom gestures and to include gesture-based interaction in existing user interfaces of desktop-computing software systems. (C) 2018 Elsevier B.V. All rights reserved.
    This work has been supported by the Department of Computer Science of the Universidad de Cuenca and SENESCYT of Ecuador, and received financial support from the Generalitat Valenciana under "Project IDEO (PROMETEOII/2014/039)" and the Spanish Ministry of Science and Innovation through the "DataMe Project (TIN2016-80811-P)".
    Parra-González, L.O.; España Cubillo, S.; Panach Navarrete, J.I.; Pastor López, O. (2019). An empirical comparative evaluation of gestUI to include gesture-based interaction in user interfaces. Science of Computer Programming, 172:232-263. https://doi.org/10.1016/j.scico.2018.12.001
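
    The platform dependency that gestUI is designed to avoid is easiest to see in the code-centric baseline, where each custom gesture is hand-coded against one toolkit's event API. The sketch below is a hypothetical illustration in Python/tkinter, not the toolkit or gesture set used in the experiment: a minimal swipe recogniser that would have to be rewritten for every other platform.

```python
import tkinter as tk

class SwipeDetector:
    """Code-centric gesture handling: the recogniser is welded to one
    toolkit's event names, which is exactly the platform dependency a
    model-driven method abstracts away. All names here are invented."""

    def __init__(self, widget, on_swipe):
        self.points = []
        self.on_swipe = on_swipe
        widget.bind("<ButtonPress-1>", self._start)
        widget.bind("<B1-Motion>", self._move)
        widget.bind("<ButtonRelease-1>", self._end)

    def _start(self, event):
        self.points = [(event.x, event.y)]

    def _move(self, event):
        self.points.append((event.x, event.y))

    def _end(self, event):
        if len(self.points) < 2:
            return
        dx = self.points[-1][0] - self.points[0][0]
        dy = self.points[-1][1] - self.points[0][1]
        # Crude heuristic: mostly-horizontal movement over 80 px is a swipe.
        if abs(dx) > 80 and abs(dx) > 2 * abs(dy):
            self.on_swipe("right" if dx > 0 else "left")

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300, bg="white")
canvas.pack()
SwipeDetector(canvas, lambda direction: print(f"swipe-{direction}"))
root.mainloop()
```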

    PhyDSLK: a model-driven framework for generating exergames

    Get PDF
    In recent years, we have witnessed a rapid increase of research on exergames, i.e., computer games that require users to move during gameplay as a form of physical activity and rehabilitation. Properly balancing the need to develop an effective exercise activity with the requirements for a smooth interaction with the software system and an engaging game experience is a challenge. Model-driven software engineering enables the fast prototyping of multiple system variants, which can be very useful for exergame development. In this paper, we propose a framework, PhyDSLK, which eases the development of personalized and engaging Kinect-based exergames for rehabilitation purposes, providing high-level tools that abstract the technical details of using the Kinect sensor and allow developers to focus on the game design and user experience. The system relies on model-driven software engineering technologies and consists of two main components: (i) an authoring environment relying on a domain-specific language to define the exergame model encapsulating the gameplay that the exergame designer has envisioned, and (ii) a code generator that transforms the exergame model into executable code. To validate our approach, we performed a preliminary empirical evaluation addressing the development effort and usability of the PhyDSLK framework. The results are promising and provide evidence that people with no experience in game development are able to create exergames of different complexity levels in one hour, after less than two hours of training on PhyDSLK. They also consider PhyDSLK usable regardless of the exergame complexity.
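
    The two-component architecture (an authoring environment that produces a model, and a generator that turns the model into executable code) can be illustrated with a deliberately tiny model-to-code pipeline. Everything below is invented for illustration; the model schema, the `exergame_runtime` module and the emitted API do not come from PhyDSLK:

```python
# Toy model-driven pipeline: an exergame "model" (in PhyDSLK this would be
# authored via the DSL) is transformed into code by a generator.
exergame_model = {
    "name": "ReachAndTouch",
    "targets": [
        {"shape": "circle", "x": 0.2, "y": 0.8, "points": 10},
        {"shape": "circle", "x": 0.8, "y": 0.8, "points": 10},
    ],
    "session_minutes": 5,
}

def generate_code(model):
    """Emit a game script from the exergame model (output API is fictional)."""
    lines = [
        f"# Generated exergame: {model['name']}",
        "from exergame_runtime import Game, Target  # hypothetical runtime",
        f"game = Game(duration_min={model['session_minutes']})",
    ]
    for t in model["targets"]:
        lines.append(
            f"game.add(Target(shape={t['shape']!r}, x={t['x']}, y={t['y']}, "
            f"points={t['points']}))"
        )
    lines.append("game.run()")
    return "\n".join(lines)

print(generate_code(exergame_model))
```

    The payoff of the approach is visible even at this scale: changing the model regenerates a consistent game, so multiple exergame variants can be prototyped without touching the generated code by hand.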

    Image Retrieval within Augmented Reality

    Get PDF
    The present work investigates the potential of augmented reality for improving the image retrieval process. Design and usability challenges were identified for both fields of research in order to formulate design goals for the development of concepts. A taxonomy for image retrieval within augmented reality was elaborated based on the research work and used to structure related work and basic ideas for interaction. Based on the taxonomy, application scenarios were formulated as further requirements for concepts. Using the basic interaction ideas and the requirements, two comprehensive concepts for image retrieval within augmented reality were elaborated. One of the concepts was implemented on a Microsoft HoloLens and evaluated in a user study. The study showed that the concept was received generally positively by users and provided insight into differing spatial behavior and search strategies when practicing image retrieval in augmented reality.

    A heuristic-based approach to code-smell detection

    Get PDF
    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one that is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error-prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. The two often occur together: data classes lack functionality that has been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, that automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open-source systems, comparing the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
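
    The flavour of a metrics-based detection strategy of the kind Marinescu describes can be sketched in a few lines. The metrics are standard ones (WMC, ATFD, TCC); the thresholds shown are commonly cited values, and the accessor-ratio rule for data classes is a simplification for illustration, not the paper's actual heuristics:

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    wmc: int        # weighted methods per class (summed complexity)
    atfd: int       # accesses to foreign (other classes') data
    tcc: float      # tight class cohesion, in [0, 1]
    accessors: int  # getters/setters
    methods: int    # total public methods

def is_god_class(m, wmc_high=47, atfd_few=5, tcc_low=1 / 3):
    # Marinescu-style strategy: complex, uses foreign data, low cohesion.
    return m.wmc >= wmc_high and m.atfd > atfd_few and m.tcc < tcc_low

def is_data_class(m, accessor_ratio=0.7, wmc_low=10):
    # Simplified heuristic: mostly getters/setters, little behaviour.
    return (m.methods > 0
            and m.accessors / m.methods >= accessor_ratio
            and m.wmc < wmc_low)

# Invented example metrics for two suspect classes.
suspects = [
    ClassMetrics("OrderManager", wmc=63, atfd=12, tcc=0.15, accessors=4, methods=30),
    ClassMetrics("OrderRecord", wmc=6, atfd=0, tcc=0.9, accessors=9, methods=11),
]
for m in suspects:
    labels = [name for name, hit in
              [("god class", is_god_class(m)), ("data class", is_data_class(m))] if hit]
    print(m.name, "->", ", ".join(labels) or "no smell detected")
```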

    Child programming: an adequate domain specific language for programming specific robots

    Get PDF
    Dissertation submitted for the degree of Master in Computer Engineering (Engenharia Informática).
    Due to the limited availability of dedicated robot-programming solutions for children (as well as of scientific studies), this work presents the design and implementation of a visual domain-specific language (DSL), using the Model-Driven Development (MDD) approach, for programming robotic and automaton systems, with the goal of increasing productivity and simplifying the software development process. The target audience for this DSL is mostly children aged 8 and above. Our work followed the typical Software Language Engineering life cycle, starting with an elaborate study of the user profile, based on work in the cognitive sciences, and a domain analysis. Several visual design paradigms were considered during the design phase of our DSL, and we focused our studies on the Behavior Trees paradigm, which is used intensively in the gaming industry. Intuitiveness, simplicity and a small learning curve were the three main concerns considered during the design and development phases. To help validate the DSL and the proposed approach, we used as target domain a concrete robotic product for children built with the open-source Arduino platform. The last part of this work was dedicated to studying the adequacy of the language design choices for target users of different ages and different cognitive-development stages, compared to other solutions (including commercial technologies). We also studied the benefits of the chosen paradigm for domain experts proficient in robot programming in other paradigms, to determine the possibility of generalizing the solution to different user profiles.
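
    Behavior Trees, the paradigm the DSL builds on, compose a robot's control logic from a small set of node types that are "ticked" top-down. A minimal interpreter is sketched below; the node set is the paradigm's standard core, but the robot actions and sensor are invented examples (the actual DSL is visual and Arduino-targeted, not this Python sketch):

```python
# Minimal behaviour-tree interpreter: Sequence and Selector composites
# over leaf Actions, ticked once per control cycle.
SUCCESS, FAILURE = "success", "failure"

class Action:
    """Leaf node: runs a callable and reports success or failure."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

class Sequence:
    """Succeeds only if every child succeeds, in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Succeeds as soon as one child succeeds; tries children in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# Toy "avoid obstacle, otherwise drive forward" behaviour for a small robot.
tree = Selector(
    Sequence(Action("obstacle?", lambda: False),   # pretend sensor reading
             Action("turn_left", lambda: True)),
    Action("forward", lambda: True),
)
print(tree.tick())  # -> "success" (no obstacle, so the robot drives forward)
```

    The appeal for children is that the tree reads as a picture of priorities, and each box can be dragged into place visually; the interpreter semantics stay this small.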

    Not All Gestures Are Created Equal: Gesture and Visual Feedback in Interaction Spaces.

    Full text link
    As multi-touch mobile computing devices and open-air gesture sensing technology become increasingly commoditized and affordable, they are also becoming more widely adopted. It has become necessary to create interaction designs specifically for gesture-based interfaces to meet the growing needs of users. However, a deeper understanding of the interplay between gesture and visual and sonic output is needed to make meaningful advances in design. This thesis addresses this crucial step by investigating the interrelation between gesture-based input and visual representation and feedback in gesture-driven creative computing. It underscores that not all gestures are created equal: multiple factors affect their performance. For example, a drag gesture in a visual programming scenario performs differently than in a target acquisition task. The work presented here (i) examines the role of visual representation and mapping in gesture input, (ii) quantifies user performance differences in gesture input to examine the effect of multiple factors on gesture interactions, and (iii) develops tools and platforms for exploring visual representations of gestures. A range of gesture spaces and scenarios was assessed, from continuous sound control with open-air gestures to mobile visual programming with discrete gesture-driven commands. The findings reveal a rich space of complex interrelations between gesture input and visual feedback and representations. The contributions also include the development of an augmented musical keyboard with 3-D continuous gesture input and projected visualization, as well as a touch-driven visual programming environment for interactively constructing dynamic interfaces. These designs were evaluated in a series of user studies in which gesture-to-sound mapping was found to have a significant effect on user performance, along with other factors such as the selection of visual representation and device size. A number of counter-intuitive findings point to potentially complex interactions between factors such as device size, task and scenario, exposing the need for further research; for example, the size of the device was found to have contradictory effects in two different scenarios. Furthermore, this work presents a multi-touch gestural environment to support the prototyping of gesture interactions.
    Ph.D., Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/113456/1/yangqi_1.pd
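
    A gesture-to-sound mapping of the kind varied in these studies is, at its simplest, a transfer function from a sensed position to a synthesis parameter. The sketch below is purely illustrative; the ranges and the exponential pitch mapping are assumptions, not the thesis's instruments:

```python
def hand_height_to_pitch(y, y_min=0.0, y_max=1.0, f_low=110.0, f_high=880.0):
    """Map a normalised open-air hand height to a frequency in Hz.

    Uses an exponential mapping so equal movements give equal pitch
    intervals; the choice of mapping function is exactly the kind of
    design factor the studies found to affect user performance.
    """
    t = min(max((y - y_min) / (y_max - y_min), 0.0), 1.0)  # clamp to [0, 1]
    return f_low * (f_high / f_low) ** t

print(round(hand_height_to_pitch(0.5), 1))  # -> 311.1 Hz, midway in pitch space
```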

    Source Code Interaction on Touchscreens

    Get PDF
    Direct interaction with touchscreens has become a primary way of using a device. This work seeks to devise interaction methods for editing textual source code on touch-enabled devices. With the advent of the “Post-PC Era”, touch-centric interaction has received considerable attention in both research and development. However, various limitations have impeded the widespread adoption of programming environments on modern platforms. Previous attempts have mainly succeeded by simplifying or constraining conventional programming, but have supported source code written in mainstream programming languages only insufficiently. This work includes the design, development, and evaluation of techniques for editing, selecting, and creating source code on touchscreens. The results contribute to text editing and entry methods by taking the syntax and structure of programming languages into account while exploiting the advantages of gesture-driven control. Furthermore, this work presents the design and software architecture of a mobile development environment incorporating touch-enabled modules for typical software development tasks.
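
    One concrete way to take syntax and structure into account when selecting on a touchscreen is to snap a tap to the smallest enclosing syntactic unit rather than to a character position. The sketch below uses Python's own ast module as a stand-in; the thesis addresses mainstream languages in general, so this single-language recipe is only an illustration:

```python
import ast

def smallest_enclosing_span(source, line, col):
    """Span (start, end) of the smallest AST node covering (line, col).

    Syntax-aware selection for touch editing: a tap selects a whole
    syntactic unit instead of requiring character-precise dragging.
    Needs Python 3.8+ for end_lineno/end_col_offset.
    """
    # Absolute offset of each line start, so spans can be compared by size.
    starts = [0]
    for ln in source.splitlines(keepends=True):
        starts.append(starts[-1] + len(ln))

    def offset(l, c):
        return starts[l - 1] + c

    pos = offset(line, col)
    best = None
    for node in ast.walk(ast.parse(source)):
        if getattr(node, "lineno", None) is None:
            continue  # skip nodes without positions (Module, operators, ...)
        s = offset(node.lineno, node.col_offset)
        e = offset(node.end_lineno, node.end_col_offset)
        if s <= pos <= e and (best is None or e - s < best[1] - best[0]):
            best = (s, e)
    return best

code = "total = price * (1 + vat_rate)\n"
s, e = smallest_enclosing_span(code, 1, 19)  # a tap on the inner "+"
print(code[s:e])                             # -> "1 + vat_rate"
```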

    An investigation into alternative human-computer interaction in relation to ergonomics for gesture interface design

    Get PDF
    Recent, innovative developments in the field of gesture interfaces as input techniques have the potential to provide a basic, lower-cost, point-and-click function for graphical user interfaces (GUIs). Since these gesture interfaces are not yet widely used (indeed, no tilt-based gesture interface is currently on the market), there is neither an international standard for the testing procedure nor a guideline for their ergonomic design and development. Hence, the research area demands more design case studies on a practical basis. The purpose of the research is to investigate the design factors of gesture interfaces for the point-and-click task in the desktop computer environment. The key function of gesture interfaces is to transfer a specific body movement, based in particular on the arm movement, into cursor movement on the two-dimensional graphical user interface (2D GUI) on a real-time basis. The initial literature review identified limitations related to cursor movement behaviour with gesture interfaces. Since the cursor movement is the machine output of the gesture interfaces that need to be designed, a new accuracy measure based on the calculation of the cursor movement distance, together with an associated model, was proposed in order to validate the continuous cursor movement. Furthermore, a design guideline with detailed design requirements and specifications for tilt-based gesture interfaces was suggested. In order to collect the human performance data and the cursor movement distance, a graphical measurement platform was designed and validated with an ordinary mouse. Since there are typically two types of gesture interface, i.e. the sweep-based and the tilt-based, and no commercial tilt-based gesture interface has yet been developed, a commercial sweep-based gesture interface, namely the P5 Glove, was studied, and the causes of discrete cursor movement and its effects on usability were investigated. Following the proposed design guideline, two versions of the tilt-based gesture interface were designed and validated through an iterative design process. Most of the phenomena and results from the trials undertaken, which are inter-related, were analyzed and discussed. The research has contributed new knowledge through the design improvement of tilt-based gesture interfaces and the improvement of discrete cursor movement by eliminating manual error compensation. This research reveals that there is a relation between cursor movement behaviour and the adjusted R² for the prediction of movement time across models expanded from Fitts' Law. In such situations, the actual working area and the joint ranges are large and appreciably different from those that had been planned. Further studies are suggested. The research was associated with the University Alliance Scheme, technically supported by Freescale Semiconductor Co., U.S.
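
    The movement-time models the study compares are expansions of Fitts' Law, which in its common Shannon form reads MT = a + b·log₂(D/W + 1) and is judged by the adjusted R² of the fit. A minimal fit in plain Python; the trial numbers below are made up for illustration and are not data from the study:

```python
import math

def fit_fitts(distances, widths, times):
    """Fit MT = a + b * log2(D/W + 1) by ordinary least squares and
    report the adjusted R^2 used to compare movement-time models."""
    ids = [math.log2(d / w + 1) for d, w in zip(distances, widths)]
    n = len(times)
    mx, my = sum(ids) / n, sum(times) / n
    sxx = sum((x - mx) ** 2 for x in ids)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ids, times))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(ids, times))
    ss_tot = sum((y - my) ** 2 for y in times)
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 2)  # one predictor (the index of difficulty)
    return a, b, adj_r2

# Illustrative point-and-click trials: distance (px), target width (px), time (s).
D = [128, 256, 512, 128, 256, 512]
W = [16, 16, 16, 64, 64, 64]
MT = [0.62, 0.78, 0.93, 0.41, 0.55, 0.69]
a, b, adj = fit_fitts(D, W, MT)
print(f"MT = {a:.3f} + {b:.3f}*ID   (adjusted R² = {adj:.3f})")
```

    A low adjusted R² under one model variant, as the thesis notes for discrete cursor movement, is a signal that the interface's output does not yet behave like a continuous pointing device.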