5 research outputs found

    An empirical comparative evaluation of gestUI to include gesture-based interaction in user interfaces

    Full text link
    [EN] Current tools support the customisation of users' gestures. In general, including a new gesture implies writing new lines of code that depend strongly on the target platform where the system runs. To avoid this platform dependency, gestUI was proposed: a model-driven method that permits (i) the definition of custom touch-based gestures, and (ii) the inclusion of gesture-based interaction in existing user interfaces on desktop computing platforms. The objective of this work is to compare gestUI (an MDD method for dealing with gestures) with a code-centric method of including gesture-based interaction in user interfaces. To perform the comparison, we analyse usability in terms of effectiveness, efficiency and satisfaction; satisfaction is measured through the subjects' perceived ease of use, perceived usefulness and intention to use. The experiment was carried out by 21 subjects, computer science M.Sc. and Ph.D. students, in a crossover design in which each subject applied both methods. Subjects performed tasks related to defining custom gestures and modifying the source code of the user interface to include gesture-based interaction. The data were collected using questionnaires and analysed using non-parametric statistical tests. The results show that gestUI is more efficient and effective, and that it is perceived as easier to use than the code-centric method. According to these results, gestUI is a promising method for defining custom gestures and including gesture-based interaction in the existing user interfaces of desktop-computing software systems. (C) 2018 Elsevier B.V. All rights reserved.
    This work has been supported by the Department of Computer Science of the Universidad de Cuenca and SENESCYT of Ecuador, and received financial support from the Generalitat Valenciana under "Project IDEO (PROMETEOII/2014/039)" and the Spanish Ministry of Science and Innovation through the "DataMe Project (TIN2016-80811-P)".
    Parra-González, LO.; España Cubillo, S.; Panach Navarrete, JI.; Pastor López, O. (2019). An empirical comparative evaluation of gestUI to include gesture-based interaction in user interfaces. Science of Computer Programming. 172:232-263. https://doi.org/10.1016/j.scico.2018.12.001
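The abstract contrasts a code-centric approach, in which each new gesture means new platform-specific event-handling code, with a model-driven one, in which a gesture is data in a catalogue. As a rough illustration of why the data-driven route reduces platform dependency (a minimal sketch with invented names, not gestUI's actual metamodel or tooling), a custom gesture can be stored as a point sequence and matched by a small template recognizer:

```python
import math

# Sketch only (invented names, not gestUI): in a model-driven approach a
# custom gesture is data in a catalogue, so adding a gesture is a data
# change rather than new platform-specific event-handling code.
GESTURE_CATALOGUE = {
    "swipe_right": [(0, 0), (4, 0)],
    "vee": [(0, 0), (2, -2), (4, 0)],
}

def path_length(pts):
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def resample(pts, n=16):
    """Resample a stroke to n points spaced evenly along its length."""
    pts = list(pts)
    interval = path_length(pts) / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue walking from the emitted point
            acc = 0.0
        else:
            acc += d
        i += 1
    return (out + [pts[-1]] * n)[:n]  # pad/trim against float drift

def normalise(pts):
    """Translate the centroid to the origin and scale to a unit box."""
    xs, ys = zip(*pts)
    w, h = (max(xs) - min(xs)) or 1, (max(ys) - min(ys)) or 1
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / w, (y - cy) / h) for x, y in pts]

def recognise(stroke, catalogue):
    """Return the catalogue gesture whose template is closest to the stroke."""
    probe = normalise(resample(stroke))
    def score(template):
        tpl = normalise(resample(template))
        return sum(math.dist(a, b) for a, b in zip(probe, tpl))
    return min(catalogue, key=lambda name: score(catalogue[name]))
```

Defining a new custom gesture is then a one-line data change, which is the property the paper's comparison against the code-centric method exploits.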

    Extending domain-specific modeling editors with multi-touch interactions

    Full text link
    Model-driven engineering (MDE) is a software engineering methodology that enables engineers to define conceptual models for a specific domain. Modeling is supported by language workbenches, which act as editors for creating and manipulating domain-specific models. However, the current state of practice in these modeling editors offers very limited user interaction, often restricted to drag-and-drop with mouse movements and keystrokes. Recently, a novel framework was proposed to explicitly specify the user interactions of modeling editors. In this thesis, we extend this framework to support multi-touch interactions when modeling. We propose an initial catalog of multi-touch gestures offering a variety of useful touch interactions. We demonstrate how our approach is applicable to generating modeling editors. Our approach yields more natural user interactions for performing typical modeling tasks.
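The idea of explicitly specifying editor interactions, rather than hard-coding mouse and keyboard handlers, can be loosely illustrated as a declarative table binding recognised multi-touch gestures to editing commands (a sketch with invented names, not the framework extended in the thesis):

```python
from dataclasses import dataclass, field

@dataclass
class CanvasState:
    zoom: float = 1.0
    selection: list = field(default_factory=list)

def zoom_canvas(state, factor):
    state.zoom *= factor

def delete_selection(state):
    state.selection.clear()

# Declarative gesture-to-command catalog: extending the editor with a new
# interaction is a table entry, not a new low-level event handler.
GESTURE_BINDINGS = {
    "pinch_out": lambda s: zoom_canvas(s, 1.25),
    "pinch_in": lambda s: zoom_canvas(s, 0.8),
    "two_finger_scrub": delete_selection,
}

def dispatch(state, gesture):
    """Route a recognised multi-touch gesture to its editing command."""
    handler = GESTURE_BINDINGS.get(gesture)
    if handler is None:
        raise KeyError(f"unbound gesture: {gesture}")
    handler(state)
```

Because the bindings are data, a generated modeling editor can swap or extend the gesture set without touching the dispatch logic.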

    Towards a mobile application to aid law enforcement in diagnosing and preventing mobile bully-victim behaviour in Eastern Free State High Schools of South Africa

    Get PDF
    Mobile bully-victim behaviour is a form of cyber aggression that is escalating worldwide. Bully-victims are people who bully others but are also victimised by their peers. The behaviour of bully-victims therefore swings between that of pure bullies and pure victims, which makes it difficult to identify and prevent. Prevention measures require the involvement of a number of stakeholders, including communities. However, there has been a lack of whole-community participation in the fight against cyberbullying, and the roles of stakeholders are often unclear. Law enforcement agents, in particular the police, are expected to play a key role in curbing all forms of bullying. This is a challenging task in South Africa, as these agents often lack the skills and the appropriate legislation to address cyber-related bullying in particular. The literature shows that law enforcement agents need to advance their technological skills and be equipped with digital interventions if they are to diagnose and prevent mobile bully-victim behaviour effectively. This is particularly important in South Africa, where the crime rate remains one of the highest in the world. The aim of this study was to develop a mobile application that can aid law enforcement in diagnosing and preventing mobile bully-victim behaviour in high schools. As part of the requirements for the application's development, the study identified the impediments to law enforcement's effectiveness in combating mobile bully-victim behaviour. An extensive literature review on the factors influencing mobile bullying and mobile bully-victim behaviour was conducted, and an integrative framework for understanding this behaviour and its prevention was developed. In doing so, the dominant behavioural theories were consulted, including social-ecological theory, social learning theory, social information processing theories, the theory of planned behaviour, general strain theory, and role theory.
The conceptual framework developed in this study extended and tailored the "Cyberbullying Continuum of Harm", enabling inclusive and moderated diagnosis of bullying categories and assessment of their severity. That is, instead of focusing on mobile bully-victims only, bullies, victims, and the uninvolved were also identified. In addition, physical moderation of the identification process by the police helped to minimise dishonest reporting. This framework informed the design, development and evaluation of a mobile application for law enforcement agents. The Design Science Research (DSR) methodology, within a pragmatic paradigm, and the literature guided the development of the mobile application, named the mobile bully-victim response system (M-BRS), and its evaluation for utility. The M-BRS features included functions enabling anonymous reporting and confidential assessment of mobile bully-victim effects in school classrooms. Findings from this study confirmed the utility of the M-BRS in identifying learners' involvement in mobile bully-victim behaviour through peer nomination and self-nomination. This study also showed that use of the M-BRS enabled the empowerment of marginalised learners and mitigated learners' fear of reporting, providing them with control over mobile bully-victim reporting. In addition, learners using the M-BRS were inclined to report perpetrators through a safe (anonymous and confidential) reporting platform. With the M-BRS, it was much easier to identify the categories of involvement, i.e. mobile bully-victims, bullies, victims, and the uninvolved. The practical contributions of this study were skills enhancements for reducing mobile bully-victim behaviour. These included improving the police's technical skills to safely identify mobile bully-victims and to characterise them as propagators or retaliators, which enabled targeted interventions.
This was particularly helpful given the courts' reluctance to prosecute teenagers for cyberbullying and South Africa's lack of legislation on it, as it enables the police to address this behaviour in schools restoratively. The identification information was also helpful in strengthening the evidence for reported cases, which is notable because perpetrators sometimes cannot be found due to their concealed online identities. Furthermore, this study made possible the surveillance of mobile bully-victims through the M-BRS, which gave the police some control over reducing mobile bully-victim behaviour. The study provided a practical way to implement targeted prevention and intervention programmes, using relevant resources, towards an efficient solution to the mobile bully-victim problem. Since there are not many mobile-based interventions for mobile bully-victim behaviour, this study also showed how an artefact's development can be informed by theory, as a new, innovative and practical contribution to research. In doing so, the study contributed to technology applications' ability to modify desired behaviour.
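One concrete step the abstract describes is sorting learners into the four categories (mobile bully-victims, bullies, victims, uninvolved) from anonymous peer and self nominations. A minimal sketch of how such a classification could work (names and thresholds are assumptions, not the M-BRS implementation):

```python
from collections import Counter

def classify(learners, reports, threshold=1):
    """Classify each learner from anonymised (perpetrator, victim) reports.

    Reports carry no reporter identity, mirroring the anonymous
    peer-nomination channel described above.
    """
    as_bully = Counter(p for p, _ in reports)
    as_victim = Counter(v for _, v in reports)
    categories = {}
    for learner in learners:
        bullies = as_bully[learner] >= threshold
        bullied = as_victim[learner] >= threshold
        categories[learner] = ("bully-victim" if bullies and bullied
                               else "bully" if bullies
                               else "victim" if bullied
                               else "uninvolved")
    return categories
```

A learner nominated both as a perpetrator and as a victim lands in the bully-victim category, which is the group the study says swings between the other two and is hardest to spot.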

    Extending and validating gestUI using technical action research

    No full text
    gestUI is a model-driven method, with tool support, for defining custom gestures and including gesture-based interaction in the existing user interfaces of software systems. So far, gestUI had been limited to defining the same gesture catalogue for all users of a software system. In this paper, we extend gestUI to permit individual users to define their own custom gesture catalogue and to redefine custom gestures that they find difficult to use or remember. After extending gestUI, we applied technical action research with the FP7 CaaS project's Capability Design Tool, with the aim of assessing its acceptance in an industrial setting. We also analysed its perceived ease of use and usefulness, as well as gestUI's desirability and user experience. The study shows that the tool can help improve the definition of custom gestures and the inclusion of gesture-based interaction in the user interfaces of software systems.
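The extension described, per-user gesture catalogues layered over a shared system-wide one, with the option to redefine a gesture that is hard to draw or remember, could be sketched as follows (illustrative names only, not gestUI's actual API):

```python
class GestureCatalogue:
    """Shared gesture definitions with per-user overrides."""

    def __init__(self, defaults):
        self.defaults = dict(defaults)  # system-wide catalogue
        self.per_user = {}              # user id -> {gesture name: stroke}

    def redefine(self, user, name, stroke):
        # A user may only redefine gestures that exist in the shared catalogue.
        if name not in self.defaults:
            raise KeyError(f"unknown gesture: {name}")
        self.per_user.setdefault(user, {})[name] = stroke

    def lookup(self, user, name):
        # A user's own definition wins; everyone else keeps the default.
        return self.per_user.get(user, {}).get(name, self.defaults[name])
```

One user's redefinition leaves every other user's view of the catalogue untouched, which is the per-user behaviour the paper adds over the original all-users gestUI.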