
    A Comparison of Paper Sketch and Interactive Wireframe by Eye Movements Analysis, Survey, and Interview

    Eye movement-based analyses have been extensively performed on graphical user interface designs, mainly on high-fidelity prototypes such as coded prototypes. However, practitioners usually initiate the development life cycle with low-fidelity prototypes, such as mock-ups or sketches. Since little or no eye movement analysis has been performed on the latter, would eye tracking transpose its benefits from high- to low-fidelity prototypes and produce different results? To bridge this gap, we performed an eye movement-based analysis that compares gaze point indexes, gaze event types and durations, and fixation and saccade indexes produced by N=8 participants between two treatments, a paper prototype vs. a wireframe. The paper also reports a qualitative analysis based on the answers provided by these participants in a semi-directed interview and on a perceived usability questionnaire with 14 items. Due to its interactivity, the wireframe seems to foster a more exploratory approach to design (e.g., testing and navigating more extensively) than the paper prototype.

    Measuring User Experience of Adaptive User Interfaces using EEG: A Replication Study

    Adaptive user interfaces have the advantage of being able to dynamically change their aspect and/or behaviour depending on the characteristics of the context of use, in order to improve the user experience (UX). UX is an important quality factor that has been primarily evaluated with classical measures, but to a lesser extent with physiological measures such as emotion recognition, skin response, or brain activity. In a previous exploratory experiment involving users with different profiles and a wide range of ages, we analysed user experience in terms of cognitive load, engagement, attraction and memorisation when employing twenty graphical adaptive menus, through the use of an electroencephalogram (EEG) device. The results indicated that there were statistically significant differences for these four variables. However, we considered it necessary to confirm or reject these findings using a more homogeneous group of users. We conducted an operational internal replication study with 40 participants. We also investigated the potential correlation between EEG signals and the participants' user experience ratings, such as their preferences. The results of this experiment confirm that there are statistically significant differences between the EEG variables when the participants interact with the different adaptive menus. Moreover, there is a high correlation between the participants' UX ratings and the EEG signals, and a trend regarding performance has emerged from our analysis. These findings suggest that EEG signals could be used to evaluate UX. With regard to the menus studied, our results suggest that graphical menus with different structures and font types produce more differences in users' brain responses, while menus which use colours produce more similarities in users' brain responses. Several insights with which to improve users' experience of graphical adaptive menus are outlined. Comment: 10 pages, 4 figures, 2 tables, 34 references, International Conference on Evaluation and Assessment in Software Engineering (EASE '23).
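The correlation analysis this abstract describes can be sketched in a few lines: a Spearman rank correlation between an EEG-derived variable (e.g., a per-menu engagement index) and the participants' per-menu UX ratings. All names and values below are invented for illustration; the study's actual variables, menus, and data differ.

```python
def rank(values):
    """Assign average ranks to values (ties get the mean of their ranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-menu values: EEG engagement index vs. mean UX rating.
engagement = [0.62, 0.55, 0.71, 0.48, 0.66]
ux_rating = [4.1, 3.6, 4.5, 3.2, 4.2]
print(round(spearman(engagement, ux_rating), 3))
```

A rank-based coefficient is a natural choice here because questionnaire ratings are ordinal and EEG indices need not relate to them linearly.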

    Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies

    How to determine highly effective and intuitive gesture sets for interactive systems tailored to end users’ preferences? A substantial body of knowledge is available on this topic, among which gesture elicitation studies stand out distinctively. In these studies, end users are invited to propose gestures for specific referents, which are the functions to control for an interactive system. The vast majority of gesture elicitation studies conclude with a consensus gesture set identified following a process of consensus or agreement analysis. However, the information about specific gesture sets determined for specific applications is scattered across a wide landscape of disconnected scientific publications, which poses challenges to researchers and practitioners who seek to effectively harness this body of knowledge. To address this challenge, we conducted a systematic literature review and examined a corpus of N=267 studies encompassing a total of 187,265 gestures elicited from 6,659 participants for 4,106 referents. To understand similarities in users’ gesture preferences within this extensive dataset, we analyzed a sample of 2,304 gestures extracted from the studies identified in our literature review. Our approach consisted of (i) identifying the context of use represented by end users, devices, platforms, and gesture sensing technology, (ii) categorizing the referents, (iii) classifying the gestures elicited for those referents, and (iv) cataloging the gestures based on their representation and implementation modalities. Drawing from the findings of this review, we propose guidelines for conducting future end-user gesture elicitation studies.
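The agreement analysis the abstract mentions is typically computed per referent. A minimal sketch, assuming the agreement-rate formulation commonly used in gesture elicitation studies (Vatavu and Wobbrock's AR); the gesture labels below are invented:

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR for one referent:
    AR = (|P|/(|P|-1)) * sum((|Pi|/|P|)**2) - 1/(|P|-1),
    where P is the multiset of proposed gestures and the Pi are groups
    of identical proposals. AR is 1.0 for full consensus, 0.0 when every
    participant proposes a different gesture."""
    n = len(proposals)
    if n < 2:
        return 0.0
    groups = Counter(proposals)
    s = sum((count / n) ** 2 for count in groups.values())
    return (n / (n - 1)) * s - 1 / (n - 1)

# Hypothetical proposals from 8 participants for the referent "next page".
props = ["swipe-left"] * 5 + ["tap-right"] * 2 + ["flick-up"]
print(round(agreement_rate(props), 3))
```

A consensus gesture set is then usually assembled by picking, for each referent, the most frequently proposed gesture among referents whose AR exceeds a chosen threshold.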

    Extending the Interaction Flow Modeling Language (IFML) for Model Driven Development of Mobile Applications Front End

    Front-end design of mobile applications is a complex and multidisciplinary task, where many perspectives intersect and the user experience must be perfectly tailored to the application objectives. However, development of mobile user interactions is still largely a manual task, which leads to high risks of errors, inconsistencies, and inefficiencies. In this paper we propose a model-driven approach to mobile application development based on the IFML standard. We propose an extension of the Interaction Flow Modeling Language tailored to mobile applications, and we describe our implementation experience, which comprises the development of automatic code generators for cross-platform mobile applications based on HTML5, CSS and JavaScript, optimized for the Apache Cordova framework. We show the approach at work on a popular mobile application, we report on the application of the approach in an industrial application development project, and we provide a productivity comparison with traditional approaches.
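The model-driven idea can be illustrated with a toy generator: an IFML-like view model rendered to an HTML fragment instead of a hand-coded screen. The model format, component types, and generator below are invented simplifications, not the paper's actual IFML extension or its Cordova code generators.

```python
# Hypothetical interaction model for one view: a list bound to a data
# source plus a button wired to an event.
model = {
    "view": "ProductList",
    "components": [
        {"type": "list", "binding": "products"},
        {"type": "button", "label": "Details", "event": "showDetails"},
    ],
}

def generate_html(view_model: dict) -> str:
    """Emit a minimal HTML fragment from the interaction model, so the
    screen is derived from the model rather than written by hand."""
    parts = [f'<div id="{view_model["view"]}">']
    for comp in view_model["components"]:
        if comp["type"] == "list":
            parts.append(f'  <ul data-bind="{comp["binding"]}"></ul>')
        elif comp["type"] == "button":
            parts.append(
                f'  <button onclick="{comp["event"]}()">{comp["label"]}</button>'
            )
    parts.append("</div>")
    return "\n".join(parts)

print(generate_html(model))
```

The productivity argument rests on exactly this inversion: changing the model regenerates every target platform's front end, instead of editing each one manually.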

    Study of the morphology of semicrystalline poly(ethylene terephthalate) by hydrolysis etching

    Semicrystalline poly(ethylene terephthalate) was hydrolysed in water at 180°C under elevated pressure and subsequently treated with ethanol, following the etching process first developed by Miyagi and Wunderlich. The weight loss, the wide-angle X-ray scattering and the molecular weight were measured as a function of etching time. It was found that even at the end of the etching process not all the amorphous material could be removed by the hydrolysis treatment. By comparing the obtained results with those derived from an elaborate small-angle X-ray scattering study and with wide-angle X-ray scattering measurements, it was concluded that only those amorphous regions lying outside of the lamellar stacks were removed. Subsequently, the lamellar stacks themselves were attacked. It was also found that at the very beginning of the hydrolysis process additional crystals were formed in the material. Peer reviewed

    A Unified Model for Context-aware Adaptation of User Interfaces

    The variety of contexts of use in which interaction takes place nowadays is a challenge both for stakeholders during development and for end users during interaction. Stakeholders either ignore the exponential amount of information coming from the context of use, adopting the inaccessible one-size-fits-all approach, or they must dedicate a significant effort to carefully consider context differences while designing several different versions of the same user interface. For end users, web pages that are not adapted often become inaccessible in non-conventional contexts of use, e.g., on mobile devices such as smartphones and tablet PCs. To reduce such efforts, we propose in this paper a meta-model that, by means of a unified view, supports all phases of the implementation of context-aware adaptation for user interfaces. With such a formal abstraction of an interactive system, stakeholders can generate different instantiations with more concrete UIs that can properly handle and adapt to the different constraints and characteristics of different contexts of use. We present CAMM, a meta-model for context-aware adaptation covering different application domains as well as a complete adaptation lifecycle. Moreover, we present various instantiations of this model for different scenarios of a car rental example.
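As a rough sketch of what a unified context-of-use model enables, the snippet below pairs a minimal context triple (user, platform, environment) with explicit adaptation rules that select a concrete UI variant. The class, fields, and rules are invented for illustration and are far simpler than CAMM itself.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """A minimal context of use: who is interacting, on what, and where."""
    user_profile: str   # e.g. "novice", "expert"
    platform: str       # e.g. "desktop", "smartphone", "tablet"
    environment: str    # e.g. "indoor", "in-car"

def select_variant(ctx: Context) -> str:
    """Map a context of use to a concrete UI variant via explicit rules,
    instead of serving a one-size-fits-all interface."""
    if ctx.platform == "smartphone":
        return "single-column, large touch targets"
    if ctx.platform == "tablet":
        return "two-column, touch-optimised"
    if ctx.user_profile == "novice":
        return "desktop layout with guided wizard"
    return "full desktop layout"

print(select_variant(Context("novice", "smartphone", "indoor")))
```

Making the context an explicit, typed model is what lets the same rule set be reused across application domains, which is the point of a unified meta-model.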

    Controlled Thermonuclear Fusion - The State of Research

    Adapting user interfaces to different contexts of use is essential to enhance usability. Adaptation enhances user satisfaction by meeting the requirements of a changing context of use. However, given the variety of contexts of use and the significant amount of information and contextual treatments involved, transformations of user interface models that consider adaptation become complex. This complexity becomes a challenge when trying to add new adaptation rules or to modify the transformation. In this paper, we present “Adapt-first”, an adaptation approach intended to simplify adaptation within model-based user interfaces. It capitalizes on differentiating adaptation from concretization via two transformation techniques: concretization and translation. The Adapt-first approach aims at reducing the complexity and maintenance effort of transformations from one model to another.
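The ordering idea can be sketched as follows: adaptation (here, a translation of the abstract model for a target context) runs before concretization, so concrete widgets are derived from an already-adapted model. Function names and model representations below are invented, not the paper's actual transformations.

```python
def translate(abstract_ui: dict, target_context: str) -> dict:
    """Adaptation step: rewrite the abstract model for a target context
    while staying at the abstract level."""
    ui = dict(abstract_ui)
    if target_context == "smartphone":
        ui["navigation"] = "tabs"  # compact navigation on small screens
    return ui

def concretize(abstract_ui: dict) -> dict:
    """Concretization step: map abstract elements to concrete widgets."""
    widgets = {"tabs": "BottomTabBar", "menu": "SidebarMenu"}
    return {"navigation_widget": widgets[abstract_ui["navigation"]]}

# Adapt first, then concretize: adaptation rules only ever see the small
# abstract model, which keeps them simple to add or modify.
abstract = {"navigation": "menu"}
adapted = translate(abstract, "smartphone")
print(concretize(adapted))
```

Concretizing first would instead force every adaptation rule to manipulate platform-specific widgets, which is exactly the complexity the approach tries to avoid.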