
    Usability Evaluation in Virtual Environments: Classification and Comparison of Methods

    Virtual environments (VEs) are a relatively new type of human-computer interface in which users perceive and act in a three-dimensional world. The designers of such systems cannot rely solely on design guidelines for traditional two-dimensional interfaces, so usability evaluation is crucial for VEs. We present an overview of VE usability evaluation. First, we discuss some of the issues that differentiate VE usability evaluation from evaluation of traditional user interfaces such as GUIs. We also present a review of VE evaluation methods currently in use, and discuss a simple classification space for VE usability evaluation methods. This classification space provides a structured means for comparing evaluation methods according to three key characteristics: involvement of representative users, context of evaluation, and types of results produced. To illustrate these concepts, we compare two existing evaluation approaches: testbed evaluation [Bowman, Johnson, & Hodges, 1999], and sequential evaluation [Gabbard, Hix, & Swan, 1999]. We conclude by presenting novel ways to effectively link these two approaches to VE usability evaluation.
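
    As an aside, the three-axis classification space the abstract describes can be encoded as a small data structure. The following is a minimal Python sketch of my own, not the paper's; the attribute values assigned to the two approaches are assumptions inferred from the abstract.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvaluationMethod:
    """One point in the three-axis classification space."""
    name: str
    representative_users: bool  # are representative users involved?
    context: str                # generic vs. application-specific evaluation
    result_types: tuple         # kinds of results produced

# Assumed placements of the two approaches compared in the paper.
testbed = EvaluationMethod(
    "testbed evaluation", True, "generic", ("quantitative",))
sequential = EvaluationMethod(
    "sequential evaluation", True, "application-specific",
    ("quantitative", "qualitative"))

# Compare the two methods axis by axis.
for m in (testbed, sequential):
    print(f"{m.name}: users={m.representative_users}, "
          f"context={m.context}, results={m.result_types}")
```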

    ON GETTING BETTER AND WORKING HARD: USING IMPROVEMENT AS A HEURISTIC FOR JUDGING EFFORT

    There is a strong conceptual association between improvement and effort. Therefore, we propose that people tend to use improvement as a heuristic for judging effort in others. Hence, they would perceive greater effort in improved performance records than in non-improved records with superior overall performance. To examine whether people use improvement as a heuristic for effort, we compared judgments of effort investments and trait effort in improved and consistently-strong performance profiles with equivalent recent performance. Across six empirical studies, participants thought that those with improved profiles exerted more effort and were more hardworking than those with consistently-strong profiles, and this resulted in a preference for improved candidates when making decisions (e.g., selecting among candidates for a promotion). Even when we introduced manipulations that highlighted strengths of the consistent profiles, participants still made effort judgments in favor of improvement (Studies 2 and 3). Moreover, participants had a greater tendency to mention effort as a reason for selecting an improved (vs. consistently-strong) candidate for an award (Study 4). Furthermore, two studies (Studies 5 and 6) showed that the use of improvement as a heuristic for effort was restricted to contexts with considerable ambiguity. Finally, we examined the overall effects using meta-analyses (Study 7). Overall, the results provided converging evidence that people use improvement as a heuristic for judging effort, particularly in contexts that are relatively ambiguous, and that these judgments can have implications for important decisions.

    The Effect of Testing Location and Task Complexity on Usability Testing Performance and User-Reported Stress Levels

    Usability testing is becoming a more important part of the software design process. New methods allow remote usability testing to occur. Remote testing can be less costly and, in many cases, allows more data to be collected in less time, provided the user can still provide meaningful data. However, little is known about differences in the user experience between the two testing methods. In an effort to find differences in user experience between remote and traditional website usability testing, this study randomly assigned participants to two groups: one completed a usability test in a traditional lab setting, while the other tested from a remote location. Both groups completed two tasks, one simple and one complex, using Amazon.com as the test interface. Task time and the number of critical incidents reported were the dependent measures. Significant differences in task times were found in both the between-subjects and within-subjects conditions. Task times differed significantly between task types; the complex task generally took twice as long as the simple task. No significant differences in critical incident reports were found in either the between-subjects or within-subjects condition. Participants seemed hesitant to report interface problems, preferring to struggle through the task until they satisfied the task requirements. Subjective user assessments of the task and website were similar across both conditions. User behavior navigating the site was remarkably similar in both test conditions. Results suggest a similar user testing experience for remote and traditional laboratory usability testing.
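
    The between/within-subjects design the abstract describes can be made concrete with a short Python sketch. This is my illustration with entirely made-up data, not the study's analysis: an independent-samples t-test compares the two testing locations (between-subjects), and a paired t-test compares the two task types (within-subjects).

```python
import numpy as np
from scipy import stats

# Hypothetical task times in seconds; the means only echo the reported
# pattern (the complex task taking roughly twice as long as the simple one).
rng = np.random.default_rng(0)
lab_simple    = rng.normal(60, 15, 20)
lab_complex   = rng.normal(120, 30, 20)
remote_simple = rng.normal(65, 15, 20)

# Between-subjects comparison: lab vs. remote groups on the same task.
t_between, p_between = stats.ttest_ind(lab_simple, remote_simple)

# Within-subjects comparison: simple vs. complex task for the same group.
t_within, p_within = stats.ttest_rel(lab_simple, lab_complex)

print(f"location (between-subjects):  t={t_between:.2f}, p={p_between:.3f}")
print(f"complexity (within-subjects): t={t_within:.2f}, p={p_within:.3f}")
```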

    Comparison of UX evaluation methods through evaluation of UX of the prototype of a matching platform for the rental housing market in Finland – Sopia.

    The key concepts of this Master’s thesis are user experience (UX), usability, startups, and UX evaluation methods. The research question is how startups should evaluate their website’s UX. To answer it, I conducted a theoretical and empirical study of five UX evaluation methods: heuristic evaluation (HE), cognitive walkthrough (CW), tree testing, the System Usability Scale (SUS), and brainstorming. I collected empirical data in two ways. First, I interviewed seven UX practitioners about their experiences with different UX evaluation methods. Second, I applied the evaluation methods to the website prototype of a digital startup called Sopia. To compare the UX evaluation methods consistently, I created a theory-based framework that includes a set of generic parameters describing evaluation methods, together with the constraints of the startup. Based on my findings, three UX evaluation methods are useful in the startup context: heuristic evaluation, cognitive walkthrough, and brainstorming. Practitioners tend to select flexible, fast, and simple evaluation methods, and cognitive walkthrough and brainstorming match these criteria. Cognitive walkthrough, when conducted with potential end users, reveals UX mistakes at an early stage of UX design. Brainstorming carried out within the design team afterwards helps to find resolutions for the revealed usability problems. Heuristic evaluation should not be carried out in its traditional form with usability experts; instead, startups should learn the 10 heuristics as 10 usability principles that lay the groundwork for good UX. The key contribution of my study is the framework of Minimum Viable UX Evaluation Methods for Startups, which lists the UX evaluation tools that every startup, despite time, money, and human-resource constraints, should apply. Based on my findings, each of these evaluation methods is essential for helping the startup progress with product development.
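
    Of the methods above, the System Usability Scale has a fixed, well-known scoring rule (Brooke's standard method: each odd-numbered item contributes its score minus 1, each even-numbered item contributes 5 minus its score, and the summed contributions are multiplied by 2.5 to yield a 0-100 score). A minimal Python sketch with made-up responses:

```python
def sus_score(responses: list) -> float:
    """Return the 0-100 System Usability Scale score for 10 responses,
    each on a 1-5 scale (odd items positively worded, even items negative)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9 vs. 2,4,6,8,10
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Example with fabricated answers from one respondent:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```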

    Connecting theories through iterative unpacking

    An iterative unpacking strategy consists of sequencing empirically-based theoretical developments so that at each step of theorizing one theory serves as an overarching conceptual framework, in which another theory, either existing or emerging, is embedded in order to elaborate on the chosen element(s) of the overarching theory. The strategy is presented in this paper by means of reflections on how it was used in several empirical studies and by means of a non-example. The article concludes with a discussion of affordances and limitations of the strategy.

    User interfaces for mobile navigation
