
    Tasks and User Performance Improvement for UUM Online Payment Using Key Stroke Level Model

    Online payment is one of the components of the postgraduate website at University Utara Malaysia (UUM). Few students prefer to use it, so this research focuses on the weak points of the current payment model interface and the strong points of a proposed new online payment model, using the Keystroke-Level Model (KLM) technique to identify and improve the weaknesses of the current interface. The study is guided by research questions formulated as follows: What efficiency problems of the online payment system affect users' use of it? How can the recommended online payment model achieve the efficiency aims of both the system and its users? What is the user performance of the current online payment model on the studied tasks? The population for this study comprises the (undergraduate and postgraduate) students and staff of University Utara Malaysia (UUM). A quantitative research approach was used, since the researcher aimed to explore the value of the KLM technique for enhancing the current online payment model and increasing the acceptance level of the system.
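
    For context, KLM predicts skilled, error-free task time by summing standard operator times along the action sequence. Below is a minimal sketch using the classic Card, Moran, and Newell operator values; the operator sequence is a hypothetical illustration, not the actual UUM payment task.

```python
# Minimal sketch of a Keystroke-Level Model (KLM) estimate, using the
# classic Card, Moran & Newell operator times. The operator sequence
# below is a hypothetical illustration, not the actual UUM payment task.

KLM_TIMES = {
    "K": 0.28,  # press a key or button (average skilled typist)
    "P": 1.10,  # point with a mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_time(operators):
    """Sum operator times to predict skilled, error-free task time (s)."""
    return sum(KLM_TIMES[op] for op in operators)

# Hypothetical sequence: think, point to the amount field, click,
# type "150.00" (6 keystrokes), then think, point to Pay, click.
task = ["M", "P", "K"] + ["K"] * 6 + ["M", "P", "K"]
print(f"Predicted task time: {klm_time(task):.2f} s")
```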

    Predicting and Reducing the Impact of Errors in Character-Based Text Entry

    This dissertation focuses on the effect of errors in character-based text entry techniques. The effect of errors is examined from theoretical, behavioral, and practical standpoints. The document starts with a review of the existing literature. It then presents the results of a user study that investigated the effect of different error correction conditions on popular text entry performance metrics. Results showed that the way errors are handled has a significant effect on all frequently used error metrics. The outcomes also provided an understanding of how users notice and correct errors. Building on this, the dissertation presents a new high-level, method-agnostic model for predicting the cost of error correction with a given text entry technique. Unlike existing models, it accounts for both human and system factors and is general enough to be used with most character-based techniques. A user study verified the model by measuring the effects of a faulty keyboard on text entry performance. Subsequently, the work explores potential user adaptation to a gesture recognizer's misrecognitions in two user studies. Results revealed that users gradually adapt to misrecognition errors by replacing the erroneous gestures with alternative ones, if available. Users also adapt to a frequently misrecognized gesture faster if it occurs more often than other error-prone gestures. Finally, this work presents a new hybrid approach to simulate pressure detection on standard touchscreens. The new approach combines the existing touch-point- and time-based methods. Results of two user studies showed that it can simulate pressure detection more reliably for at least two pressure levels: regular (~1 N) and extra (~3 N). A new pressure-based text entry technique is then presented that does not require tapping outside the virtual keyboard to reject an incorrect or unwanted prediction; instead, it requires users to apply extra pressure on the tap for the next target key. The performance of the new technique was compared with the conventional technique in a user study. Results showed that for inputting short English phrases with 10% non-dictionary words, the new technique increases entry speed by 9% and decreases error rates by 25%. Also, most users (83%) favored the new technique over the conventional one. Together, the research presented in this dissertation gives more insight into how errors affect text entry and presents improved text entry methods.
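
    The dissertation's own error-cost model is not reproduced here; as background, the following is a minimal sketch of the standard character-level error metrics (after Soukoreff and MacKenzie) that studies of this kind report, where C counts correct characters, INF incorrect-and-not-fixed characters, and IF incorrect-but-fixed characters. The sample figures are illustrative.

```python
# Minimal sketch of the standard character-level error metrics
# (Soukoreff & MacKenzie, 2003) commonly reported in text entry studies.
# C = correct, INF = incorrect-and-not-fixed, IF = incorrect-but-fixed.

def error_rates(C, INF, IF):
    total = C + INF + IF
    return {
        "uncorrected": INF / total,
        "corrected": IF / total,
        "total": (INF + IF) / total,
    }

def words_per_minute(transcribed_len, seconds):
    # Conventional definition: one "word" is five characters, and the
    # first character of the transcription carries no timing information.
    return ((transcribed_len - 1) / seconds) * 60 / 5

print(error_rates(C=95, INF=2, IF=3))           # 2% / 3% / 5%
print(f"{words_per_minute(30, 12.5):.1f} WPM")  # ~27.8 WPM
```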

    Assessing the Accuracy of Task Time Prediction of an Emerging Human Performance Modeling Software - CogTool

    There is a need for a human performance modeling tool that not only can accurately estimate skilled user task time for any interface design, but can also be used at minimal cost by modelers with little or no programming knowledge. To fulfill this need, this research investigated the accuracy of task time prediction of a modeling tool, CogTool, on two versions of an interface design used extensively in the petrochemical industry, DeltaV. CogTool uses the Keystroke-Level Model (KLM) to calculate and generate time predictions based on specified operators. Data collected from a previous study (Koffskey, Ikuma, & Harvey, 2013) that investigated how human participants (24 students and 4 operators) performed on these interfaces (in terms of mean speed in seconds) were compared to CogTool's numeric time estimates. Three tasks (pump I, pump II, and cascade system failures) were tested for both participant groups on both interfaces (improved and poor), under the general hypothesis that CogTool would predict task times for each modeled task within a certain range of what actual human participants had demonstrated. Tests based on the 95% confidence interval (CI) of the means were used to determine whether the predictions fell within the intervals. The estimated task time from CogTool did not fall within the 95% CI in 9 of 12 cases. Of the 3 that fell within the acceptable interval, two belonged to the experienced operator group for tasks performed on the improved interface, implying that CogTool was better at predicting the operators' performance than the students'. A control room monitoring task, by its nature, places great demand on an operator's mental capacity. Operators also work across multiple screens and/or consoles, sometimes having to commit information to memory or revisit a screen to check vital information. In this regard, it is suggested that CogTool's single mental operator for "think time" (estimated at 1.2 s) should be revised to accommodate this demand on the operator. For this reason, the present CogTool predictions did not meet expectations in estimating control room operator task time, but they nevertheless succeeded in showing where the poor interface could be improved, by comparing its detailed steps with those of the improved interface.
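
    The validation test described above reduces to a simple check: does CogTool's point prediction fall inside the 95% confidence interval of the observed mean task time? A minimal sketch follows, with hypothetical sample data rather than the study's measurements.

```python
# Minimal sketch of the validation test: does CogTool's point prediction
# fall inside the 95% CI of observed task times? Data are illustrative.

import math
import statistics

def ci95(samples):
    n = len(samples)
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / math.sqrt(n)
    # Normal approximation; with n this small, a t critical value
    # (~2.57 for df = 5) would be more appropriate.
    margin = 1.96 * sem
    return mean - margin, mean + margin

observed = [41.2, 38.7, 45.1, 39.9, 43.3, 40.8]  # hypothetical times (s)
cogtool_prediction = 36.5                         # hypothetical estimate (s)

lo, hi = ci95(observed)
inside = lo <= cogtool_prediction <= hi
print(f"95% CI: [{lo:.1f}, {hi:.1f}] s; "
      f"prediction falls {'inside' if inside else 'outside'}")
```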

    Modeling of Stimulus-Response Secondary Tasks with Different Modalities while Driving in a Computational Cognitive Architecture

    This paper introduces a computational human performance model, based on the queueing network cognitive architecture, to predict drivers' eye glances and workload for four stimulus-response secondary tasks (i.e., auditory-manual, auditory-speech, visual-manual, and visual-speech types) while driving. The model was evaluated against empirical data from 24 subjects, and the percentage of eyes-off-road time and the driver workload generated by the model were similar to the human subject data. Future studies aim to extend the types of voice announcements/commands to enable Human-Machine Interface (HMI) evaluations with a wider range of usability tests for in-vehicle infotainment system development.
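
    The percentage eyes-off-road metric used to evaluate the model can be computed directly from glance records. A minimal sketch with hypothetical glance data:

```python
# Minimal sketch of the percentage eyes-off-road metric. The glance
# records below (start_s, end_s, target) are hypothetical.

glances = [
    (0.0, 2.4, "road"),
    (2.4, 3.1, "display"),  # off-road glance to the secondary task
    (3.1, 6.0, "road"),
    (6.0, 6.9, "display"),
    (6.9, 10.0, "road"),
]

off_road = sum(end - start for start, end, target in glances
               if target != "road")
total = glances[-1][1] - glances[0][0]
print(f"Eyes-off-road: {100 * off_road / total:.1f}% of {total:.1f} s")
```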

    Modeling and predicting mobile phone touchscreen transcription typing using an integrated cognitive architecture

    This is an Accepted Manuscript of an article published by Taylor & Francis in International Journal of Human–Computer Interaction on 2017-09-07, available online: http://dx.doi.org/10.1080/10447318.2017.1373463

    Modeling typing performance has value in both the theory and design practice of human-computer interaction. Previous models have simulated desktop keyboard transcription typing; however, with the increasing prevalence of smartphones, new models are needed to account for mobile phone touchscreen typing. In the current study, we built a model of mobile phone touchscreen typing in an integrated cognitive architecture and tested it by comparing simulation results with human results. The results showed that the model could simulate and predict interkey time performance in both number typing (Experiment 1) and sentence typing (Experiment 2) tasks. The model produced results similar to the human data and captured the effects of digit/letter position and interkey distance on interkey time. The current work demonstrates the predictive power of the model without adjusting any parameters to fit human data. The results from this study provide new insights into the mechanisms of mobile typing performance and support future work simulating and predicting detailed human performance in more complex mobile interaction tasks.
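
    The paper's model runs inside a cognitive architecture, but the interkey-distance effect it captures is classically described by Fitts' law. The following sketch uses the Shannon formulation with illustrative coefficients (a and b are normally fit per study), not the paper's actual model or parameters.

```python
# Minimal Fitts' law sketch of the interkey-distance effect. The
# coefficients a and b are illustrative; real studies fit them to data.

import math

def fitts_mt(distance_mm, key_width_mm, a=0.08, b=0.12):
    """Movement time (s) via the Shannon formulation of Fitts' law."""
    return a + b * math.log2(distance_mm / key_width_mm + 1)

# Hypothetical touchscreen keys 6 mm wide, targets 12 mm and 48 mm away:
for d in (12, 48):
    print(f"{d} mm: {fitts_mt(d, 6):.3f} s")
```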

    CogTool+: Modeling human performance at large scale

    Cognitive modeling tools have been widely used by researchers and practitioners to help design, evaluate, and study computer user interfaces (UIs). Despite their usefulness, large-scale modeling tasks can still be very challenging due to the amount of manual work needed. To address this scalability challenge, we propose CogTool+, a new cognitive modeling software framework developed on top of the well-known software tool CogTool. CogTool+ addresses the scalability problem by supporting the following key features: 1) a higher level of parameterization and automation; 2) algorithmic components; 3) interfaces for using external data; 4) a clear separation of tasks, which allows programmers and psychologists to define reusable components (e.g., algorithmic modules and behavioral templates) that can be used by UI/UX researchers and designers without the need to understand the low-level implementation details of such components. CogTool+ also supports the mixed cognitive models required for many large-scale modeling tasks and provides an offline analyzer of simulation results. To show how CogTool+ can reduce the human effort required for large-scale modeling, we illustrate how it works using a pedagogical example and demonstrate its actual performance by applying it to large-scale modeling tasks of two real-world user-authentication systems.
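
    CogTool+'s actual API is not shown here; as a rough illustration of the kind of parameterized batch modeling such a framework automates, the sketch below sweeps one hypothetical design parameter (PIN length) across model variants and reports a KLM prediction for each.

```python
# Illustrative only: this is NOT CogTool+'s API. It mimics the kind of
# parameterized batch modeling the framework automates, generating one
# model variant per value of a hypothetical design parameter.

KLM = {"K": 0.28, "M": 1.35, "P": 1.10}

def pin_entry_model(pin_length):
    # One mental preparation, one keystroke per digit, then point to
    # and press a confirm button.
    ops = ["M"] + ["K"] * pin_length + ["P", "K"]
    return sum(KLM[op] for op in ops)

for n in range(4, 9):
    print(f"PIN length {n}: predicted {pin_entry_model(n):.2f} s")
```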

    A protocol for evaluating mobile applications

    The number of applications available for mobile phones is growing at a rate that makes it difficult for new application developers to establish the current state of the art before embarking on new product development. This chapter outlines a protocol for capturing a snapshot of the present state of applications in a given field in terms of both usability and functionality. The proposed methodology is versatile in the sense that it can be implemented for any domain across all mobile platforms, which is illustrated here by its application to two dissimilar domains on three platforms. The chapter concludes with a critical evaluation of the process that was undertaken.

    Computer detection of spatial visualization in a location-based task

    An untapped area of productivity gains hinges on automatic detection of user cognitive characteristics. One such characteristic, spatial visualization ability, relates to users' computer performance. In this dissertation, we describe a novel, behavior-based spatial visualization detection technique. The technique does not depend on sensors or knowledge of the environment and can be adopted on generic computers. In a Census Bureau location-based address verification task, detection rates exceeded 80% and approached 90%.

    HCI models, theories, and frameworks: Toward a multidisciplinary science

    Motivation

    The movement of body and limbs is inescapable in human-computer interaction (HCI). Whether browsing the web or intensively entering and editing text in a document, our arms, wrists, and fingers are at work on the keyboard, mouse, and desktop. Our head, neck, and eyes move about, attending to feedback marking our progress. This chapter is motivated by the need to match the movement limits, capabilities, and potential of humans with input devices and interaction techniques on computing systems. Our focus is on models of human movement relevant to human-computer interaction. Some of the models discussed emerged from basic research in experimental psychology, whereas others emerged from, and were motivated by, the specific need in HCI to model the interaction between users and physical devices, such as mice and keyboards. As much as we focus on specific models of human movement and user interaction with devices, this chapter is also about models in general. We will say a lot about the nature of models, what they are, and why they are important tools for the research and development of human-computer interfaces.

    Overview: Models and Modeling

    By its very nature, a model is a simplification of reality. However, a model is useful only if it helps in designing, evaluating, or otherwise providing a basis for understanding the behaviour of a complex artifact such as a computer system. It is convenient to think of models as lying on a continuum, with analogy and metaphor at one end and mathematical equations at the other. Most models lie somewhere in between. Toward the metaphoric end are descriptive models; toward the mathematical end are predictive models. These two categories are our particular focus in this chapter, and we shall visit a few examples of each. Two models will be presented in detail and in case studies: Fitts' model of the information processing capability of the human motor system and Guiard's model of bimanual control. Fitts' model is a mathematical expression emerging from the rigors of probability theory. It is a predictive model at the mathematical end of the continuum, to be sure, yet when applied as a model of human movement it has characteristics of a metaphor. Guiard's model emerged from a detailed analysis of how humans use their hands in everyday tasks, such as writing, drawing, playing a sport, or manipulating objects. It is a descriptive model, lacking in mathematical rigor but rich in expressive power.
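
    For reference, Fitts' model in the Shannon formulation (the variant MacKenzie advocates for HCI), which predicts movement time MT to a target of width W at distance (amplitude) A:

```latex
% Fitts' model, Shannon formulation: a and b are empirical regression
% coefficients, and ID is the index of difficulty in bits.
\[
  MT = a + b \log_2\!\left(\frac{A}{W} + 1\right),
  \qquad
  ID = \log_2\!\left(\frac{A}{W} + 1\right)
\]
% The slope reciprocal 1/b is commonly interpreted as throughput (bits/s).
```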

    Computational Modeling and Experimental Research on Touchscreen Gestures, Audio/Speech Interaction, and Driving

    As humans are exposed to rapidly evolving complex systems, there are growing needs for humans and systems to use multiple communication modalities, such as auditory, vocal (or speech), gesture, or visual channels; thus, it is important to evaluate multimodal human-machine interactions in multitasking conditions so as to improve human performance and safety. However, traditional methods of evaluating human performance and safety rely on experimental settings with human subjects, which are costly and time-consuming to conduct. To minimize the limitations of traditional usability tests, digital human models are often developed and used; they also help us better understand underlying human mental processes so as to effectively improve safety and avoid mental overload. In this regard, in this dissertation research I have combined computational cognitive modeling and experimental methods to study mental processes and identify differences in human performance/workload under various conditions. The computational cognitive models were implemented by extending the Queueing Network-Model Human Processor (QN-MHP) architecture, which enables simulation of human multitask behaviors and multimodal interactions in human-machine systems. Three experiments were conducted to investigate human behaviors in multimodal and multitasking scenarios, addressing three specific research aims: to understand (1) how humans use their finger movements to input information on touchscreen devices (i.e., touchscreen gestures), (2) how humans use auditory/vocal signals to interact with machines (i.e., audio/speech interaction), and (3) how humans drive vehicles (i.e., driving controls). Future research applications of computational modeling and experimental research are also discussed. Scientifically, the results of this dissertation research contribute significantly to a better understanding of the nature of touchscreen gestures, audio/speech interaction, and driving controls in human-machine systems, and of whether they benefit or jeopardize human performance and safety in multimodal and concurrent task environments. Moreover, in contrast to previous models for multitasking scenarios that focused mainly on visual processes, this study develops quantitative models of the combined effects of auditory, tactile, and visual factors on multitasking performance. From a practical standpoint, the modeling work conducted in this research may help multimodal interface designers minimize the limitations of traditional usability tests and make quick design comparisons, less constrained by time-consuming factors such as developing prototypes and running human subjects. Furthermore, the research conducted in this dissertation may help identify which elements in multimodal and multitasking scenarios increase workload and completion time, which can be used to reduce the number of accidents and injuries caused by distraction.
    PhD, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/143903/1/heejinj_1.pd