Scoping analytical usability evaluation methods: A case study
Analytical usability evaluation methods (UEMs) can complement empirical evaluation of systems: for example, they can often be used earlier in design and can provide accounts of why users might experience difficulties, as well as what those difficulties are. However, their properties and value are only partially understood. One way to improve our understanding is through detailed comparisons using a single interface or system as a target for evaluation, but we need to look deeper than simple problem counts: we need to consider what kinds of accounts each UEM offers, and why. Here, we report on a detailed comparison of eight analytical UEMs. These eight methods were applied to a robotic arm interface, and the findings were systematically compared against video data of the arm in use. The usability issues that were identified could be grouped into five categories: system design, user misconceptions, conceptual fit between user and system, physical issues, and contextual ones. Other possible categories, such as user experience, did not emerge in this particular study. With the exception of Heuristic Evaluation, which supported a range of insights, each analytical method was found to focus attention on just one or two categories of issues. Two of the three "home-grown" methods (Evaluating Multimodal Usability and Concept-based Analysis of Surface and Structural Misfits) were found to occupy particular niches in the space, whereas the third (Programmable User Modeling) did not. This approach has identified commonalities and contrasts between methods and provided accounts of why a particular method yielded the insights it did. Rather than considering measures such as problem count or thoroughness, this approach has yielded insights into the scope of each method.
Methodological development
Book description: Human-Computer Interaction draws on the fields of computer science, psychology, cognitive science, and organisational and social sciences in order to understand how people use and experience interactive technology. Until now, researchers have been forced to return to the individual subjects to learn about research methods and how to adapt them to the particular challenges of HCI. This is the first book to provide a single resource through which a range of commonly used research methods in HCI are introduced. Chapters are authored by internationally leading HCI researchers who use examples from their own work to illustrate how the methods apply in an HCI context. Each chapter also contains key references to help researchers find out more about each method as it has been used in HCI. Topics covered include experimental design, use of eyetracking, qualitative research methods, cognitive modelling, how to develop new methodologies, and writing up your research.
Layered evaluation of interactive adaptive systems: framework and formative methods
Master of Science thesis
Electronic Health Record (EHR) adoption rates have been low in the United States. A key reason for this low adoption rate is poor EHR usability. Currently, no standards exist for designing, testing, and monitoring the usability of EHRs. Therefore, we conducted a usability evaluation of a vendor's product in the Emergency Department (ED) at the University of Utah. In the first objective of this study, we evaluated a newly implemented computerized provider order entry (CPOE) application. Four usability experts used Zhang et al.'s 14 heuristics and 23 predefined tasks to perform the evaluation. The experts found 48 usability problems categorized into 51 heuristic violations. There were 4 cosmetic, 120 minor, 64 major, and 4 catastrophic problems identified. The interrater reliability was 0.81 using Fleiss' kappa, showing a high level of consistency in ratings across evaluators. For the second objective, we used an electronic version of the Questionnaire for User Interaction Satisfaction (QUIS 7.0) to evaluate physician satisfaction with the CPOE application in the ED. The physician response rate was 50% (25/50). The total survey mean was 4.87, lower than the a priori definition of an acceptable satisfaction score of 5.0 (of a possible 9). The lowest scale scores were for overall user reaction and learning, and the highest were for screen, terminology, and system capabilities. Further analyses were completed to determine any differences in satisfaction scores between physician trainees and attendings. A multifactor ANOVA was performed to examine the combined effect of the different experience levels and sections of the QUIS. The results were significant at -1.43 (p < 0.05) for screen and for terminology and system capabilities. In this setting, the ED CPOE application had a high level of usability issues and low mean satisfaction scores among physician end-users.
The responsibility for improved usability lies with both the vendors developing the product and the facilities implementing it, and both should be educated on usability principles. The combination of a user-based and an expert-based inspection method yielded congruent findings and was an accurate and efficient means of evaluation.
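The abstract above reports an interrater reliability of 0.81 using Fleiss' kappa across the four evaluators. As a generic sketch of how that statistic is computed (this is not the thesis's analysis code, and the rating matrix below is a made-up toy example), Fleiss' kappa compares observed per-item agreement against the agreement expected by chance:

```python
# Fleiss' kappa: chance-corrected agreement among a fixed number of raters
# assigning items to categories. Input: a matrix where rows are items and
# columns are categories; cell [i][j] counts the raters who assigned
# item i to category j. Every row must sum to the same number of raters.

def fleiss_kappa(matrix):
    n_items = len(matrix)
    n_raters = sum(matrix[0])
    n_cats = len(matrix[0])
    total = n_items * n_raters

    # Proportion of all assignments that fall in each category.
    p_j = [sum(row[j] for row in matrix) / total for j in range(n_cats)]

    # Observed agreement for each item, then averaged across items.
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in matrix]
    p_bar = sum(p_i) / n_items

    # Agreement expected by chance.
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 4 evaluators rating 3 usability problems into two
# severity categories, with perfect agreement on every problem -> kappa = 1.0.
perfect = [[4, 0], [0, 4], [4, 0]]
print(fleiss_kappa(perfect))  # 1.0
```

Values near 1 indicate near-perfect agreement, 0 indicates chance-level agreement, and the 0.81 reported in the study is conventionally read as "almost perfect" on the Landis and Koch scale.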
The use of tagging to support the authoring of personalisable learning content
This research project is interested in the area of personalised and adaptable learning, in particular within an e-learning context. Brusilovsky (1996) and Santally (2005) stress the importance of adaptive systems within e-learning. Karagiannidis and Sampson (2004) argue that personalised learning systems can be defined by their capability to adapt automatically to the changing attitudes of the "learning experience", which can, in turn, be defined by individual learner characteristics, for example the type of learning material.
The project evolved to cover areas including personalised learning, e-learning environments, authoring tools, tagging, learning objects, learning theories and learning styles. The main focus at the start of the project was to provide a personalised and adaptable learning environment for students based on their learning style. During the research, this led to a specific interest about how an academic can create, tag and author learning objects to provide the capability of personalised adaptable e-learning for a learner.
Research undertaken was designed to gain an understanding of personalised and adaptive learning techniques, e-learning tools and learning styles. Important findings of this research showed that e-learning platforms do not offer much in the way of a personalised learning experience for a learner. Additionally, the research showed that general adaptive systems and adaptive systems incorporating learning styles are not commonly used or available due to issues with flexibility, reuse and integration.
The concept of tagging was investigated during the research and it was found that tagging is underused within e-learning, although the research shows that it could be a good ‘fit’ within e-learning. This therefore led to the decision to create a general purpose discriminatory tagging methodology to allow authors to tag learning objects for personalisation and reuse. The main focus for the evaluation of this tagging methodology was the authoring side of the tagging. It was found that other research projects have evaluated the personalisation of learning content based on a learner’s learning style (see Graf and Kinshuk (2007)). It was therefore felt that there was a sufficient body of existing evidence in this area whereas there was limited research available on the authoring side.
The evaluation of the discriminatory tagging methodology demonstrated that it could accommodate any form of discrimination between learners. The example demonstrated within this thesis includes discriminating according to a learner's learning style and accessibility type. This type of platform-independent, flexible discriminatory methodology does not exist within current e-learning platforms or other e-learning systems. The main contribution of this thesis is therefore a platform-independent, general-purpose discriminatory tagging methodology.
Usability evaluation of a web-based e-learning application: a study of two evaluation methods
Despite widespread use of web-based e-learning applications, insufficient attention is paid to their usability. There is a need to conduct evaluation using one or more of the various usability evaluation methods. Given that heuristic evaluation is known to be easy to use and cost effective, this study investigates the extent to which it can identify usability problems in a web-based e-learning application at a tertiary institution. In a comparative case study, heuristic evaluation by experts and survey evaluation among end users (learners) are conducted and the results of the two compared.
Following literature studies in e-learning - particularly web-based learning - and usability, the researcher generates an extensive set of criteria/heuristics and uses it in the two evaluations. The object of evaluation is a website for a 3rd year Information Systems course. The findings indicate a high correspondence between the results of the two evaluations, demonstrating that heuristic evaluation is an appropriate, effective and sufficient usability evaluation method, as well as relatively easy to conduct. It identified a high percentage of usability problems.
M.Sc. (Information Systems), Computing
Navigation and wayfinding in learning spaces in 3D virtual worlds
There is a lack of published research on the design guidelines of learning spaces in virtual worlds. Therefore, when institutions aspire to create learning spaces in Second Life, there are few studies or guidelines to inform them except for individual case studies. The Design of Learning Spaces in 3D Virtual Environments (DELVE) project, funded by the Joint Information Systems Committee in the UK, was one of the first initiatives that identified through empirical investigations the usability problems associated with learning spaces in virtual worlds and the potential impact on student experience. The findings of the DELVE project revealed that applying architectural principles of real-world designs to virtual worlds may not be sufficient. In fact, design principles from urban planning, Human–Computer Interaction (HCI), web usability, geography, and psychology influence the design of learning spaces in virtual worlds.
In DELVE, the researchers derived several usability guidelines: form should follow function, that is, that the shape of a building or object should be primarily based upon its intended function or purpose; use real-world metaphors such as mailboxes for students to leave messages, or search pods similar to real-world information kiosks; consider realism for familiarity and comfort; design for storytelling; or design to orient the user at the landing point, etc. However, the investigations in DELVE identified that the key usability problems experienced by users in 3D learning spaces are related to navigation and wayfinding.
In this chapter, we report on the Navigation and Wayfinding (NAVY) project which builds on the findings of the DELVE project. As the most commonly used virtual world for education, Second Life was the logical choice for conducting the NAVY project research. Based upon empirical investigations of a number of islands in Second Life (an island is a space which is analogous to a website in a 2D environment) involving user-based studies, heuristic evaluations, and iterative reviews of the heuristics by usability experts, we have derived over 200 guidelines for the design of learning spaces in virtual worlds.
GenderMag: A Method for Evaluating Software’s Gender Inclusiveness
In recent years, research into gender differences has established that individual differences in how people problem-solve often cluster by gender. Research also shows that these differences have direct implications for software that aims to support users' problem-solving activities, and that much of this software is more supportive of problem-solving processes favored (statistically) more by males than by females. However, there is almost no work considering how software practitioners - such as User Experience (UX) professionals or software developers - can find gender-inclusiveness issues like these in their software. To address this gap, we devised the GenderMag method for evaluating problem-solving software from a gender-inclusiveness perspective. The method includes a set of faceted personas that bring five facets of gender difference research to life, and embeds use of the personas into a concrete process through a gender-specialized Cognitive Walkthrough. Our empirical results show that a variety of practitioners who design software - without needing any background in gender research - were able to use the GenderMag method to find gender-inclusiveness issues in problem-solving software. Our results also show that the issues the practitioners found were real and fixable. This work is the first systematic method to find gender-inclusiveness issues in software, so that practitioners can design and produce problem-solving software that is more usable by everyone.
Teachers’ Perceptions Of The Observation, Coaching, And Feedback Cycle
The purpose of this qualitative case study is to investigate teachers' perceptions, attitudes, and viewpoints of how their daily teaching may be refined after implementing feedback from the Observation Coaching Feedback Cycle (OCFC) into their daily instruction. In direct connection, this study's purpose seeks to fill a gap in the literature regarding teachers' perceptions of the OCFC experience. Reflective Practice Theory was selected as the conceptual framework that guided this study. Reflective practice is essential to understanding one's actions so as to engage in a process of continuous learning; without reflective processes, people would not amend their work (Helyer, 2015). The whole premise of the evaluation process is to encourage change, and it is based upon the idea that teachers would like to learn more and change their practice to best serve their students. Data were composed of survey evaluations and in-depth teacher interviews, which were analyzed for content relevant to the research questions. Through this case study, five primary themes emerged, along with five emergent subthemes: evaluators' knowledge of the content they are observing, relationships impacting the OCFC, professional growth, frequency of observation, and perceptions of the OCFC. Findings may be useful for district administrators, K-12 school systems, classroom teachers, and special area teachers such as teachers of Art, Music, Health and Physical Education, and Career Technical Subjects.