
    Estimating Sample Size for Usability Testing

    One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing, one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface. However, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb, commonly known as the “magic number 5”. The outcomes of the analysis showed that the 5-user rule significantly underestimates the required sample size to achieve reasonable levels of problem detection.
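    The “magic number 5” rests on the cumulative binomial problem-discovery model, P(discovered) = 1 − (1 − p)^n, where p is the average probability that a single participant detects a given problem and n is the number of participants. As an illustrative sketch (the abstract does not state the exact model or parameters this study used, and the often-cited p = 0.31 average detection rate is an assumption drawn from the wider literature), the required sample size can be computed as:

```python
import math

def min_sample_size(p, target=0.80):
    """Smallest n such that 1 - (1 - p)**n >= target, i.e. the number of
    participants needed to reach the target problem-discovery proportion."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

# With the frequently cited average detection rate p = 0.31, five users suffice:
print(min_sample_size(0.31))   # 5
# With a lower per-user detection rate, the 5-user rule breaks down:
print(min_sample_size(0.10))   # 16
```

    This makes the disagreement in the literature easy to see: the 5-user figure holds only when the per-user detection probability is fairly high, and studies that observe lower detection rates will find that five participants fall well short of 80% discovery.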


    Reviewing and extending the five-user assumption: A grounded procedure for interaction evaluation

    © ACM, 2013. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Computer-Human Interaction (TOCHI), Vol. 20, Iss. 5 (November 2013), http://doi.acm.org/10.1145/2506210.
    The debate concerning how many participants represent a sufficient number for interaction testing is well-established and long-running, with prominent contributions arguing that five users provide a good benchmark when seeking to discover interaction problems. We argue that adoption of five users in this context is often done with little understanding of the basis for, or implications of, the decision. We present an analysis of relevant research to clarify the meaning of the five-user assumption and to examine the way in which the original research that suggested it has been applied. This includes its blind adoption and application in some studies, and complaints about its inadequacies in others. We argue that the five-user assumption is often misunderstood, not only in the field of Human-Computer Interaction, but also in fields such as medical device design, or in business and information applications. The analysis that we present allows us to define a systematic approach for monitoring the sample discovery likelihood, in formative and summative evaluations, and for gathering information in order to make critical decisions during the interaction testing, while respecting the aim of the evaluation and allotted budget. This approach, which we call the ‘Grounded Procedure’, is introduced and its value argued. This work was supported by the MATCH programme (EPSRC Grants: EP/F063822/1, EP/G012393/1).
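    The idea of monitoring the sample discovery likelihood during a running evaluation can be sketched as follows. This is only an illustrative estimator under the standard binomial discovery model, not the authors' actual Grounded Procedure; the function name and data layout are assumptions made for the example:

```python
def estimated_discovery(found_by_user):
    """Estimate the proportion of known problems discovered so far.

    found_by_user: list of sets, one per participant tested so far,
    each containing the ids of the problems that participant detected.
    Returns the projected discovery proportion 1 - (1 - p_hat)**n,
    where p_hat is the mean per-participant detection rate.
    """
    all_problems = set().union(*found_by_user)
    n = len(found_by_user)
    # Mean detection rate across participants and currently known problems.
    # Note: this naive estimator is optimistic for small n, which is one
    # reason early stopping decisions need care.
    p_hat = sum(len(s) for s in found_by_user) / (n * len(all_problems))
    return 1 - (1 - p_hat) ** n

# After three participants who hit problems {1,2}, {1}, and {2,3}:
print(estimated_discovery([{1, 2}, {1}, {2, 3}]))
```

    Recomputing such an estimate after each session, and comparing it against the evaluation's target discovery level and remaining budget, is the kind of in-flight decision-making the abstract describes.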

    Toward a document evaluation methodology: What does research tell us about the validity and reliability of evaluation methods?

    Although the usefulness of evaluating documents has become generally accepted among communication professionals, the supporting research that puts evaluation practices empirically to the test is only beginning to emerge. This article presents an overview of the available research on troubleshooting evaluation methods. Four lines of research are distinguished, concerning the validity of evaluation methods, sample composition, sample size, and the implementation of evaluation results during revision.

    Usability engineering for GIS: learning from a screenshot

    In this paper, the focus is on the concept of Usability Engineering for GIS: a set of techniques and methods that are especially suitable for evaluating the usability of GIS applications, and which can be deployed as part of the development process. To demonstrate how the framework of Usability Engineering for GIS can be used in practice, a screenshot study is described. Users were asked to provide a screenshot of their GIS during their working day. The study shows how a simple technique can help in understanding the way GIS is used in situ.

    Survey of Web Developers in Academic Libraries

    A survey was sent to library web designers from randomly selected institutions to determine the background, tools, and methods used by those designers. Results, grouped by Carnegie Classification type, indicated that larger schools were not necessarily working with more resources or more advanced levels of technology than other institutions.

    A comprehensive study of the usability of multiple graphical passwords

    Recognition-based graphical authentication systems (RBGSs) using images as passwords have been proposed as one potential solution to the need for more usable authentication. The rapid increase in the technologies requiring user authentication has increased the number of passwords that users have to remember. But nearly all prior work with RBGSs has studied the usability of a single password. In this paper, we present the first published comparison of the usability of multiple graphical passwords with four different image types: Mikon, doodle, art, and everyday objects (food, buildings, sports, etc.). A longitudinal experiment was performed with 100 participants over a period of 8 weeks to examine the usability performance of each of the image types. The results of the study demonstrate that object images are most usable, in the sense of being more memorable and less time-consuming to employ; Mikon images are close behind, but doodle and art images are significantly inferior. The results of our study complement the cognitive literature on the picture superiority effect, the visual search process, and the nameability of visually complex images.

    Role of Computerized Physician Order Entry Usability in the Reduction of Prescribing Errors

    Some hospitals have implemented computerized physician order entry (CPOE) systems to reduce medical error rates. However, research in this area has been very limited, especially regarding the impact of CPOE use on the reduction of prescribing errors. Moreover, past studies have dealt with the overall impact of CPOE on the reduction of broadly termed "medical errors", and they have not specified which medical errors have been reduced by CPOE. Furthermore, the majority of past research in this field has been either qualitative or has not used robust empirical techniques. This research examined the impact of the usability of CPOE systems on the reduction of doctors' prescribing errors. Methods: One hundred and sixty-six questionnaires were used for quantitative data analyses. Since the data were not normally distributed, partial least squares path modelling, a second-generation multivariate data analysis technique, was applied to analyze the data. Results: It was found that the ease of use of the system and information quality can significantly reduce prescribing errors. Moreover, user interface consistency and system error prevention have a significant positive impact on perceived ease of use. More than 50% of the respondents believed that CPOE reduces the likelihood of drug allergy, drug interaction, and drug dosing errors, thus improving patient safety. Conclusions: Prescribing errors in terms of drug allergy, drug interaction, and drug dosing errors are reduced if the CPOE is not error-prone and easy to use, if the user interface is consistent, and if it provides quality information to doctors.