Heuristic evaluation: Comparing ways of finding and reporting usability problems
Research on heuristic evaluation in recent years has focused on improving its effectiveness and efficiency with respect to user testing. The aim of this paper is to refine a research agenda for comparing and contrasting evaluation methods. To reach this goal, a framework is presented to evaluate the effectiveness of different types of support for structured usability problem reporting. This paper reports on an empirical study of this framework that compares two sets of heuristics, Nielsen's heuristics and the cognitive principles of Gerhardt-Powals, and two media for reporting a usability problem, i.e. either a web tool or paper. The study found no significant differences between any of the four groups in effectiveness, efficiency, or inter-evaluator reliability. A more significant contribution of this research is that the framework used for the experiments proved successful and should be reusable by other researchers because of its thorough structure.
AID: An automated detector for gender-inclusivity bugs in OSS project pages
The tools and infrastructure used in tech, including Open Source Software (OSS), can embed "inclusivity bugs": features that disproportionately disadvantage particular groups of contributors. To see whether OSS developers have existing practices to ward off such bugs, we surveyed 266 OSS developers. Our results show that a majority (77%) of developers do not use any inclusivity practices, and 92% of respondents cited a lack of concrete resources to enable them to do so. To help fill this gap, this paper introduces AID, a tool that automates the GenderMag method to systematically find gender-inclusivity bugs in software. We then present the results of the tool's evaluation on 20 GitHub projects. The tool achieved a precision of 0.69, a recall of 0.92, and an F-measure of 0.79, and even captured some inclusivity bugs that human GenderMag teams missed.
Index Terms: Gender inclusivity, automation, open source, information processing
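As a quick sanity check on the reported numbers, the F-measure quoted in the abstract is the harmonic mean of precision and recall; the sketch below (not part of the AID tool itself) reproduces the 0.79 figure from the stated precision of 0.69 and recall of 0.92:

```python
def f_measure(precision: float, recall: float) -> float:
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Figures quoted in the abstract for AID on 20 GitHub projects.
f1 = f_measure(0.69, 0.92)
print(round(f1, 2))  # → 0.79
```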
Supporting Heuristic Evaluation for the Web
Web developers are confronted with evaluating the usability of Web interfaces.
Automatic Web usability evaluation tools are available, but they are limited in the types
of problems they can handle. Tool support for manual usability evaluation is needed.
Accordingly, this research focuses on developing a tool for supporting manual processes
in Heuristic Evaluation inspection.
The research was conducted in three phases. First, an observational study was carried out to characterize the inspection process in Heuristic Evaluation: videos of evaluators applying a Heuristic Evaluation to a non-interactive, paper-based Web interface were analyzed to dissect the inspection process. Second, based on this study, a tool for annotating Web interfaces during Heuristic Evaluation was developed. Finally, a survey was conducted to evaluate the tool and to learn the role of annotations in inspection. Recommendations for improving the use of annotations in problem reporting are outlined. Overall, users were satisfied with the tool.
The goal of this research, designing and developing an inspection tool, was achieved.