Heuristics for evaluating the usability of CAA applications
Evaluation of usability is well researched in the area of HCI. One widely used method is heuristic evaluation, which relies on a small number of evaluators inspecting an interface to see to what extent it complies with a set of heuristics. Once a problem is identified, it is categorised under a heuristic and a severity rating is attached. Severity ratings indicate the potential impact of the problem.
Using a corpus of usability problems within CAA, this paper reports on the development of domain-specific heuristics and severity ratings for evaluating the usability of CAA applications. The heuristics are presented and the paper concludes with practical guidance on the application of the method in CAA.
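As a rough illustration (not taken from the paper itself), the categorise-and-rate step of heuristic evaluation can be sketched as a small record type. Nielsen's 0-4 severity scale and the heuristic names are standard in the field; the example problems below are invented:

```python
from dataclasses import dataclass
from collections import Counter

# Nielsen's 0-4 severity scale: 0 = not a problem, 4 = usability catastrophe.
SEVERITY_LABELS = {0: "not a problem", 1: "cosmetic", 2: "minor",
                   3: "major", 4: "catastrophe"}

@dataclass
class UsabilityProblem:
    description: str
    heuristic: str   # the heuristic the problem is categorised under
    severity: int    # 0-4 on Nielsen's scale

def severity_summary(problems):
    """Count identified problems per severity label."""
    return dict(Counter(SEVERITY_LABELS[p.severity] for p in problems))

# Invented examples of problems an evaluator might record in a CAA system:
problems = [
    UsabilityProblem("No feedback after submitting an answer",
                     "Visibility of system status", 3),
    UsabilityProblem("Inconsistent button labels across question pages",
                     "Consistency and standards", 2),
]
print(severity_summary(problems))  # {'major': 1, 'minor': 1}
```

A domain-specific heuristic set, as developed in the paper, would replace the generic heuristic strings with CAA-specific ones while keeping the same categorise-and-rate workflow.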
The Effectiveness of Australian Medical Portals: Are They Meeting the Health Consumers’ Needs?
The move to using portals to distribute medical information is supported by Australian governments and government agencies. The recent success of 'telemedicine' is promising for patients and governments alike, as it could provide quality care and convenience for patients and reduce the burden on the health budget for governments. The Australian Government is taking a proactive role in developing medical portals to encourage the general use of the web for the dissemination of medical information (NHIMAC, 2000). Government portals such as HealthInsite (Australian) and BetterHealth (Australian Victorian Government) encourage users to access the sites (NHIMAC, 2000). Despite the support by governments, usability tests examining portal effectiveness indicate that many portals are not effective for users. This paper presents the results of usability testing conducted on current Australian medical portals and discusses the portals' effectiveness from the users' perspective. The paper also discusses current technology that could improve medical portals' effectiveness, thereby better serving the needs of the health consumer.
Effect of user sessions on the heuristic usability method
Heuristic evaluation (HE) is a widely used method for assessing software systems. Several studies have sought to improve the effectiveness of HE by developing its heuristics and procedures. However, few studies have involved the end-user, and to the best of the authors' knowledge, no HE studies involving end-users with non-expert evaluators have been reported. Therefore, the aim of this study is to investigate the impact of end-users on the results obtained by a non-expert evaluator within the HE process, and through that, to explore the number of usability problems and their severity. This article proposes introducing two sessions within the HE process: a user exploration session (UES-HE) and a user review session (URS-HE). The outcomes are compared with two solid benchmarks in the usability-engineering field: the traditional HE and the usability testing (UT) methods. The findings show that the end-user has a significant impact on non-expert evaluator results in both sessions. In the UES-HE method, the results outperformed all usability evaluation methods (UEMs) regarding the usability problems identified, and it tended to identify more major, minor, and cosmetic problems than other methods.
Supporting Heuristic Evaluation for the Web
Web developers are confronted with evaluating the usability of Web interfaces.
Automatic Web usability evaluation tools are available, but they are limited in the types
of problems they can handle. Tool support for manual usability evaluation is needed.
Accordingly, this research focuses on developing a tool for supporting manual processes
in Heuristic Evaluation inspection.
The research was conducted in three phases. First, an observational study was conducted in order to characterize the inspection process in Heuristic Evaluation. The videos of evaluators applying a Heuristic Evaluation on a non-interactive, paper-based Web interface were analyzed to dissect the inspection process. Second, based on the study, a tool for annotating Web interfaces when applying Heuristic Evaluations was developed. Finally, a survey was conducted to evaluate the tool and learn the role of annotations in inspection. Recommendations for improving the use of annotations in problem reporting are outlined. Overall, users were satisfied with the tool.
The goal of this research, designing and developing an inspection tool, is achieved.
Facilitating heuristic evaluation for novice evaluators
Heuristic evaluation (HE) is one of the most widely used usability evaluation methods. The reason for its popularity is that it is a discount method, meaning that it does not require substantial time or resources, and it is simple, as evaluators can evaluate a system guided by a set of usability heuristics. Despite its simplicity, a major problem with HE is that there is a significant gap in the quality of results produced by expert and novice evaluators. This gap has made some scholars question the usefulness of the method as they claim that the evaluation results are a product of the evaluator’s experience rather than the method itself.
In response, the goal of this thesis is to bridge the gap between expert and novice evaluators. Based on interviews with 15 usability experts, which focused on their experience with the method, the difficulties they faced when they were novices, and how they overcame such difficulties, it presents a comprehensive protocol called Coherent Heuristic Evaluation (CoHE). This step-by-step protocol guides novice evaluators from the moment they decide to conduct an evaluation until the submission of their evaluation report.
This protocol was verified by conducting an experiment to observe the difference between novices using the CoHE protocol and novices using Nielsen's 10 usability heuristics without the guidance. The experiment involved 20 novices performing two sessions: the first was an understanding session, where the novices read and understood the heuristics, and the second was an inspecting session, where they inspected a system. The findings show that, while evaluators take more time to read and evaluate a system using CoHE, they tend to identify more problems. The experiment also demonstrated that CoHE can improve the thoroughness, effectiveness, and f-measure of evaluation. However, the validity of CoHE was comparable to that of HE.
Evidence Based Design of Heuristics: Usability and Computer Assisted Assessment
The research reported here examines the usability of Computer Assisted Assessment (CAA) and the development of domain-specific heuristics. CAA is being adopted within educational institutions and the pedagogical implications are widely investigated, but little research has been conducted into the usability of CAA applications.
The thesis is: severe usability problems exist in CAA applications causing unacceptable consequences, and using an evidence-based design approach CAA heuristics can be devised. The thesis reports a series of evaluations that show severe usability problems do occur in three CAA applications. The process of creating domain-specific heuristics is analysed and critiqued, and a novel evidence-based design approach for the design of domain-specific heuristics is proposed. Gathering evidence from evaluations and the literature, a set of heuristics for CAA is presented. There are four main contributions to knowledge in the thesis: the heuristics; the corpus of usability problems; the Damage Index for prioritising usability problems from multiple evaluations; and the evidence-based design approach to synthesise heuristics.
The focus of the research evolves, with the first objective being to determine if severe usability problems exist that can cause users difficulties and dissatisfaction with unacceptable consequences whilst using existing commercial CAA software applications. Using a survey methodology, students report a level of satisfaction, but due to low inter-group consistency surveys are judged to be ineffective at eliciting usability problems. Alternative methods are analysed and the heuristic evaluation method is judged to be suitable. A study is designed to evaluate Nielsen's heuristic set within the CAA domain, and the heuristics are deemed to be ineffective based on the formula proposed by Hanson et al. (2003). Domain-specific heuristics are therefore necessary, and further studies are designed to build a corpus of usability problems to facilitate the evidence-based design approach to synthesising a set of heuristics. In order to aggregate the corpus and prioritise the severity of the problems, a Damage Index formula is devised.
The work concludes with a discussion of the heuristic design methodology and potential for future work; this includes the application of the CAA heuristics and applying the heuristic design methodology to other specific domains.
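The thesis's actual Damage Index formula is not reproduced in this abstract, so the sketch below is purely hypothetical: one plausible way to prioritise problems aggregated from multiple evaluations is to weight the mean severity of a problem by how often it was observed, so that a moderately severe problem found by most evaluators outranks a severe problem found only once:

```python
# Hypothetical prioritisation sketch: NOT the thesis's Damage Index, whose
# formula is not given in this abstract. It illustrates the general idea of
# aggregating severity ratings from multiple evaluations of the same system.
from statistics import mean

def priority_score(severities, n_evaluations):
    """Frequency-weighted mean severity (illustrative only).

    severities: the 0-4 ratings from each evaluation that reported the problem.
    n_evaluations: total number of evaluations performed.
    """
    frequency = len(severities) / n_evaluations  # share of evaluations reporting it
    return frequency * mean(severities)

# Problem A: found in 3 of 4 evaluations, rated 3, 4, 3 -> score ~2.5
# Problem B: found in 1 of 4 evaluations, rated 4      -> score 1.0
a = priority_score([3, 4, 3], 4)
b = priority_score([4], 4)
assert a > b  # the widely observed problem is prioritised higher
```

Any real aggregation formula would need to justify its weighting; this sketch only shows why both frequency and severity plausibly enter such a prioritisation.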
Contributions to improving the usability of learning-object interfaces in m-learning
Marcos Ortega, Luis de, co-director. The increased use of mobile devices in recent years and the revolution that e-learning has brought to teaching have given rise to m-learning. At the same time, devices with small screens have traditionally suffered from usability problems, and mobile devices are no exception. It is therefore important to take usability into account on these devices, all the more so when they are used for virtual teaching, since poor usability could leave students constantly distracted by the interface (for example, trying to find a particular option) rather than focused on what really matters: learning. The objective of this thesis is to obtain a set of usability guidelines valid for the interfaces of web-based learning objects on mobile devices. To achieve this objective, a methodology is proposed that starts from a set of source guidelines designed for PC web pages, performs an expert evaluation, and carries out an experiment, so that in the end a set of guidelines valid for web-based learning objects on mobile devices is obtained. This methodology is applied to ISO 9241-151, yielding as a result a set of guidelines valid for web-based learning objects on mobile devices.