Argumentation Models for Usability Problem Analysis in Individual and Collaborative Settings
Post-print (authors' final version). Consolidating usability problems (UPs) from the problem lists of several users can be a cognitively demanding task for evaluators. It has been suggested that collaboration between evaluators can help this process. To learn how evaluators make decisions in this process, the authors studied what justifications evaluators give for extracting UPs and for their consolidation when working both individually and collaboratively. An experiment with eight novice usability evaluators was carried out in which they extracted UPs and consolidated them individually and then collaboratively. The data were analyzed using conventional content analysis and by creating argumentation models according to the Toulmin model. The results showed that during UP extraction, novice usability evaluators could put forward warrants leading to clear claims when probed but seldom added qualifiers or rebuttals. Novice usability evaluators could identify predefined criteria for a UP when probed, and this could be acknowledged as a backing to warrants. In the individual setting, novice evaluators had difficulty presenting claims and warrants for their consolidation decisions. Although further study is needed, the results indicated that collaborating pairs tended to argue slightly better than individuals. Through the experiment, novice evaluators' reasoning patterns during problem extraction and consolidation, as well as during their assessment of severity and confidence, could be identified. Peer Reviewed
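The Toulmin model referenced in the abstract decomposes an argument into a claim, grounds, a warrant, and the optional elements (backing, qualifier, rebuttal) that the study found novices rarely supplied. A minimal sketch of that structure, with an illustrative (hypothetical) usability-problem argument:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToulminArgument:
    """One argument for extracting a usability problem, in Toulmin's terms."""
    claim: str                       # the conclusion, e.g. "this is a usability problem"
    grounds: str                     # the observation the claim rests on
    warrant: str                     # why the grounds support the claim
    backing: Optional[str] = None    # support for the warrant, e.g. a predefined UP criterion
    qualifier: Optional[str] = None  # strength of the claim, e.g. "probably"
    rebuttal: Optional[str] = None   # conditions under which the claim would not hold

    def is_fully_argued(self) -> bool:
        """True only when every optional Toulmin element is also present."""
        return all([self.backing, self.qualifier, self.rebuttal])

# Hypothetical example: warrant and backing given when probed,
# but no qualifier or rebuttal, as the study observed with novices.
arg = ToulminArgument(
    claim="The search field is a usability problem",
    grounds="Three users failed to locate the search field",
    warrant="Repeated failure to find a control indicates poor discoverability",
    backing="Matches the predefined criterion 'user cannot locate functionality'",
)
print(arg.is_fully_argued())  # False
```

The example names and criterion text are invented for illustration; only the six Toulmin elements themselves come from the model the study applied.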
Novice evaluators' behavior when consolidating usability problems individually or collaboratively
Publisher's version (published article). An important but resource-demanding step in analyzing observations from usability evaluations is to consolidate usability problems (UPs) identified by several evaluators into one master list. An open question is whether consolidating UPs in pairs is cost-effective. A within-subject study examined whether evaluators merge UPs differently when working in pairs than individually, and what motivates their decisions. Eight novice evaluators took part. The number of discarded, retained and merged UPs, evaluators' confidence, and the severity of UPs in the two settings were measured. The results showed that UPs merged or discarded in the collaborative setting would rather be retained in the individual setting. Participants increased confidence and UP severity in the collaborative setting but decreased UP severity and confidence in the individual setting. Peer Reviewed
Assessing the Semiotic Inspection Method: The Evaluators' Perspective
This paper presents an assessment of the Semiotic Inspection Method aimed at understanding its costs, benefits, advantages and disadvantages from the evaluators' perspective. We applied a questionnaire to novice evaluators and interviewed the authors of the method (representing the expert evaluators). An analysis of the responses shows interesting insights and characteristics of the method.
The playthrough evaluation framework: reliable usability evaluation for video games
This thesis presents the playthrough evaluation framework, a novel framework for the reliable usability evaluation of first-person shooter console video games. The framework includes playthrough evaluation, a structured usability evaluation method adapted from heuristic evaluation.
Usability evaluation can help guide developers by pointing out design issues that cause users problems. However, usability evaluation methods suffer from the evaluator effect, where separate evaluations of the same data do not produce reliably consistent results. This can result in a number of undesirable consequences affecting issues such as:
âą Unreliable evaluation: Without reliable results, evaluation reports risk giving incorrect or misleading advice.
âą Weak methodological validation: Typically new methods (e.g., new heuristics) are validated against user tests. However, without a reliable means to describe observations, attempts to validate novel methods against user test data will also be affected by weak reliability.
The playthrough evaluation framework addresses these points through a series of studies presenting the need for, and showing the development of, the framework, including the following stages:
1. Explication of poor reliability in heuristic evaluation.
2. Development and validation of a reliable user test coding scheme.
3. Derivation of a novel usability evaluation method, playthrough evaluation.
4. Testing the method, quantifying results.
Evaluations were conducted with 22 participants, on three first-person shooter action console video games, using two methods: heuristic evaluation and the novel playthrough evaluation developed in this thesis. Both methods proved effective, with playthrough evaluation providing more detailed analysis but requiring more time to conduct.
A Comparison of Quantitative and Qualitative Data from a Formative Usability Evaluation of an Augmented Reality Learning Scenario
The proliferation of augmented reality (AR) technologies creates opportunities for the development of new learning scenarios. More recently, advances in the design and implementation of desktop AR systems have made it possible to deploy such scenarios in primary and secondary schools. Usability evaluation is a precondition for the pedagogical effectiveness of these new technologies and requires a systematic approach to finding and fixing usability problems. In this paper we present an approach to formative usability evaluation based on heuristic evaluation and user testing. The basic idea is to compare and integrate quantitative and qualitative measures in order to increase confidence in results and enhance the descriptive power of the usability evaluation report.
Keywords: augmented reality, multimodal interaction, e-learning, formative usability evaluation, user testing, heuristic evaluation
Evidence Based Design of Heuristics: Usability and Computer Assisted Assessment
The research reported here examines the usability of Computer Assisted Assessment (CAA) and the development of domain specific heuristics. CAA is being adopted within educational institutions and the pedagogical implications are widely investigated, but little research has been conducted into the usability of CAA applications.
The thesis is: severe usability problems exist in CAA applications causing unacceptable consequences, and using an evidence based design approach CAA heuristics can be devised. The thesis reports a series of evaluations that show severe usability problems do occur in three CAA applications. The process of creating domain specific heuristics is analysed and critiqued, and a novel evidence based design approach for the design of domain specific heuristics is proposed. Gathering evidence from evaluations and the literature, a set of heuristics for CAA is presented. There are four main contributions to knowledge in the thesis: the heuristics; the corpus of usability problems; the Damage Index for prioritising usability problems from multiple evaluations; and the evidence based design approach to synthesise heuristics.
The focus of the research evolves, with the first objective being to determine if severe usability problems exist that can cause users difficulties and dissatisfaction with unacceptable consequences whilst using existing commercial CAA software applications. Using a survey methodology, students report a level of satisfaction, but due to low inter-group consistency, surveys are judged to be ineffective at eliciting usability problems. Alternative methods are analysed and the heuristic evaluation method is judged to be suitable. A study is designed to evaluate Nielsen's heuristic set within the CAA domain, and the heuristics are deemed to be ineffective based on the formula proposed by Hanson et al. (2003). Domain specific heuristics are therefore necessary, and further studies are designed to build a corpus of usability problems to facilitate the evidence based design approach to synthesise a set of heuristics. In order to aggregate the corpus and prioritise the severity of the problems, a Damage Index formula is devised.
The work concludes with a discussion of the heuristic design methodology and potential for future work; this includes the application of the CAA heuristics and applying the heuristic design methodology to other specific domains.
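The abstract does not reproduce the thesis's actual Damage Index formula, so the following is a purely hypothetical sketch of the underlying idea: prioritising a usability problem found across multiple evaluations by weighting its mean severity by how often it was found. The function name, the 0-4 severity scale, and the frequency-times-severity weighting are all assumptions for illustration, not the thesis's method.

```python
def damage_index(severities, n_evaluations):
    """Hypothetical priority score for one usability problem.

    severities: severity ratings (0-4 scale assumed) from the
                evaluations in which the problem was found
    n_evaluations: total number of evaluations carried out
    """
    if not severities:
        return 0.0
    # Proportion of evaluations that found the problem at all.
    frequency = len(severities) / n_evaluations
    # Average rated severity among those that found it.
    mean_severity = sum(severities) / len(severities)
    return frequency * mean_severity

# A problem found in 3 of 4 evaluations, rated 3, 4 and 3:
print(damage_index([3, 4, 3], 4))  # 2.5
```

A weighting of this kind would rank a moderately severe problem found by everyone above a severe problem found only once, which is the general motivation for aggregating across evaluations before prioritising fixes.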
Assessing the reliability, validity and acceptance of a classification scheme of usability problems (CUP)
Post-print (authors' final version). The aim of this study was to evaluate the Classification of Usability Problems (CUP) scheme. The goal of CUP is to classify usability problems further to give user interface developers better feedback, improve their understanding of usability problems, help them manage usability maintenance, enable them to find effective fixes for UPs, and prevent such problems from recurring in the future. First, reliability was evaluated with raters of different levels of expertise and experience in using CUP. Second, acceptability was assessed with a questionnaire. Third, validity was assessed by developers in two field studies. An analytical comparison was also made to three other classification schemes. CUP reliability results indicated that the expertise and experience of raters are critical factors for assessing reliability consistently, especially for the more complex attributes. Validity analysis results showed that tools used by developers must be tailored to their working framework, knowledge and maturity. The acceptability study showed that practitioners are concerned with the effort spent in applying any tool. To understand developers' work and the implications of this study, two theories are presented for understanding and prioritising UPs. For applying classification schemes, the implications of this study are that training and context are needed. Peer Reviewed
Evaluating the usability of the information architecture of academic library websites
PURPOSE : The purpose of this paper is to provide an integrated list of heuristics and an information architecture (IA) framework for the heuristic evaluation of the IA of academic library websites as well as an evaluation framework with practical steps on how to conduct the evaluation.
DESIGN/METHODOLOGY/APPROACH : A set of 14 heuristics resulted from an integration of existing usability principles from authorities in the field of usability. A review of IA literature resulted in a framework for dividing academic library websites into six dialogue elements. The resulting heuristics were made applicable to academic library websites through the addition of recommendations based on a review of 20 related studies.
FINDINGS : This study provides heuristics, a framework and workflow guidelines that can be used by the various evaluators of academic library websites, i.e. library staff, web developers and usability experts, to provide recommendations for improving their usability.
RESEARCH LIMITATIONS/IMPLICATIONS : The focus of the usability principles is the evaluation of the IA aspects of websites and therefore does not provide insights into accessibility or visual design aspects.
ORIGINALITY/VALUE : The main problem addressed by this study is that there are no clear guidelines on how to apply existing usability principles to the evaluation of the IA of academic library websites.
SUPPORTING THERAPY-CENTERED GAME DESIGN FOR BRAIN INJURY REHABILITATION
Brain injuries (BI) are a major public health issue. Many therapists who work with patients who have had a BI include games to ameliorate boredom associated with repetitive rehabilitation. However, designing effective, appropriate, and engaging games for BI therapy is challenging. The challenge is especially manifested when considering how to consolidate the different mindsets and motivations among key stakeholders; i.e., game designers and therapists. In this dissertation, I investigated the ideation, creation, and evaluation of game design patterns and a design tool, GaPBIT (Game Design Patterns for BI Therapy) that leveraged patterns to support ideation of BI therapy game concepts and facilitate communication among designers and therapists. Design patterns, originated from the work of Christopher Alexander, provide a common design language in a specific field by documenting reusable design concepts that have successfully solved recurring problems.
This investigation involved four overlapping phases. In Phase One, I interviewed 11 professional game designers focused on games for health (serious games embedded with health-related goals) to explore how they perceived and approached their work. In Phase Two, I identified 25 therapy-centered game design patterns through analyzing data about game use in BI therapy. Based on those patterns, in Phase Three I created and iterated the GaPBIT prototype through user studies. In Phase Four, I conducted quasi-experimental case studies to establish the efficacy and user experience of GaPBIT in game design workshops that involved both game designers and therapists.
During the design workshops, the design patterns and GaPBIT supported exploration of game design ideas and effectively facilitated discussion among designers and therapists. The results also indicated that these tools were especially beneficial for novice game designers. This work significantly promotes game design for BI rehabilitation by providing designers and therapists with easier access to information about the requirements of rehabilitation games. Additionally, this work modeled a novel research methodology for investigating domains where balancing the roles of designers and other stakeholders is particularly important. Through a "practitioner-centered" process, this work also provides an exemplar of investigating technologies that directly address the information needs of professional practitioners.
Guidelines for Usable Cybersecurity: Past and Present
Usability is arguably one of the most significant social topics and issues within the field of cybersecurity today. Supported by the need for confidentiality, integrity, availability and other concerns, security features have become standard components of the digital environment which pervades our lives, requiring use by novices and experts alike. As security features are exposed to wider cross-sections of society, it is imperative that these functions are highly usable. This is especially important because poor usability in this context typically translates into inadequate application of cybersecurity tools and functionality, ultimately limiting their effectiveness. With this goal of highly usable security in mind, there has been a plethora of studies in the literature focused on identifying security usability problems and proposing guidelines and recommendations to address them. Our paper aims to contribute to the field by consolidating a number of existing design guidelines and defining an initial core list for future reference. Whilst investigating this topic, we take the opportunity to provide an up-to-date review of pertinent cybersecurity usability issues and evaluation techniques applied to date. We expect this research paper to be of use to researchers and practitioners with an interest in cybersecurity systems who appreciate the human and social elements of design.