
    Managing usability evaluation practices in agile development environments

    Usability evaluation is a core usability activity that minimizes risk and improves product quality. The returns from usability evaluation are undeniable, and neglecting it during development harms software usability. In this paper, the authors develop a software management tool for incorporating usability evaluation activities into the agile environment. Using this tool, agile development teams can manage a continuous evaluation process, tightly coupled with the development process, allowing them to develop high-quality software products with an adequate level of usability. The tool was evaluated through verification, followed by validation of user satisfaction. The evaluation results show that the tool increased software development practitioners' satisfaction and is practical for supporting usability work in software projects.

    Systematic evaluation of design choices for software development tools

    Most design and evaluation of software tools is based on the intuition and experience of the designers. Software tool designers consider themselves typical users of the tools they build and tend to evaluate their products subjectively rather than objectively with established usability methods. This subjective approach is inadequate if the quality of software tools is to improve, so the use of more systematic methods is advocated. This paper summarises a sequence of studies showing how user interface design choices for software development tools can be evaluated using established usability engineering techniques, including guideline review, predictive modelling and experimental studies with users.

    A Guideline Tool for Ongoing Product Evaluation in Small and Medium-Sized Enterprises

    As consumer demand for user-friendly software increases, usability evaluation is crucial to developing software systems that are easy to learn and use. However, implementing usability evaluation is challenging for small and medium-sized enterprises (SMEs) because of factors such as a lack of technical expertise and limited knowledge and experience of methods and standards. As a result, evaluations are neglected or poorly executed, producing software that disappoints and frustrates clients. To overcome this loss of competitiveness, we propose a visual incorporation tool, derived from ISO standards, to help software development teams in SMEs understand and implement usability evaluations. It shows the fundamental Usability Engineering (UE) and Software Engineering (SE) activities and artifacts relevant to the usability evaluation and software development process, with potential incorporation points highlighted. Dependencies and relationships are shown as links between activities and artifacts, and convergent artifacts of the two disciplines are identified. The proposed tool was evaluated using questionnaire results from software development practitioners in SMEs.

    The AutoProof Verifier: Usability by Non-Experts and on Standard Code

    Formal verification tools are often developed by experts for experts; as a result, their usability by programmers with little formal methods experience may be severely limited. In this paper, we discuss this general phenomenon with reference to AutoProof: a tool that can verify the full functional correctness of object-oriented software. In particular, we present our experiences of using AutoProof in two contrasting contexts representative of non-expert usage. First, we discuss its usability by students in a graduate course on software verification, who were tasked with verifying implementations of various sorting algorithms. Second, we evaluate its usability in verifying code developed for programming assignments of an undergraduate course. The first scenario represents usability by serious non-experts; the second represents usability on "standard code", developed without full functional verification in mind. We report our experiences and lessons learnt, from which we derive some general suggestions for furthering the development of verification tools with respect to improving their usability.

    Comment: In Proceedings F-IDE 2015, arXiv:1508.0338

    A comparative analysis of web-based GIS applications using usability metrics

    With the rapid expansion of the internet, Web-based Geographic Information System (WGIS) applications have gained popularity, even though their interfaces are difficult to learn and understand because special functions are needed to manipulate the maps. It is therefore essential to evaluate the usability of WGIS applications. Usability is an important factor in ensuring the development of quality, usable software products. However, the literature contains a number of standards and models, each describing usability in terms of a different set of attributes, and these models are vague and difficult to understand. The primary purpose of this study is therefore to compare five common usability models (Shackel, Nielsen, ISO 9241-11, ISO 9126-1 and QUIM) to identify the usability metrics used most frequently across them. The questionnaire method and an automated usability evaluation method using the Loop11 tool were applied to evaluate these metrics for three commonly used WGIS applications: Google Maps, Yahoo Maps, and MapQuest. The three case studies were then compared and analysed against the identified metrics. The comparison yielded four usability metrics (Effectiveness, Efficiency, Satisfaction and Learnability) that are consistent, comprehensive, unambiguous and appropriate for evaluating the usability of WGIS applications; a positive correlation was also found between these metrics. The comparative analysis indicates that Effectiveness, Satisfaction and Learnability were higher, and Efficiency lower, with the Loop11 tool than with the questionnaire method for all three case studies. In addition, Yahoo Maps and MapQuest scored lower on the usability metrics than Google Maps under both methods, so Google Maps is the most usable of the three.
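    The four metrics identified above can be illustrated with a short sketch. The formulas follow common usability-engineering definitions (ISO 9241-11-style ratios), and the session data is hypothetical; neither is taken from the study itself.

```python
# Sketch of the four usability metrics identified in the comparison,
# computed from hypothetical task-session logs. Formulas follow common
# usability-engineering definitions, not necessarily those of the study.

def effectiveness(completed_tasks, total_tasks):
    """Share of tasks completed successfully."""
    return completed_tasks / total_tasks

def efficiency(completed_tasks, total_time_seconds):
    """Successfully completed tasks per minute of use."""
    return completed_tasks / (total_time_seconds / 60)

def satisfaction(questionnaire_scores):
    """Mean score from a post-session questionnaire (e.g. 1-5 Likert)."""
    return sum(questionnaire_scores) / len(questionnaire_scores)

def learnability(first_session_time, later_session_time):
    """Relative speed-up between a user's first and a later session."""
    return (first_session_time - later_session_time) / first_session_time

# Hypothetical session for one map application:
print(effectiveness(8, 10))          # 0.8
print(efficiency(8, 600))            # 0.8 tasks per minute
print(satisfaction([4, 5, 3, 4]))    # 4.0
print(learnability(120, 90))         # 0.25
```

    Higher values are better for all four metrics; the positive correlation reported in the study means applications scoring well on one metric tended to score well on the others.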

    A guidance and evaluation approach for mHealth education applications

    © Springer International Publishing AG 2017. A growing number of mobile applications for health education are being used to support different stakeholders, from health professionals to software developers to patients and more general users. However, there is no critical evaluation framework to ensure the usability and reliability of these mobile health education applications (MHEAs); such a framework would save time and effort for the different user groups. This paper describes a framework for evaluating mobile applications for health education, including a guidance tool to help different stakeholders select the application most suitable for them. The framework is intended to meet the needs and requirements of the different user categories and to improve the development of MHEAs through software engineering approaches. A description of the evaluation framework is provided, with its efficient hybrid of selected heuristic evaluation (HE) and usability evaluation (UE) factors. Lastly, quantitative and qualitative results are reported for the framework applied to Medscape and other mobile apps. The proposed framework, an Evaluation Framework for Mobile Health Education Apps, consists of a hybrid of five metrics selected from a larger set during heuristic and usability evaluation, the choice being based on interviews with patients, software developers and health professionals.
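    A hybrid of HE and UE factors of the kind described above can be sketched as a weighted score. The five metric names and their weights below are hypothetical illustrations; the paper's actual metric set and weighting are not given in the abstract.

```python
# Sketch of combining heuristic-evaluation (HE) and usability-evaluation
# (UE) factor scores into one hybrid score. Metric names and weights are
# hypothetical; they are not taken from the paper.

HYBRID_WEIGHTS = {
    "error_prevention": 0.25,  # HE factor
    "consistency": 0.15,       # HE factor
    "effectiveness": 0.25,     # UE factor
    "efficiency": 0.15,        # UE factor
    "satisfaction": 0.20,      # UE factor
}

def hybrid_score(scores):
    """Weighted average of per-metric scores, each on a 0-1 scale."""
    return sum(HYBRID_WEIGHTS[metric] * s for metric, s in scores.items())

# Hypothetical scores for one app, e.g. from expert review and user tests:
app_scores = {
    "error_prevention": 0.9,
    "consistency": 0.8,
    "effectiveness": 0.7,
    "efficiency": 0.6,
    "satisfaction": 0.85,
}
print(round(hybrid_score(app_scores), 2))  # 0.78
```

    A guidance tool could rank candidate MHEAs by such a score, with weights adjusted to each stakeholder group's priorities.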

    A hybrid evaluation approach and guidance for mHealth education applications

    © Springer International Publishing AG 2018. Mobile health education applications (MHEAs) are used to support different users. Although these applications are increasing in number, there is no effective evaluation framework to measure their usability and thus save effort and time for their many user groups. This paper outlines a framework for evaluating MHEAs, together with particular evaluation metrics: an efficient hybrid of selected heuristic evaluation (HE) and usability evaluation (UE) factors that determines the usefulness and usability of MHEAs. We also propose a guidance tool to help stakeholders choose the most suitable MHEA. The framework is envisioned as meeting the requirements of different users and enhancing the development of MHEAs through software engineering approaches, by creating new and more effective evaluation techniques. Finally, we present qualitative and quantitative results for the framework when used with MHEAs.

    A method for mapping XML-based specifications between development methodologies

    The Unified Modeling Language (UML) is widely used by software engineers as the basis of analysis and design in software development. However, UML ignores human factors in the course of software development because of its strong emphasis on the internal structure and functionality of the application. This thesis presents a method of mapping human-computer interaction (HCI) requirement specifications generated by usability engineering (UE) methodologies (e.g. Putting Usability First (PUF)) into UML specifications. Both sets of requirement specifications are expressed in Extensible Markup Language (XML) so that the HCI requirement specifications can be integrated into the UML ones. A Mapping Tool was developed to facilitate the creation of mappings between PUF XML tags and XMI tags, and was used to create mappings between PUF and UML requirement specifications. The mapping process and its outputs were evaluated to demonstrate that the tool worked. The results of the evaluation show that the HCI requirement specifications represented by the PUF XML tags can improve the UML specification when added to the XMI tags.
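    The tag-mapping step can be sketched with Python's standard xml.etree.ElementTree. The PUF-style and XMI-style tag names in the mapping table are hypothetical placeholders; the real PUF schema and XMI use richer, namespaced vocabularies.

```python
import xml.etree.ElementTree as ET

# Hypothetical one-to-one mapping from PUF-style tags to XMI-style tags;
# the actual PUF and XMI schemas are not shown in the abstract.
TAG_MAP = {
    "Task": "UseCase",
    "User": "Actor",
    "UsabilityRequirement": "Constraint",
}

def map_tags(element):
    """Recursively rename tags found in the mapping table,
    leaving unmapped tags and all attributes intact."""
    element.tag = TAG_MAP.get(element.tag, element.tag)
    for child in element:
        map_tags(child)
    return element

puf_spec = ET.fromstring('<Spec><Task name="login"/><User role="admin"/></Spec>')
xmi_spec = map_tags(puf_spec)
print(ET.tostring(xmi_spec, encoding="unicode"))
```

    A real mapping would also translate attributes and nesting rules, not just tag names, which is the role the thesis assigns to its Mapping Tool.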