    Supporting user-perceived usability benchmarking through a developed quantitative metric

    Most user-centered assessment activities for ensuring usability are principally focused on performing formative evaluations, enrolling users to complete different tasks and thus obtaining indicators such as effectiveness and efficiency. However, in broader scenarios such as User Experience (UX) assessments, user-perceived satisfaction (or perceived usability) is even more relevant. There are different methods for measuring user perception; however, most of them are mainly qualitative and based on individual assessments, providing little specific support for carrying out comparisons, i.e., benchmarking of user-perceived usability. In this paper, we propose a quantitative metric to achieve comparative evaluations of usability perception based on Reaction Cards, a popular method for obtaining the user's subjective satisfaction in UX assessments. The metric was developed through an empirical study and has been validated with usability experts. In addition, we provide a supporting tool based on the developed metric, featuring a framework to store historical evaluations in order to obtain charts and benchmark levels for comparing perceived usability across different artifacts such as software products, application categories, services, mockups, and prototypes. Furthermore, an evaluation involving usability professionals was carried out, providing satisfactory results for the research questions and thus demonstrating the suitability of the proposed approach. This work was partially supported by the Spanish Government [grant number TIN2014-52129-R] and the Madrid Research Council [grant number S2013/ICE-2715].
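
    The abstract does not state the metric's actual formula, so the sketch below is only a rough illustration of the underlying idea: turning Reaction Cards selections into a single number that can be compared across artifacts. The class name, the positive-card subset, and the scoring rule (fraction of positively valenced cards per session, averaged over sessions) are assumptions, not the paper's metric.

        import java.util.List;
        import java.util.Set;

        // Hypothetical sketch: turn Reaction Cards selections into one comparable
        // score per artifact. The real metric from the paper is not reproduced here.
        public class ReactionCardScore {

            // Cards treated as positive in this toy example (a small subset of the full deck).
            private static final Set<String> POSITIVE =
                    Set.of("Useful", "Easy to use", "Reliable", "Attractive", "Efficient");

            /** Fraction of selected cards that are positive, averaged over sessions. */
            public static double score(List<Set<String>> sessions) {
                double sum = 0.0;
                for (Set<String> selected : sessions) {
                    if (selected.isEmpty()) continue;
                    long positives = selected.stream().filter(POSITIVE::contains).count();
                    sum += (double) positives / selected.size();
                }
                return sessions.isEmpty() ? 0.0 : sum / sessions.size();
            }

            public static void main(String[] args) {
                List<Set<String>> sessions = List.of(
                        Set.of("Useful", "Slow", "Reliable"),
                        Set.of("Easy to use", "Attractive"));
                System.out.printf("Perceived-usability score: %.2f%n", score(sessions));
            }
        }

    A stored history of scores of this kind per artifact is what would make the charts and benchmark levels described for the supporting tool straightforward to derive.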

    Principles in Patterns (PiP) : Project Evaluation Synthesis

    The evaluation found that the technology-supported approach to curriculum design and approval developed by PiP demonstrated high levels of user acceptance, promoted improvements to the quality of curriculum designs, rendered aspects of the curriculum approval and quality monitoring process more transparent and efficient, demonstrated process efficacy, and resolved a number of chronic information management difficulties which pervaded the previous state. The creation of a central repository of curriculum designs as the basis for their management as "knowledge assets", thus facilitating re-use and sharing of designs and exposure of tacit curriculum design practice, was also found to be highly advantageous. However, further process improvements remain possible, and evidence of resistance to the system was found in some stakeholder groups. Recommendations arising from the findings and conclusions include the need to improve data collection surrounding the curriculum approval process so that the process and human impact of C-CAP can be monitored and observed. Strategies for improving C-CAP acceptance among the "late majority", the need for C-CAP best practice guidance, and suggested protocols on the knowledge management of curriculum designs are proposed. Opportunities for further process improvements in institutional curriculum approval, including a re-engineering of post-faculty approval processes, are also recommended.

    Essential guidelines for computational method benchmarking

    In computational biology and other sciences, researchers are frequently faced with a choice between several computational methods for performing data analyses. Benchmarking studies aim to rigorously compare the performance of different methods using well-characterized benchmark datasets, to determine the strengths of each method or to provide recommendations regarding suitable choices of methods for an analysis. However, benchmarking studies must be carefully designed and implemented to provide accurate, unbiased, and informative results. Here, we summarize key practical guidelines and recommendations for performing high-quality benchmarking analyses, based on our experiences in computational biology.
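
    As a minimal sketch of the kind of comparison such guidelines govern (hypothetical method and dataset names; the paper itself gives methodological guidance rather than code), the example below runs several interchangeable methods over the same benchmark datasets and reports one shared metric so that results are directly comparable across methods.

        import java.util.List;
        import java.util.Map;
        import java.util.function.Function;

        // Hypothetical sketch: evaluate candidate methods on shared benchmark
        // datasets with a single, well-defined performance metric (accuracy).
        public class BenchmarkSketch {

            record Dataset(String name, double[] inputs, boolean[] truth) {}

            /** Accuracy of a boolean classifier against ground truth. */
            static double accuracy(Function<Double, Boolean> method, Dataset d) {
                int correct = 0;
                for (int i = 0; i < d.inputs().length; i++) {
                    if (method.apply(d.inputs()[i]) == d.truth()[i]) correct++;
                }
                return (double) correct / d.inputs().length;
            }

            public static void main(String[] args) {
                List<Dataset> benchmarks = List.of(
                        new Dataset("toy-1", new double[]{0.1, 0.6, 0.9}, new boolean[]{false, true, true}),
                        new Dataset("toy-2", new double[]{0.4, 0.5, 0.7}, new boolean[]{false, true, true}));

                Map<String, Function<Double, Boolean>> methods = Map.of(
                        "threshold-0.5", x -> x > 0.5,
                        "threshold-0.3", x -> x > 0.3);

                for (var m : methods.entrySet()) {
                    for (Dataset d : benchmarks) {
                        System.out.printf("%s on %s: accuracy %.2f%n",
                                m.getKey(), d.name(), accuracy(m.getValue(), d));
                    }
                }
            }
        }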

    Improving infinIT Usability at EMC Corporation

    The purpose of this MQP was to assist the Client Experience Team at EMC Corporation in redesigning the user experience of a globally used internal portal at EMC. The MQP team conducted two benchmarking and two formative studies to assess the portal’s current state and identify improvement opportunities. Drawing on both quantitative and qualitative methods such as eye-tracking, surveys, and interviews, the team used the results of the benchmarking studies to propose recommendations for improving the portal’s content, layout, and visual appeal. The results of the formative studies confirmed the effectiveness of the recommendations and provided insight for the next step in the process. This MQP served as an integral part of the User Centric Software Development Life Cycle at EMC IT.

    FEeSU - A Framework for Evaluating eHealth Systems Usability: A Case of Tanzania Health Facilities

    Adopting eHealth systems in the health sector has changed the means of providing health services and increased the quality of service in many countries. The usability of these systems needs to be evaluated from time to time to reduce or completely avoid risks such as jeopardized patient data and medication errors. However, the existing frameworks are not sensitive to country context, since they are designed with the practices of developed countries in mind. Such contexts differ from developing countries such as Tanzania in culture, resource settings, and levels of computer literacy. This paper presents the framework for evaluating eHealth system usability (FEeSU), which is designed with a focus on developing-country contexts and tested in Tanzania. Healthcare professionals, including doctors, nurses, laboratory technologists, and pharmacists, were the main participants in this research, providing practice-oriented requirements based on their experience, best practices, and healthcare norms. The framework comprises six steps to be followed in the evaluation process. These steps are associated with important components, including usability metrics, stakeholders, usability evaluation methods, and contextual issues necessary for usability evaluation. The proposed usability evaluation framework can be used as a guideline by different eHealth system stakeholders when preparing, designing, and performing the evaluation of a system's usability. Keywords: usability metrics, usability evaluation, contextual issues, eHealth systems, framework for usability evaluation, FEeSU. DOI: 10.7176/CEIS/10-1-01. Publication date: September 30th 202
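
    A minimal sketch of how the components named in the abstract could be organized in code is given below. The actual six FEeSU steps are not named in the abstract, so the step shown uses a placeholder name; the metric, method, and contextual-issue strings are likewise illustrative assumptions rather than the framework's own content.

        import java.util.List;

        // Hypothetical sketch: attach the components named in the abstract
        // (metrics, stakeholders, evaluation methods, contextual issues) to an
        // ordered evaluation step. Step names below are placeholders only.
        public class UsabilityEvaluationPlan {

            record Step(int order, String placeholderName,
                        List<String> metrics, List<String> stakeholders,
                        List<String> methods, List<String> contextualIssues) {}

            public static void main(String[] args) {
                Step step = new Step(
                        1, "placeholder: prepare evaluation",
                        List.of("task completion rate", "error rate"),
                        List.of("doctors", "nurses", "laboratory technologists", "pharmacists"),
                        List.of("observation", "questionnaire"),
                        List.of("resource setting", "computer literacy"));
                System.out.println("Step " + step.order() + ": " + step.placeholderName()
                        + " involving " + step.stakeholders());
            }
        }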

    Evaluating the usability and usefulness of a digital library

    Purpose - System usability and system usefulness are interdependent properties of system interaction which, in combination, determine system satisfaction and usage. They are often approached separately or, in the case of digital libraries, with a focus on usability, but there is emerging consensus in the research community that they deserve unified treatment and research attention. However, a key challenge is to identify, both respectively and relatively, what to measure and how, compounded by concerns regarding common understanding of usability measures and associated calls for more valid and complete measures within integrated and comprehensive models. The purpose of this paper is to address this challenge. Design/methodology/approach - The authors identified key usability and usefulness attributes and associated measures, compiled an integrated measurement framework, identified a suitable methodological approach for applying the framework, and conducted a pilot study on an interactive search system developed by a Health Service as part of its e-library service. Findings - Effectiveness, efficiency, aesthetic appearance, terminology, navigation, and learnability are key attributes of system usability; relevance, reliability, and currency are key attributes of system usefulness. There are shared aspects to several of these attributes, but each is also sufficiently unique to preserve its respective validity. They can be combined as part of a multi-method approach to system evaluation. Research limitations/implications - The pilot study demonstrated that usability and usefulness can be readily combined and that questionnaire and observation are valid components of a multi-method approach, but further research is called for under a variety of conditions, with further combinations of methods, and with larger samples. Originality/value - This paper provides an integrated measurement framework, derived from the goal, question, metric paradigm, which offers a relatively comprehensive and representative set of system usability and system usefulness attributes and associated measures, and which could be adapted and further refined on a case-by-case basis.
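
    As a small illustration, the sketch below scores the attributes named in the abstract. The 1-5 questionnaire scale and the simple per-dimension averaging are assumptions for illustration only; the paper's framework derives its measures from the goal, question, metric paradigm and combines questionnaire and observation data.

        import java.util.Map;

        // Hypothetical sketch: the attribute names come from the abstract; the
        // 1-5 scores and the averaging scheme are illustrative assumptions.
        public class UsabilityUsefulnessScores {

            static double average(Map<String, Integer> scores) {
                return scores.values().stream().mapToInt(Integer::intValue).average().orElse(0.0);
            }

            public static void main(String[] args) {
                Map<String, Integer> usability = Map.of(
                        "effectiveness", 4, "efficiency", 3, "aesthetic appearance", 4,
                        "terminology", 5, "navigation", 3, "learnability", 4);
                Map<String, Integer> usefulness = Map.of(
                        "relevance", 5, "reliability", 4, "currency", 3);

                System.out.printf("System usability:  %.2f / 5%n", average(usability));
                System.out.printf("System usefulness: %.2f / 5%n", average(usefulness));
            }
        }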

    Analysis and measurement of internal usability metrics through code annotations

    Nowadays, usability is regarded as an important quality characteristic to be considered throughout the software development process. A great variety of usability techniques have been proposed so far, mostly intended to be applied during the analysis, design, and final testing phases of software projects. However, little or no attention has been paid to the analysis and measurement of usability in the implementation phase; usability testing is traditionally executed in later stages. Yet the detection of usability flaws during implementation is of utmost importance to foresee and prevent problems in the use of the software and to avoid significant cost increases. In this paper, we propose a feasible solution to analyze and measure usability metrics during the implementation phase. Specifically, we have developed a framework featuring code annotations that provides a systematic evaluation of usability throughout the source code. These annotations are interpreted by an annotation processor to obtain valuable information and automatically calculate usability metrics at compile time. In addition, an evaluation with 32 participants has been carried out to demonstrate the effectiveness and efficiency of our approach in comparison to the manual process of analyzing and measuring internal usability metrics. Perceived satisfaction was also evaluated, demonstrating that our approach can be considered a valuable tool for dealing with usability metrics during the implementation phase. This work was partially supported by the Madrid Research Council (P2018/TCS-4314).
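
    The following is a minimal sketch of the general mechanism described: a marker annotation placed in the source code and a compile-time annotation processor that derives a simple internal metric from the annotated elements. The annotation name @UsabilityFeature and the counted metric are illustrative assumptions, not the annotations or metrics defined by the paper's framework.

        import java.lang.annotation.*;
        import java.util.Set;
        import javax.annotation.processing.*;
        import javax.lang.model.SourceVersion;
        import javax.lang.model.element.Element;
        import javax.lang.model.element.TypeElement;
        import javax.tools.Diagnostic;

        // Hypothetical marker annotation for usability-relevant source elements.
        @Retention(RetentionPolicy.SOURCE)
        @Target({ElementType.METHOD, ElementType.TYPE})
        @interface UsabilityFeature {
            String value(); // e.g. "undo support", "input validation"
        }

        // Hypothetical compile-time processor that counts annotated elements
        // and reports the count as a simple internal usability metric.
        @SupportedAnnotationTypes("UsabilityFeature")
        @SupportedSourceVersion(SourceVersion.RELEASE_17)
        class UsabilityMetricProcessor extends AbstractProcessor {
            @Override
            public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
                Set<? extends Element> annotated = roundEnv.getElementsAnnotatedWith(UsabilityFeature.class);
                if (!annotated.isEmpty()) {
                    processingEnv.getMessager().printMessage(Diagnostic.Kind.NOTE,
                            "Annotated usability features: " + annotated.size());
                }
                return false; // do not claim the annotation; other processors may still see it
            }
        }

    In practice a processor like this is compiled separately and registered through a META-INF/services/javax.annotation.processing.Processor file (or passed to javac with -processor), so the metric is reported on every compilation.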

    Rethinking Productivity in Software Engineering

    Get the most out of this foundational reference and improve the productivity of your software teams. This open access book collects the wisdom of the 2017 Dagstuhl seminar on productivity in software engineering, a meeting of community leaders who came together with the goal of rethinking traditional definitions and measures of productivity. The result of their work, Rethinking Productivity in Software Engineering, includes chapters covering definitions and core concepts related to productivity, guidelines for measuring productivity in specific contexts, best practices and pitfalls, and theories and open questions on productivity. You'll benefit from the many short chapters, each offering a focused discussion on one aspect of productivity in software engineering. Readers in many fields and industries will benefit from the collected work. Developers wanting to improve their personal productivity will learn effective strategies for overcoming common issues that interfere with progress. Organizations thinking about building internal programs for measuring the productivity of programmers and teams will learn best practices from industry and researchers. And researchers can leverage the conceptual frameworks and rich body of literature in the book to effectively pursue new research directions. What You'll Learn: review the definitions and dimensions of software productivity; see how time management is having the opposite of the intended effect; develop valuable dashboards; understand the impact of sensors on productivity; avoid software development waste; work with human-centered methods to measure productivity; look at the intersection of neuroscience and productivity; and manage interruptions and context-switching. Who This Book Is For: industry developers and those responsible for seminar-style courses that include a segment on software developer productivity. Chapters are written for a generalist audience, without excessive use of technical terminology. The book collects the wisdom of software engineering thought leaders in a form digestible for any developer, shares hard-won best practices and pitfalls to avoid, and offers an up-to-date look at current practices in software engineering productivity.