
    Measuring the Quality of the Website User Experience

    Consumers spend an increasing amount of time and money online finding information, completing tasks, and making purchases. The quality of the website experience has become a key differentiator for organizations, affecting whether visitors purchase and how likely they are to return and recommend a website to friends. Two instruments were created to more effectively measure, and thereby help improve, the quality of the website user experience. Three studies used Classical Test Theory (CTT) to create a new instrument that measures the quality of the website user experience from the website visitor's perspective. Data were collected over five years from more than 4,000 respondents reflecting on experiences with more than 100 websites. An eight-item questionnaire of website quality was created: the Standardized User Experience Percentile Rank Questionnaire (SUPR-Q). The SUPR-Q contains four factors: usability, trust, appearance, and loyalty. The factor structure was replicated across three studies, with data collected both during usability tests and retrospectively in surveys. There was evidence of convergent validity with existing questionnaires, including the System Usability Scale (SUS). An initial distribution of scores across the websites generated a database used to produce percentile ranks and make scores more meaningful to researchers and practitioners. In Study 4, a new set of data and confirmatory factor analysis (CFA) confirmed the factor structure and generated alternative items that work on non-e-commerce websites. The SUPR-Q can be used to generate reliable scores when benchmarking websites, and the normed scores can be used to understand how well a website scores relative to others in the database. A fifth study was designed to develop and evaluate guidelines regarding the quality of the user experience that could be judged by experts.
Study 5 establishes a Calibrated Evaluator's Guide (CEG) for evaluators to review websites against a set of guidelines to predict perceptions of the quality of the website user experience. The CEG was refined from 105 to 37 items using the many-faceted Rasch model. The CEG was found to complement the SUPR-Q by providing a more detailed description of the website user experience. Suggestions for practical use and future research are discussed.
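    The percentile-rank idea behind the SUPR-Q database can be sketched in a few lines: a website's raw score is located within a normed distribution of other sites' scores. This is a minimal illustration, not the published scoring procedure; the database values below are hypothetical.

    ```python
    def percentile_rank(score, database_scores):
        """Percentile rank of a score within a normed database:
        the share of database scores below it, counting ties as half
        (a common convention)."""
        below = sum(1 for s in database_scores if s < score)
        ties = sum(1 for s in database_scores if s == score)
        return 100 * (below + 0.5 * ties) / len(database_scores)

    # Hypothetical mean questionnaire scores for benchmarked websites
    db = [3.2, 3.6, 3.8, 4.0, 4.1, 4.3, 4.5]
    print(round(percentile_rank(4.0, db), 1))  # → 50.0
    ```

    A percentile rank of 50 means the site scores better than half the sites in the database, which is what makes a normed score more interpretable than a raw questionnaire mean.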

    Why Do Developers Get Password Storage Wrong? A Qualitative Usability Study

    Passwords are still a mainstay of various security systems, as well as the cause of many usability issues. For end-users, many of these issues have been studied extensively, highlighting problems, informing design decisions for better policies, and motivating research into alternatives. However, end-users are not the only ones who have usability problems with passwords! Developers who are tasked with writing the code by which passwords are stored must do so securely. Yet history has shown that this complex task often fails due to human error, with catastrophic results. While a bad password chosen by an end-user can have dire consequences, a developer who forgets to hash and salt a password database can cause far larger problems. In this paper, we present a first qualitative usability study with 20 computer science students to discover how developers deal with password storage and to inform research into aiding developers in the creation of secure password systems.
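    The "hash and salt" task the abstract refers to can be sketched with a slow key-derivation function. This is one conventional approach (PBKDF2 from Python's standard library), offered as an illustration rather than the study's prescribed solution; the iteration count is an assumption in line with current guidance.

    ```python
    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Derive a slow, salted hash. Store BOTH the random salt and the
        digest; never store the plaintext or an unsalted fast hash."""
        salt = os.urandom(16)  # unique per password
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        """Re-derive with the stored salt and compare in constant time."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("Tr0ub4dor&3", salt, digest)
    ```

    The per-password salt defeats precomputed rainbow tables, and the high iteration count slows offline guessing, which is exactly the protection a leaked database of fast unsalted hashes lacks.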

    Customer analytics for dummies

    x, 324 p.; 25 cm

    A method to standardize usability metrics into a single score

    Current methods to represent system or task usability in a single metric do not include all the ANSI- and ISO-defined usability aspects: effectiveness, efficiency, and satisfaction. We propose a method to combine all the ANSI and ISO aspects of usability into a single, standardized, and summated usability metric (SUM). In four data sets, totaling 1,860 task observations, we show that these aspects of usability are correlated and equally weighted, and we present a quantitative model for usability. Using standardization techniques from Six Sigma, we propose a scalable process for standardizing disparate usability metrics and show how Principal Components Analysis can be used to establish appropriate weighting for a summated model. SUM provides one continuous variable for summative usability evaluations that can be used in regression analysis, hypothesis testing, and usability reporting.
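    The standardize-and-summate step can be sketched as converting each component metric to z-scores and averaging them with equal weights, which the SUM analysis found to be a reasonable approximation. The task data below are hypothetical, and this omits the Six Sigma specification-limit standardization and PCA weighting described in the abstract.

    ```python
    import statistics

    def standardize(values):
        """Convert raw metric values to z-scores (mean 0, sd 1)."""
        mean = statistics.mean(values)
        sd = statistics.stdev(values)
        return [(v - mean) / sd for v in values]

    # Hypothetical per-user task data:
    # completion (0/1), time in seconds (lower is better), satisfaction (1-5)
    completion = [1, 1, 0, 1, 1, 0, 1, 1]
    time_s = [42, 55, 90, 38, 47, 120, 50, 44]
    satisfaction = [4, 4, 2, 5, 4, 1, 4, 3]

    z_comp = standardize(completion)
    z_time = [-z for z in standardize(time_s)]  # invert: faster is better
    z_sat = standardize(satisfaction)

    # Equal weights across the three components
    sum_scores = [(a + b + c) / 3 for a, b, c in zip(z_comp, z_time, z_sat)]
    ```

    Because each component is on the same standardized scale, the summated score is a single continuous variable suitable for the regression and hypothesis tests the abstract mentions.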

    The Relationship Between Problem Frequency and Problem Severity in Usability Evaluations

    The relationship between problem frequency and severity has been the subject of an ongoing discussion in the usability literature. There is conflicting evidence as to whether more severe problems affect more users or whether problem severity and frequency are independent, especially in cases where problem severity is based on the judgment of the evaluator. In this paper, multiple evaluators independently rated the severity of usability problems across nine usability studies using their judgment, as opposed to data-driven assessments. The average correlation across all nine studies was not significantly different from zero. Only one study showed a positive correlation between problem frequency and severity. This analysis suggests that researchers should treat problem severity and problem frequency as independent factors.
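    The per-study analysis amounts to correlating, across a study's problem list, the share of users who encountered each problem with its mean evaluator severity rating. A minimal sketch with hypothetical data for one study:

    ```python
    from statistics import correlation  # Python 3.10+

    # Hypothetical problems from one usability study:
    # frequency = proportion of users who hit the problem
    # severity  = mean evaluator rating on a 1-4 scale
    frequency = [0.10, 0.25, 0.40, 0.15, 0.60, 0.05]
    severity = [3.0, 1.5, 2.0, 3.5, 2.5, 1.0]

    # Pearson r between frequency and judged severity
    r = correlation(frequency, severity)
    ```

    Repeating this per study and testing whether the mean r differs from zero mirrors the paper's finding that, on average, it does not.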

    Premium usability


    Quantifying the user experience : Practical statistics for user research

    Amsterdam: xv, 295 p.; 25 cm