    Adaptive development and maintenance of user-centric software systems

    A software system cannot be developed without considering the various facets of its environment. Stakeholders – including the users, who play a central role – have their own needs, expectations, and perceptions of a system. Organisational and technical aspects of the environment are constantly changing. The ability to adapt a software system and its requirements to its environment throughout its full lifecycle is of paramount importance in a constantly changing environment. The continuous involvement of users is as important as the constant evaluation of the system and the observation of evolving environments. We present a methodology for adaptive software systems development and maintenance. We draw upon a diverse range of accepted methods, including participatory design, software architecture, and evolutionary design. Our focus is on user-centred software systems.

    Towards a kansei-based user modeling methodology for eco-design

    We propose here to highlight the benefits of building a framework linking Kansei Design (KD), User Centered Design (UCD) and Eco-design, as the correlation between these fields is barely explored in research at the current time. We believe Kansei Design could serve the goal of achieving more sustainable products by establishing an accurate understanding of the user in terms of ecological awareness, and consequently enhancing performance in the Eco-design process. In the same way, we will consider the means-end chain approach inspired by marketing research, as it is useful for identifying ecological values, mapping associated functions and defining suitable design solutions. The information gathered will serve as input data for conducting scenario-based design and supporting the development of an Eco-friendly User Centered Design methodology (EcoUCD).

    Methodological development

    Book description: Human-Computer Interaction draws on the fields of computer science, psychology, cognitive science, and organisational and social sciences in order to understand how people use and experience interactive technology. Until now, researchers have been forced to return to the individual disciplines to learn about research methods and how to adapt them to the particular challenges of HCI. This is the first book to provide a single resource through which a range of commonly used research methods in HCI are introduced. Chapters are authored by internationally leading HCI researchers who use examples from their own work to illustrate how the methods apply in an HCI context. Each chapter also contains key references to help researchers find out more about each method as it has been used in HCI. Topics covered include experimental design, use of eye tracking, qualitative research methods, cognitive modelling, how to develop new methodologies, and writing up your research.

    Evaluating system utility and conceptual fit using CASSM

    There is a wealth of user-centred evaluation methods (UEMs) to support the analyst in assessing interactive systems. Many of these support detailed aspects of use – for example: Is the feedback helpful? Are labels appropriate? Is the task structure optimal? Few UEMs encourage the analyst to step back and consider how well a system supports users’ conceptual understandings and system utility. In this paper, we present CASSM, a method which focuses on the quality of ‘fit’ between users and an interactive system. We describe the methodology of conducting a CASSM analysis and illustrate the approach with three contrasting worked examples (a robotic arm, a digital library system and a drawing tool) that demonstrate different depths of analysis. We show how CASSM can help identify re-design possibilities to improve system utility. CASSM complements established evaluation methods by focusing on conceptual structures rather than procedures. Prototype tool support for completing a CASSM analysis is provided by Cassata, an open source development.

    Usability evaluation of digital libraries: a tutorial

    This one-day tutorial is an introduction to usability evaluation for digital libraries. In particular, we will introduce Claims Analysis. This approach focuses on the designers’ motivations and reasons for making particular design decisions and examines the effect on the user’s interaction with the system. The general approach, as presented by Carroll and Rosson (1992), has been tailored specifically to the design of digital libraries. Digital libraries are notoriously difficult to design well in terms of their eventual usability. In this tutorial, we will present an overview of usability issues and techniques for digital libraries, and a more detailed account of Claims Analysis, including two supporting techniques – simple cognitive analysis based on Norman’s ‘action cycle’, and scenarios and personas. Through a graduated series of worked examples, participants will get hands-on experience of applying this approach to developing more usable digital libraries. This tutorial assumes no prior knowledge of usability evaluation, and is aimed at all those involved in the development and deployment of digital libraries.

    Scoping analytical usability evaluation methods: A case study

    Analytical usability evaluation methods (UEMs) can complement empirical evaluation of systems: for example, they can often be used earlier in design and can provide accounts of why users might experience difficulties, as well as what those difficulties are. However, their properties and value are only partially understood. One way to improve our understanding is through detailed comparisons using a single interface or system as a target for evaluation, but we need to look deeper than simple problem counts: we need to consider what kinds of accounts each UEM offers, and why. Here, we report on a detailed comparison of eight analytical UEMs. These eight methods were applied to a robotic arm interface, and the findings were systematically compared against video data of the arm in use. The usability issues that were identified could be grouped into five categories: system design, user misconceptions, conceptual fit between user and system, physical issues, and contextual issues. Other possible categories, such as user experience, did not emerge in this particular study. With the exception of Heuristic Evaluation, which supported a range of insights, each analytical method was found to focus attention on just one or two categories of issues. Two of the three "home-grown" methods (Evaluating Multimodal Usability and Concept-based Analysis of Surface and Structural Misfits) were found to occupy particular niches in the space, whereas the third (Programmable User Modeling) did not. This approach has identified commonalities and contrasts between methods and provided accounts of why a particular method yielded the insights it did. Rather than considering measures such as problem count or thoroughness, this approach has yielded insights into the scope of each method.