
    Adaptive development and maintenance of user-centric software systems

    A software system cannot be developed without considering the various facets of its environment. Stakeholders – including the users, who play a central role – have their own needs, expectations, and perceptions of a system, and the organisational and technical aspects of the environment are constantly changing. The ability to adapt a software system and its requirements to its environment throughout its full lifecycle is therefore of paramount importance. The continuous involvement of users is as important as the constant evaluation of the system and the observation of its evolving environment. We present a methodology for adaptive software systems development and maintenance, drawing upon a diverse range of accepted methods including participatory design, software architecture, and evolutionary design. Our focus is on user-centred software systems.

    Methodological development

    Book description: Human-Computer Interaction draws on the fields of computer science, psychology, cognitive science, and the organisational and social sciences in order to understand how people use and experience interactive technology. Until now, researchers have been forced to return to the individual disciplines to learn about research methods and how to adapt them to the particular challenges of HCI. This is the first book to provide a single resource introducing a range of research methods commonly used in HCI. Chapters are authored by internationally leading HCI researchers who use examples from their own work to illustrate how the methods apply in an HCI context. Each chapter also contains key references to help researchers find out more about each method as it has been used in HCI. Topics covered include experimental design, the use of eyetracking, qualitative research methods, cognitive modelling, developing new methodologies, and writing up research.

    Understanding the fidelity effect when evaluating games with children

    A number of studies have compared evaluation results from prototypes of different fidelities, but very few of these involve children. This paper reports a comparative study of three prototypes, ranging from low fidelity to high fidelity, within the context of mobile games, using a between-subjects design with 37 participants aged 7 to 9. The children played a matching game on either an iPad, a paper prototype using screenshots of the actual game, or a sketched version. Observational data were captured to establish the usability problems, and two tools from the Fun Toolkit were used to measure user experience. The results showed that there was little difference in user experience between the three prototypes, and very few usability problems were unique to a specific prototype. The contribution of this paper is to show that children can effectively evaluate games of this genre and style using low-fidelity prototypes.

    Evaluating system utility and conceptual fit using CASSM

    There is a wealth of user-centred evaluation methods (UEMs) to support the analyst in assessing interactive systems. Many of these address detailed aspects of use – for example: Is the feedback helpful? Are labels appropriate? Is the task structure optimal? Few UEMs encourage the analyst to step back and consider how well a system supports users’ conceptual understandings and the system’s overall utility. In this paper, we present CASSM, a method which focuses on the quality of ‘fit’ between users and an interactive system. We describe the methodology of conducting a CASSM analysis and illustrate the approach with three contrasting worked examples (a robotic arm, a digital library system and a drawing tool) that demonstrate different depths of analysis. We show how CASSM can help identify re-design possibilities to improve system utility. CASSM complements established evaluation methods by focusing on conceptual structures rather than procedures. Prototype tool support for completing a CASSM analysis is provided by Cassata, an open-source development.

    Toward a document evaluation methodology: What does research tell us about the validity and reliability of evaluation methods?

    Although the usefulness of evaluating documents has become generally accepted among communication professionals, the supporting research that puts evaluation practices empirically to the test is only beginning to emerge. This article presents an overview of the available research on troubleshooting evaluation methods. Four lines of research are distinguished, concerning the validity of evaluation methods, sample composition, sample size, and the implementation of evaluation results during revision.

    Systematic evaluation of design choices for software development tools

    Most design and evaluation of software tools is based on the intuition and experience of the designers. Software tool designers consider themselves typical users of the tools that they build and tend to evaluate their products subjectively rather than objectively, using established usability methods. This subjective approach is inadequate if the quality of software tools is to improve, so the use of more systematic methods is advocated. This paper summarises a sequence of studies that show how user interface design choices for software development tools can be evaluated using established usability engineering techniques, including guideline review, predictive modelling and experimental studies with users.

    Usability discussions in open source development

    The public nature of discussion in open source projects provides a valuable resource for understanding the mechanisms of open source software development. In this paper we explore how open source projects address issues of usability. We examine the bug reports of several projects to characterise how developers address and resolve issues concerning user interfaces and interaction design, and we discuss how bug reporting and discussion systems can be improved to better support bug reporters and open source developers.

    Use of scenario evaluation in preparation for deployment of a collaborative system for knowledge transfer - the case of KiMERA

    This paper presents an approach to the evaluation of a collaborative system after the completion of system development and software testing but before its deployment. Scenarios and collaborative episodes were designed, and data were collected from users role-playing them. This proved to be a useful step in refining user training, in setting the right level of user expectation when the system was rolled out to real users, and in providing feedback to the development team.