
    UI design and cross-cultural communication

    Thesis submitted to the Department of Computer Science, Ashesi University College, in partial fulfillment of the Bachelor of Science degree in Computer Science, April 2013. This thesis explores young Ghanaian students’ conceptualizations of and reactions to a website. Through a design evaluation of an online pen-pal website and usability testing with students, it attempts to determine to what extent such a website can be used to help African-American and African children form better perceptions of each other, and how such a website might be designed to support this. The approach first determines Ghanaian students’ expectations of a pen-pal website and compares them with those of American students, then has the students test the site and give feedback based on their reactions. Suggestions are then made for future work on measuring website usability and cross-cultural communication. Ashesi University College

    Social Context in Usability Evaluations: Concepts, Processes and Products


    Evaluating First Experiences with an Educational Computer Game: A Multi-Method Approach

    This paper presents our evaluation approach for a specific case study: the evaluation of an early prototype of an educational game with children aged between 12 and 14 years. The main goal of this initial evaluation study was, on the one hand, to explore children’s first impressions and experiences of the game and, on the other hand, to assess the students’ ideas and wishes for the further development of the game. The main challenge for the evaluation activities was the selection of an appropriate methodological approach that takes children into account as a special user group. We opted for a combination of different, mainly qualitative and explorative methods that have been reported to be beneficial for work with children in the human-computer interaction (HCI) field. By presenting our multi-method approach, in particular the different steps and the procedure within our study, other researchers can gain inspiration for follow-up activities when evaluating games with children, as well as benefit from our experiences in exploring more collaborative methods and methodological combinations.

    Exploring how children interact with 3D shapes using haptic technologies

    Haptic devices have the potential to enhance the learning experience by foregrounding embodied, sensory and multi-modal elements of learning topics. In this paper, we report ongoing work investigating a game prototype with haptic feedback for seven-year-old children's engagement with geometrical concepts, as part of an iterative design study. Our findings include a new game-play mode adopted by the children that empowers the use of haptic feedback in game play and has the potential to enable the enactment of shape properties in the game-play process.

    Using paper prototyping as a rapid participatory design technique in the design of MLCAT - a lecture podcasting tool

    Podcasting has permeated higher education environments in the developed world. Despite this, little published research explores podcasting in higher education institutions in the developing world. In areas with limited electricity, never mind internet access, how can podcasting succeed? This paper describes Participatory Design activities with university lecturers in sub-Saharan Africa (University of Cape Town and Makerere University) to design a podcasting tool. We postulate that by involving lecturers in the design, we can identify their specific requirements and increase the likelihood that they will accept and use the tool. Academics have heavy workloads and tight schedules, and conducting design sessions with busy professionals demands preparation, improvisation, and clarity of purpose. This paper therefore presents the use of the paper prototyping technique during two-hour Participatory Design sessions with lecturers in the design of a horizontal MLCAT prototype. In addition, we present formative evaluations that reveal insightful results, which will inform the further implementation of the tool.

    Practical and ethical concerns in usability testing with children

    It is common practice to evaluate interactive technology with users. In industry, usability companies typically carry out these evaluations, and the participants are usually adults. In research studies, the evaluation is typically performed by researchers who do not do this sort of work on a daily basis. Complexity increases when the researcher is also the developer of the software and when the users are children. This case study explores that space: the evaluation of software by researcher-developers with children. The chapter describes the evaluation of an educational game designed to teach Spanish to children. It outlines the planning for, and execution of, a usability study of the game with 25 children aged 7-8 in a school in the UK. The study used two methods to try to discover usability problems, direct observation and retrospective think-aloud, and also gathered user experience data using the Fun Toolkit. The focus of this chapter is less on the results of the evaluation (although these are presented) than on the practical and ethical concerns of conducting usability evaluations of games with children within a school setting. Readers will gather hints and tips from the narrative and will better understand the use of the three methods included in the study. In addition, the researcher-developer role is discussed, and it is shown that the methods used here enabled children to make judgments without the ownership of the product being an issue. To make the main points more concrete, the chapter closes with a set of ‘key points’ to consider when doing usability testing with children in schools.
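    The Fun Toolkit mentioned in this abstract typically gathers children's ratings on instruments such as the Smileyometer, a five-point pictorial scale. As a purely illustrative sketch, not the chapter's own materials, the Python snippet below shows one way such ratings could be tallied; the ratings, function name and summary choices here are assumptions made for the example.

    # Hypothetical sketch: summarising Smileyometer-style ratings (1 = lowest,
    # 5 = highest). The data below are invented, not from the chapter's study.
    from collections import Counter
    from statistics import mean, median

    def summarise_smileyometer(ratings):
        """Return simple descriptive statistics for a list of 1-5 ratings."""
        counts = Counter(ratings)
        return {
            "n": len(ratings),
            "mean": round(mean(ratings), 2),
            "median": median(ratings),
            "distribution": {score: counts.get(score, 0) for score in range(1, 6)},
        }

    # Example: invented ratings from a made-up class of children.
    example_ratings = [5, 4, 5, 3, 5, 4, 4, 5, 2, 5, 4, 5, 3, 4, 5]
    print(summarise_smileyometer(example_ratings))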

    To intervene or not to intervene: An investigation of three think-aloud protocols in usability testing

    This paper presents the results of a study investigating the use of three think-aloud methods in website usability testing: the concurrent think-aloud, the speech-communication, and the active intervention methods. The three methods were compared through an evaluation of a library website using four points of comparison: overall task performance, test participants’ experiences, the quantity and quality of usability problems discovered, and the cost of employing each method. Data were collected from 60 individuals, 20 allocated to each method, who were asked to complete a set of nine experimental tasks. The results revealed that the three variations enabled the identification of a similar number and similar types of usability problems. However, the active intervention method was found to cause some reactivity, modifying participants’ interaction with the interface and negatively affecting their feelings towards the evaluator. The active intervention method also required a much greater investment of the evaluators' time than the other two methods.
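    As a hypothetical illustration of how per-participant problem counts from a three-group design like this might be compared (the paper's own analysis is not reproduced here), the sketch below uses invented counts and a Kruskal-Wallis test from SciPy.

    # Hypothetical sketch: comparing the number of usability problems detected
    # per participant across three think-aloud conditions. All counts are
    # invented; the study's actual data and analysis may differ.
    from scipy.stats import kruskal

    problems_concurrent = [6, 5, 7, 4, 6, 5, 8, 6, 5, 7]   # concurrent think-aloud
    problems_speech_comm = [5, 6, 6, 5, 7, 4, 6, 5, 6, 5]  # speech-communication
    problems_active_int = [7, 6, 5, 6, 8, 7, 6, 5, 7, 6]   # active intervention

    # Kruskal-Wallis: do the three conditions differ in problems detected?
    statistic, p_value = kruskal(problems_concurrent, problems_speech_comm, problems_active_int)
    print(f"H = {statistic:.2f}, p = {p_value:.3f}")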

    Are two pairs of eyes better than one? A comparison of concurrent think-aloud and co-participation methods in usability testing

    This paper presents the results of a study comparing the traditional concurrent think-aloud protocol with the co-participation method, to determine the benefit of adding an additional participant to the testing session. The two methods were compared through an evaluation of a library website, and their relative validity and utility were measured using four points of comparison: overall task performance, test participants’ experiences, the quantity and quality of problems discovered, and the cost of employing each method. The results show significant differences between the two testing methods. The co-participation method was evaluated more positively by users and led to the detection of a greater number of minor usability problems; however, it required a greater investment of time and effort on the part of the evaluator than the classical method. The study found no difference between the methods in terms of task performance.
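    For a two-group comparison of task performance such as the one reported here, the sketch below shows one way the "no difference" check might look; the scores are invented and a Mann-Whitney U test merely stands in for whatever analysis the authors actually used.

    # Hypothetical sketch: comparing invented task-performance scores between
    # concurrent think-aloud and co-participation sessions.
    from scipy.stats import mannwhitneyu

    tasks_think_aloud = [8, 7, 9, 8, 6, 8, 7, 9, 8, 7]
    tasks_co_participation = [8, 8, 7, 9, 7, 8, 8, 7, 9, 8]

    stat, p_value = mannwhitneyu(tasks_think_aloud, tasks_co_participation, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p_value:.3f}")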