
    An ideal model of an assistive technology assessment and delivery process

    The purpose of the present work is to present some aspects of the Assistive Technology Assessment (ATA) process model compatible with the Position Paper 2012 by AAATE/EASTIN. Three aspects of the ATA process are discussed in light of three topics of the Position Paper 2012: (i) the dimensions and measures of User eXperience (UX) evaluation, modelled in the ATA process as a way to verify the efficient, evidence-based practice of an AT service delivery centre; (ii) the relevance of a psychologist in the multidisciplinary team of an AT service delivery centre, necessary for a complete person-centred assistive solution that empowers users to make their own choices; and (iii) the new profession of the psychotechnologist, who explores users' needs in seeking a proper assistive solution and leads the multidisciplinary team in observing critical issues and problems. Building on the Position Paper 2012, the 1995 HEART study, the Matching Person and Technology model, the ICF framework, and the pillars of the ATA process, this paper sets forth a concept and approach that emphasise the personal factors of the individual consumer and UX as key to a successful outcome and AT solution.

    Reviewing and extending the five-user assumption: A grounded procedure for interaction evaluation

    " © ACM, 2013. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Computer-Human Interaction (TOCHI), {VOL 20, ISS 5, (November 2013)} http://doi.acm.org/10.1145/2506210 "The debate concerning how many participants represents a sufficient number for interaction testing is well-established and long-running, with prominent contributions arguing that five users provide a good benchmark when seeking to discover interaction problems. We argue that adoption of five users in this context is often done with little understanding of the basis for, or implications of, the decision. We present an analysis of relevant research to clarify the meaning of the five-user assumption and to examine the way in which the original research that suggested it has been applied. This includes its blind adoption and application in some studies, and complaints about its inadequacies in others. We argue that the five-user assumption is often misunderstood, not only in the field of Human-Computer Interaction, but also in fields such as medical device design, or in business and information applications. The analysis that we present allows us to define a systematic approach for monitoring the sample discovery likelihood, in formative and summative evaluations, and for gathering information in order to make critical decisions during the interaction testing, while respecting the aim of the evaluation and allotted budget. This approach – which we call the ‘Grounded Procedure’ – is introduced and its value argued.The MATCH programme (EPSRC Grants: EP/F063822/1 EP/G012393/1

    Why you need to include human factors in clinical and empirical studies of in vitro point of care devices? Review and future perspectives

    Use of in-vitro point-of-care devices, intended as tests performed outside laboratories and near the patient, is increasing in clinical environments. International standards indicate that interaction assessment should not end after product release, yet human factors methods are frequently not included in clinical and empirical studies of these devices. Whilst the literature confirms some advantages of bedside tests compared to those in laboratories, there is a lack of knowledge of the risks associated with their use. This article provides a review of approaches applied by clinical researchers to model the use of in-vitro testing. Results suggest that only a few studies have explored human factors approaches. Furthermore, when researchers investigated people-device interaction, their approaches were predominantly qualitative and not standardised. The methodological failings and limitations of these studies, which we identify, demonstrate the growing need to integrate human factors methods in the medical field.

    Is the LITE version of the usability metric for user experience (UMUX-LITE) a reliable tool to support rapid assessment of new healthcare technology?

    Objective: To ascertain the reliability of a standardised, short-scale measure of satisfaction in the use of new healthcare technology, i.e., the LITE version of the usability metric for user experience (UMUX-LITE). Whilst previous studies have demonstrated the reliability of UMUX-LITE and its relationship with measures of likelihood to recommend a product, such as the Net Promoter Score (NPS), in other sectors, no such testing has been undertaken with healthcare technology.
    Materials and methods: Six point-of-care products at different stages of development were assessed by 120 healthcare professionals. UMUX-LITE was used to gather their satisfaction in use, and NPS to declare their intention to promote the product. Inferential statistics were used to: (i) ascertain the reliability of UMUX-LITE, and (ii) assess the relationship between UMUX-LITE and NPS at different stages of product development.
    Results: UMUX-LITE showed acceptable reliability (α = 0.7) and a strong positive correlation with NPS (r = 0.455, p < .001), similar to findings in other fields of application. The stage of product development did not affect the UMUX-LITE scores, although it was a significant predictor (R² = 0.49) of the intention to promote.
    Discussion and conclusion: Practitioners may apply UMUX-LITE alone, or in combination with the NPS, to complement interviews and 'homemade' scales when investigating the quality of new products at different stages of development. This shortened scale is appropriate for use in healthcare contexts in which busy professionals have minimal time to support innovation.
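
    For readers unfamiliar with the scale, the sketch below shows the standard UMUX-LITE scoring scheme, assuming the usual two seven-point items ('capabilities meet my requirements' and 'easy to use') rescaled to 0-100, together with a Pearson correlation against NPS answers. The respondent data are invented for illustration and are not from this study.

        # Minimal sketch of UMUX-LITE scoring (two 1-7 items rescaled to
        # 0-100) and its correlation with NPS. Illustrative data only.
        from statistics import correlation  # Pearson's r, Python 3.10+

        def umux_lite(requirements: int, ease: int) -> float:
            """Combine the two 1-7 ratings into a 0-100 satisfaction score."""
            return (requirements - 1 + ease - 1) * (100 / 12)

        # Hypothetical (requirements, ease) ratings and 0-10 NPS answers.
        ratings = [(7, 6), (5, 5), (6, 7), (4, 3), (7, 7), (3, 4)]
        nps = [9, 7, 10, 5, 10, 4]

        scores = [umux_lite(r, e) for r, e in ratings]
        print(f"r(UMUX-LITE, NPS) = {correlation(scores, nps):.3f}")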

    A Systematic Literature Review of User Experience Evaluation Scales for Human-Robot Collaboration

    In the last decade, the field of Human-Robot Collaboration (HRC) has received much attention from both research institutions and industry. Robot technologies are deployed in many different areas (e.g., industrial processes, people assistance) to support effective collaboration between humans and robots. In this transdisciplinary context, User eXperience (UX) must inevitably be considered to achieve effective HRC, namely to allow robots to better respond to users' needs and thus improve the quality of the interaction. The present paper reviews the evaluation scales used in HRC scenarios, focusing on the application context and the aspects evaluated. In particular, a systematic review was conducted around the following questions: (RQ1) Which evaluation scales are adopted in HRI scenarios with collaborative tasks? (RQ2) How are UX and user satisfaction assessed? The analysis of the records highlighted that UX aspects are not sufficiently examined in current HRC design practice, particularly in the industrial field, most likely due to a lack of standardised scales. To respond to this recognised need, a set of dimensions to be considered in a new UX evaluation scale is proposed.

    Ciao AI: the Italian adaptation and validation of the Chatbot Usability Scale

    Chatbot-based tools are becoming pervasive in multiple domains, from commercial websites to rehabilitation applications. Only recently was an eleven-item satisfaction inventory developed (the ChatBot Usability Scale, BUS-11) to help designers in the assessment of their systems. The BUS-11 has been validated in multiple contexts and languages, i.e., English, German, Dutch, and Spanish. This scale forms a solid platform enabling designers to rapidly assess chatbots both during and after the design process. The present work aims to adapt and validate the BUS-11 inventory in Italian. A total of 1360 questionnaires, relating to 10 Italian chatbot-based systems, were collected using the BUS-11 inventory together with the lite version of the Usability Metric for User eXperience (UMUX-Lite) for convergent-validity purposes. The Italian version of the BUS-11 was adapted in the wording of one item, and a Multi-Group Confirmatory Factor Analysis was performed to establish the factorial structure of the scale and compare the effects of the wording adaptation. Results indicate that the adapted Italian version of the scale matches the expected factorial structure of the original scale. The Italian BUS-11 is highly reliable (Cronbach's alpha = 0.921) and correlates with other measures of satisfaction (e.g., UMUX-Lite, τb = 0.67; p < .001), while also offering specific insights into chatbots' characteristics. The Italian BUS-11 can be confidently used by chatbot designers to assess the satisfaction of their users during formative or summative tests.
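
    As a pointer for practitioners wanting to reproduce the convergent-validity check, the sketch below computes Kendall's τb between BUS-11 totals and UMUX-Lite scores using SciPy; the per-respondent scale totals are invented for illustration. The multi-group confirmatory factor analysis reported in the paper requires a dedicated SEM package (e.g., lavaan or semopy) and is beyond this sketch.

        # Convergent validity via Kendall's tau-b (handles ties), as in the
        # BUS-11 / UMUX-Lite comparison above. Scale totals are illustrative.
        from scipy.stats import kendalltau  # tau-b by default

        bus11_totals = [48, 39, 52, 30, 44, 35, 50, 41]
        umux_lite_scores = [83.3, 66.7, 91.7, 50.0, 75.0, 58.3, 87.5, 70.8]

        tau, p_value = kendalltau(bus11_totals, umux_lite_scores)
        print(f"tau-b = {tau:.2f}, p = {p_value:.3f}")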