    When simulated environments make the difference: the effectiveness of different types of training of car service procedures

    An empirical analysis was performed to compare the effectiveness of different approaches to training a set of procedural skills in a sample of novice trainees. Sixty-five participants were randomly assigned to one of three training groups: (1) learning-by-doing in a 3D desktop virtual environment, (2) learning-by-observing a video (show-and-tell) explanation of the procedures, and (3) trial-and-error. In each group, participants were trained on two car service procedures and were recalled to perform a procedure either 2 or 4 weeks after the training. The results showed that: (1) participants trained through the virtual learning-by-doing approach performed both procedures significantly better (p < .05 in terms of errors and time) than participants in the non-virtual groups; (2) after a period of non-use, the virtual training group recovered their skills more effectively than the non-virtual groups (p < .05); and (3) after a (simulated) long period from the training, i.e. up to 12 weeks, people who had experienced the 3D environment consistently performed better than people who had received the other kinds of training. The results also suggested that, independently of training group, trainees' visuospatial abilities were a predictor of performance, at least for the complex service procedure (adj. R² = .460), and that the post-training performance of people trained through virtual learning-by-doing was not affected by learning styles. Finally, a strong relationship (p < .001, R² = .441) was identified between usability and trust in the use of the virtual training tool: the more the system was perceived as usable, the more it was perceived as trustworthy for acquiring the competences.
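
    As a purely illustrative sketch of the kind of regression analysis summarised above (this is not the authors' code, and the data and variable names are hypothetical), a linear model predicting task performance from a visuospatial-ability score can be fitted as follows; statsmodels reports the adjusted R² analogous to the adj. R² = .460 cited in the abstract.

        # Hypothetical sketch: regressing task performance on visuospatial ability.
        # All data and variable names are illustrative, not taken from the study.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 65                                      # matches the study's sample size
        visuospatial = rng.normal(50, 10, n)        # hypothetical ability scores
        errors = 30 - 0.4 * visuospatial + rng.normal(0, 3, n)  # synthetic outcome

        X = sm.add_constant(visuospatial)           # intercept + single predictor
        fit = sm.OLS(errors, X).fit()
        print(fit.rsquared_adj)                     # adjusted R² for the model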

    Informing evaluation of a smartphone application for people with acquired brain injury: a stakeholder engagement study

    Background: Brain in Hand is a smartphone application (app) that allows users to create structured diaries with problems and solutions, attach reminders and record task completion, and that includes a symptom monitoring system. Brain in Hand was designed to support people with psychological problems and to encourage behaviour monitoring and change. The aim of this paper is to describe the process of exploring the barriers to and enablers of the uptake and use of Brain in Hand in clinical practice, to identify potential adaptations of the app for use with people with acquired brain injury (ABI), and to determine whether the behaviour change wheel can be used as a model for engagement. Methods: We identified stakeholders (ABI survivors and carers, and National Health Service and private healthcare professionals) and engaged with them via focus groups, conference presentations, small group discussions and questionnaires. The results were evaluated using the behaviour change wheel and descriptive statistics of questionnaire responses. Results: We engaged with 20 ABI survivors, 5 carers and 25 professionals; 41 questionnaires were completed by stakeholders. Comments made during group discussions were supported by questionnaire results. Enablers included smartphone competency (capability), personalisation of the app (opportunity) and an identified perceived need (motivation). Barriers included physical and cognitive inability to use a smartphone (capability), the potential cost and reliability of technology (opportunity), and no desire to use technology or to change from existing strategies (motivation). The stakeholders identified potential uses of and changes to the app that were not easily mapped onto the behaviour change wheel, e.g. monitoring fatigue levels, the method of logging task completion, and editing the diary on their smartphone. Conclusions: The study identified that both ABI survivors and therapists could see a use for Brain in Hand but wanted users to be able to personalise it themselves to address individual needs, e.g. monitoring activity levels. The behaviour change wheel is a useful tool when designing and evaluating engagement activities, as it addresses most aspects of implementation; however, additional categories may be needed to explore the specific features of assistive technology interventions, e.g. technical functions.
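
    As an illustration of the COM-B structure underlying the behaviour change wheel analysis described above, the enablers and barriers reported in the abstract can be organised per component. The data structure below is a sketch for illustration only, not the study's actual coding scheme.

        # Illustrative mapping of the abstract's reported enablers and barriers
        # onto the COM-B components (capability, opportunity, motivation).
        com_b = {
            "capability": {
                "enablers": ["smartphone competency"],
                "barriers": ["physical and cognitive inability to use a smartphone"],
            },
            "opportunity": {
                "enablers": ["personalisation of the app"],
                "barriers": ["potential cost and reliability of technology"],
            },
            "motivation": {
                "enablers": ["identified perceived need"],
                "barriers": ["no desire to use technology or change strategies"],
            },
        }

        for component, factors in com_b.items():
            print(component.capitalize())
            for kind, items in factors.items():
                print(f"  {kind}: {'; '.join(items)}")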

    ICT-based system to predict and prevent falls (iStoppFalls): results from an international multicenter randomized controlled trial

    Background: Falls and fall-related injuries are a serious public health issue. Exercise programs can effectively reduce fall risk in older people. The iStoppFalls project developed an Information and Communication Technology-based system to deliver an unsupervised exercise program in older people's homes. The primary aims of the iStoppFalls randomized controlled trial were to assess the feasibility (exercise adherence, acceptability and safety) of the intervention program and its effectiveness on common fall risk factors. Methods: A total of 153 community-dwelling people aged 65+ years took part in this international, multicentre, randomized controlled trial. Intervention group participants conducted the exercise program for 16 weeks, with a recommended duration of 120 min/week for balance exergames and 60 min/week for strength exercises. All intervention and control participants received educational material, including advice on a healthy lifestyle and fall prevention. Assessments included physical and cognitive tests, and questionnaires on health, fear of falling, number of falls, quality of life and psychosocial outcomes. Results: The median total exercise duration was 11.7 h (IQR = 22.0) over the 16-week intervention period. There were no adverse events. Physiological fall risk (Physiological Profile Assessment, PPA) decreased significantly more in the intervention group than in the control group (F(1,127) = 4.54, p = 0.035). There was a significant three-way interaction for fall risk assessed by the PPA between the high-adherence (>90 min/week; n = 18, 25.4%), low-adherence (n = 53, 74.6%) and control groups (F(2,125) = 3.12, n = 75, p = 0.044). Post hoc analysis revealed a significantly larger effect in favour of the high-adherence group compared to the control group for fall risk (p = 0.031), postural sway (p = 0.046), stepping reaction time (p = 0.041), executive functioning (p = 0.044) and quality of life (p for trend = 0.052). Conclusions: The iStoppFalls exercise program reduced physiological fall risk in the study sample. Additional subgroup analyses revealed that intervention participants with better adherence also improved in postural sway, stepping reaction time and executive function.
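
    As a simplified illustration of the adherence subgroup comparison reported above, a one-way ANOVA across the three groups can be run as follows. The trial itself reports an interaction from a more complex repeated-measures model; the sketch below uses hypothetical change scores and only mirrors the group sizes given in the abstract.

        # Hypothetical sketch: comparing PPA change scores across adherence groups.
        # Negative values represent a reduction (improvement) in fall risk.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        high_adherence = rng.normal(-0.4, 0.5, 18)  # n = 18, as in the abstract
        low_adherence = rng.normal(-0.1, 0.5, 53)   # n = 53
        control = rng.normal(0.0, 0.5, 75)          # n = 75

        f_stat, p_value = stats.f_oneway(high_adherence, low_adherence, control)
        print(f"F = {f_stat:.2f}, p = {p_value:.3f}")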

    Erratum to: Methods for evaluating medical tests and biomarkers

    [This corrects the article DOI: 10.1186/s41512-016-0001-y.]

    Evidence synthesis to inform model-based cost-effectiveness evaluations of diagnostic tests: a methodological systematic review of health technology assessments

    Background: Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. Methods: We assessed all UK NIHR HTA reports published between May 2009 and July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation, and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: (1) what evidence aside from test accuracy was searched for and synthesised; (2) which methods were used to synthesise test accuracy evidence and how the results informed the economic model; (3) how and whether threshold effects were explored; (4) how the potential dependency between multiple tests in a pathway was accounted for; and (5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. Results: The bivariate or hierarchical summary ROC (HSROC) model was implemented in 20 of the 22 reports that met all inclusion criteria. Test accuracy data for health economic modelling were obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only two of the seven reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters, but most of those that used multiple tests did not allow for dependence between test results. Seven of the 22 tests were potentially suitable for primary care, but the majority of reports found limited evidence on test accuracy in primary care settings. Conclusions: The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests, and the impact of multiple diagnostic tests.
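
    The bivariate and HSROC models mentioned above jointly pool logit sensitivity and logit specificity across studies, allowing for their correlation; dedicated implementations exist in, for example, the R package mada. As a much simpler univariate illustration of the underlying idea, the sketch below pools logit sensitivities from hypothetical 2x2 counts with a DerSimonian-Laird random-effects model.

        # Univariate simplification for illustration only: real HTAs would use the
        # bivariate/HSROC model, which handles sensitivity and specificity jointly.
        import numpy as np

        tp = np.array([45, 30, 60, 25, 50])      # hypothetical true positives
        fn = np.array([5, 10, 8, 5, 12])         # hypothetical false negatives

        sens = tp / (tp + fn)
        y = np.log(sens / (1 - sens))            # logit-transformed sensitivities
        v = 1 / tp + 1 / fn                      # approximate within-study variances

        w = 1 / v                                # fixed-effect weights
        q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
        tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

        w_re = 1 / (v + tau2)                    # random-effects weights
        pooled = np.sum(w_re * y) / np.sum(w_re)
        print("pooled sensitivity:", 1 / (1 + np.exp(-pooled)))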

    UX evaluation design of UTAssistant: A new usability testing support tool for Italian public administrations

    Since 2012, usability testing in Italian public administration (PA) has been guided by the eGLU 2.1 technical protocols, which provide a set of principles and procedures to support specialized usability assessments in a controlled and predictable way. This paper describes a new support tool for usability testing that aims to facilitate the application of eGLU 2.1, and the design of the tool's User eXperience (UX) evaluation methodology. The usability evaluation tool described in this paper is called UTAssistant (Usability Tool Assistant). UTAssistant has been developed entirely as a Web platform that supports evaluators in designing usability tests and analyzing the data gathered during the tests, and that guides Web users step by step through the tasks required by an evaluator. It also provides a library of questionnaires to be administered to Web users at the end of a usability test. The UX evaluation methodology adopted to assess the UTAssistant platform uses both standard and new bio-behavioral evaluation methods. From a technological point of view, UTAssistant is an important step forward for the assessment of Web services in PA, fostering a standardized procedure for usability testing without requiring dedicated devices, unlike existing usability testing software and platforms.
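
    The abstract mentions a library of questionnaires administered at the end of a usability test but does not name specific instruments. One common choice in usability testing is the System Usability Scale (SUS); assuming a SUS-style questionnaire, its standard 0-100 scoring can be computed as in the sketch below.

        # Standard SUS scoring (illustrative; the abstract does not confirm that
        # UTAssistant uses SUS specifically).
        def sus_score(responses):
            """Score ten 1-5 Likert responses on the System Usability Scale.

            Odd-numbered items are positively worded and contribute (r - 1);
            even-numbered items are negatively worded and contribute (5 - r).
            The total is multiplied by 2.5 to yield a 0-100 score.
            """
            assert len(responses) == 10
            total = sum((r - 1) if i % 2 == 0 else (5 - r)
                        for i, r in enumerate(responses))
            return total * 2.5

        print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0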