
    What counts as good evidence

    Making better use of evidence is essential if public services are to deliver more for less. Central to this challenge is the need for a clearer understanding of the standards of evidence that can be applied to the research informing social policy. This paper reviews the extent to which it is possible to reach a workable consensus on ways of identifying and labelling evidence. It does this by exploring the efforts made to date and the debates that have ensued. Throughout, the focus is on evidence that is underpinned by research, rather than other sources of evidence such as expert opinion or stakeholder views.

    Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in First-person Simulated 3D Environments

    First-person object-interaction tasks in high-fidelity, 3D, simulated environments such as the AI2Thor virtual home-environment pose significant sample-efficiency challenges for reinforcement learning (RL) agents learning from sparse task rewards. To alleviate these challenges, prior work has provided extensive supervision via a combination of reward-shaping, ground-truth object-information, and expert demonstrations. In this work, we show that one can learn object-interaction tasks from scratch without such supervision by learning an attentive object-model as an auxiliary task during task learning with an object-centric relational RL agent. Our key insight is that learning an object-model that incorporates object-attention into forward prediction provides a dense learning signal for unsupervised representation learning of both objects and their relationships. This, in turn, enables faster policy learning for an object-centric relational RL agent. We demonstrate our agent by introducing a set of challenging object-interaction tasks in the AI2Thor environment, where learning with our attentive object-model is key to strong performance. Specifically, we compare our agent and relational RL agents with alternative auxiliary tasks to a relational RL agent equipped with ground-truth object-information, and show that learning with our object-model best closes the performance gap in terms of both learning speed and maximum success rate. Additionally, we find that incorporating object-attention into an object-model's forward predictions is key to learning representations that capture object-category and object-state.
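
    To make the mechanism concrete, the sketch below shows one way such an attentive object-model could be wired up as an auxiliary task: object slots attend to one another, the attended representations are combined with the agent's action to predict each object's next representation, and the prediction error supplies a dense, reward-free learning signal alongside the sparse-reward policy objective. This is a minimal illustration in PyTorch, not the authors' implementation; the class and parameter names (AttentiveObjectModel, n_slots, obj_dim, act_dim) are assumptions.

```python
# Minimal sketch (not the paper's code) of an attentive object-model used as an
# auxiliary task: attention over object slots, conditioned on the agent's action,
# yields a forward prediction of each object's next representation. The prediction
# loss provides a dense learning signal even when the task reward is sparse.
import torch
import torch.nn as nn


class AttentiveObjectModel(nn.Module):
    def __init__(self, n_slots: int = 8, obj_dim: int = 64, act_dim: int = 16):
        super().__init__()
        # Object-object attention lets each slot's prediction depend on the others.
        self.attn = nn.MultiheadAttention(obj_dim, num_heads=4, batch_first=True)
        # Predict the next object representation from (attended slot, action).
        self.forward_model = nn.Sequential(
            nn.Linear(obj_dim + act_dim, 128),
            nn.ReLU(),
            nn.Linear(128, obj_dim),
        )

    def forward(self, slots: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # slots: (batch, n_slots, obj_dim); action: (batch, act_dim)
        attended, _ = self.attn(slots, slots, slots)
        act = action.unsqueeze(1).expand(-1, slots.size(1), -1)
        return self.forward_model(torch.cat([attended, act], dim=-1))

    def aux_loss(self, slots, action, next_slots):
        # Dense, reward-free auxiliary objective: predict next-step object slots.
        return nn.functional.mse_loss(self(slots, action), next_slots)


# Usage sketch: the auxiliary loss would be added to the RL objective during training.
model = AttentiveObjectModel()
slots, next_slots = torch.randn(32, 8, 64), torch.randn(32, 8, 64)
action = torch.randn(32, 16)
loss = model.aux_loss(slots, action, next_slots)  # combined with the policy loss in practice
loss.backward()
```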

    The Accelerator, Volume 2 Issue 4, Summer 2009


    Technology-Enhanced Reading Therapy for People With Aphasia: Findings From a Quasirandomized Waitlist Controlled Study.

    Purpose: This study investigated the effects of technology-enhanced reading therapy for people with reading impairments, using mainstream assistive reading technologies alongside reading strategies.
    Method: The study used a quasirandomized waitlist controlled design. Twenty-one people with reading impairments following stroke were randomly assigned to receive 14 hr of therapy immediately or after a 6-week delay. During therapy, participants were trained to use assistive reading technology that offered a range of features to support reading comprehension. They developed skills in using the technology independently and in applying it to their personal reading goals. The primary outcome measure assessed reading comprehension using the Gray Oral Reading Test-Fourth Edition (GORT-4). Secondary measures were the Reading Comprehension Battery for Aphasia-Second Edition, the Reading Confidence and Emotions Questionnaire, the Communication Activities of Daily Living-Second Edition, the Visual Analog Mood Scales, and the Assessment of Living With Aphasia. Matched texts were used with the GORT-4 to compare technology-assisted and unassisted reading comprehension. Mixed analyses of variance explored change between T1 and T2, when the immediate group had received therapy but the delayed group had not, thus serving as untreated controls. Pretherapy, posttherapy, and follow-up scores on the measures were also examined for all participants.
    Results: GORT-4 results indicated that the immediately treated group improved significantly in technology-assisted reading following therapy, but not in unassisted reading. However, the data were not normally distributed, and a secondary nonparametric analysis was not significant. The control group was unstable over the baseline, improving significantly in unassisted reading. The whole-group analysis showed significant gains in assisted (but not unassisted) reading after therapy that were maintained at follow-up. Reading Confidence and Emotions Questionnaire results improved significantly following therapy, with good maintenance of change. Results on all other secondary measures were not significant.
    Conclusions: Technology-assisted reading comprehension improved following the intervention, with treatment compensating for, rather than remediating, the reading impairment. Participants' confidence and emotions associated with reading also improved. Gains were achieved after 14 therapy sessions, using assistive technologies that are widely available and relatively affordable, meaning that this approach could be implemented in clinical practice.
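
    As a rough illustration of the analysis design described above, the sketch below runs a mixed ANOVA with a between-subjects factor (immediate vs. delayed group) and a within-subjects factor (T1 vs. T2), of the kind used to compare the treated group against the untreated waitlist controls. The data frame, column names, scores, and the use of the pingouin package are illustrative assumptions, not material from the study.

```python
# Minimal sketch (not the study's analysis code) of a mixed ANOVA for a
# waitlist-controlled design: group (immediate vs. delayed) x time (T1 vs. T2).
import pandas as pd
import pingouin as pg

# Long-format data: one row per participant per time point (hypothetical values).
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4],
    "group":   ["immediate", "immediate", "immediate", "immediate",
                "delayed", "delayed", "delayed", "delayed"],
    "time":    ["T1", "T2"] * 4,
    "score":   [42, 55, 38, 49, 40, 41, 37, 39],   # e.g. assisted-reading scores
})

# The group x time interaction tests whether the immediately treated group changed
# more between T1 and T2 than the untreated (waitlist) controls.
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="subject", between="group")
print(aov)
```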