41 research outputs found

    Tracking changes in user activity from unlabelled smart home sensor data using unsupervised learning methods

    © 2020, The Author(s). This paper investigates the utility of unsupervised machine learning and data visualisation for tracking changes in user activity over time. This is done by analysing unlabelled data generated from passive and ambient smart home sensors, such as motion sensors, which are considered less intrusive than video cameras or wearables. The challenge in using unlabelled passive and ambient sensor data for activity recognition is to find practical methods that can provide meaningful information to support timely interventions based on changing user needs, without the overhead of having to label the data over long periods of time. The paper addresses this challenge by discovering patterns in unlabelled sensor data using kernel density estimation (KDE) for pre-processing the data, together with t-distributed stochastic neighbour embedding and uniform manifold approximation and projection for visualising changes. The methodology is developed and tested on the Aruba CASAS smart home dataset and focusses on discovering and tracking changes in kitchen-based activities. The traditional approach of using sliding windows to segment the data requires a priori knowledge of the temporal characteristics of the activities being identified. In this paper, we show how an adaptive approach to segmentation, KDE, is a suitable alternative for identifying temporal clusters of sensor events from unlabelled data that can represent an activity. The ability to visualise different recurring patterns of activity, and changes to these over time, is illustrated by mapping the data for separate days of the week. The paper then demonstrates how this can be used to track patterns over longer time-frames, which could help highlight differences in the user's day-to-day behaviour. Presenting the data in a format that can be visually reviewed for temporal changes in activity over varying periods of time, from unlabelled sensor data, opens up the opportunity for carers to initiate further enquiry if variations from previous patterns are noted. This is seen as an accessible first step to enable carers to initiate informed discussions with the service user to understand what may be causing these changes and to suggest appropriate interventions if the change is found to be detrimental to their well-being.
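
    As an illustration of the kind of pipeline the paper describes, the following minimal Python sketch segments one day of unlabelled event timestamps with scikit-learn's KernelDensity and projects simple per-segment features with UMAP. The bandwidth, threshold, features, and placeholder data are all assumptions for illustration, not the paper's actual parameters.

        # Sketch: KDE-based segmentation of unlabelled sensor events, then a
        # 2-D embedding for visual review. `event_times` holds seconds-since-
        # midnight for one day of motion-sensor firings (placeholder data).
        import numpy as np
        from sklearn.neighbors import KernelDensity
        import umap  # pip install umap-learn

        event_times = np.sort(np.random.uniform(0, 86400, 300))

        # Estimate event density over the day; the 15-minute bandwidth is a
        # guess that would need tuning per dataset (e.g. on Aruba CASAS).
        kde = KernelDensity(kernel="gaussian", bandwidth=900.0)
        kde.fit(event_times.reshape(-1, 1))
        grid = np.linspace(0, 86400, 1440).reshape(-1, 1)  # ~one point/minute
        density = np.exp(kde.score_samples(grid))

        # Keep minutes whose density exceeds the mean and merge contiguous
        # runs into candidate activity segments: an adaptive alternative to
        # fixed-size sliding windows.
        active = density > density.mean()
        segments, start = [], None
        for i, flag in enumerate(active):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                segments.append((start, i))
                start = None
        if start is not None:
            segments.append((start, len(active)))

        # Simple per-segment features: start minute, duration, event count.
        feats = np.array([[s, e - s,
                           np.sum((event_times >= s * 60) & (event_times < e * 60))]
                          for s, e in segments], dtype=float)

        # Project segments to 2-D so recurring patterns can be compared
        # visually across days (the paper also uses t-SNE for this role).
        if len(feats) > 5:
            embedding = umap.UMAP(n_neighbors=5, min_dist=0.3).fit_transform(feats)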

    Project SAM: Developing an app to provide self-help for anxiety

    An interdisciplinary team at the University of the West of England (UWE) was commissioned and funded to develop a mobile phone app which would provide self-help options for the management of mild to moderate anxiety. The completed app would extend the range and availability of psychological support for student well-being at UWE and other higher education institutions. The project team consisted of two computer scientists and one psychologist, who were responsible for the technical, functional and clinical specification of the app. A local mobile app development company was appointed and the teams collaborated on the design, build and evaluation of the app. The self-help structure and components were developed in consultation with therapeutic practitioners within and outside UWE. The developer team advised on and constructed multimedia features to realise the self-help aims of the app. The UWE project team promoted an iterative approach to development, evaluating each stage through trials with expert users, practitioners and students. The app, named SAM (Self-help for Anxiety Management), was developed for Apple and Android operating systems, to be usable on smartphones and tablets. SAM was launched in the app stores in July 2013, globally available and free to download for the first year of operation. It was promoted to students, educational institutions, mental health organisations and charities, as well as a range of professional and informal contacts. A UWE-based Advisory Board was convened to oversee the maintenance and development of the university’s investment in SAM. Members include the project team, researchers, therapists and other staff with an interest in its use to support student well-being. Three key tasks of the Board are to ensure SAM’s financial sustainability, to oversee developments in its usability and self-help components, and to obtain funding for the evaluation of its therapeutic impact.

    'Elbows Out' - Predictive tracking of partially occluded pose for robot-assisted dressing

    © 2016 IEEE. Robots that can assist in the activities of daily living, such as dressing, may support older adults, addressing the needs of an ageing population in the face of a growing shortage of care professionals. Using depth cameras during robot-assisted dressing can lead to occlusions and loss of user tracking, which may result in unsafe trajectory planning or prevent the planning task from proceeding altogether. For the dressing task of putting on a jacket, which is addressed in this letter, tracking of the arm is lost when the user's hand enters the jacket, which may lead to unsafe situations for the user and a poor interaction experience. Using occlusion-free motion tracking data gathered from a human-human interaction study on an assisted dressing task, recurrent neural network models were built to predict the elbow position of a single arm based on other features of the user pose. The best features for predicting the elbow position were explored using regression trees, which indicated the hips and shoulder as possible predictors. Engineered features were also created based on observations of real dressing scenarios and their effectiveness explored. A comparison between position-based and orientation-based datasets was also included in this study. A 12-fold cross-validation was performed for each feature set and repeated 20 times to improve statistical power. Using position-based data, the elbow position could be predicted with a 4.1 cm error, but adding engineered features reduced the error to 2.4 cm. Adding orientation information to the data did not improve the accuracy, and aggregating univariate response models failed to make significant improvements. The model was evaluated on Kinect data for a robot dressing task and, although not without issues, demonstrates potential for this application. Although this has been demonstrated for jacket dressing, the technique could be applied to a number of different situations involving occluded tracking.
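
    A minimal sketch of the modelling idea, assuming Keras is available: a recurrent network maps a short window of other pose features (e.g. hips and shoulder, as the regression trees suggested) to a 3-D elbow position. Shapes, hyperparameters, and the placeholder data are illustrative guesses, not the authors' configuration.

        # Sketch: recurrent model predicting 3-D elbow position from other
        # pose features over a short window of frames. All shapes and
        # hyperparameters are assumptions for illustration.
        import numpy as np
        from tensorflow import keras

        SEQ_LEN, N_FEATS = 30, 9  # e.g. xyz of hips, shoulder, wrist per frame
        X = np.random.rand(1000, SEQ_LEN, N_FEATS).astype("float32")  # placeholder
        y = np.random.rand(1000, 3).astype("float32")                 # elbow xyz

        model = keras.Sequential([
            keras.layers.Input(shape=(SEQ_LEN, N_FEATS)),
            keras.layers.LSTM(64),
            keras.layers.Dense(32, activation="relu"),
            keras.layers.Dense(3),  # predicted elbow x, y, z
        ])
        model.compile(optimizer="adam", loss="mse")
        # The paper's repeated 12-fold cross-validation could be wrapped
        # around this fit with sklearn.model_selection.KFold.
        model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)

        pred = model.predict(X[:1])  # metres; errors can then be reported in cm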

    An assistive robot to support dressing - strategies for planning and error handling

    © 2016 IEEE. Assistive robots are emerging to address a social need arising from changing demographic trends such as an ageing population. The main emphasis is to offer independence to those in need and to fill a potential labour gap in response to the increasing demand for caregiving. This paper presents work undertaken as part of a dressing task using a compliant robotic arm on a mannequin. Several strategies for undertaking this task with minimal complexity and a mix of sensors are explored. A Vicon tracking system is used to determine the arm position of the mannequin for trajectory planning by means of waypoints. Methods of failure detection were explored through torque feedback and sensor tag data. A fixed vocabulary of recognised speech commands was implemented, allowing the user to successfully correct detected dressing errors. This work indicates that low-cost sensors and simple HRI strategies, without complex learning algorithms, could be used successfully in a robot-assisted dressing task.
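
    The failure-detection idea can be sketched as a simple torque-threshold monitor around waypoint execution. Everything below is hypothetical scaffolding (the robot and speech interfaces are injected placeholders) intended only to show the control flow, not the paper's implementation.

        # Sketch: torque-threshold failure detection during waypoint execution.
        # `read_joint_torques`, `move_toward`, and `listen_for_command` are
        # hypothetical placeholders for the robot and speech interfaces.
        import numpy as np

        TORQUE_LIMIT = 5.0  # Nm; an assumed per-joint snag threshold

        def execute_waypoints(waypoints, read_joint_torques, move_toward,
                              listen_for_command):
            for wp in waypoints:
                while not move_toward(wp):          # returns True on arrival
                    torques = np.abs(read_joint_torques())
                    if torques.max() > TORQUE_LIMIT:
                        # Possible snag on the garment: stop and ask the user,
                        # using a fixed vocabulary of recognised commands.
                        cmd = listen_for_command()  # e.g. "retry" or "stop"
                        if cmd == "stop":
                            return False
                        # On "retry", resume moving toward the waypoint.
            return True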

    Safety assessment review of a dressing assistance robot

    Hazard analysis methods such as HAZOP and STPA have long proven effective for assuring system safety. However, the dimensionality and human-factors uncertainty of many assistive robotic applications challenge the capability of these methods to provide comprehensive coverage of safety issues from interdisciplinary perspectives in a timely and cost-effective manner. Physically assistive tasks in which a range of dynamic contexts require continuous human–robot physical interaction, such as robot-assisted dressing or sit-to-stand, pose a new paradigm for safe design and safety analysis methodology. For these types of tasks, considerations must be made for dynamic contexts in which the robot assistance requires close and continuous physical contact with users. Current regulations mainly cover industrial collaborative robotics with regard to physical human–robot interaction (pHRI) but largely neglect direct and continuous physical human contact. In this paper, we explore the limitations of commonly used safety analysis techniques when applied to robot-assisted dressing scenarios. We provide a detailed analysis of the system requirements from the user perspective and consider user-bounded hazards that can compromise the safety of this complex pHRI.

    Empowering future care workforces: Scoping Capabilities to Leverage Assistive Robotics through Co-Design

    Project aims: to understand how health and social care professionals can benefit from using assistive robotics on their own terms; to specify capabilities that matter to professionals, service users and carers; and to scope a framework for co-designing assistive robotics that foregrounds health and social care professionals and service users.

    Unsupervised machine learning for developing personalised behaviour models using activity data

    © 2017 by the authors. Licensee MDPI, Basel, Switzerland. The goal of this study is to address two major issues that undermine the large-scale deployment of smart home sensing solutions in people’s homes: the costs associated with having to install and maintain a large number of sensors, and the pragmatics of annotating numerous sensor data streams for activity classification. Our aim was therefore to propose a method to describe individual users’ behavioural patterns starting from unannotated data analysis of a minimal number of sensors and a “blind” approach to activity recognition. The methodology included processing and analysing sensor data from 17 older adults living in community-based housing to extract activity information at different times of the day. The findings illustrate that 55 days of sensor data from a configuration of three sensors, with appropriate features including a “busyness” measure, are adequate to build robust models which can be used to cluster individuals based on their behaviour patterns with a high degree of accuracy (>85%). The obtained clusters can be used to describe individual behaviour over different times of the day. This approach suggests a scalable solution for optimising the personalisation of care by utilising low-cost sensing and analysis. It could be used to track a person’s needs over time and fine-tune their care plan on an ongoing basis in a cost-effective manner.
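
    One plausible reading of the clustering step, as a hedged sketch: build an hourly “busyness” profile per user (sensor events per hour), standardise it, and cluster with k-means. The feature layout, cluster count, and placeholder data are assumptions; the paper's exact features and algorithm may differ.

        # Sketch: clustering occupants by daily behaviour features. The
        # "busyness" feature here is simply sensor events per hour slot;
        # the data below is a random placeholder, not the study's.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        # Placeholder: 17 users x 24 hourly event counts, averaged over 55 days.
        busyness = rng.poisson(lam=rng.uniform(1, 10, size=(17, 24)))

        X = StandardScaler().fit_transform(busyness)
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
        # `labels` groups users with similar daily activity profiles; the
        # cluster centroids can then be read as per-group behaviour patterns.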

    Designing ethical social robots - A longitudinal field study with older adults

    Emotional deception and emotional attachment are regarded as ethical concerns in human-robot interaction. Considering these concerns is essential, particularly as little is known about the longitudinal effects of interactions with social robots. We ran a longitudinal user study with older adults in two retirement villages, where people interacted with a robot in a didactic setting for eight sessions over a period of four weeks. The robot showed either non-emotive or emotive behaviour during these interactions in order to investigate emotional deception. Questionnaires were given to investigate participants’ acceptance of the robot, perception of the social interactions with the robot, and attachment to the robot. Results show that the robot’s behaviour did not seem to influence participants’ acceptance of the robot, perception of the interaction, or attachment to the robot. Time did not appear to influence participants’ level of attachment to the robot, which ranged from low to medium. The perceived ease of using the robot significantly increased over time. These findings indicate that a robot showing emotions (and perhaps thereby deceiving users) in a didactic setting may not by default negatively influence participants’ acceptance and perception of the robot, and that older adults may not become distressed if the robot were to break or be taken away from them, as attachment to the robot in this didactic setting was not high. However, more research is required, as there may be other factors influencing these ethical concerns, and measurements beyond questionnaires are needed to draw conclusions regarding them.

    Assessing the Role of Gaze Tracking in Optimizing Humans-In-The-Loop Telerobotic Operation Using Multimodal Feedback

    A key challenge in achieving effective robot teleoperation is minimizing teleoperators’ cognitive workload and fatigue. We set out to investigate the extent to which gaze tracking data can reveal how teleoperators interact with a system. In this study, we present an analysis of gaze tracking captured as participants completed a multi-stage task: grasping and emptying the contents of a jar into a container. The task was repeated with different combinations of visual, haptic, and verbal feedback. Our aim was to determine whether teleoperation workload can be inferred by combining gaze duration, fixation count, task completion time, and the complexity of robot motion (measured as the sum of robot joint steps) at different stages of the task. Visual information about the robot workspace was captured using four cameras positioned to view the workspace from different angles. These camera views (aerial, right, eye-level, and left) were displayed in the four quadrants (top-left, top-right, bottom-left, and bottom-right, respectively) of participants’ video feedback screen. We found that gaze duration and fixation count were highly dependent on the stage of the task and the feedback scenario utilized. The results revealed that combining feedback modalities reduced cognitive workload (inferred by investigating the correlations between gaze duration, fixation count, task completion time, success or failure of task completion, and robot gripper trajectories), particularly in the task stages that require more precision. There was a significant positive correlation between gaze duration and the complexity of robot joint movements. Participants’ gaze outside the areas of interest (distractions) was not influenced by the feedback scenarios. A learning effect was observed in the use of the controller for all participants as they repeated the task under different feedback combinations. For the design of a teleoperation system applicable in healthcare, we found that analysing teleoperators’ gaze can help in understanding how they interact with the system, making it possible to develop the system from the teleoperators’ standpoint.
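
    A hedged sketch of the kind of analysis described: map gaze samples to the four screen quadrants, accumulate dwell time and a crude fixation count per quadrant, and correlate per-trial gaze duration with robot joint steps. Screen size, sampling rate, the fixation heuristic, and all data below are illustrative assumptions.

        # Sketch: assigning gaze samples to the four camera-view quadrants
        # and summarising dwell time and fixation counts. Placeholder data;
        # the fixation heuristic here (quadrant entry) is a simplification.
        import numpy as np
        from scipy.stats import pearsonr

        W, H = 1920, 1080  # assumed screen resolution
        gaze = np.random.rand(5000, 2) * [W, H]  # assumed 60 Hz (x, y) samples

        def quadrant(x, y):
            # top-left=aerial, top-right=right, bottom-left=eye-level,
            # bottom-right=left, per the abstract's quadrant mapping.
            return ("top" if y < H / 2 else "bottom") + "-" + \
                   ("left" if x < W / 2 else "right")

        quads = np.array([quadrant(x, y) for x, y in gaze])
        dwell = {q: np.sum(quads == q) / 60.0 for q in np.unique(quads)}  # s

        # Crude fixation count per quadrant: a new fixation whenever the
        # gaze enters that quadrant after being elsewhere.
        fixations = {q: int(np.sum((quads[1:] == q) & (quads[:-1] != q)))
                     for q in np.unique(quads)}

        # Relating gaze to motion complexity across trials (synthetic data
        # constructed to correlate, purely to demonstrate the computation).
        gaze_duration = np.random.rand(20) * 30        # seconds per trial
        joint_steps = gaze_duration * 100 + np.random.rand(20) * 50
        r, p = pearsonr(gaze_duration, joint_steps)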