3 research outputs found

    Investigation of Low-Cost Wearable Internet of Things Enabled Technology for Physical Activity Recognition in the Elderly

    Technological advances in mobile sensing technologies have produced new opportunities for researchers to monitor the elderly in uncontrolled environments. Sensors have become smaller and cheaper and can be worn on the body, potentially creating a network of sensors. Smartphones are also more common in the average household and can provide some behavioural analysis through their built-in sensors. As a result, researchers are able to monitor behaviour in a more naturalistic setting, which can lead to more contextually meaningful data. For those suffering from a mental illness, non-invasive and continuous monitoring can be achieved. Applying sensors to real-world environments can help improve the quality of life of an elderly person with a mental illness and monitor their condition through behavioural analysis. To achieve this, the selected classifiers must be able to accurately detect when an activity has taken place. In this thesis we aim to provide a framework for the investigation of activity recognition in the elderly using low-cost wearable sensors, which has resulted in the following contributions: 1. Classification of eighteen activities, broken down into three disparate categories typical of a home setting: dynamic, sedentary and transitional. These were detected using two Shimmer3 IMU devices located on the participants’ wrist and waist to create a low-cost, contextually deployable solution for elderly care monitoring. 2. Through the categorisation of performed activities, time-domain and frequency-domain features extracted from the Shimmer devices’ accelerometer and gyroscope were used as inputs to a Convolutional Neural Network (CNN) model, which achieved high classification accuracy on the data set obtained from participants recruited to the study through Join Dementia Research. The model was evaluated through variable adjustments, tracking the changes in its performance, and performance statistics were generated for comparison and evaluation. Our results indicate that a low epoch count of 200 with the ReLU activation function can achieve an accuracy of 86% on the wrist data set and 85% on the waist data set, using only two low-cost wearable devices.
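
    The sketch below illustrates the kind of 1D CNN classifier over windowed accelerometer and gyroscope data that the abstract describes. Only the 200-epoch budget and the ReLU activation come from the abstract; the window length, layer sizes, optimiser and placeholder data are assumptions made for illustration, not details of the thesis.

```python
# Minimal sketch of a 1D CNN for windowed IMU data.
# Assumed architecture: the thesis does not specify layer sizes, window length or optimiser.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW_LEN = 128   # assumed number of samples per window
N_CHANNELS = 6     # 3-axis accelerometer + 3-axis gyroscope
N_CLASSES = 18     # dynamic, sedentary and transitional activities combined

model = models.Sequential([
    layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X: (n_windows, WINDOW_LEN, N_CHANNELS) sensor windows; y: integer activity labels.
# Placeholder random data stands in for the real Shimmer3 recordings.
X = np.random.randn(256, WINDOW_LEN, N_CHANNELS).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)
model.fit(X, y, epochs=200, batch_size=32, verbose=0)  # 200 epochs, as reported in the abstract
```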

    Human Behaviour Recognition based on Trajectory Analysis using Neural Networks

    Automated human behaviour analysis has been, and still remains, a challenging problem. It has been addressed from different points of view: from primitive actions to human interaction recognition. This paper focuses on trajectory analysis, which allows a simple high-level understanding of complex human behaviour. A novel representation method for trajectory data is proposed, called the Activity Description Vector (ADV), based on the number of occurrences of a person at a specific point of the scenario and the local movements performed there. The ADV is calculated for each cell into which the scenario is spatially sampled, providing a cue for different clustering methods. The ADV representation has been tested as the input to several classic classifiers and compared to other approaches using CAVIAR dataset sequences, obtaining high accuracy in the recognition of the behaviour of people in a shopping centre. This work was supported in part by the University of Alicante under Grant GRE11-01.
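
    A minimal sketch of how an ADV-style per-cell representation could be computed from a single trajectory is shown below. The grid shape, the choice of eight movement directions plus a "stay" bin, and the function names are illustrative assumptions and may differ from the exact component definition used in the paper.

```python
# Sketch of an Activity Description Vector (ADV)-style representation.
# Assumption: each cell stores an occupancy count plus a histogram of local
# movement directions (8 compass directions + "no movement").
import numpy as np

def compute_adv(trajectory, scene_size, grid_shape):
    """trajectory: list of (x, y) positions; scene_size: (width, height);
    grid_shape: (rows, cols) used to spatially sample the scenario."""
    rows, cols = grid_shape
    width, height = scene_size
    adv = np.zeros((rows, cols, 10), dtype=float)  # [occupancy, 8 directions, stay]

    def cell_of(x, y):
        c = min(int(x / width * cols), cols - 1)
        r = min(int(y / height * rows), rows - 1)
        return r, c

    for (x0, y0), (x1, y1) in zip(trajectory[:-1], trajectory[1:]):
        r, c = cell_of(x0, y0)
        adv[r, c, 0] += 1                      # occurrence of the person in this cell
        dx, dy = x1 - x0, y1 - y0
        if dx == 0 and dy == 0:
            adv[r, c, 9] += 1                  # no local movement
        else:
            angle = np.arctan2(dy, dx)         # quantise the local movement into 8 directions
            bin_idx = int(((angle + np.pi) / (2 * np.pi)) * 8) % 8
            adv[r, c, 1 + bin_idx] += 1
    return adv.reshape(-1)                     # flattened cue for a classifier or clustering method

adv = compute_adv([(1.0, 1.0), (1.5, 1.2), (2.0, 2.0)],
                  scene_size=(10.0, 10.0), grid_shape=(5, 5))
```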

    Individual and group dynamic behaviour patterns in bound spaces

    The behaviour analysis of individual and group dynamics in closed spaces is a subject of extensive research in both academia and industry. However, despite recent technological advancements, the problem of implementing existing methods for visual behaviour data analysis in production systems remains difficult, and applications are available only in special cases in which resourcing is not a problem. Most approaches concentrate on direct extraction and classification of visual features from the video footage, recognising the dynamic behaviour directly from the source. Adopting such an approach allows the elementary actions of moving objects to be recognised directly, which is a difficult task in its own right. The major factor that impacts the performance of video analytics methods is the necessity to combine processing of enormous volumes of video data with complex analysis of that data using computationally resource-demanding analytical algorithms. This is not feasible for many applications, which must work in real time. In this research, an alternative simulation-based approach to behaviour analysis has been adopted. It can potentially reduce the requirements for extracting information from real video footage for the purpose of analysing dynamic behaviour. This is achieved by combining only limited data extracted from the original video footage with symbolic data about the events registered on the scene, generated by a 3D simulation synchronised with the original footage. Additionally, by incorporating some physical laws and the logic of dynamic behaviour directly in the 3D model of the visual scene, this framework allows behavioural patterns to be captured using simple syntactic pattern recognition methods. Extensive experiments with the prototype implementation demonstrate convincingly that the 3D simulation generates sufficiently rich data to allow analysing the dynamic behaviour in real time with sufficient adequacy, without the need for precise physical data, using only limited data about the objects on the scene, their location and their dynamic characteristics. This research can have wide applicability in areas where video analytics is necessary, ranging from public safety and video surveillance to marketing research, computer games and animation. Its limitations are linked to the dependence on some preliminary processing of the video footage, which is nevertheless less detailed and less computationally demanding than methods that use the video frames of the original footage directly.
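
    As an illustration of syntactic pattern recognition over symbolic event data, the sketch below matches event-symbol sequences against regular expressions. The event alphabet, the example patterns and the behaviour names are hypothetical; the abstract does not describe the actual symbols or grammar used in the research.

```python
# Sketch of syntactic pattern recognition over symbolic scene events.
# Assumption: the synchronised 3D simulation emits a stream of event symbols per tracked
# object, e.g. "E" enter-zone, "M" move, "S" stop, "X" exit-zone (hypothetical alphabet).
import re

# Each behaviour pattern is a regular expression over the event alphabet (hypothetical examples).
BEHAVIOUR_PATTERNS = {
    "loitering":    re.compile(r"EM*S{3,}M*X?"),  # enters, remains stopped repeatedly
    "pass_through": re.compile(r"EM+X"),          # enters, keeps moving, exits
}

def recognise(events):
    """events: list of single-character event symbols for one tracked object."""
    sequence = "".join(events)
    return [name for name, pattern in BEHAVIOUR_PATTERNS.items()
            if pattern.fullmatch(sequence)]

print(recognise(list("EMSSSMX")))  # -> ['loitering']
print(recognise(list("EMMMX")))    # -> ['pass_through']
```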