
    A Novel Approach to Complex Human Activity Recognition

    Human activity recognition is a technology that offers automatic recognition of what a person is doing with respect to body motion and function. The main goal is to recognize a person's activity using different technologies such as cameras, motion sensors, location sensors, and time. Human activity recognition is important in many areas such as pervasive computing, artificial intelligence, human-computer interaction, health care, health outcomes, rehabilitation engineering, occupational science, and social sciences. There are numerous ubiquitous and pervasive computing systems where users' activities play an important role. Human activity carries a great deal of contextual information and helps systems achieve context-awareness. In the rehabilitation area, it helps with functional diagnosis and assessing health outcomes. Human activity recognition is an important indicator of participation, quality of life and lifestyle. There are two classes of human activities based on body motion and function. The first class, simple human activity, involves human body motion and posture, such as walking, running, and sitting. The second class, complex human activity, includes function along with simple human activity, such as cooking, reading, and watching TV. Human activity recognition is an interdisciplinary research area that has been active for more than a decade. Substantial research has been conducted to recognize human activities, but many major issues still need to be addressed. Addressing these issues would significantly improve the applications of human activity recognition across these areas. There has been considerable research on simple human activity recognition, whereas little research has been carried out on complex human activity recognition. However, there are many key aspects (recognition accuracy, computational cost, energy consumption, mobility) that need to be addressed in both areas to improve their viability. This dissertation aims to address these key aspects in both areas of human activity recognition and ultimately focuses on the recognition of complex activity. It also addresses indoor and outdoor localization, an important parameter along with time in complex activity recognition. This work studies accelerometer sensor data to recognize simple human activity, and uses time, location, and simple activity to recognize complex activity.
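    As a minimal illustration of the fusion step described above (using time, location, and a recognised simple activity to infer a complex activity), the sketch below trains a small classifier on toy data. The vocabularies, the toy examples, and the choice of a decision tree are assumptions made purely for illustration and are not the dissertation's actual pipeline.

# Minimal sketch: fuse hour of day, location, and a recognised simple activity
# to predict a complex activity. All labels and data here are hypothetical.
from sklearn.tree import DecisionTreeClassifier

LOCATIONS = {"kitchen": 0, "living_room": 1, "bedroom": 2}   # assumed vocabulary
SIMPLE = {"standing": 0, "sitting": 1, "walking": 2}          # assumed vocabulary
COMPLEX = ["cooking", "watching_tv", "reading"]               # assumed targets

def encode(hour, location, simple_activity):
    """Encode (time, location, simple activity) as a numeric feature vector."""
    return [hour, LOCATIONS[location], SIMPLE[simple_activity]]

# Toy training data: (hour, location, simple activity) -> complex activity index
X = [
    encode(18, "kitchen", "standing"),      # cooking
    encode(19, "kitchen", "walking"),       # cooking
    encode(20, "living_room", "sitting"),   # watching_tv
    encode(21, "living_room", "sitting"),   # watching_tv
    encode(22, "bedroom", "sitting"),       # reading
    encode(23, "bedroom", "sitting"),       # reading
]
y = [0, 0, 1, 1, 2, 2]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(COMPLEX[clf.predict([encode(19, "kitchen", "standing")])[0]])  # expected: cooking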

    Automated Quality Control for Sensor Based Symptom Measurement Performed Outside the Lab

    The use of wearable sensing technology for objective, non-invasive and remote clinimetric testing of symptoms has considerable potential. However, the accuracy achievable with such technology is highly reliant on separating useful from irrelevant sensor data. Monitoring patient symptoms using digital sensors outside of controlled, clinical lab settings creates a variety of practical challenges, such as recording unexpected user behaviors. These behaviors often violate the assumptions of clinimetric testing protocols, which are designed to probe for specific symptoms. Such violations are frequent outside the lab and affect the accuracy of the subsequent data analysis and scientific conclusions. To address these problems, we report on a unified algorithmic framework for automated sensor data quality control, which can identify those parts of the sensor data that are sufficiently reliable for further analysis. Combining both parametric and nonparametric signal processing and machine learning techniques, we demonstrate that across 100 subjects and 300 clinimetric tests from three different types of behavioral clinimetric protocols, the system shows an average segmentation accuracy of around 90%. By extracting reliable sensor data, it is possible to strip the data of confounding factors in the environment that may threaten reproducibility and replicability.
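    A minimal sketch of what such windowed quality control might look like is given below: the signal is split into fixed-length windows, each window is judged reliable or not from a simple statistic, and contiguous reliable windows are returned as segments. The per-window standard-deviation feature and the thresholds are illustrative assumptions, not the paper's actual framework.

# Minimal sketch of windowed quality control on an accelerometer magnitude signal.
import numpy as np

def valid_segments(signal, fs, win_s=2.0, lo=0.05, hi=3.0):
    """Return (start, end) sample indices of contiguous windows judged reliable.

    A window is judged reliable if its standard deviation lies within [lo, hi],
    i.e. neither flat (sensor idle or not worn) nor implausibly large.
    """
    win = int(win_s * fs)
    n_win = len(signal) // win
    flags = []
    for i in range(n_win):
        w = signal[i * win:(i + 1) * win]
        flags.append(lo <= np.std(w) <= hi)

    segments, start = [], None
    for i, ok in enumerate(flags + [False]):   # sentinel closes the last run
        if ok and start is None:
            start = i * win
        elif not ok and start is not None:
            segments.append((start, i * win))
            start = None
    return segments

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 50
    sig = np.concatenate([np.zeros(5 * fs),              # idle: rejected
                          rng.normal(0, 0.5, 10 * fs),   # activity: kept
                          np.zeros(5 * fs)])             # idle: rejected
    print(valid_segments(sig, fs))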

    A spherical representation of sensors and a model based approach for classification of human activities

    Physical inactivity is a leading risk factor in public health, and inactive people are more vulnerable to non-communicable diseases (NCDs) such as autoimmune diseases, strokes, most heart diseases, diabetes, and chronic kidney disease. In addition, levels of physical activity may be an indicator of health problems in older adults, a particular concern in many societies with a growing proportion of adults aged 65 and over. Identifying levels of physical activity may have a significant effect on fitness and on reducing future healthcare costs. Thus, approaches for measuring individuals' activities are needed in order to provide information about their quality of life and to examine their current health status. This thesis explores the possibility of using low-cost wearable accelerometer-based inertial sensors to determine activities during daily living. Two data sources were used for this investigation. The first was a locally collected data set recorded from individuals with Parkinson's disease in their own homes, where they were asked to stand up from their favourite chair and then perform different daily activities (Bridge data set). The second was a data set collected in a movement laboratory of Friedrich-Alexander University, measuring 19 participants performing daily activities (sit, stand, washing dishes, sweeping, walking, etc.) in controlled conditions (Benchmark data set). Both studies used accelerometer-based measurements, as these are widely used in wearable and portable technologies such as smartphones and are now finding use in health care applications. Two areas of research are considered. In the first, accelerometer data were considered in relation to the surface of a sphere of radius 1 g (i.e. the magnitude of the acceleration due to Earth's gravity). This research looked at sensor placement, window size and novel features based on the 'gravity sphere'. Decision Trees and Naïve Bayes classifiers were used as baseline classifiers on both data sets, and k-Nearest Neighbour was used on the Benchmark data set only. The classification results for a small set of activities of a single individual from the first data set show that Naïve Bayes (NB) had a better overall accuracy rate than Decision Trees (DTs), with 85.41% and 78.56% for NB and DTs respectively. The second area of work considered the possibility of using models of the dynamic system of human movement as the basis for movement classification. Data from the accelerometers were used to evaluate two approaches that exploited the modelling capacity of a system identification algorithm. The two methods, called Prediction Measuring (PM) and Model Matching (MM), used the recursive least squares method to identify a model for each class (activity). The Benchmark data set was used to verify the proposed methods. The PM method achieved better classification accuracy than the MM method, with 71% and 59% respectively.
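    The prediction-based idea can be sketched roughly as follows: fit one autoregressive model per activity class with recursive least squares, then label a new window by whichever class model yields the lowest one-step-ahead prediction error. The model order, the plain RLS update, and the toy signals below are assumptions made for illustration and are not the thesis's exact formulation.

# Minimal sketch of per-class AR modelling with recursive least squares (RLS)
# and classification by lowest one-step-ahead prediction error.
import numpy as np

def rls_fit(signal, order=4, lam=0.99):
    """Fit AR(order) coefficients to a 1-D signal with recursive least squares."""
    theta = np.zeros(order)
    P = np.eye(order) * 1000.0
    for t in range(order, len(signal)):
        x = signal[t - order:t][::-1]          # most recent samples first
        k = P @ x / (lam + x @ P @ x)
        theta = theta + k * (signal[t] - x @ theta)
        P = (P - np.outer(k, x @ P)) / lam
    return theta

def prediction_error(signal, theta):
    """Mean squared one-step-ahead prediction error of an AR model on a signal."""
    order = len(theta)
    preds = [signal[t - order:t][::-1] @ theta for t in range(order, len(signal))]
    return np.mean((signal[order:] - np.array(preds)) ** 2)

def classify(window, class_models):
    """Return the class whose model predicts the window with lowest error."""
    return min(class_models, key=lambda c: prediction_error(window, class_models[c]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(500) / 50.0
    walk = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.normal(size=t.size)  # toy "walking"
    sit = 0.05 * rng.normal(size=t.size)                                # toy "sitting"
    models = {"walking": rls_fit(walk), "sitting": rls_fit(sit)}
    test = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.normal(size=t.size)
    print(classify(test, models))   # expected: walking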

    Behaviour Profiling using Wearable Sensors for Pervasive Healthcare

    In recent years, sensor technology has advanced in terms of hardware sophistication and miniaturisation. This has led to the incorporation of unobtrusive, low-power sensors into networks centred on human participants, called Body Sensor Networks. Amongst the most important applications of these networks is their use in healthcare and healthy living. The technology has the potential to decrease the burden on healthcare systems by providing care at home, enabling early detection of symptoms, monitoring recovery remotely, and avoiding serious chronic illnesses by promoting healthy living through objective feedback. In this thesis, machine learning and data mining techniques are developed to estimate medically relevant parameters from a participant's activity and behaviour parameters, derived from simple, body-worn sensors. The first abstraction from raw sensor data is the recognition and analysis of activity. Machine learning analysis is applied to a study of activity profiling to detect impaired limb and torso mobility. One of the contributions of this thesis to activity recognition research is the application of machine learning to the analysis of 'transitional activities': the transient activity that occurs as people change from one activity to another. A framework is proposed for the detection and analysis of transitional activities. To demonstrate the utility of transition analysis, we apply the algorithms to a study of participants undergoing and recovering from surgery. We demonstrate that it is possible to see meaningful changes in the transitional activity as the participants recover. Assuming long-term monitoring, we expect a large historical database of activity to accumulate quickly. We develop algorithms to mine temporal associations in activity patterns, giving an outline of the user's routine. Methods for visual and quantitative analysis of routine using this summary data structure are proposed and validated. The activity and routine mining methodologies developed for specialised sensors are adapted to a smartphone application, enabling large-scale use. Validation of the algorithms is performed using data sets collected in laboratory settings and free-living scenarios. Finally, future research directions and potential improvements to the techniques developed in this thesis are outlined.
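    As a rough illustration of summarising routine from a long history of recognised activities, the sketch below counts how often each activity occurs in each hour of the day and reports the dominant activity per hour. The record format and this simple counting scheme are assumptions for illustration, not the thesis's actual temporal association mining.

# Minimal sketch: outline a user's routine from (hour, activity) records.
from collections import Counter, defaultdict

def routine_summary(records):
    """records: iterable of (hour_of_day, activity) pairs from long-term monitoring."""
    by_hour = defaultdict(Counter)
    for hour, activity in records:
        by_hour[hour][activity] += 1
    # The dominant activity per hour gives a coarse outline of the routine.
    return {h: counts.most_common(1)[0][0] for h, counts in sorted(by_hour.items())}

if __name__ == "__main__":
    history = [(7, "walking"), (7, "walking"), (7, "standing"),
               (13, "sitting"), (13, "sitting"),
               (22, "lying"), (22, "lying"), (22, "sitting")]
    print(routine_summary(history))   # {7: 'walking', 13: 'sitting', 22: 'lying'}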

    System (for) Tracking Equilibrium and Determining Incline (STEADI)

    The goal of this project was to design and implement a smartphone-based wearable system to detect fall events in real time. It has the acronym STEADI. Rather than have expensive customised hardware STEADI was implemented in a cost effective manner using a generic mobile computing device. In order to detect the fall event, we propose a fall detector that uses the accelerometer available in a mobile phone. As for detecting a fall we mainly divide the system in two sections, the signal processing and classification. For the processing both a median filter and a high pass filter are used. A Median filter is used to amplify/enhance the signal by removing impulsive noise while preserving the signal shape while the High pass filter is used to emphasise transitions in the signal. Then, in order to recognize a fall event, our STEADI system implements two methods that are a simple threshold analysis to determine whether or not a fall has occurred (threshold-based) and a more sophisticated Naïve-Bayes classification method to differentiate falling from other mobile activities. Our experimental results show that by applying the signal processing and Naïve-Bayes classification together increases the accuracy by more than 20% compared with using the threshold-based method alone. The Naïve-Bayes achieved a detection accuracy of 95% in overall. Furthermore, an external sensor is introduced in order to enhance its accuracy. In addition to the fall detection, the systems can also provide location information using Google Maps as to the whereabouts of the fall event using the available GPS on the smartphone and sends the message to the caretaker via an SMS
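    A minimal sketch of the threshold-based branch described above might look as follows: median-filter the acceleration magnitude to suppress impulsive noise, high-pass filter it to emphasise transitions, and flag a fall when the filtered magnitude exceeds a threshold. The cutoff frequency, kernel size, and threshold below are illustrative assumptions rather than STEADI's tuned parameters.

# Minimal sketch of threshold-based fall detection on accelerometer data.
import numpy as np
from scipy.signal import medfilt, butter, filtfilt

def detect_fall(acc_xyz, fs, threshold=2.5):
    """acc_xyz: (N, 3) accelerometer samples in g; True if a fall-like spike is found."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    smoothed = medfilt(magnitude, kernel_size=5)         # remove impulsive noise
    b, a = butter(2, 0.5, btype="highpass", fs=fs)       # emphasise transitions
    transient = filtfilt(b, a, smoothed)
    return bool(np.max(np.abs(transient)) > threshold)

if __name__ == "__main__":
    fs = 50
    still = np.tile([0.0, 0.0, 1.0], (5 * fs, 1))        # phone at rest (~1 g)
    fall = still.copy()
    fall[100:105] = [0.0, 0.0, 6.0]                      # brief high-g impact
    print(detect_fall(still, fs), detect_fall(fall, fs)) # expected: False True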