
    A Hybrid Hierarchical Framework for Gym Physical Activity Recognition and Measurement Using Wearable Sensors

    Due to its many beneficial effects on physical and mental health and its strong association with fitness and rehabilitation programs, physical activity (PA) recognition has been considered a key paradigm for Internet of Things (IoT) healthcare. Traditional PA recognition techniques focus on repeated aerobic exercises or stationary PA. As a crucial indicator of human health, PA covers a range of bodily movement, from aerobic to anaerobic, that may all bring health benefits. However, existing PA recognition approaches are mostly designed for specific scenarios and often lack extensibility to other applications, limiting their usefulness. In this paper, we attempt to detect more gym physical activities (GPAs) in addition to traditional PA using acceleration data. A two-layer recognition framework is proposed that can classify aerobic, sedentary and free weight activities, and count repetitions and sets for the free weight exercises. In the first layer, a one-class SVM (OC-SVM) is applied to coarsely classify free weight and non-free weight activities. In the second layer, a neural network (NN) is utilized to recognise aerobic and sedentary activities, while a hidden Markov model (HMM) provides a further classification of free weight activities. The performance of the framework was tested on 10 healthy subjects (age: 30 ± 5 years; BMI: 25 ± 5.5 kg/m²) and compared with several typical classifiers. The results indicate that the proposed framework recognises and measures GPAs better than the other approaches, and it can potentially be extended to support more types of PA recognition in complex applications
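As a rough illustration of the two-layer dispatch described in this abstract, here is a minimal sketch in Python. The stand-in models (simple variance and energy thresholds on a single acceleration channel) are hypothetical placeholders for the paper's OC-SVM, NN and HMM, which operate on richer acceleration features:

```python
# Sketch of the two-layer hierarchical classification flow.
# Thresholds and features below are illustrative assumptions only.

FREE_WEIGHT_VARIANCE = 0.5   # hypothetical layer-1 decision threshold
AEROBIC_ENERGY = 1.0         # hypothetical layer-2 decision threshold

def layer1_is_free_weight(window):
    """Stand-in for the OC-SVM: flag likely free weight activity."""
    mean = sum(window) / len(window)
    variance = sum((x - mean) ** 2 for x in window) / len(window)
    return variance >= FREE_WEIGHT_VARIANCE

def layer2_non_free_weight(window):
    """Stand-in for the NN: separate aerobic from sedentary activity."""
    energy = sum(x * x for x in window) / len(window)
    return "aerobic" if energy >= AEROBIC_ENERGY else "sedentary"

def layer2_free_weight(window):
    """Stand-in for the HMM; a real HMM would also segment reps and sets."""
    return "free_weight"

def classify_window(window):
    # Layer 1 routes the window; layer 2 produces the final label.
    if layer1_is_free_weight(window):
        return layer2_free_weight(window)
    return layer2_non_free_weight(window)
```

For example, a low-variance, low-energy window such as `[0.0, 0.1, -0.1, 0.05]` is routed to the non-free-weight branch and labelled sedentary, while a high-variance window is routed to the free weight branch.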

    Inferring Complex Activities for Context-aware Systems within Smart Environments

    The rising ageing population worldwide and the prevalence of age-related conditions such as physical fragility, mental impairments and chronic diseases have significantly impacted quality of life and caused a shortage of health and care services. Over-stretched healthcare providers are driving a paradigm shift in public healthcare provisioning. Thus, Ambient Assisted Living (AAL) using Smart Home (SH) technologies has been rigorously investigated to help address the aforementioned problems. Human Activity Recognition (HAR) is a critical component of AAL systems, enabling applications such as just-in-time assistance, behaviour analysis, anomaly detection and emergency notifications. This thesis investigates the challenges faced in accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, it explores five complementary research challenges in HAR. The first study contributes to knowledge by developing a semantic-enabled data segmentation approach with user preferences. The second study takes the segmented set of sensor data and investigates recognising human ADLs at two granularities: the coarse- and fine-grained action levels. At the coarse-grained level, semantic relationships between the sensor, object and ADLs are deduced, whereas at the fine-grained level, object usage meeting a satisfactory threshold, with evidence fused from multimodal sensor data, is leveraged to verify the intended actions. Moreover, because of the imprecise/vague interpretation of multimodal sensors and the challenges of data fusion, fuzzy set theory and the fuzzy web ontology language (fuzzy-OWL) are leveraged. The third study focuses on incorporating the uncertainties introduced into HAR by factors such as technological failure, object malfunction and human error.
Hence, uncertainty theories and approaches in existing studies are analysed and, based on the findings, a probabilistic ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three to distinguish activities conducted by more than one inhabitant in a shared smart environment, using discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable system architecture for a real-time smart environment tailored to AAL systems, and proposes a microservices architecture with sensor-based off-the-shelf and bespoke sensing methods. The initial semantic-enabled data segmentation study achieved 100% and 97.8% accuracy in segmenting sensor events under single- and mixed-activity scenarios, respectively. However, the average time taken to segment each sensor event was high: 3971 ms and 62183 ms for the single- and mixed-activity scenarios, respectively. The second study, detecting fine-grained user actions, was evaluated with 30 and 153 fuzzy rules for two fine-grained movements on a dataset pre-collected from the real-time smart environment. Its results indicate good average accuracies of 83.33% and 100%, but with high average durations of 24648 ms and 105318 ms, posing further challenges for the scalability of fusion rule creation. The third study was evaluated by integrating a PR-OWL ontology with ADL ontologies and the Semantic Sensor Network (SSN) ontology to define four types of uncertainty present in a kitchen-based activity. The fourth study illustrated a case study extending single-user activity recognition to multi-user activity recognition by combining RFID tags and fingerprint sensors as discriminative sensors to identify and associate user actions with the aid of time-series analysis. The last study responds to the computational and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for AAL systems.
Future research towards adopting fog/edge computing paradigms alongside cloud computing is discussed, targeting higher availability, reduced network traffic and energy use, lower cost, and a decentralised system. As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the level of fine-grained user actions. This framework integrates three complementary ontologies to conceptualise factual knowledge, fuzziness and uncertainty in the environment/ADLs, together with time-series analysis and a discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and other supportive utility tools such as a simulator and a synthetic ADL data generator were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and is currently supported by an Android mobile application and web-browser based client interfaces for retrieving information such as live sensor events and HAR results
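The fuzzy-rule verification of fine-grained actions described in this abstract can be sketched as a minimal Mamdani-style rule evaluation: sensor readings are mapped to membership degrees and antecedents are combined with `min`. The sensor inputs, membership functions and the 0.5 satisfaction threshold below are illustrative assumptions, not the thesis's actual rules:

```python
# Minimal sketch of fuzzy-rule evaluation for verifying a fine-grained
# action from multimodal sensor evidence. All names and parameters are
# hypothetical stand-ins for the thesis's fuzzy-OWL rules.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function rising over [a, b], flat over
    [b, c], falling over [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def verify_action(contact_seconds, grip_pressure):
    # Rule: IF object contact is "long" AND grip pressure is "firm"
    # THEN the intended action (e.g. drinking from a cup) is verified.
    long_contact = trapezoid(contact_seconds, 1.0, 3.0, 10.0, 15.0)
    firm_grip = trapezoid(grip_pressure, 0.2, 0.5, 0.9, 1.0)
    strength = min(long_contact, firm_grip)   # Mamdani AND
    return strength, strength >= 0.5          # assumed threshold
```

Combining antecedents with `min` means a single weak piece of evidence (e.g. a very brief object contact) vetoes verification, which matches the thesis's idea of confirming an intended action only when the fused evidence reaches a satisfactory threshold.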

    Multimodal Wearable Intelligence for Dementia Care in Healthcare 4.0: A Survey

    As a new revolution in Ubiquitous Computing and the Internet of Things, multimodal wearable intelligence is rapidly becoming a new research topic in both academia and industry. Owing to the rapid spread of wearable and mobile devices, this technique is evolving healthcare from traditional hub-based systems towards more personalised healthcare systems. This trend is well aligned with the recent Healthcare 4.0, a continuous process of transforming the entire healthcare value chain to be preventive, precise, predictive and personalised, with significant benefits to elderly care. However, harnessing multimodal wearable intelligence for elderly care, such as for people with dementia, is significantly challenging given many issues: a shortage of cost-effective wearable sensors, the heterogeneity of connected wearable devices, a high demand for interoperability, and so on. Focusing on these challenges, this paper gives a systematic review of advanced multimodal wearable intelligence technologies for dementia care in Healthcare 4.0. A framework is proposed for reviewing the current research on wearable intelligence; the paper then surveys key enabling technologies, major applications and successful case studies in dementia care, and finally points out future research trends and challenges in Healthcare 4.0

    Human-centred artificial intelligence for mobile health sensing: challenges and opportunities

    Advances in wearable sensing and mobile computing have enabled the collection of health and well-being data outside of traditional laboratory and hospital settings, paving the way for a new era of mobile health. Meanwhile, artificial intelligence (AI) has made significant strides in various domains, demonstrating its potential to revolutionize healthcare. Devices can now diagnose diseases, predict heart irregularities and unlock the full potential of human cognition. However, the application of machine learning (ML) to mobile health sensing poses unique challenges due to noisy sensor measurements, high-dimensional data, sparse and irregular time series, heterogeneity in data, privacy concerns and resource constraints. Despite the recognition of the value of mobile sensing, leveraging these datasets has lagged behind other areas of ML. Furthermore, obtaining quality annotations and ground truth for such data is often expensive or impractical. While recent large-scale longitudinal studies have shown promise in leveraging wearable sensor data for health monitoring and prediction, they also introduce new challenges for data modelling. This paper explores the challenges and opportunities of human-centred AI for mobile health, focusing on key sensing modalities such as audio, location and activity tracking. We discuss the limitations of current approaches and propose potential solutions

    Radar for Assisted Living in the Context of Internet of Things for Health and Beyond

    This paper discusses the place of radar for assisted living in the context of IoT for Health and beyond. First, the context of assisted living and the urgency of addressing the problem are described. The second part reviews existing sensing modalities for assisted living and explains why radar is an upcoming preferred modality for this issue. The third section presents developments in machine learning that help improve classification performance, especially deep learning, with a reflection on lessons learned from it. The fourth section introduces recently published work from our research group that shows promise with multimodal sensor fusion for classification, and with long short-term memory applied to the early stages of the radar signal processing chain. Finally, we conclude with open challenges still to be addressed in the area and with future research directions in animal welfare

    Protocol for PD SENSORS: Parkinson’s Disease Symptom Evaluation in a Naturalistic Setting producing Outcomes measuRes using SPHERE technology. An observational feasibility study of multi-modal multi-sensor technology to measure symptoms and activities of daily living in Parkinson’s disease

    Introduction: The impact of disease-modifying agents on disease progression in Parkinson’s disease is largely assessed in clinical trials using clinical rating scales. These scales have drawbacks in terms of their ability to capture the fluctuating nature of symptoms while living in a naturalistic environment. The SPHERE (Sensor Platform for HEalthcare in a Residential Environment) project has designed a multi-sensor platform with multimodal devices designed to allow continuous, relatively inexpensive, unobtrusive sensing of motor, non-motor and activities of daily living metrics in a home or a home-like environment. The aim of this study is to evaluate how the SPHERE technology can measure aspects of Parkinson’s disease.
    Methods and analysis: This is a small-scale feasibility and acceptability study during which 12 pairs of participants (comprising a person with Parkinson’s and a healthy control participant) will stay and live freely for 5 days in a home-like environment embedded with SPHERE technology, including environmental, appliance-monitoring, wrist-worn accelerometry and camera sensors. These data will be collected alongside clinical rating scales, participant diary entries and expert clinician annotations of colour video images. Machine learning will be used to look for a signal to discriminate between Parkinson’s disease and control, and between Parkinson’s disease symptoms ‘on’ and ‘off’ medications. Additional outcome measures including bradykinesia, activity level, sleep parameters and some activities of daily living will be explored. Acceptability of the technology will be evaluated qualitatively using semi-structured interviews.
    Ethics and dissemination: Ethical approval has been given to commence this study; the results will be disseminated as widely as appropriate