
    Fuzzy Finite State Machine for human activity modelling and recognition

    Independent living is a housing arrangement designed exclusively for older adults to support them with their Activities of Daily Living (ADLs) in a safe and secure environment. The provision of independent living can reduce the cost of social care while keeping elderly residents in their own homes. There is therefore a need for an automated system that monitors residents, understands their activities and, only when abnormal activities are identified, calls in human support to resolve the issue. Three main approaches are used for gathering data that represent human activities: ambient sensory devices, wearable sensory devices and camera vision devices. Ambient sensory device-based systems use sensors such as Passive Infra-Red (PIR) and door entry sensors to capture a user's presence or absence within a specific area and record it as binary information. Gathering data with these sensory devices is widely accepted, as they are unobtrusive and do not interfere with the ADLs. In contrast, wearable sensory device-based and camera vision device-based approaches are undesirable to many users, especially older adults, who often forget to wear the devices or have privacy concerns. Recognising and modelling human activities from unobtrusive sensors is a topic addressed in Ambient Intelligence (AmI) research. The research proposed in this thesis aims to recognise and model human activities in an indoor environment based on ambient sensory device data. Different methods, including statistical, machine learning and deep learning techniques, have already been researched to address the challenges of recognising and modelling human activities. The research in this thesis focuses mainly on the application of the Fuzzy Finite State Machine (FFSM) to human activity modelling and proposes ways of enhancing FFSM performance to improve the accuracy of human activity modelling.
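The core FFSM idea above rests on fuzzifying crisp sensor features into graded state memberships. A minimal sketch of that step is below; the fuzzy-set names and breakpoints (a hypothetical "kitchen occupancy duration" feature) are illustrative only and not taken from the thesis.

```python
def tri(x, a, b, c):
    """Triangular membership: 0 outside [a, c], peaking at 1 when x == b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical fuzzy sets over a duration feature in minutes (illustrative).
SETS = {"short": (0, 5, 15), "medium": (10, 20, 30), "long": (25, 40, 60)}

def fuzzify(minutes):
    """Map a crisp duration to a membership degree for each fuzzy set."""
    return {name: tri(minutes, *params) for name, params in SETS.items()}

degrees = fuzzify(18)  # partial membership in "medium", none in "short"/"long"
```

In an FFSM these degrees would then weight the state transitions, rather than forcing the system into a single crisp state.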
In this thesis, three novel contributions are made, outlined as follows. Firstly, a framework is proposed for combining the learning abilities of Neural Networks (NNs), the Long Short-Term Memory (LSTM) neural network and Convolutional Neural Networks (CNNs) with the existing FFSM for human activity modelling and recognition. These models are referred to as NN-FFSM, LSTM-FFSM and CNN-FFSM. Secondly, to obtain the optimal feature representation from the acquired sensory information, relevant features are extracted and fuzzified with selected membership degrees; these features are then applied to the different enhanced FFSM models. Thirdly, binary data gathered from the ambient sensors, including PIR and door entry sensors, are represented as greyscale images. A pre-trained Deep Convolutional Neural Network (DCNN) such as AlexNet is used to select and extract features from the greyscale image generated for each activity. The selected features are then used as inputs to Adaptive Boosting (AdaBoost) and Fuzzy C-means (FCM) classifiers for modelling and recognising the ADLs of a single user. The proposed enhanced FFSM models were tested and evaluated using two different datasets representing the ADLs of a single user. The first dataset was collected at the Smart Home facilities at NTU, and the second is a public dataset from the CASAS smart home project.
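The third contribution, rendering binary sensor events as greyscale images for a pre-trained DCNN, can be sketched roughly as follows. The window length, sensor count and event encoding here are assumptions for illustration; the thesis does not specify them in this abstract.

```python
import numpy as np

def sensors_to_image(events, n_sensors, window):
    """Render binary sensor activations as a (time x sensor) greyscale image.

    events: iterable of (timestep, sensor_index) pairs that fired in the window.
    An active sensor becomes a white pixel (255) on a black background.
    """
    img = np.zeros((window, n_sensors), dtype=np.uint8)
    for t, s in events:
        img[t, s] = 255
    return img

# Three hypothetical PIR/door activations inside an 8-step window of 5 sensors.
img = sensors_to_image([(0, 2), (1, 2), (3, 0)], n_sensors=5, window=8)
```

An image like this could then be resized to the input resolution a network such as AlexNet expects before feature extraction.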

    Sensor-based human activity recognition: Overcoming issues in a real world setting

    The rapid aging of the population in industrialized societies calls for advanced tools to continuously monitor people's activities. The goals of such tools are usually to support active and healthy aging, and to detect possible health issues early so as to enable a long and independent life. Recent advancements in sensor miniaturization and wireless communications have paved the way for unobtrusive activity recognition systems. Hence, many pervasive health care systems have been proposed that monitor activities through unobtrusive sensors and machine learning or artificial intelligence methods. Unfortunately, while those systems are effective in controlled environments, their actual effectiveness out of the lab is still limited by various shortcomings of existing approaches. In this work, we explore such systems and aim to overcome these limitations. Focusing on physical movements and crucial activities, our goal is to develop robust activity recognition methods, based on external and wearable sensors, that produce high-quality results in a real-world setting. Under laboratory conditions, existing research has already shown that wearable sensors are suitable for recognizing physical activities, while external sensors are promising for more complex activities. Consequently, we investigate the problems that emerge when coming out of the lab. These include handling the position of wearable devices, the need for large, expensive labeled datasets, the requirement to recognize activities in almost real time, the necessity of adapting deployed systems online to changes in the user's behavior, the variability in how an activity is executed, and the use of data and models across people. As a result, we present feasible solutions to these problems and provide useful insights for implementing the corresponding techniques.
Further, we introduce approaches and novel methods for both external and wearable sensors, and clarify the limitations and capabilities of the respective sensor types. We investigate the two types separately to clarify their contribution and applicability to recognizing different types of activities in a real-world scenario. Overall, our comprehensive experiments and discussions show, on the one hand, the feasibility of recognizing both physical activities and complex activities in a real-world scenario. Comparing our techniques and results with existing works and state-of-the-art techniques also provides evidence of the reliability and quality of the proposed techniques. On the other hand, we identify promising research directions and highlight that combining external and wearable sensors seems to be the next step towards going beyond activity recognition. In other words, our results and discussions show that combining external and wearable sensors would compensate for the weaknesses of the individual sensors with respect to certain activity types and scenarios. Therefore, by addressing the outlined problems, we pave the way for a hybrid approach. Along with the presented solutions, we conclude our work with a high-level multi-tier activity recognition architecture, showing that aspects such as physical activity, (emotional) condition, used objects, and environmental features are critical for reliably recognizing complex activities.
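Physical activity recognition from wearable sensors, as discussed above, typically starts from statistical features over sliding windows of the raw signal. The sketch below shows that standard first step on a single accelerometer axis; the window and step sizes are illustrative assumptions, not values from this work.

```python
import numpy as np

def window_features(signal, win, step):
    """Extract (mean, std) features over sliding windows of a 1-D signal."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append((float(np.mean(w)), float(np.std(w))))
    return feats

# Toy "accelerometer" trace; real systems would use tens of Hz per axis.
feats = window_features(np.arange(10, dtype=float), win=4, step=2)
```

Feature vectors like these would then feed a classifier; near-real-time operation follows from computing them incrementally as each window closes.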

    Brain-Computer Interface

    Brain-computer interfacing (BCI), combined with advanced artificial-intelligence-based identification, is a rapidly growing technology that allows the brain to silently command devices ranging from smartphones to advanced articulated robotic arms when physical control is not possible. BCI can be viewed as a collaboration between the brain and a device via the direct passage of electrical signals from neurons to an external system. The book provides a comprehensive summary of conventional and novel methods for processing brain signals. The chapters cover a range of topics including noninvasive and invasive signal acquisition, signal processing methods, deep learning approaches, and the implementation of BCI in experimental problems.
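A recurring building block in the signal processing methods such a book covers is band power: the energy of an EEG signal within a frequency band such as alpha (8-12 Hz). A minimal FFT-based sketch is below; the sampling rate and test tone are illustrative, and real pipelines would add windowing and artifact rejection.

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Power of signal x (sampled at fs Hz) within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].sum())

fs = 256
t = np.arange(fs) / fs               # one second of samples
x = np.sin(2 * np.pi * 10 * t)       # pure 10 Hz tone, inside the alpha band
alpha = band_power(x, fs, 8, 12)
beta = band_power(x, fs, 13, 30)     # essentially zero for this tone
```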

    Sequential learning and shared representation for sensor-based human activity recognition

    Human activity recognition based on sensor data has rapidly attracted considerable research attention due to its wide range of applications, including senior monitoring, rehabilitation, and healthcare. These applications require accurate human activity recognition systems to track and understand human behaviour. Yet developing such accurate systems poses critical challenges: they struggle to learn from temporal, sequential sensor data because of the variation and complexity of human activities. The main challenges in developing human activity recognition are accuracy and robustness, given the diversity and similarity of human activities, the skewed distribution of human activities, and the lack of a rich quantity of well-curated human activity data. This thesis addresses these challenges by developing robust deep sequential learning models that boost the performance of human activity recognition, handle the imbalanced class problem, and reduce the need for large amounts of annotated data. The thesis develops a set of new networks specifically designed for the challenges of building better HAR systems than the existing methods. First, it proposes robust, sequential deep learning models that accurately recognise human activities and improve on the current methods using data collected from smart home and wearable sensors. The proposed methods integrate convolutional neural networks and different attention mechanisms to efficiently process human activity data and capture the information significant for recognising human activities. Next, the thesis proposes methods to address the imbalanced class problem in human activity recognition systems. Joint learning of sequential deep learning algorithms, i.e., long short-term memory and convolutional neural networks, is proposed to boost the performance of human activity recognition, particularly for infrequent human activities.
In addition, the thesis proposes a data-level solution to the imbalanced class problem by extending the synthetic minority over-sampling technique (SMOTE) into a method named iSMOTE, which accurately labels the generated synthetic samples. These methods have enhanced the results for minority human activities and outperformed the current state-of-the-art methods. The thesis also proposes sequential deep learning networks that boost the performance of human activity recognition while reducing the dependency on a rich quantity of well-curated human activity data through transfer learning techniques. A multi-domain learning network is proposed to process data from multiple domains, transfer knowledge across different but related domains of human activities, and mitigate isolated learning paradigms using a shared representation. The advantages of the proposed method are, first, that it reduces the need and effort for labelled data in the target domain: the proposed network uses target-domain training data of restricted size together with the full training data of the source domain, yet provides better performance than using the full training data in a single-domain setting. Second, the proposed method can be used for small datasets. Lastly, the proposed multi-domain learning network reduces training time by producing a generic model for related domains, compared with fitting a model for each domain separately. In addition, the thesis proposes a self-supervised model to reduce the need for a considerable amount of annotated human activity data. The self-supervised method is pre-trained on unlabelled data and fine-tuned on a small amount of labelled data for supervised learning. The proposed self-supervised pre-training network produces human activity representations that are semantically meaningful and provides a good initialisation for supervised fine-tuning.
The developed network enhances the performance of human activity recognition while minimising the need for a considerable amount of labelled data. The proposed models are evaluated on multiple public benchmark datasets of sensor-based human activities and compared with the existing state-of-the-art methods. The experimental results show that the proposed networks boost the performance of human activity recognition systems.
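The iSMOTE extension described above builds on classic SMOTE, which synthesizes minority-class samples by interpolating between a point and one of its nearest minority-class neighbours. A minimal sketch of that base technique (not the thesis's iSMOTE labelling step) is below; the parameters are illustrative.

```python
import numpy as np

def smote(X, k=3, n_new=4, rng=None):
    """Classic SMOTE: create synthetic minority samples by interpolating
    between each chosen point and one of its k nearest minority neighbours."""
    rng = rng or np.random.default_rng(0)
    X = np.asarray(X, dtype=float)
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]      # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                     # interpolation factor in [0, 1)
        synth.append(X[i] + lam * (X[j] - X[i]))
    return np.array(synth)

# Four minority samples at the corners of the unit square; all synthetic
# points land on segments between pairs of them, hence inside [0, 1]^2.
X_min = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
S = smote(X_min)
```

iSMOTE, as described in the abstract, would additionally decide which class label each synthetic sample should receive rather than assuming it inherits the minority label.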

    Behaviour Profiling using Wearable Sensors for Pervasive Healthcare

    In recent years, sensor technology has advanced in terms of hardware sophistication and miniaturisation. This has led to the incorporation of unobtrusive, low-power sensors into networks centred on human participants, called Body Sensor Networks. Among the most important applications of these networks is their use in healthcare and healthy living. The technology has the potential to decrease the burden on healthcare systems by providing care at home, enabling early detection of symptoms, monitoring recovery remotely, and helping to avoid serious chronic illness by promoting healthy living through objective feedback. In this thesis, machine learning and data mining techniques are developed to estimate medically relevant parameters from a participant's activity and behaviour parameters, derived from simple, body-worn sensors. The first abstraction from raw sensor data is the recognition and analysis of activity. Machine learning analysis is applied to a study of activity profiling to detect impaired limb and torso mobility. One advance this thesis makes to activity recognition research is in applying machine learning to the analysis of 'transitional activities': the transient activity that occurs as people change what they are doing. A framework is proposed for the detection and analysis of transitional activities. To demonstrate the utility of transition analysis, we apply the algorithms to a study of participants undergoing and recovering from surgery, and show that meaningful changes in transitional activity can be seen as the participants recover. Assuming long-term monitoring, a large historical database of activity is expected to accumulate quickly. We develop algorithms to mine temporal associations between activity patterns, giving an outline of the user's routine. Methods for visual and quantitative analysis of routine using this summary data structure are proposed and validated.
The activity and routine mining methodologies developed for specialised sensors are adapted to a smartphone application, enabling large-scale use. Validation of the algorithms is performed using datasets collected in laboratory settings and free-living scenarios. Finally, future research directions and potential improvements to the techniques developed in this thesis are outlined.
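The transitional-activity analysis above starts from something simple: locating the points where a recognised activity label changes and counting which transitions occur. A minimal sketch of that first step is below; the label names are illustrative, and the thesis's actual framework analyses the sensor signal within each transition, not just the counts.

```python
from collections import Counter

def transition_counts(labels):
    """Count transitions between consecutive, distinct activity labels."""
    pairs = [(a, b) for a, b in zip(labels, labels[1:]) if a != b]
    return Counter(pairs)

# Hypothetical recognised-activity sequence from a body-worn sensor pipeline.
seq = ["sit", "sit", "stand", "walk", "walk", "sit"]
counts = transition_counts(seq)   # e.g. one sit->stand, one stand->walk, ...
```

Normalising such counts per day would give the kind of routine summary the thesis mines for temporal associations.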