Enhanced fuzzy finite state machine for human activity modelling and recognition
A key challenge in modelling and recognising human activity is designing a model that can deal with the uncertainty in human behaviour. Several machine learning and deep learning techniques have been employed to model the Activities of Daily Living (ADLs) that represent human activity. This paper proposes an enhanced Fuzzy Finite State Machine (FFSM) model that combines the classical FFSM with a Long Short-Term Memory (LSTM) neural network and a Convolutional Neural Network (CNN). The learning capability of the LSTM and CNN allows the system to learn the relationships in temporal human activity data and to identify the parameters of the rule-based system, the building blocks of the FFSM, through time steps in the learning mode. The learned parameters are then used to generate the fuzzy rules that govern the transitions between the system's states representing activities. The proposed enhanced FFSMs were tested and evaluated on two different datasets: a real dataset collected by our research group and a public dataset from the CASAS smart home project. Using LSTM-FFSM, the experiments achieved accuracies of 95.7% and 97.6% on the first and second dataset, respectively. When CNN-FFSM was applied to both datasets, the obtained results were 94.2% and 99.3%, respectively.
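The core FFSM idea in the abstract above can be illustrated with a minimal sketch: fuzzy rules fire with some strength, and state memberships are updated by Mamdani-style min/max inference. The state names, membership function, and rule weights below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a fuzzy finite state machine (FFSM) update step.
# States, membership breakpoints, and rules are illustrative assumptions.

def triangular(x, a, b, c):
    """Triangular membership function rising a..b and falling b..c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def ffsm_step(state_memberships, firing_strengths):
    """Update fuzzy state memberships from rule firing strengths.

    state_memberships: dict state -> current membership degree in [0, 1]
    firing_strengths: dict (from_state, to_state) -> rule firing strength
    Each rule contributes min(membership(from), strength) to its target
    state; contributions are combined with max (Mamdani-style inference).
    """
    new = {s: 0.0 for s in state_memberships}
    for (src, dst), w in firing_strengths.items():
        new[dst] = max(new[dst], min(state_memberships[src], w))
    # Normalise so memberships sum to 1 (one common convention).
    total = sum(new.values()) or 1.0
    return {s: v / total for s, v in new.items()}

states = {"sleeping": 1.0, "cooking": 0.0}
# Rule strengths would come from sensor features, e.g. kitchen PIR activity.
kitchen_activity = triangular(0.8, 0.0, 1.0, 2.0)  # = 0.8
rules = {("sleeping", "cooking"): kitchen_activity,
         ("sleeping", "sleeping"): 1.0 - kitchen_activity}
print(ffsm_step(states, rules))
```

In the enhanced models, the LSTM or CNN would learn the parameters behind `firing_strengths` from temporal sensor data rather than having them hand-crafted.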
A Survey on Multi-Resident Activity Recognition in Smart Environments
Human activity recognition (HAR) is a rapidly growing field that utilizes smart devices, sensors, and algorithms to automatically classify and identify the actions of individuals within a given environment. These systems have a wide range of applications, including assisting with caring tasks, increasing security, and improving energy efficiency. However, there are several challenges that must be addressed in order to effectively utilize HAR systems in multi-resident environments. One of the key challenges is accurately associating sensor observations with the identities of the individuals involved, which can be particularly difficult when residents are engaging in complex and collaborative activities. This paper provides a brief overview of the design and implementation of HAR systems, including a summary of the various data collection devices and approaches used for human activity identification. It also reviews previous research on the use of these systems in multi-resident environments and offers conclusions on the current state of the art in the field.
Comment: 16 pages, to appear in Evolution of Information, Communication and Computing Systems (EICCS) Book Series
Visualization as Intermediate Representations (VLAIR) for human activity recognition
Ambient, binary, event-driven sensor data is useful for many human activity recognition applications such as smart homes and ambient-assisted living. These sensors are privacy-preserving, unobtrusive, inexpensive and easy to deploy in scenarios that require detection of simple activities such as going to sleep and leaving the house. However, classification performance is still a challenge, especially when multiple people share the same space or when different activities take place in the same areas. To improve classification performance, we develop what we call a Visualization as Intermediate Representations (VLAIR) approach. The main idea is to re-represent the data as visualizations (generated pixel images) in a similar way to how visualizations are created for humans to analyse and communicate data. We can then feed these images to a convolutional neural network, whose strength resides in extracting effective visual features. We have tested five variants (mappings) of the VLAIR approach and compared them to a collection of classifiers commonly used in classic human activity recognition. The best of the VLAIR approaches outperforms the best baseline, with a strong advantage in recognising less frequent activities and in distinguishing users and activities in common areas. We conclude the paper with a discussion on why and how VLAIR can be useful in human activity recognition scenarios and beyond.
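The core re-representation step in VLAIR can be sketched as rasterising a window of binary sensor events into a small greyscale image for a CNN. The sensor names and the sensors-by-time layout below are assumptions for illustration; the paper evaluates several different visual mappings.

```python
import numpy as np

# Illustrative sketch: re-represent binary, event-driven sensor data as a
# pixel image. Rows index sensors, columns index time bins; intensity is
# the activation count in each bin, scaled to [0, 255].

def events_to_image(events, sensors, window, resolution):
    """Rasterise (timestamp, sensor) activation events into a 2D image."""
    img = np.zeros((len(sensors), resolution), dtype=np.float32)
    row = {s: i for i, s in enumerate(sensors)}
    for t, s in events:
        col = min(int(t / window * resolution), resolution - 1)
        img[row[s], col] += 1.0
    if img.max() > 0:
        img = img / img.max() * 255.0  # scale counts to greyscale range
    return img.astype(np.uint8)

# Hypothetical sensor layout and a 10-second event window.
sensors = ["pir_kitchen", "door_front", "pir_bedroom"]
events = [(1.0, "pir_kitchen"), (2.5, "pir_kitchen"), (9.0, "door_front")]
img = events_to_image(events, sensors, window=10.0, resolution=8)
print(img.shape)  # (3, 8)
```

An image like this can be fed directly to a small convolutional network, letting the CNN's visual feature extraction do the work that hand-crafted features would otherwise have to.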
Fuzzy Finite State Machine for human activity modelling and recognition
Independent living is a housing arrangement designed exclusively for older adults to support them with their Activities of Daily Living (ADLs) in a safe and secure environment. The provision of independent living would reduce the cost of social care while elderly residents are kept in their own homes. There is therefore a need for an automated system that monitors residents, understands their activities, and provides human support to resolve an issue only when abnormal activities are identified.
Three main approaches are used for gathering data representing human activities: ambient sensory device-based, wearable sensory device-based and camera vision device-based. Ambient sensory device-based systems use sensors such as Passive Infra-Red (PIR) and door entry sensors to capture a user's presence or absence within a specific area and record it as binary information. Gathering data with these sensory devices is widely accepted, as they are unobtrusive and do not affect the ADLs. However, wearable sensory device-based and camera vision device-based approaches are undesirable to many users, especially older adults, who often forget to wear the devices, and they raise privacy concerns.
Recognising and modelling human activities from unobtrusive sensors is a topic addressed in Ambient Intelligence (AmI) research. The research proposed in this thesis aims to recognise and model human activities in an indoor environment based on ambient sensory device-based data. Different methods, including statistical, machine learning and deep learning techniques, have already been researched to address the challenges of recognising and modelling human activities. The research in this thesis focuses mainly on the application of the Fuzzy Finite State Machine (FFSM) to human activity modelling and proposes ways of enhancing FFSM performance to improve the accuracy of human activity modelling.
In this thesis, three novel contributions are made, outlined as follows. Firstly, a framework is proposed for combining the learning abilities of Neural Networks (NNs), Long Short-Term Memory (LSTM) neural networks and Convolutional Neural Networks (CNNs) with the existing FFSM for human activity modelling and recognition. These models are referred to as NN-FFSM, LSTM-FFSM and CNN-FFSM. Secondly, to obtain the optimal feature representation from the acquired sensory information, relevant features are extracted and fuzzified with the selected membership degrees; these features are then applied to the different enhanced FFSM models. Thirdly, binary data gathered from the ambient sensors, including PIR and door entry sensors, are represented as greyscale images. A pre-trained Deep Convolutional Neural Network (DCNN) such as AlexNet is used to select and extract features from the generated greyscale image for each activity. The selected features are then used as inputs to Adaptive Boosting (AdaBoost) and Fuzzy C-means (FCM) classifiers for modelling and recognising the ADLs of a single user.
The proposed enhanced FFSM models were tested and evaluated using two different datasets representing the ADLs of a single user. The first dataset was collected at the Smart Home facilities at NTU, and the second is a public dataset from the CASAS smart home project.
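The fuzzification step in the second contribution can be sketched as mapping an extracted feature, such as activity duration, to membership degrees over linguistic terms before it enters the enhanced FFSMs. The terms and breakpoints below are illustrative assumptions, not the thesis's actual membership functions.

```python
# Hedged sketch of feature fuzzification: a crisp feature value is mapped
# to membership degrees over linguistic terms. Terms and breakpoints are
# illustrative assumptions.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises over a..b, flat b..c, falls c..d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def fuzzify_duration(minutes):
    """Map an activity duration (minutes) to three linguistic terms."""
    return {
        "short":  trapezoid(minutes, -1, 0, 5, 15),
        "medium": trapezoid(minutes, 5, 15, 30, 45),
        "long":   trapezoid(minutes, 30, 45, 120, 121),
    }

print(fuzzify_duration(10))  # partly "short", partly "medium"
```

A vector of such membership degrees, rather than the raw feature values, is what a rule-based FFSM can consume directly.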
Complex Human Action Recognition in Live Videos Using Hybrid FR-DL Method
Automated human action recognition is one of the most attractive and practical research fields in computer vision, in spite of its high computational costs. In such systems, the human action labelling is based on the appearance and patterns of the motions in the video sequences; however, conventional methodologies and classic neural networks cannot use temporal information to predict actions in the upcoming frames of a video sequence. On the other hand, the computational cost of the preprocessing stage is high. In this paper, we address the challenges of the preprocessing phase by automatically selecting representative frames from the input sequences. Furthermore, we extract the key features of each representative frame rather than the full feature set. We propose a hybrid technique using background subtraction and HOG, followed by the application of a deep neural network and a skeletal modelling method. The combination of a CNN and the LSTM recursive network is considered for feature selection and for maintaining previous information, and finally a Softmax-KNN classifier is used for labelling human activities. We name our model the Feature Reduction & Deep Learning based action recognition method, or FR-DL for short. To evaluate the proposed method, we use the widely used UCF benchmark dataset, which includes 101 complicated activities in the wild. Experimental results show a significant improvement in terms of accuracy and speed in comparison with six state-of-the-art articles.
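The representative-frame selection idea in the abstract above can be sketched with a simple change-based score: keep the frames that differ most from their predecessor. This is a simplified stand-in; the paper's actual pipeline combines background subtraction with HOG features, which this numpy version does not reproduce.

```python
import numpy as np

# Hedged sketch of representative-frame selection: rank frames by mean
# absolute difference from the previous frame and keep the top k.
# A simplification of FR-DL's preprocessing, not the paper's method.

def select_representative_frames(frames, k):
    """Return sorted indices of k frames, favouring the largest changes."""
    diffs = [np.abs(frames[i] - frames[i - 1]).mean()
             for i in range(1, len(frames))]
    # Frame 0 is always kept as the reference frame.
    ranked = sorted(range(1, len(frames)),
                    key=lambda i: diffs[i - 1], reverse=True)
    return sorted([0] + ranked[:k - 1])

# Synthetic 4-frame "video": frame 2 introduces a large scene change.
frames = [np.zeros((4, 4)), np.zeros((4, 4)),
          np.ones((4, 4)), np.ones((4, 4))]
print(select_representative_frames(frames, 2))  # [0, 2]
```

Downstream stages then only compute expensive features (HOG, CNN activations) on the selected frames, which is where the preprocessing savings come from.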
Inferring Complex Activities for Context-aware Systems within Smart Environments
The rising ageing population worldwide and the prevalence of age-related conditions such as physical fragility, mental impairments and chronic diseases have significantly impacted quality of life and caused a shortage of health and care services. Over-stretched healthcare provision is leading to a paradigm shift in public healthcare. Thus, Ambient Assisted Living (AAL) using Smart Home (SH) technologies has been rigorously investigated to help address the aforementioned problems.
Human Activity Recognition (HAR) is a critical component of AAL systems, enabling applications such as just-in-time assistance, behaviour analysis, anomaly detection and emergency notifications. This thesis investigates the challenges of accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, it explores five complementary research challenges in HAR. The first study contributes to knowledge by developing a semantic-enabled data segmentation approach with user preferences. The second study takes the segmented sensor data and investigates recognising human ADLs at multiple action granularities: coarse- and fine-grained. At the coarse-grained level, semantic relationships between sensors, objects and ADLs are deduced, whereas at the fine-grained level, object usage above a satisfactory threshold, with evidence fused from multimodal sensor data, is leveraged to verify the intended actions. Moreover, to handle imprecise or vague interpretations of multimodal sensors and the challenges of data fusion, fuzzy set theory and fuzzy web ontology language (fuzzy-OWL) are leveraged. The third study focuses on incorporating uncertainties introduced into HAR by factors such as technological failure, object malfunction and human error. Existing uncertainty theories and approaches are analysed and, based on the findings, a probabilistic ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three to distinguish activities conducted by more than one inhabitant in a shared smart environment, using discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable system architecture for a real-time smart environment tailored to AAL systems and proposes a microservices architecture with off-the-shelf and bespoke sensing methods.
The initial semantic-enabled data segmentation study achieved 100% and 97.8% accuracy in segmenting sensor events under single- and mixed-activity scenarios. However, the average classification time to segment each sensor event was 3971 ms and 62183 ms for the single- and mixed-activity scenarios, respectively. The second study, detecting user actions at the fine-grained level, was evaluated with 30 and 153 fuzzy rules for two fine-grained movements on a dataset pre-collected from the real-time smart environment. Its results indicate good average accuracies of 83.33% and 100%, but with high average durations of 24648 ms and 105318 ms, posing further challenges for the scalability of fusion-rule creation. The third study was evaluated by incorporating the PR-OWL ontology with ADL ontologies and the Semantic Sensor Network (SSN) ontology to define four types of uncertainty present in a kitchen-based activity. The fourth study illustrated, through a case study, the extension of single-user AR to multi-user AR by combining discriminative sensors (RFID tags and fingerprint sensors) to identify and associate user actions with the aid of time-series analysis. The last study responds to the computation and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for the AAL system. A future research direction towards adopting fog/edge computing paradigms from cloud computing is discussed for higher availability, reduced network traffic/energy and cost, and a decentralised system.
As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the level of fine-grained user actions. The framework integrates three complementary ontologies to conceptualise factual, fuzzy and uncertain knowledge about the environment and ADLs, time-series analysis, and the discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and supporting utility tools such as a simulator and a synthetic ADL data generator were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and is currently supported by an Android mobile application and web-browser-based client interfaces for retrieving information such as live sensor events and HAR results.
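The discriminative-sensor idea in the fourth study can be sketched as attaching an identity to otherwise anonymous ambient events by finding the temporally closest identifying reading. The sensor names and the nearest-in-time rule below are illustrative assumptions, not the thesis's full time-series analysis.

```python
# Hedged sketch of multi-user event association: ambient events carry no
# identity, so each is attributed to the user whose most recent
# discriminative reading (e.g. RFID tag, fingerprint sensor) is closest
# in time. Names and the matching rule are illustrative assumptions.

def associate_events(ambient_events, identity_events):
    """Attach a user id to each ambient event.

    ambient_events: list of (timestamp, sensor) tuples.
    identity_events: list of (timestamp, user_id) tuples.
    Returns (timestamp, sensor, user_id) using the temporally closest id.
    """
    out = []
    for t, sensor in ambient_events:
        user = min(identity_events, key=lambda e: abs(e[0] - t))[1]
        out.append((t, sensor, user))
    return out

rfid = [(0.0, "alice"), (10.0, "bob")]
events = [(1.0, "pir_kitchen"), (9.5, "cupboard_door")]
print(associate_events(events, rfid))
```

A real deployment would add constraints such as room adjacency and per-user activity models on top of this temporal matching, as the thesis's time-series analysis suggests.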