A Survey on Multi-Resident Activity Recognition in Smart Environments
Human activity recognition (HAR) is a rapidly growing field that utilizes
smart devices, sensors, and algorithms to automatically classify and identify
the actions of individuals within a given environment. These systems have a
wide range of applications, including assisting with caring tasks, increasing
security, and improving energy efficiency. However, there are several
challenges that must be addressed in order to effectively utilize HAR systems
in multi-resident environments. One of the key challenges is accurately
associating sensor observations with the identities of the individuals
involved, which can be particularly difficult when residents are engaging in
complex and collaborative activities. This paper provides a brief overview of
the design and implementation of HAR systems, including a summary of the
various data collection devices and approaches used for human activity
identification. It also reviews previous research on the use of these systems
in multi-resident environments and offers conclusions on the current state of
the art in the field.Comment: 16 pages, to appear in Evolution of Information, Communication and
Computing Systems (EICCS) Book Serie
A knowledge-based approach towards human activity recognition in smart environments
It has long been known that the population of older persons is on the rise. A recent report estimates that globally, the share of the population aged 65 years or over is expected to increase from 9.3 percent in 2020 to around 16.0 percent in 2050 [1]. This point has been one of the main sources of motivation for active research in the domain of human
activity recognition in smart-homes. The ability to perform ADL without assistance from
other people can be considered a reference for estimating the independent living
level of an older person. Conventionally, this has been assessed by health-care domain
experts via a qualitative evaluation of the ADL. Since this evaluation is qualitative, it can
vary based on the person being monitored and the caregiver's experience. A significant
amount of research work is implicitly or explicitly aimed at augmenting the health-care
domain expert's qualitative evaluation with quantitative data or knowledge obtained from
HAR. From a medical perspective, there is a lack of evidence about the technology readiness
level of smart home architectures supporting older persons by recognizing ADL [2]. We
hypothesize that this may be due to a lack of effective collaboration between smart-home
researchers/developers and health-care domain experts, especially when considering HAR.
We foresee an increase in HAR systems being developed in close collaboration with caregivers
and geriatricians to support their qualitative evaluation of ADL with explainable quantitative
outcomes of the HAR systems. This has been a motivation for the work in this thesis. The
recognition of human activities – in particular ADL – need not be limited to supporting
the health and well-being of older people. It can be relevant to home users in general. For
instance, HAR could support digital assistants or companion robots to provide contextually
relevant and proactive support to the home users, whether young adults or old. This has also
been a motivation for the work in this thesis.
Given our motivations, namely, (i) facilitation of iterative development and ease of collaboration between HAR system researchers/developers and health-care domain experts in ADL,
and (ii) robust HAR that can support digital assistants or companion robots, there is a need
for the development of a HAR framework that at its core is modular and flexible to facilitate
an iterative development process [3], which is an integral part of collaborative work that involves develop-test-improve phases. At the same time, the framework should be intelligible
for the sake of enriched collaboration with health-care domain experts. Furthermore, it
should be scalable, online, and accurate in order to achieve robust HAR, which can enable many
smart-home applications. The goal of this thesis is to design and evaluate such a framework.
This thesis contributes to the domain of HAR in smart-homes. In particular, the contribution can be divided into three parts. The first contribution is Arianna+, a framework to develop
networks of ontologies - for knowledge representation and reasoning - that enables smart
homes to perform human activity recognition online. The second contribution is OWLOOP,
an API that supports the development of HAR system architectures based on Arianna+. It
enables the usage of the Web Ontology Language (OWL) by means of Object-Oriented
Programming (OOP). The third contribution is the evaluation and exploitation of Arianna+
using OWLOOP API. The exploitation of Arianna+ using OWLOOP API has resulted in four
HAR system implementations. The evaluations and results of these HAR systems emphasize
the novelty of Arianna+.
Inferring Complex Activities for Context-aware Systems within Smart Environments
The rising ageing population worldwide and the prevalence of age-related conditions such as physical fragility, mental impairments and chronic diseases have significantly impacted the quality of life and caused a shortage of health and care services. Over-stretched healthcare providers are leading to a paradigm shift in public healthcare provisioning. Thus, Ambient Assisted Living (AAL) using Smart Homes (SH) technologies has been rigorously investigated to help address the aforementioned problems.
Human Activity Recognition (HAR) is a critical component in AAL systems which enables applications such as just-in-time assistance, behaviour analysis, anomaly detection and emergency notifications. This thesis is aimed at investigating challenges faced in accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, this thesis explores five complementary research challenges in HAR. The first study contributes to knowledge by developing a semantic-enabled data segmentation approach with user preferences. The second study takes the segmented set of sensor data to investigate and recognise human ADLs at multiple levels of action granularity: coarse- and fine-grained. At the coarse-grained action level, semantic relationships between the sensor, object and ADLs are deduced, whereas at the fine-grained action level, object usage at a satisfactory threshold, with evidence fused from multimodal sensor data, is leveraged to verify the intended actions. Moreover, due to imprecise/vague interpretations of multimodal sensors and data fusion challenges, fuzzy set theory and the fuzzy web ontology language (fuzzy-OWL) are leveraged. The third study focuses on incorporating uncertainties that arise in HAR due to factors such as technological failure, object malfunction, and human error. Hence, uncertainty theories and approaches from existing studies are analysed and, based on the findings, a probabilistic ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three studies to distinguish activities conducted by more than one inhabitant in a shared smart environment with the use of discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable system architecture for a real-time smart environment tailored to AAL systems and proposes a microservices architecture with off-the-shelf and bespoke sensor-based sensing methods.
The initial semantic-enabled data segmentation study was evaluated at 100% and 97.8% accuracy for segmenting sensor events under single and mixed activity scenarios. However, the average classification time taken to segment each sensor event suffered, at 3971 ms and 62183 ms for the single and mixed activity scenarios, respectively. The second study, detecting fine-grained-level user actions, was evaluated with 30 and 153 fuzzy rules to detect two fine-grained movements on a dataset pre-collected from the real-time smart environment. The results of the second study indicate good average accuracies of 83.33% and 100%, but with high average durations of 24648 ms and 105318 ms, posing further challenges for the scalability of fusion rule creation. The third study was evaluated by integrating the PR-OWL ontology with ADL ontologies and the Semantic-Sensor-Network (SSN) ontology to define four types of uncertainty present in a kitchen-based activity. The fourth study illustrated a case study that extended single-user AR to multi-user AR by combining RFID tags and fingerprint sensors as discriminative sensors to identify and associate user actions with the aid of time-series analysis. The last study responds to the computational and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for AAL systems. A future research direction of adopting fog/edge computing paradigms alongside cloud computing is discussed, for higher availability, reduced network traffic and energy use, lower cost, and a decentralised system.
As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the level of fine-grained user actions. This framework integrates three complementary ontologies to conceptualise factual, fuzzy and uncertain knowledge about the environment/ADLs, time-series analysis and the discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and other supportive utility tools such as a simulator and a synthetic ADL data generator were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and currently supported by an Android mobile application and web-browser based client interfaces for retrieving information such as live sensor events and HAR results.
Context-awareness for mobile sensing: a survey and future directions
The evolution of smartphones, together with their increasing computational power, has empowered developers to create innovative context-aware applications for recognizing user-related social and cognitive activities in any situation and at any location. The existence and awareness of context provide the capability of being conscious of the physical environment or situation around mobile device users. This allows network services to respond proactively and intelligently based on such awareness. The key idea behind context-aware applications is to encourage users to collect, analyze and share local sensory knowledge for large-scale community use by creating a smart network. The desired network is capable of making autonomous logical decisions to actuate environmental objects and to assist individuals. However, many open challenges remain, most of which arise because the middleware services provided on mobile devices have limited resources in terms of power, memory and bandwidth. Thus, it becomes critically important to study how these drawbacks can be addressed and resolved, and at the same time better understand the opportunities for the research community to contribute to context-awareness. To this end, this paper surveys the literature over the period 1991-2014, from the emerging concepts to applications of context-awareness on mobile platforms, providing up-to-date research and future research directions. Moreover, it points out the challenges faced in this regard and addresses them by proposing possible solutions.
Neuro-Symbolic Approaches for Context-Aware Human Activity Recognition
Deep Learning models are a standard solution for sensor-based Human Activity
Recognition (HAR), but their deployment is often limited by labeled data
scarcity and models' opacity. Neuro-Symbolic AI (NeSy) provides an interesting
research direction to mitigate these issues by infusing knowledge about context
information into HAR deep learning classifiers. However, existing NeSy methods
for context-aware HAR require computationally expensive symbolic reasoners
during classification, making them less suitable for deployment on
resource-constrained devices (e.g., mobile devices). Additionally, NeSy
approaches for context-aware HAR have never been evaluated on in-the-wild
datasets, and their generalization capabilities in real-world scenarios are
questionable. In this work, we propose a novel approach based on a semantic
loss function that infuses knowledge constraints in the HAR model during the
training phase, avoiding symbolic reasoning during classification. Our results
on scripted and in-the-wild datasets show the impact of different semantic loss
functions in outperforming a purely data-driven model. We also compare our
solution with existing NeSy methods and analyze each approach's strengths and
weaknesses. Our semantic loss remains the only NeSy solution that can be
deployed as a single DNN without the need for symbolic reasoning modules,
reaching recognition rates close to (and in some cases better than) existing
approaches.
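The knowledge-infusion idea behind a semantic loss can be illustrated with a minimal sketch. This is an illustrative formulation of ours, not the paper's exact loss function: it simply penalizes the probability mass that the classifier assigns to activities ruled out by the current context.

```python
import numpy as np

def semantic_loss(probs, allowed_mask):
    """Illustrative semantic loss: -log of the probability mass assigned to
    context-consistent activities. It is 0 when all mass lies on allowed
    classes and grows as mass leaks onto context-inconsistent ones.

    probs        : (n_classes,) softmax output of a HAR model
    allowed_mask : (n_classes,) 1 for context-consistent activities, else 0
    """
    consistent_mass = float(np.sum(np.asarray(probs) * np.asarray(allowed_mask)))
    return -np.log(max(consistent_mass, 1e-12))  # clamp avoids log(0)

# Example: 4 activities; a hypothetical context (e.g., "user is driving")
# rules out classes 1 and 3.
probs = np.array([0.7, 0.1, 0.15, 0.05])
mask = np.array([1, 0, 1, 0])
loss = semantic_loss(probs, mask)  # -log(0.85) ≈ 0.163
```

During training, a term like this would be added to the usual cross-entropy, so constraint violations are discouraged without any symbolic reasoner being needed at inference time.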
Machine learning techniques for sensor-based household activity recognition and forecasting
Thanks to the recent development of cheap and unobtrusive smart-home sensors, ambient assisted living tools promise to offer innovative solutions to support the users in carrying out their everyday activities in a smoother and more sustainable way. To be effective, these solutions need to constantly monitor and forecast the activities of daily living carried out by the inhabitants. The Machine Learning field has seen significant advancements in the development of new techniques, especially regarding deep learning algorithms. Such techniques can be successfully applied to household activity signal data to benefit the user in several applications.
This thesis therefore aims to demonstrate the contribution that artificial intelligence can make to the fields of activity recognition and energy consumption. The effective recognition of common actions or of the use of high-consumption appliances would enable user profiling, and thus the optimisation of energy consumption in favour of the user or of the energy community in general. Avoiding wasted electricity and optimising its consumption is one of the community's main objectives. This work is therefore intended as a forerunner for future studies that will allow, building on the results in this thesis, the creation of increasingly intelligent systems capable of making the best use of the user's resources for everyday-life actions.
Namely, this thesis focuses on signals from sensors installed in a house: data from position sensors, door sensors, smartphones or smart meters, and investigates the use of advanced machine learning algorithms to recognize and forecast inhabitant activities, including the use of appliances and the power consumption. The thesis is structured into four main chapters, each of which represents a contribution regarding Machine Learning or Deep Learning techniques for addressing challenges related to the aforementioned data from different sources.
The first contribution highlights the importance of exploiting dimensionality reduction techniques that can simplify a Machine Learning model and increase its efficiency by identifying and retaining only the most informative and predictive features for activity recognition. In more detail, we present an extensive experimental study involving several feature selection algorithms and multiple Human Activity Recognition benchmarks containing mobile sensor data.
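A filter-style feature selector of the kind compared in such studies can be sketched as follows. This is a toy example of ours (correlation-based ranking), not one of the specific algorithms evaluated in the thesis:

```python
import numpy as np

def select_top_k(X, y, k):
    """Toy filter-style feature selection: rank each feature column by its
    absolute Pearson correlation with the class label and keep the k most
    predictive columns. Returns the sorted indices of the kept features."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    Xc = X - X.mean(axis=0)           # center features
    yc = y - y.mean()                 # center labels
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12
    scores = np.abs(Xc.T @ yc) / denom   # |correlation| per feature
    keep = np.argsort(scores)[::-1][:k]  # indices of the top-k scores
    return np.sort(keep)

# Example: feature 0 tracks the label, feature 1 is noise,
# feature 2 is perfectly anti-correlated (still predictive).
X = np.array([[0, 5, 1], [1, 3, 0], [0, 9, 1], [1, 1, 0]])
y = np.array([0, 1, 0, 1])
kept = select_top_k(X, y, 2)  # features 0 and 2 are retained
```

A classifier trained only on the retained columns is smaller and faster, which is exactly the efficiency argument made above.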
In the second contribution, we propose a machine learning approach to forecast future energy consumption considering not only past consumption data, but also context data such as inhabitants’ actions and activities, use of household appliances, interaction with furniture and doors, and environmental data. We performed an experimental evaluation with real-world data acquired in an instrumented environment from a large user group.
Finally, the last two contributions address the Non-Intrusive-Load-Monitoring problem.
In one case, the aim is to identify the operating state (on/off) and the precise energy consumption of individual electrical loads, considering only the aggregate consumption of these loads as input. We use a Deep Learning method to disaggregate the low-frequency energy signal generated directly by the new generation smart meters being deployed in Italy, without the need for additional specific hardware.
In the other case, driven by the need to build intelligent non-intrusive algorithms for disaggregating electrical signals, the work aims to recognize which appliance is activated by analyzing energy measurements and classifying appliances through Machine Learning techniques. Namely, we present a new way of approaching the problem by unifying the Single Label (single active appliance recognition) and Multi Label (multiple active appliance recognition) learning paradigms. This combined approach, supplemented with an event detector that suggests the instants of activation, would allow the development of an end-to-end NILM approach.
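The role of the event detector mentioned above can be illustrated with a minimal sketch. This is our own simplified version of the idea (a step-change detector on the aggregate signal), not the thesis's actual method; the threshold value is an assumption:

```python
import numpy as np

def detect_events(aggregate, threshold=30.0):
    """Toy NILM event detector: flag instants where the aggregate power
    signal jumps by more than `threshold` watts, suggesting an appliance
    switched on (positive step) or off (negative step)."""
    diffs = np.diff(np.asarray(aggregate, dtype=float))
    on = np.where(diffs > threshold)[0] + 1   # index of sample after the jump
    off = np.where(diffs < -threshold)[0] + 1
    return on.tolist(), off.tolist()

# Example aggregate trace (W): a ~2 kW appliance turns on at t=2, off at t=5.
trace = [100, 102, 2100, 2098, 2101, 95, 97]
on, off = detect_events(trace)  # on=[2], off=[5]
```

The detected instants would then be handed to the single/multi-label classifier, which only has to decide *which* appliance(s) changed state at each suggested instant.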
Unsupervised Human Activity Recognition Using the Clustering Approach: A Review
Currently, many applications have emerged from the implementation of software development and hardware use, known as the Internet of Things. One of the most important application areas of this type of technology is health care. Various applications arise daily in order to improve the quality of life and to improve the treatment of patients at home who suffer from different pathologies. That is why a line of work of great interest has emerged, focused on the study and analysis of activities of daily living and on the use of different data analysis techniques to identify and help manage this type of patient. This article shows the result of a systematic review of the literature on the use of the clustering method, which is one of the most used techniques in the analysis of unsupervised data applied to activities of daily living, as well as a description of important variables such as year of publication, type of article, most used algorithms, types of dataset used, and metrics implemented. These data will allow the reader to locate recent results of the application of this technique to a particular area of knowledge.
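As a concrete point of reference for the reviewed technique, a minimal k-means clustering of unlabeled activity episodes can be sketched as below. The feature choice (episode duration, sensor-event count) is our own illustrative assumption, not taken from any surveyed study:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: group unlabeled daily-activity feature vectors
    (e.g., episode duration, sensor-event count) into k clusters without
    any activity labels."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    # Initialize centers from k distinct data points.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean).
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two obvious groups: short/low-activity vs long/high-activity episodes.
X = [[1, 2], [2, 1], [1, 1], [10, 11], [11, 10], [10, 10]]
labels, _ = kmeans(X, 2)
```

In the surveyed setting, the resulting clusters would then be inspected (or matched against known routines) to interpret them as candidate activities of daily living.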
Human Action Recognition with RGB-D Sensors
Human action recognition, also known as HAR, is at the foundation of many different applications related to behavioral analysis, surveillance, and safety, and thus has been a very active research area in recent years. The release of inexpensive RGB-D sensors has fostered research in this field, because depth data simplify the processing of visual data that could otherwise be difficult using classic RGB devices. Furthermore, the availability of depth data allows the implementation of solutions that are unobtrusive and privacy-preserving with respect to classic video-based analysis. In this scenario, the aim of this chapter is to review the most salient techniques for HAR based on depth signal processing, providing some details on a specific method based on a temporal pyramid of key poses, evaluated on the well-known MSR Action3D dataset.
Cippitelli, Enea; Gambi, Ennio; Spinsante, Susanna