8 research outputs found

    Activity Recognition using Hierarchical Hidden Markov Models on Streaming Sensor Data

    Full text link
    Activity recognition from sensor data faces various challenges, such as overlapping activities, activity labeling, and activity detection. Although each of these challenges is important, the most pressing is online activity recognition. The present study uses an online hierarchical hidden Markov model to detect activities on a stream of sensor data, so that the current activity in the environment can be predicted at every sensor event. The activity recognition samples were labeled using statistical features such as activity duration. Tests of the proposed method on two real-world smart-home datasets showed an improvement of 4% on one dataset (reaching 59%), while results reached 64.6% on the other dataset with the best-performing method.
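
    The abstract gives no implementation details, so the following Python sketch only illustrates the general online-filtering idea (updating an activity belief after every incoming sensor event rather than waiting for a complete segment). The two activities, the three sensor-event symbols, and all probabilities are invented placeholders, and the model is a flat HMM rather than the paper's hierarchical one.

        import numpy as np

        # Placeholder two-activity, three-symbol HMM; the numbers are invented and the
        # model is flat rather than hierarchical.
        activities = ["sleeping", "cooking"]
        A = np.array([[0.95, 0.05],           # P(next activity | current activity)
                      [0.10, 0.90]])
        B = np.array([[0.80, 0.15, 0.05],     # P(sensor event symbol | activity)
                      [0.05, 0.25, 0.70]])
        belief = np.array([0.5, 0.5])         # uniform prior over the activities

        def update(belief, event):
            # One online filtering step: predict with A, then correct with the new event.
            posterior = (belief @ A) * B[:, event]
            return posterior / posterior.sum()

        # Events arrive one at a time; an activity estimate is available after every event.
        for event in [0, 0, 1, 2, 2]:
            belief = update(belief, event)
            print(activities[int(np.argmax(belief))], belief.round(3))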

    An unsupervised behavioral modeling and alerting system based on passive sensing for elderly care

    Get PDF
    Artificial Intelligence in combination with the Internet of Medical Things enables remote healthcare services through networks of environmental and/or personal sensors. We present a remote healthcare service system which collects real-life data through an environmental sensor package, including binary motion, contact, pressure, and proximity sensors, installed at the households of elderly people. Its aim is to keep caregivers informed of the subjects’ progressive health-status trajectory and to alert them to health-related anomalies, so that objective, on-demand healthcare services can be delivered at scale. The system was deployed in 19 households inhabited by an elderly person with a post-stroke condition in the Emilia–Romagna region of Italy, with maximal and median observation durations of 98 and 55 weeks. Among these households, 17 were multi-occupancy residences, while the other 2 housed elderly patients living alone. Subjects’ daily behavioral diaries were extracted and registered from raw sensor signals using rule-based data pre-processing and unsupervised algorithms. Personal behavioral habits were identified and compared to typical patterns reported in behavioral science, as a quality-of-life indicator. We consider the activity patterns extracted across all users as a dictionary and represent each patient’s behavior as a ‘Bag of Words’, based on which patients can be categorized into sub-groups for precision cohort treatment. Longitudinal trends of a patient’s behavioral trajectory and sudden abnormalities were detected and reported to care providers. Due to the sparse sensor setting and the multi-occupancy living conditions, the sleep profile was used as the main indicator in our system. Experimental results demonstrate the ability to report on subjects’ daily activity patterns in terms of sleep, outings, visits, and health-status trajectories, as well as to predict/detect 75% of hospitalization sessions up to 11 days in advance. 65% of the alerts were confirmed to be semantically meaningful by the users. Furthermore, reduced social interaction (outings and visits) and lower sleep quality could be observed across the cohort during the COVID-19 lockdown period.
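
    As a rough, hypothetical illustration of the ‘Bag of Words’ idea described above (a shared dictionary of activity patterns, per-patient counts, then grouping into sub-groups), the sketch below clusters made-up pattern counts with scikit-learn's KMeans; the dictionary entries, the counts, and the choice of k-means are assumptions, not the paper's pipeline.

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical dictionary of behavioural "words" (the paper builds it from patterns
        # extracted across all users) and made-up per-patient counts of each pattern.
        dictionary = ["night_sleep", "daytime_nap", "short_outing", "long_outing", "visit"]
        bags = np.array([
            [60,  5, 20, 2, 8],
            [55, 30,  2, 0, 1],
            [58,  4, 18, 3, 9],
            [50, 28,  3, 1, 2],
        ])

        # Group patients with similar behavioural profiles into cohorts.
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(bags)
        for i, bag in enumerate(bags):
            print(f"patient {i}: cluster {labels[i]}, dominant pattern: {dictionary[int(np.argmax(bag))]}")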

    Anomaly detection in elderly daily behavior in ambient sensing environments

    Get PDF
    Current ubiquitous computing applications for smart homes aim to enhance people’s daily living across the age span. Among the target groups, the elderly are a population eager for “choices for living arrangements” that would allow them to continue living in their own homes while still receiving the health care they need. Given the growing elderly population, there is a need for statistical models able to capture the recurring patterns of daily life and to reason over this information. We present an analysis of real-life sensor data collected from 40 different households of elderly people, using motion, door, and pressure sensors. Our objective is to automatically observe and model the daily behavior of the elderly and to detect anomalies that could occur in the sensor data. For this purpose, we first introduce an abstraction layer to create a common ground across home sensor configurations. Next, we build a probabilistic spatio-temporal model to summarize daily behavior. Anomalies are then defined as significant deviations from the learned behavioral model and are detected using a cross-entropy measure. We compared the detected anomalies with manually collected annotations, and the results show that the presented approach is able to detect significant behavioral changes of the elderly.
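
    The abstract names a cross-entropy measure against a learned spatio-temporal model; the sketch below shows one plausible, simplified reading of that idea, using a room-by-hour probability map and invented motion counts. The rooms, hours, smoothing constant, and example days are placeholders, not the authors' data or exact formulation.

        import numpy as np

        HOURS = 24
        rooms = ["bedroom", "kitchen", "living_room"]

        def distribution(counts, eps=1e-3):
            # Normalise room-by-hour motion counts into a probability map (smoothed so logs stay finite).
            p = counts.astype(float) + eps
            return p / p.sum()

        def cross_entropy(p_day, p_model):
            # H(p_day, p_model): grows when today's activity falls where the model expects almost none.
            return float(-np.sum(p_day * np.log(p_model)))

        # Made-up baseline: bedroom busy at night, kitchen at meal times, living room in the daytime.
        baseline = np.zeros((len(rooms), HOURS))
        baseline[0, 0:7] = 10       # bedroom, 00:00-07:00
        baseline[1, 7:9] = 8        # kitchen, breakfast
        baseline[1, 18:20] = 8      # kitchen, dinner
        baseline[2, 9:22] = 5       # living room, daytime
        p_model = distribution(baseline)

        usual_day = distribution(baseline + 1)      # roughly follows the learned pattern
        odd = np.zeros((len(rooms), HOURS))
        odd[1, 2:5] = 10                            # kitchen activity at 02:00-05:00 only
        odd_day = distribution(odd)

        # The odd day scores far higher than the usual day and would be flagged against a threshold.
        for name, day in [("usual", usual_day), ("odd", odd_day)]:
            print(name, round(cross_entropy(day, p_model), 2))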

    An overview of data fusion techniques for internet of things enabled physical activity recognition and measure

    Get PDF
    Owing to the substantial benefits of physical activity for physical and mental health and its strong association with many rehabilitation programs, Physical Activity Recognition and Measure (PARM) has been widely recognised as a key paradigm for a variety of smart healthcare applications. Traditional methods for PARM rely on designing and utilising data fusion or machine learning techniques to process ambient and wearable sensing data, classify types of physical activity, and reduce their uncertainties. Yet they mostly focus on controlled environments, aiming to increase the range of identifiable activities and subjects, improve recognition accuracy, and make measurement more robust. The emergence of Internet of Things (IoT) enabling technologies is moving PARM studies to an open, dynamic, and uncontrolled ecosystem by connecting heterogeneous, cost-effective wearable devices, mobile apps, and various groups of users. Little is currently known about whether traditional data fusion techniques can tackle the new challenges of IoT environments, or how these technologies can be effectively harnessed and improved. In an effort to understand the potential use and opportunities of data fusion techniques in IoT-enabled PARM applications, this paper gives a systematic review, critically examining PARM studies from the perspective of a novel 3D dynamic IoT-based physical activity collection and validation model. It summarises traditional state-of-the-art data fusion techniques along the three planes of the 3D dynamic IoT model: devices, persons, and timeline. The paper goes on to identify new research trends and challenges for data fusion techniques in IoT-enabled PARM studies, and discusses some key enabling techniques for tackling them.
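
    As a minimal, hypothetical example of the kind of decision-level fusion such PARM systems use (not a technique taken from this review), the snippet below combines activity posteriors from two sensing sources with fixed trust weights; the activity set, the probabilities, and the weights are all invented.

        import numpy as np

        # Each source reports a posterior over the same activity set; the weights encode how much
        # each source is trusted. All names and numbers here are invented for illustration.
        activities = ["walking", "sitting", "cycling"]
        wrist_imu = np.array([0.70, 0.20, 0.10])
        phone_gps = np.array([0.40, 0.10, 0.50])
        weights = np.array([0.6, 0.4])

        fused = weights[0] * wrist_imu + weights[1] * phone_gps     # weighted decision-level fusion
        print(activities[int(np.argmax(fused))], fused.round(2))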

    An Ontology-Based Hybrid Approach to Activity Modeling for Smart Homes

    Get PDF
    Activity models play a critical role for activity recognition and assistance in ambient assisted living. Existing approaches to activity modeling suffer from a number of problems, e.g., cold-start, model reusability, and incompleteness. In an effort to address these problems, we introduce an ontology-based hybrid approach to activity modeling that combines domain-knowledge-based model specification with data-driven model learning. Central to the approach is an iterative process that begins with “seed” activity models created by ontological engineering. The “seed” models are deployed and subsequently evolved through incremental activity discovery and model updates. While our previous work has detailed ontological activity modeling and activity recognition, this paper focuses on the systematic hybrid approach and the associated methods and inference rules for learning new activities and user activity profiles. The approach has been implemented in a feature-rich assistive living system. The experiments conducted have been analysed to test and evaluate the activity learning algorithms and associated mechanisms.
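
    The sketch below is a loose, simplified illustration of the “seed model plus incremental discovery” idea, not the authors' ontological formalism or inference rules: seed activities are written as required object sets, recognition is set containment, and an unmatched, sufficiently rich event set is proposed as a candidate new activity.

        # Seed activity definitions: an activity and the sensorised objects it minimally involves.
        seed_models = {
            "make_tea": {"kettle", "cup", "tea_jar"},
            "watch_tv": {"tv_remote", "sofa_pressure"},
        }

        def recognise(window_events):
            # Return the first seed activity whose required objects all appear in the sensor window.
            for activity, required in seed_models.items():
                if required <= window_events:
                    return activity
            return None

        def maybe_discover(window_events, min_size=2):
            # If no seed model explains a sufficiently rich event set, propose it as a candidate
            # activity so the model set can grow over time (a crude stand-in for model learning).
            if recognise(window_events) is None and len(window_events) >= min_size:
                name = "candidate_" + "_".join(sorted(window_events))
                seed_models[name] = set(window_events)
                return name
            return None

        print(recognise({"kettle", "cup", "tea_jar", "fridge"}))    # matches the make_tea seed model
        print(maybe_discover({"microwave", "fridge"}))              # becomes a new candidate activity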

    Inferring Complex Activities for Context-aware Systems within Smart Environments

    Get PDF
    The rising ageing population worldwide and the prevalence of age-related conditions such as physical fragility, mental impairments, and chronic diseases have significantly impacted quality of life and caused a shortage of health and care services. Over-stretched healthcare providers are driving a paradigm shift in public healthcare provisioning. Thus, Ambient Assisted Living (AAL) using Smart Home (SH) technologies has been rigorously investigated to help address the aforementioned problems. Human Activity Recognition (HAR) is a critical component of AAL systems, enabling applications such as just-in-time assistance, behaviour analysis, anomaly detection, and emergency notifications. This thesis investigates the challenges of accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, it explores five complementary research challenges in HAR. The first study contributes to knowledge by developing a semantic-enabled data segmentation approach with user preferences. The second study takes the segmented sensor data and investigates recognising human ADLs at multiple granularities: coarse- and fine-grained action level. At the coarse-grained level, semantic relationships between sensors, objects, and ADLs are deduced, whereas at the fine-grained level, object usage at a satisfactory threshold, with evidence fused from multimodal sensor data, is leveraged to verify the intended actions. Moreover, to handle imprecise/vague interpretations of multimodal sensors and data fusion challenges, fuzzy set theory and fuzzy web ontology language (fuzzy-OWL) are leveraged. The third study focuses on incorporating uncertainties in HAR caused by factors such as technological failure, object malfunction, and human error; existing uncertainty theories and approaches are analysed and, based on the findings, a probabilistic ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three to distinguish activities conducted by more than one inhabitant in a shared smart environment, using discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable system architecture for a real-time smart environment tailored to AAL and proposes a microservices architecture with sensor-based off-the-shelf and bespoke sensing methods. The initial semantic-enabled data segmentation study achieved 100% and 97.8% accuracy in segmenting sensor events under single- and mixed-activity scenarios; however, the average classification time per sensor event suffered, at 3971 ms and 62183 ms for the single- and mixed-activity scenarios, respectively. The second study, detecting fine-grained user actions, was evaluated with 30 and 153 fuzzy rules to detect two fine-grained movements on a dataset pre-collected from the real-time smart environment; its results indicate good average accuracies of 83.33% and 100%, but with high average durations of 24648 ms and 105318 ms, posing further challenges for the scalability of fusion-rule creation. The third study was evaluated by integrating a PR-OWL ontology with ADL ontologies and the Semantic Sensor Network (SSN) ontology to define four types of uncertainty present in a kitchen-based activity. The fourth study illustrated a case study extending single-user AR to multi-user AR by combining RFID tags and fingerprint sensors as discriminative sensors to identify and associate user actions with the aid of time-series analysis. The last study responds to the computational and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for the AAL system; a future research direction towards adopting fog/edge computing paradigms from cloud computing is discussed for higher availability, reduced network traffic, energy and cost, and a decentralised system. As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the level of fine-grained user actions. This framework integrates three complementary ontologies to conceptualise factual, fuzzy, and uncertain aspects of the environment/ADLs, time-series analysis, and a discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and other supportive utility tools such as a simulator and a synthetic ADL data generator were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and is currently supported by an Android mobile application and web-browser-based client interfaces for retrieving information such as live sensor events and HAR results.
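
    As one small, invented example of verifying a fine-grained action from vague multimodal evidence (in the spirit of the fuzzy reasoning mentioned above, though not the thesis's fuzzy-OWL rule base), the sketch below combines two triangular membership functions with a minimum t-norm and a confidence threshold.

        def tri(x, a, b, c):
            # Triangular membership function rising from a to b and falling from b to c.
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def verify_pouring(cup_tilt_deg, flow_ml_s, threshold=0.6):
            # Fuzzy AND (minimum t-norm) of two pieces of evidence for the action "pouring water".
            tilted = tri(cup_tilt_deg, 20, 60, 100)
            flowing = tri(flow_ml_s, 5, 25, 60)
            confidence = min(tilted, flowing)
            return confidence, confidence >= threshold

        print(verify_pouring(55, 20))   # both sensors agree -> action confirmed
        print(verify_pouring(10, 20))   # cup barely tilted -> action not confirmed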

    Context-aware home monitoring system for Parkinson's disease patients: ambient and wearable sensing for freezing of gait detection

    Get PDF
    Thesis under joint supervision (cotutelle): Universitat Politècnica de Catalunya and Technische Universiteit Eindhoven. This PhD thesis has been developed in the framework of, and according to, the rules of the Erasmus Mundus Joint Doctorate on Interactive and Cognitive Environments EMJD ICE [FPA no. 2010-0012]. Freezing of gait (FOG) is a gait disturbance affecting people with Parkinson’s disease (PD). It is characterized by brief episodes of inability to step, or by extremely short steps that typically occur on gait initiation or on turning while walking. The consequences of FOG are impaired mobility and a higher propensity to fall, which have a direct effect on the individual’s quality of life. No completely effective pharmacological treatment exists for the FOG phenomenon. However, external stimuli, such as lines on the floor or rhythmic sounds, can focus the attention of a person experiencing a FOG episode and help them initiate gait. The optimal effectiveness of this approach, known as cueing, is achieved through timely activation of a cueing device upon accurate detection of a FOG episode. Robust and accurate FOG detection is therefore the main problem to be solved when developing a suitable assistive technology for this user group. This thesis proposes the use of a person’s activity and spatial context as the means to improve the detection of FOG episodes during monitoring at home. The thesis describes the design, algorithm implementation, and evaluation of a distributed home system for FOG detection based on multiple cameras and a single inertial gait sensor worn at the patient’s waist. Through detailed observation of home data collected from 17 PD patients, we realized that a novel solution for FOG detection could be achieved by using contextual information about the patient’s position, orientation, basic posture, and movement on a semantically annotated two-dimensional (2D) map of the indoor environment. We envisioned the future context-aware system as a network of Microsoft Kinect cameras placed in the patient’s home that interacts with a wearable inertial sensor on the patient (a smartphone). Since the system’s hardware platform consists of commercial off-the-shelf hardware, the majority of the development effort involved producing the software modules (for position tracking, orientation tracking, and activity recognition) that run on top of the middleware operating system in the home gateway server. The main component that had to be developed is the Kinect application for tracking the position and height of multiple people from 3D point cloud data. Besides position tracking, this software module also provides mapping and semantic annotation of FOG-specific zones in the scene in front of the Kinect. One instance of the vision tracking application is supposed to run for every Kinect sensor in the system, yielding a potentially high number of simultaneous tracks. At any moment, the system has to track one specific person, the patient. To enable tracking of the patient between the non-overlapping cameras of the distributed system, a new re-identification approach based on appearance model learning with a one-class Support Vector Machine (SVM) was developed. The re-identification method was evaluated on a 16-person dataset in a laboratory environment. Since the patient’s orientation in the indoor space was recognized as an important part of the context, the system needed the ability to estimate the person’s orientation, expressed in the frame of the 2D scene on which the patient is tracked by the camera. We devised a method to fuse position tracking information from the vision system with inertial data from the smartphone in order to obtain the patient’s 2D pose estimate on the scene map. Additionally, a method for estimating the position of the smartphone on the patient’s waist was proposed. Position and orientation estimation accuracy were evaluated on a 12-person dataset. Finally, with positional, orientation, and height information available, a new seven-class activity classification was realized using a hierarchical classifier that combines a height-based posture classifier with translational and rotational SVM movement classifiers. Each of the SVM movement classifiers and the joint hierarchical classifier were evaluated in a laboratory experiment with 8 healthy persons. The final context-based FOG detection algorithm uses activity information and spatial context information to confirm or disprove FOG detected by the current state-of-the-art FOG detection algorithm (which uses only wearable sensor data). A dataset with home data from 3 PD patients was produced using two Kinect cameras and a smartphone recording in synchrony. The new context-based FOG detection algorithm and the wearable-only FOG detection algorithm were both evaluated on this home dataset and their results were compared. The context-based algorithm strongly reduces false positive detections, which is expressed through higher specificity. In some cases, the context-based algorithm also eliminates true positive detections, reducing sensitivity to a lesser extent. The final comparison of the two algorithms on the basis of their sensitivity and specificity shows the improvement in overall FOG detection achieved with the new context-aware home system.
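
    The following sketch only illustrates the confirm/disprove role of context described above; the zones, activity labels, and the simple veto rule are illustrative assumptions rather than the thesis's algorithm, which fuses camera-based tracking with a wearable detector.

        # Zones and activity labels are illustrative; the rule simply vetoes wearable detections
        # that are inconsistent with the person's posture/activity or location on the home map.
        FOG_PRONE_ZONES = {"doorway", "narrow_corridor", "turn_area"}
        UPRIGHT_ACTIVITIES = {"standing", "walking", "turning"}

        def contextual_fog(wearable_says_fog, activity, zone):
            # Confirm a wearable FOG detection only when the spatial/activity context allows it.
            if not wearable_says_fog:
                return False
            if activity not in UPRIGHT_ACTIVITIES:      # e.g. sitting: cannot be freezing of gait
                return False
            return zone in FOG_PRONE_ZONES

        print(contextual_fog(True, "turning", "doorway"))   # confirmed
        print(contextual_fog(True, "sitting", "sofa"))      # rejected as a likely false positive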

    Aktivitätserkennung in Privathaushalten auf Basis eines unüberwachten Lernalgorithmus (Activity recognition in private households based on an unsupervised learning algorithm)

    Get PDF
    This book provides an overview and critical summary of the current state of research on Human Activity Recognition (HAR). This revealed a research gap concerning unsupervised learning algorithms for HAR systems. For supervised learning algorithms, an annotated dataset has to be laboriously created over several weeks for each application before the HAR system can be deployed; this step is not needed with the new HAR system. Furthermore, the new system is also able to recognise activities of daily living (ADLs) that run in parallel, which many state-of-the-art HAR systems cannot do, for example because they work sequentially. Both problems are successfully solved by the new HAR system. The HAR system presented in this book is a novel combination of a stochastic model and a cognitive approach, and it is applied in three phases. The first phase is the so-called initial phase, in which a priori knowledge is collected. In contrast to systems in current research, the new HAR system requires very little a priori knowledge: only the type and number of sensors and ADLs, which are brought into a plausible initial relationship with each other. This relationship is a preliminary, uniformly distributed initial assignment of the sensor-ADL relation, which is adapted to the individual person and use case during the learning phase. A novel Markov model (MM) and a newly developed impulse model (IM) are learned. The MM used differs from current MMs in its state definitions, which represent combinations of sensor events, thereby eliminating the segmentation problem. This also makes it possible to extract important structures from the MM that represent human behaviour. These structures are assessed by means of novel model comparisons. The result of this assessment is then used, in combination with the new cognitive IM, in a specially developed iterative approach to individualise the initial sensor-ADL relation. This new sensor-ADL relation is the basis for the third and final phase, the application phase. In the IM, the sensor-ADL relation is applied in combination with newly developed rules to compute a final probability distribution over the recognised ADLs, indicating which ADL is currently most likely being performed and which ADLs are being performed in parallel with others. The new HAR system was tested on three datasets of differing difficulty and against a benchmark comprising four different state-of-the-art stochastic models. The new HAR system achieves a higher recognition rate than the benchmark and was on average 3.2% more accurate. It achieved 95-97% recognition of the ADLs, and the confusion matrices showed an average improvement of 42% in the sensitivity, effectiveness, and F-measure metrics. Another major difference from the benchmark is that the new HAR system learns in an unsupervised manner. As a result, the data acquisition effort is very low compared to the benchmark, making the new HAR system more attractive to the market in terms of its applicability.
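
    As a small, hypothetical illustration of one idea in this abstract, namely a Markov model whose states are combinations of sensor events learned from an unlabeled stream, the sketch below counts transitions between sets of currently active sensors; the event stream and state encoding are invented, not the book's model.

        from collections import defaultdict

        def learn_transitions(event_stream):
            # Treat the set of currently active sensors as the model state and count transitions
            # between successive states; no activity labels are needed at any point.
            active = set()
            prev = None
            counts = defaultdict(lambda: defaultdict(int))
            for sensor, is_on in event_stream:
                if is_on:
                    active.add(sensor)
                else:
                    active.discard(sensor)
                state = frozenset(active)
                if prev is not None:
                    counts[prev][state] += 1
                prev = state
            return counts

        # Invented binary sensor stream: (sensor, 1 = activated, 0 = released).
        stream = [("bed", 1), ("bed", 0), ("bathroom", 1), ("bathroom", 0), ("kitchen", 1), ("stove", 1)]
        for src, dests in learn_transitions(stream).items():
            print(sorted(src), "->", [(sorted(dst), n) for dst, n in dests.items()])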