8 research outputs found

    Group Activity Recognition Using Wearable Sensing Devices

    Understanding the behavior of groups in real time can help prevent tragedy in crowd emergencies. Wearable devices allow human behavior to be sensed, but the infrastructure required to communicate the data is often the first casualty in emergency situations. Peer-to-peer (P2P) methods for recognizing group behavior are therefore necessary, yet the behavior of the group cannot be observed at any single location. The contribution of this work is the set of methods required to recognize group behavior using only wearable devices.
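    The abstract does not specify the P2P methods themselves; as a loose illustration of the general idea only, the sketch below shows a gossip-style exchange in which each wearable repeatedly averages a locally sensed activity estimate with a neighbour's, so a group-level statistic emerges without any central infrastructure. All names, values, and the topology are hypothetical.

```python
# Illustrative gossip-style P2P averaging; NOT the paper's method.
import random

def gossip_round(estimates, neighbours):
    """One P2P round: each device averages its value with one random peer."""
    est = dict(estimates)
    for node, peers in neighbours.items():
        peer = random.choice(peers)
        avg = (est[node] + est[peer]) / 2  # pairwise averaging preserves the sum
        est[node] = est[peer] = avg
    return est

# Fraction of each wearer's recent window locally classified as "running".
estimates = {"a": 0.9, "b": 0.8, "c": 0.1, "d": 0.2}
neighbours = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
for _ in range(20):
    estimates = gossip_round(estimates, neighbours)
print(estimates)  # values converge toward the group-wide average (0.5)
```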

    Intelligent ultrasound hand gesture recognition system

    With the booming development of technology, hand gesture recognition has become a hotspot in Human-Computer Interaction (HCI) systems. Ultrasound hand gesture recognition is an innovative method that has attracted ample interest due to its strong real-time performance, low cost, large field of view, and illumination independence. Well-investigated HCI applications include external digital pens, game controllers on smart mobile devices, and web browser control on laptops. This thesis probes gesture recognition systems on multiple platforms to study how system performance behaves under various gesture features. The contributions of this thesis can be summarized from four perspectives: smartphone acoustic field and hand model simulation, real-time gesture recognition on smart devices with a speed categorization algorithm, fast-reaction gesture recognition based on temporal neural networks, and an angle-of-arrival-based gesture recognition system.

    Firstly, a novel pressure-acoustic simulation model is developed to examine its potential for acoustic gesture recognition. The simulation model establishes a new system for acoustic verification, which mimics real-world sound elements to replicate the sound pressure environment as authentically as possible. This system is fine-tuned through sensitivity tests within the simulation and validated with real-world measurements. Following this, the study constructs novel simulations for acoustic applications, informed by the verified acoustic field distribution, to assess their effectiveness on specific devices. Furthermore, a simulation is designed to examine the effects of sound device placement and hand-reflected sound waves. Moreover, a feasibility test of phase control modification is conducted, revealing the practical applications and boundaries of this model.

    Mobility and system accuracy are two significant factors that determine gesture recognition performance. Since smartphones carry high-quality acoustic hardware, novel algorithms were developed to distinguish gestures using the built-in speakers and microphones, yielding a portable gesture recognition system with high accuracy. The proposed system adopts the Short-Time Fourier Transform (STFT) and machine learning to capture hand movement, and classifies gestures with a pretrained neural network. To differentiate gesture speeds, a dedicated neural network was designed and incorporated into the classification algorithm. The final system achieves 96% accuracy across nine gestures and three speed levels. The proposed algorithms were evaluated comparatively against state-of-the-art systems and outperformed them in accuracy.

    Furthermore, a fast-reaction gesture recognition system based on temporal neural networks was designed. Traditional ultrasound gesture recognition adopts convolutional neural networks, which suffer from slow response times and discontinuous operation; in addition, overlapping intervals in network processing cause cross-frame failures that greatly reduce system performance. To mitigate these problems, a novel fast-reaction system was designed that slices signals into short time intervals. It adopts a novel convolutional recurrent neural network (CRNN) that extracts gesture features within a short time and combines those features over time.
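    As a rough illustration of the CRNN idea described above (not the thesis's actual architecture), the following sketch feeds short spectrogram slices to a small convolutional front end and fuses the per-slice features over time with a GRU; the layer sizes and slice layout are assumptions.

```python
# Illustrative CRNN for sliced ultrasound spectrograms; sizes are assumed.
import torch
import torch.nn as nn

class GestureCRNN(nn.Module):
    def __init__(self, n_freq_bins=64, n_classes=6):
        super().__init__()
        # Per-slice feature extractor over a (freq, time) spectrogram patch.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Recurrent layer combines slice features over time.
        self.gru = nn.GRU(input_size=32 * 4 * 4, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, slices, freq, time)
        b, s, f, t = x.shape
        feats = self.cnn(x.reshape(b * s, 1, f, t)).reshape(b, s, -1)
        out, _ = self.gru(feats)          # per-slice hidden states
        return self.head(out[:, -1])      # classify after the last slice

model = GestureCRNN()
dummy = torch.randn(2, 5, 64, 20)         # 5 short slices per example
print(model(dummy).shape)                  # torch.Size([2, 6])
```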
    The results showed that the reaction time was significantly reduced from 1 s to 0.2 s, and accuracy improved to 100% for six gestures.

    Lastly, an acoustic sensor array was built to investigate the angle information of performed gestures. The direction of a gesture is a significant feature for gesture classification, enabling the same gesture performed in different directions to represent different actions. Previous studies mainly focused on the types of gestures and the analysis approaches (e.g., the Doppler effect and channel impulse response), while the direction of gestures was not extensively studied. An acoustic gesture recognition system based on both speed information and gesture direction was developed; it achieved 94.9% accuracy across ten different gestures from two directions. The proposed system was evaluated comparatively across multiple neural network structures, and the results confirmed that incorporating the additional angle information improved the system's performance.

    In summary, the work presented in this thesis validates the feasibility of recognizing hand gestures using remote ultrasonic sensing across multiple platforms. The acoustic simulation explores the smartphone acoustic field distribution and response in the context of hand gesture recognition applications. The smartphone gesture recognition system demonstrates accurate recognition through ultrasound signals and analyzes classification speed. The fast-reaction system offers an optimized solution to the cross-frame issue using temporal neural networks, reducing the response latency to 0.2 s. The speed- and angle-based system provides an additional feature for gesture recognition. This work will accelerate the development of intelligent hand gesture recognition, enrich the available gesture features, and contribute to further research into various gestures and application scenarios.
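    The angle-of-arrival estimation mentioned above can be sketched from first principles, assuming a two-microphone array, a 20 kHz carrier, and idealized complex (analytic) signals; the spacing, frequency, and sample rate here are illustrative, not taken from the thesis.

```python
# Illustrative angle-of-arrival from the inter-microphone phase difference.
import numpy as np

C = 343.0        # speed of sound in air, m/s
F = 20_000.0     # ultrasonic carrier frequency, Hz
D = 0.008        # microphone spacing, m (< wavelength/2 to avoid ambiguity)

def angle_of_arrival(sig_a, sig_b):
    """Estimate AoA (radians) from the carrier phase difference of two channels."""
    phase = np.angle(np.sum(sig_a * np.conj(sig_b)))   # phase of a relative to b
    return np.arcsin(np.clip(phase * C / (2 * np.pi * F * D), -1.0, 1.0))

# Synthetic test: a plane wave arriving from 30 degrees off broadside.
t = np.arange(0, 0.01, 1 / 192_000)
delay = D * np.sin(np.deg2rad(30)) / C                 # extra travel time to mic b
a = np.exp(2j * np.pi * F * t)
b = np.exp(2j * np.pi * F * (t - delay))
print(np.rad2deg(angle_of_arrival(a, b)))              # ~30.0
```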

    Inferring Complex Activities for Context-aware Systems within Smart Environments

    The rising ageing population worldwide and the prevalence of age-related conditions such as physical frailty, mental impairment and chronic disease have significantly impacted quality of life and caused a shortage of health and care services. Overstretched healthcare providers are driving a paradigm shift in public healthcare provisioning. Ambient Assisted Living (AAL) using Smart Home (SH) technologies has therefore been rigorously investigated to help address these problems. Human Activity Recognition (HAR) is a critical component of AAL systems, enabling applications such as just-in-time assistance, behaviour analysis, anomaly detection and emergency notification. This thesis investigates the challenges of accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, it explores five complementary research challenges in HAR.

    The first study contributes a semantic-enabled data segmentation approach with user preferences. The second study takes the segmented sensor data and recognises human ADLs at multiple granularities: the coarse- and fine-grained action levels. At the coarse-grained level, semantic relationships between sensors, objects and ADLs are deduced, whereas at the fine-grained level, object usage above a satisfactory threshold, with evidence fused from multimodal sensor data, is leveraged to verify the intended actions (as sketched below). Because multimodal sensor readings admit imprecise or vague interpretations and pose data fusion challenges, fuzzy set theory and the fuzzy Web Ontology Language (fuzzy-OWL) are leveraged. The third study focuses on incorporating the uncertainties that arise in HAR from factors such as technological failure, object malfunction and human error: existing uncertainty theories and approaches are analysed and, based on the findings, a probabilistic-ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three to distinguish activities conducted by more than one inhabitant in a shared smart environment, using discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable system architecture for a real-time smart environment tailored to AAL, proposing a microservices architecture with off-the-shelf and bespoke sensor-based sensing methods.

    The initial semantic-enabled data segmentation study achieved 100% and 97.8% accuracy in segmenting sensor events under single- and mixed-activity scenarios, respectively. However, the average time taken to segment each sensor event suffered, at 3971 ms and 62183 ms for the single- and mixed-activity scenarios, respectively. The second study, detecting fine-grained user actions, was evaluated with 30 and 153 fuzzy rules to detect two fine-grained movements on a dataset pre-collected from the real-time smart environment. Its results indicate good average accuracies of 83.33% and 100%, but with high average durations of 24648 ms and 105318 ms, posing further challenges for the scalability of fusion rule creation. The third study was evaluated by combining the PR-OWL ontology with ADL ontologies and the Semantic Sensor Network (SSN) ontology to define four types of uncertainty present in a kitchen-based activity.
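    As a loose illustration of fuzzy-rule-based action verification (not the thesis's actual fuzzy-OWL rules), the sketch below combines trapezoidal memberships over two hypothetical sensor readings with a fuzzy AND and a satisfaction threshold.

```python
# Illustrative fuzzy verification of a fine-grained action; all rules,
# sensors, and thresholds here are hypothetical.
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises over a..b, flat over b..c, falls over c..d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def cup_pickup_degree(pressure_drop, accel_peak):
    released = trapezoid(pressure_drop, 0.2, 0.5, 1.0, 1.2)  # pad under the cup
    lifted = trapezoid(accel_peak, 0.3, 0.8, 2.0, 2.5)       # wrist accelerometer, g
    return min(released, lifted)  # fuzzy AND: both evidence sources must agree

if cup_pickup_degree(0.7, 1.1) >= 0.8:  # satisfactory threshold
    print("action verified: cup picked up")
```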
    The fourth study illustrated, through a case study, the extension of single-user AR to multi-user AR by combining discriminative sensors (RFID tags and fingerprint sensors) to identify users and associate their actions with the aid of time-series analysis, as sketched below. The last study responds to the computational and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for the AAL system. Future research towards adopting fog/edge computing paradigms from cloud computing is discussed, targeting higher availability, reduced network traffic and energy consumption, lower cost, and a decentralised system.

    As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the level of fine-grained user actions. The framework integrates three complementary ontologies to conceptualise factual, fuzzy and uncertain knowledge about the environment and ADLs, together with time-series analysis and a discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and supporting utility tools such as a simulator and a synthetic ADL data generator were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and is currently served by an Android mobile application and web-browser-based client interfaces for retrieving information such as live sensor events and HAR results.
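    A minimal sketch of the multi-user association idea from the fourth study: ambient sensor events are attributed to inhabitants by pairing each event with the temporally closest identifying reading (e.g., an RFID tag) in a time-series log. The data layout, gap threshold, and names are hypothetical.

```python
# Illustrative attribution of sensor events to users via nearest-in-time
# discriminative readings; data and threshold are made up.
from bisect import bisect_left

# (timestamp_s, user_id) readings from RFID/fingerprint sensors, time-sorted.
id_log = [(10.0, "alice"), (12.5, "bob"), (30.2, "alice")]

def attribute(event_ts, log, max_gap=5.0):
    """Return the user whose identifying reading is nearest in time, if close enough."""
    times = [t for t, _ in log]
    i = bisect_left(times, event_ts)
    candidates = [log[j] for j in (i - 1, i) if 0 <= j < len(log)]
    ts, user = min(candidates, key=lambda r: abs(r[0] - event_ts))
    return user if abs(ts - event_ts) <= max_gap else None

print(attribute(11.0, id_log))  # alice (kettle event near her RFID read)
print(attribute(20.0, id_log))  # None (no identifying reading close enough)
```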

    Proximity and Activity Recognition with Mobile Devices

    With the now widespread use of mobile devices such as smartphones and tablets, as well as body-worn technology (wearables), the vision of a world ubiquitously permeated by computing has largely become reality. Based on these pervasively available technologies, context-aware applications, i.e., applications that adapt the services they provide to a user's current situation, are becoming more and more feasible. A primary element of a user's context is the proximity of that user to other users or objects. Proximity should not be understood only spatially; its meaning can be broadened to comprise any context element. In particular, the similarity of different users' activities is important information for inferring their contextual closeness. With regard to spatial proximity, a range of standard technologies exists that in principle allow proximity detection, but they all exhibit severe weaknesses with regard to the security and privacy of the participants in the proximity test.

    In this work, three new approaches for proximity detection are presented; within them, the contextual components "location" and "activity" are weighted differently. The first approach uses Wi-Fi signals from the surroundings to construct secure, i.e., unforgeable, location tags with which a privacy-preserving proximity test can be performed. While the first method focuses exclusively on spatial proximity, the second approach also implicitly considers the users' activities: it is based on analyzing and comparing visual information obtained from body-mounted cameras. The basic idea of the third approach is that contextual proximity can also be established from activities alone; by comparing sequences of activities, the proximity between participating users can be inferred. Realizing this idea requires very fine-grained activity recognition, the feasibility of which is also demonstrated in this work. In summary, this work shows several ways to detect different kinds of contextual proximity in a secure and privacy-preserving manner.
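    As a loose illustration of the third approach (omitting the privacy-preserving protocol the thesis actually requires), the sketch below scores contextual proximity as the fraction of time-aligned activity labels two users share; the labels and threshold are made up.

```python
# Illustrative activity-sequence proximity; privacy protections omitted.
def contextually_close(seq_a, seq_b, threshold=0.6):
    """High overlap of per-interval activity labels suggests a shared
    context, even without any location information."""
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / max(len(seq_a), len(seq_b)) >= threshold

alice = ["walk", "walk", "stand", "sit", "type", "type"]
bob   = ["walk", "walk", "stand", "sit", "type", "drink"]
carol = ["cycle", "cycle", "cycle", "walk", "stand", "stand"]

print(contextually_close(alice, bob))    # True  -> plausibly together
print(contextually_close(alice, carol))  # False -> different contexts
```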

    Evolutionary Computation Techniques Applied to Classification from Physical Activity Monitors

    At present, several factors are making the field of human activity recognition increasingly important, such as the proliferation of wearable devices that allow physical activity to be monitored, and the global population's tendency towards an ever more sedentary lifestyle. This sedentary lifestyle translates into insufficient physical activity and is considered one of the greatest health risks, ranking among the leading risk factors for mortality worldwide according to the WHO [11]. Within the field of health and wellness, thanks to advances in sensor miniaturization, which even allow sensors to be incorporated into clothing, automatic activity recognition presents itself as a solution to diverse problems, such as disease prevention, active ageing and remote patient monitoring, as well as a wide range of applications in sports. Wearable sensors are thus extremely useful monitoring devices in other research areas as well, bringing human activity recognition into ubiquitous computing, entertainment, the logging of daily personal activities, and the tracking of sporting and professional performance.

    With the main motivation of exploring new research directions in activity recognition through an approach different from those proposed so far, this work proposes an automatic activity recognition system that integrates an evolutionary algorithm for the activity classification task and a particle swarm for a clustering step that improves the machine learning. The system was evaluated with leave-one-subject-out (LOSO) cross-validation, to assess its performance in subject-independent recognition, obtaining an accuracy of 52.37%. It was also evaluated with standard 10-fold cross-validation within each subject, to analyze its capacity in subject-dependent classification, reaching an accuracy of 98.07%, a significantly better result that suggests the system lends itself to personalized activity recognition. In addition, the system was evaluated with standard 10-fold cross-validation over the set of all subjects, obtaining an accuracy of 70.2267%, which reinforces the conclusion above that the system performs better in personalized activity recognition settings.
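    Leave-one-subject-out evaluation as described above can be sketched with an off-the-shelf classifier standing in for the paper's evolutionary algorithm; the feature matrix, labels, and subject grouping below are synthetic.

```python
# Illustrative LOSO cross-validation; a RandomForest stands in for the
# paper's evolutionary classifier, and the data are synthetic.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))            # accelerometer features per window
y = rng.integers(0, 4, size=300)          # activity labels
subjects = np.repeat(np.arange(10), 30)   # 10 subjects, 30 windows each

scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = RandomForestClassifier(random_state=0).fit(X[train], y[train])
    scores.append(accuracy_score(y[test], clf.predict(X[test])))

print(f"LOSO accuracy: {np.mean(scores):.2%}")  # held-out-subject performance
```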

    Self-Forecasting Energy Load Stakeholders for Smart Grids

    The unpredictability of energy loads is responsible for a significant portion of the efficiency loss in power grids. To reduce load uncertainty, emerging Smart Grid business models call for the active participation of traditionally passive stakeholders. The contribution of this work enables self-forecasting energy load stakeholders whose deterministic load behaviour makes them reliable resources that can greatly benefit both themselves and other Smart Grid stakeholders.