
    Central monitoring system for ambient assisted living

    Smart homes for aged care enable the elderly to stay in their own homes longer. Information on the people living in such homes is gathered by various types of ambient and wearable sensors, then processed to determine activities of daily living (ADL) and provide vital information to carers. Many examples of smart homes for aged care can be found in the literature; however, little or no evidence addresses the interoperability of the various sensors and devices and their associated functions. One key element of interoperability is the central monitoring system in a smart home. This thesis analyses and presents the key functions and requirements of such a central monitoring system. Its outcomes may benefit developers of smart homes for aged care.

    From data acquisition to data fusion: a comprehensive review and a roadmap for the identification of activities of daily living using mobile devices

    This paper surveys the state of the art in sensor fusion techniques applied to the sensors embedded in mobile devices, as a means of identifying the device user's daily activities. Sensor data fusion techniques consolidate the data collected from several sensors, increasing the reliability of the algorithms that identify the different activities. However, mobile devices have several constraints, e.g., low memory, short battery life and limited processing power, and some data fusion techniques are not suited to this scenario. The main purpose of this paper is to present an overview of the state of the art and to identify sensor data fusion techniques that can be applied to the sensors available in mobile devices with the aim of identifying activities of daily living (ADLs).
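    As a concrete illustration of a fusion technique that fits the constraints described above, the following is a minimal sketch of a complementary filter, a classic low-cost method for fusing noisy accelerometer tilt estimates with drifting integrated gyroscope rates into a single orientation track. The function names, sample format, and filter weight are illustrative, not taken from the paper.

```python
import math

def complementary_filter(accel_samples, gyro_rates, dt, alpha=0.98):
    """Fuse accelerometer tilt estimates with integrated gyroscope
    rates into one pitch-angle track (radians).

    accel_samples: list of (ax, az) accelerometer readings
    gyro_rates:    list of pitch angular rates (rad/s)
    dt:            sample period in seconds
    alpha:         weight on the gyro path (close to 1.0)
    """
    angle = 0.0
    track = []
    for (ax, az), rate in zip(accel_samples, gyro_rates):
        accel_angle = math.atan2(ax, az)  # absolute tilt from gravity
        # Gyro path tracks fast motion; accel path corrects slow drift.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        track.append(angle)
    return track
```

    The per-sample cost is a handful of multiplications, which is why this filter (rather than, say, a full Kalman filter) is a common choice on memory- and battery-constrained mobile hardware.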

    Activity and Health Status Monitoring System

    Physical activity monitoring is an important tool for supporting and encouraging vulnerable persons recovering from surgery or long-term illness, and for promoting a healthy lifestyle. The paper proposes a smart, low-power activity monitoring platform capable of acquiring data from four inertial sensor modules placed on the human body and temporarily storing it on a mobile phone for real-time display, or on a server for long-term data analysis.
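    To illustrate the kind of lightweight processing such a platform might run on the phone or server side, here is a minimal sketch that splits a triaxial accelerometer stream into fixed windows and flags each as active or resting by its mean deviation from the 1 g gravity baseline. All names, the window size, and the threshold are illustrative values, not taken from the paper.

```python
import math

def activity_windows(samples, window=50, threshold=0.3):
    """Label fixed-size windows of (ax, ay, az) readings (in g) as
    'active' or 'rest'.

    A resting sensor measures only gravity, so its acceleration
    magnitude stays near 1 g; movement makes the magnitude deviate.
    """
    labels = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        # Mean absolute deviation of |a| from the 1 g baseline.
        dev = sum(abs(math.sqrt(x * x + y * y + z * z) - 1.0)
                  for x, y, z in chunk) / window
        labels.append("active" if dev > threshold else "rest")
    return labels
```

    Emitting one label per window, rather than streaming raw samples, is one way such a platform could keep its radio and storage use low.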

    A study of deep neural networks for human activity recognition

    Human activity recognition and deep learning are two fields that have attracted attention in recent years: the former for its relevance in many application domains, such as ambient assisted living or health monitoring, and the latter for its recent, excellent performance in domains such as image and speech recognition. In this article, an extensive analysis of the deep learning architectures best suited to activity recognition is conducted to compare their performance in terms of accuracy, speed, and memory requirements. In particular, convolutional neural networks (CNN), long short-term memory networks (LSTM), bidirectional LSTM (biLSTM), gated recurrent unit networks (GRU), and deep belief networks (DBN) have been tested on a total of 10 publicly available datasets with different sensors, sets of activities, and sampling rates. All tests were designed under a multimodal approach to take advantage of synchronized raw sensor signals. Results show that CNNs are efficient at capturing local temporal dependencies of activity signals, as well as at identifying correlations among sensors. Their performance in activity classification is comparable with, and in most cases better than, that of recurrent models. Their faster response and lower memory footprint make them the architecture of choice for wearable and IoT devices.
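    The "local temporal dependencies" a CNN layer captures can be shown with a dependency-free sketch of one 1-D convolutional filter followed by a ReLU: each output value depends only on a short window of adjacent samples. This is an illustration of the general mechanism, not code from the article; note that, as in deep learning frameworks, the "convolution" is implemented as cross-correlation (no kernel flip).

```python
def conv1d(signal, kernel, stride=1):
    """Valid-mode 1-D cross-correlation: each output is the dot
    product of the kernel with one local window of the signal."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(0, len(signal) - k + 1, stride)
    ]

def relu(xs):
    """Elementwise rectified linear unit."""
    return [x if x > 0.0 else 0.0 for x in xs]

# A [-1, +1] kernel responds only where the signal rises, e.g. at the
# onset of a movement in an accelerometer trace.
features = relu(conv1d([0.0, 0.0, 0.0, 1.0, 1.0, 1.0], [-1.0, 1.0]))
```

    Because the same small kernel is reused at every position, the parameter count is independent of the input length, which is one reason for the low memory footprint the article reports for CNNs.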

    Sensing via signal analysis, analytics, and cyberbiometric patterns

    Includes bibliographical references. 2022 Fall. Internet-connected, or Internet of Things (IoT), sensor technologies have been increasingly incorporated into everyday technology and processes. Their functions are situationally dependent and have been used for vital recordings such as electrocardiograms, gait analysis and step counting, fall detection, and environmental analysis. For instance, environmental sensors, which exist through various technologies, are used to monitor numerous domains, including but not limited to pollution, water quality, and the presence of biota. Past research into IoT sensors has varied with the technology. For instance, previous environmental gas sensor IoT research has focused on (i) the development of these sensors for increased sensitivity and longer lifetimes, (ii) integration of these sensors into sensor arrays to combat cross-sensitivity and background interferences, and (iii) sensor network development, including communication between widely dispersed sensors in a large-scale environment. IoT inertial measurement units (IMUs), such as accelerometers and gyroscopes, have been researched for gait analysis, movement detection, and gesture recognition, often in relation to human-computer interfaces (HCI). Methods of feature-based pattern recognition on IoT device data for machine learning (ML) and artificial intelligence (AI) are frequently investigated as well, including primitive classification methods and deep learning techniques. Such research gives insight into each of these topics individually, e.g., using a specific sensor technology to detect carbon monoxide in an indoor environment, or using accelerometer readings for gesture recognition. Less research has been performed on the systems aspects of the IoT sensors themselves.
However, an important part of attaining overall situational awareness is authenticating the surroundings, which in the case of IoT means the individual sensors, humans interacting with the sensors, and other elements of the surroundings. There is a clear opportunity for the systematic evaluation of the identity and performance of an IoT sensor/sensor array within a system that is to be utilized for "full situational awareness". This awareness may include (i) non-invasive diagnostics (i.e., what is occurring inside the body), (ii) exposure analysis (i.e., what has gone into the body through both respiratory and eating/drinking pathways), and (iii) potential risk of exposure (i.e., what the body is exposed to environmentally). Simultaneously, the system has the capability to harbor security measures through the same situational assessment in the form of multiple levels of biometrics. Through the interconnective abilities of the IoT sensors, it is possible to integrate these capabilities into one portable, hand-held system. The system will exist within a "magic wand", which will be used to collect the various data needed to assess the environment of the user, both inside and outside of their bodies. The device can also be used to authenticate the user, as well as the system components, to discover potential deception within the system. This research introduces levels of biometrics for various scenarios through the investigation of challenge-based biometrics; that is, biometrics based upon how the sensor, user, or subject of study responds to a challenge. These will be applied to multiple facets surrounding "situational awareness" for living beings, non-human beings, and non-living items or objects (which we have termed "abiometrics"). Gesture recognition for intent of sensing was first investigated as a means of deliberate activation of sensors/sensor arrays for situational awareness while providing a level of user authentication through biometrics. 
Equine gait analysis was examined next, and the level of injury in the lame limbs of horses was quantitatively measured and classified using data from IoT sensors. Finally, a method of evaluating the identity and health of a sensor/sensor array was examined through different challenges to their environments.
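    One common lightweight approach to gesture recognition of this kind, offered here only as an illustrative sketch (the abstract does not specify the thesis's algorithm), is template matching with dynamic time warping (DTW), which tolerates speed variation between performances of the same activation gesture. All names and the threshold are hypothetical.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D motion traces.

    Unlike a pointwise comparison, DTW may stretch or compress either
    trace in time, so the same gesture performed faster or slower
    still scores a small distance.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # skip a sample of b
                                 d[i][j - 1],      # skip a sample of a
                                 d[i - 1][j - 1])  # advance both
    return d[n][m]

def matches_template(trace, template, threshold=1.0):
    """True if the trace is close enough to the enrolled gesture."""
    return dtw_distance(trace, template) <= threshold
```

    In a challenge-based setting, the enrolled template plays the role of the expected response: a sensor is activated (or a user authenticated) only when the observed trace warps onto the template within the threshold.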