
    Hardware for recognition of human activities: a review of smart home and AAL related technologies

    Activity recognition (AR) from an applied perspective of ambient assisted living (AAL) and smart homes (SH) has become a subject of great interest. Promising a better quality of life, AR applied in contexts such as health, security, and energy consumption can lead to solutions capable of reaching even the people most in need. This study was motivated by the observation that the development, deployment, and transfer of AR solutions to society and industry rest not only on software development but also on the hardware devices used. The current paper identifies contributions to hardware used for activity recognition through a scientific literature review in the Web of Science (WoS) database. This work found four dominant groups of technologies used for AR in SH and AAL (smartphones, wearables, video, and electronic components) and two emerging technologies (Wi-Fi and assistive robots). Many of these technologies overlap across research works. Through bibliometric network analysis, the present review identified gaps and new potential combinations of technologies for advances in this emerging worldwide field. The review also relates the use of these six technologies to health conditions, health care, emotion recognition, occupancy, mobility, posture recognition, localization, fall detection, and generic activity recognition applications. The above can serve as a road map that allows readers to execute approachable projects, deploy applications in different socioeconomic contexts, and establish networks with the community involved in this topic. This analysis shows that the research field accepts that specific goals cannot be achieved with a single hardware technology but can be met with joint solutions; this paper shows how such technologies work together in this regard.

    Device-free indoor localisation with non-wireless sensing techniques : a thesis by publications presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Electronics and Computer Engineering, Massey University, Albany, New Zealand

    Global Navigation Satellite Systems provide accurate and reliable outdoor positioning to support a large number of applications across many sectors. Unfortunately, such systems do not operate reliably inside buildings because of the signal degradation caused by the absence of a clear line of sight to the satellites. The past two decades have therefore seen intensive research into the development of Indoor Positioning Systems (IPS). While considerable progress has been made in the indoor localisation discipline, there is still no widely adopted solution. The proliferation of Internet of Things (IoT) devices within the modern built environment provides an opportunity to localise human subjects using such ubiquitous networked devices. This thesis presents the development, implementation, and evaluation of several passive indoor positioning systems using ambient Visible Light Positioning (VLP), capacitive flooring, and thermopile sensors (low-resolution thermal cameras). These systems position the human subject in a device-free manner (i.e., the subject is not required to be instrumented). The developed systems improve upon state-of-the-art solutions by offering superior position accuracy while also using more robust and generalised test setups. The developed passive VLP system is one of the first reported solutions that uses ambient light to position a moving human subject. The capacitive-floor-based system improves upon the accuracy of existing flooring solutions and demonstrates the potential for automated fall detection. The system also requires very little calibration, i.e., variations of the environment or subject have very little impact upon it. The thermopile positioning system is likewise shown to be robust to changes in the environment and subjects. Improvements are made over the current literature by testing across multiple environments and subjects while using a robust ground-truth system. Finally, advanced machine learning methods were implemented and benchmarked against a thermopile dataset, which has been made available for other researchers to use.
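
The thermopile idea above can be illustrated with a toy example. The sketch below is not the thesis's method: the 8x8 frame size, cell-to-metre mapping, ambient temperature, and threshold are all illustrative assumptions; it simply shows how the centroid of warm pixels in a low-resolution thermal frame yields a device-free position estimate.

```python
import numpy as np

def estimate_position(frame, ambient_c=21.0, threshold_c=2.0, cell_m=0.375):
    """Estimate a subject's (x, y) position from a low-resolution
    thermopile frame by taking the heat-weighted centroid of pixels
    warmer than ambient. All parameters are illustrative assumptions.

    frame: 2D array of temperatures in deg C (e.g. an 8x8 grid).
    cell_m: metres of floor spanned by one pixel (assumed geometry).
    Returns (x, y) in metres, or None if no warm pixels are found.
    """
    frame = np.asarray(frame, dtype=float)
    warm = (frame - ambient_c) > threshold_c        # mask of "human" pixels
    if not warm.any():
        return None
    rows, cols = np.nonzero(warm)
    weights = frame[rows, cols] - ambient_c         # weight by excess heat
    x = float(np.average(cols, weights=weights)) * cell_m
    y = float(np.average(rows, weights=weights)) * cell_m
    return (x, y)

# Synthetic 8x8 frame with one warm blob at rows 2-3, columns 5-6
frame = np.full((8, 8), 21.0)
frame[2:4, 5:7] = 26.0
pos = estimate_position(frame)   # → (2.0625, 0.9375)
```

A real system would additionally track the centroid over time and handle multiple blobs, but the centroid step is the core of blob-based thermopile localisation.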

    Developing a person guidance module for hospital robots

    This dissertation describes the design and implementation of the Person Guidance Module (PGM) that enables the IWARD (Intelligent Robot Swarm for Attendance, Recognition, Cleaning and Delivery) base robot to offer a route guidance service to patients and visitors inside the hospital arena. A common problem in large hospital buildings today is that visitors are unable to find their way around. Although a variety of guide robots currently exist on the market and offer a wide range of guidance and related activities, they do not fit the modular concept of the IWARD project. The PGM features a robust and foolproof non-hierarchical sensor fusion approach combining active RFID, stereo vision, and Cricket mote sensors for guiding a patient to the X-ray room, or a visitor to a patient's ward, in every possible scenario in a complex, dynamic, and crowded hospital environment. Moreover, the speed of the robot can be adjusted automatically according to the pace of the follower for physical comfort. Furthermore, the module performs these tasks in any unstructured environment solely from the robot's onboard perceptual resources, in order to limit hardware installation costs and the required indoor infrastructure. A similarly comprehensive solution on a single platform has remained elusive in the existing literature. The finished module can be connected to any IWARD base robot using quick-change mechanical connections and standard electrical connections. The PGM module box is equipped with a Gumstix embedded computer for all module computing, which is powered up automatically once the module box is inserted into the robot. In line with the general software architecture of the IWARD project, all software modules are developed as Orca2 components and cross-compiled for the Gumstix's XScale processor. To support standardized communication between different software components, the Internet Communications Engine (Ice) is used as middleware. Additionally, plug-and-play capabilities have been developed and incorporated so that the swarm system is aware at all times of which robot is equipped with the PGM. Finally, in several field trials in hospital environments, the person guidance module has shown its suitability for a challenging real-world application, as well as the necessary user acceptance.
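
As a rough illustration of non-hierarchical fusion of several position sources, the sketch below combines independent estimates by inverse-variance weighting, treating all sensors as peers. The sensor names, variances, and coordinates are hypothetical; the IWARD module's actual fusion algorithm is not described here in enough detail to reproduce.

```python
import numpy as np

def fuse_positions(estimates):
    """Fuse independent position estimates by inverse-variance weighting.

    estimates: list of ((x, y), variance) pairs, one per sensor.
    No sensor is privileged (non-hierarchical); more certain sensors
    (smaller variance) simply receive proportionally more weight.
    Returns the fused (x, y) as a tuple.
    """
    pts = np.array([p for p, _ in estimates], dtype=float)
    w = np.array([1.0 / v for _, v in estimates])
    fused = (pts * w[:, None]).sum(axis=0) / w.sum()
    return (float(fused[0]), float(fused[1]))

# Hypothetical readings: coarse RFID, fine stereo vision, a Cricket mote
fused = fuse_positions([((2.0, 3.5), 1.00),
                        ((2.4, 3.1), 0.04),
                        ((2.3, 3.2), 0.09)])
```

The fused estimate lands close to the low-variance stereo-vision reading, which is the intended behaviour of this weighting scheme.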

    A robust system for counting people using an infrared sensor and a camera

    In this paper, a multi-modal solution to the people-counting problem in a given area is described. The multi-modal system consists of a differential pyro-electric infrared (PIR) sensor and a camera. Faces in the surveillance area are detected by the camera, with the aim of counting people, using cascaded AdaBoost classifiers. Because of the imprecise results produced by the camera-only system, an additional differential PIR sensor is integrated with the camera. Two types of human motion, (i) entry to and exit from the surveillance area and (ii) ordinary activities within that area, are distinguished by the PIR sensor using a Markovian decision algorithm. The wavelet transform of the continuous-time real-valued signal received from the PIR sensor circuit is used for feature extraction. The wavelet parameters are then fed to a set of Markov models representing the two motion classes. A test signal is assigned to the class of the model yielding the higher probability. The people-counting results produced by the camera are then corrected using the additional information obtained from the PIR sensor signal analysis. With the proof of concept built, it is shown that the multi-modal system reduces the false alarms of the camera-only system and determines the number of people watching a TV set in a more robust manner. © 2015 Elsevier B.V. All rights reserved.
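
The wavelet-plus-Markov classification step can be sketched as follows. This is a simplified stand-in for the paper's method, not a reproduction of it: it uses one-level Haar detail coefficients as features, a three-symbol quantiser, and two hand-set first-order Markov chains (the transition probabilities are illustrative, not trained), assigning the class whose chain gives the higher log-likelihood.

```python
import numpy as np

def haar_detail(signal):
    """One-level Haar wavelet detail coefficients (scaled pairwise differences)."""
    s = np.asarray(signal, dtype=float)
    return (s[0::2][: len(s) // 2] - s[1::2]) / np.sqrt(2.0)

def quantize(coeffs, thresh=0.1):
    """Map coefficients to 3 symbols: 0 (negative), 1 (near zero), 2 (positive)."""
    return np.digitize(coeffs, [-thresh, thresh])

def markov_loglik(symbols, trans, init):
    """Log-likelihood of a symbol sequence under a first-order Markov chain."""
    ll = np.log(init[symbols[0]])
    for a, b in zip(symbols[:-1], symbols[1:]):
        ll += np.log(trans[a, b])
    return ll

# Hand-set (untrained, illustrative) chains: "entry/exit" motion swings
# between large positive and negative coefficients; "ordinary activity"
# tends to stay near zero.
init = np.array([1 / 3, 1 / 3, 1 / 3])
trans_entry = np.array([[0.2, 0.2, 0.6],
                        [0.3, 0.4, 0.3],
                        [0.6, 0.2, 0.2]])
trans_activity = np.array([[0.3, 0.6, 0.1],
                           [0.2, 0.6, 0.2],
                           [0.1, 0.6, 0.3]])

def classify(signal):
    """Pick the motion class whose Markov chain best explains the signal."""
    sym = quantize(haar_detail(signal))
    ll_e = markov_loglik(sym, trans_entry, init)
    ll_a = markov_loglik(sym, trans_activity, init)
    return "entry/exit" if ll_e > ll_a else "ordinary activity"
```

In the paper the models are continuous-observation Markov models trained on real PIR data; the discrete toy version above only illustrates the maximum-likelihood decision between two motion classes.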

    Neural Networks for Indoor Human Activity Reconstructions

    Low-cost, ubiquitous, tagless, and privacy-aware indoor monitoring is essential to many existing and future applications, such as assisted living for elderly persons. We explore how well different types of neural networks in basic configurations can extract location and movement information from noisy experimental data (with both high-pitch and slow-drift noise) obtained from capacitive sensors operating in loading mode at ranges much longer than the diagonal of their plates. Through design space exploration, we optimize and analyze the location and trajectory-tracking inference performance of multilayer perceptron (MLP), autoregressive feedforward, 1D convolutional (1D-CNN), and Long Short-Term Memory (LSTM) neural networks on experimental data collected using four capacitive sensors with 16 cm x 16 cm plates deployed on the boundaries of a 3 m x 3 m open space in our laboratory. We obtain the minimum error using a 1D-CNN [0.251 m distance Root Mean Square Error (RMSE) and 0.307 m Average Distance Error (ADE)] and the smoothest trajectory inference using an LSTM, albeit with higher localization errors (0.281 m RMSE and 0.326 m ADE). Overall, 1D convolutional and window-based neural networks offer the best inference accuracy and smoother trajectory reconstruction, while LSTMs appear to best capture the person's movement dynamics.
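
A minimal sketch of the 1D-CNN idea follows: a window of four-channel capacitive readings passes through a valid 1D convolution, a ReLU, global average pooling, and a linear head producing an (x, y) estimate. The layer sizes and weights below are illustrative assumptions (the demo uses random, untrained weights just to show the tensor shapes); the paper's actual architecture and training procedure are not reproduced here.

```python
import numpy as np

def conv1d(x, kernels, bias):
    """'Valid' 1D convolution. x: (in_ch, time); kernels: (out_ch, in_ch, width).
    Returns (out_ch, time - width + 1)."""
    out_ch, in_ch, width = kernels.shape
    steps = x.shape[1] - width + 1
    out = np.zeros((out_ch, steps))
    for o in range(out_ch):
        for t in range(steps):
            out[o, t] = np.sum(kernels[o] * x[:, t:t + width]) + bias[o]
    return out

def predict_xy(window, k1, b1, w_out, b_out):
    """Map a window of multi-channel capacitive readings to an (x, y)
    estimate: conv -> ReLU -> global average pooling -> linear head."""
    h = np.maximum(conv1d(window, k1, b1), 0.0)   # ReLU nonlinearity
    feat = h.mean(axis=1)                          # global average pooling
    return w_out @ feat + b_out                    # shape (2,): x, y in metres

# Demo with random (untrained) weights, purely to show the shapes
rng = np.random.default_rng(0)
window = rng.normal(size=(4, 64))        # 4 sensors, 64-sample window
k1 = rng.normal(size=(8, 4, 5)) * 0.1    # 8 filters of width 5
b1 = np.zeros(8)
w_out = rng.normal(size=(2, 8)) * 0.1
b_out = np.zeros(2)
xy = predict_xy(window, k1, b1, w_out, b_out)     # shape (2,)
```

Training such a network (e.g. with mean-squared error against ground-truth positions) is what produces the RMSE/ADE figures quoted in the abstract; the sketch only shows the inference path.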

    Recent development of respiratory rate measurement technologies

    Respiratory rate (RR) is an important physiological parameter whose abnormality is regarded as an important indicator of serious illness. To make RR monitoring simple, reliable, and accurate, many different methods have been proposed for automatic monitoring. According to the principle of respiratory rate extraction, the methods are categorized into three modalities: extracting RR from other physiological signals, measuring RR from respiratory movements, and measuring RR from airflow. The merits and limitations of each method are highlighted and discussed. In addition, current work is summarized to suggest key directions for the development of future RR monitoring methodologies.
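
As a toy example of the respiratory-movements modality, the sketch below estimates RR by counting breath-cycle peaks in a synthetic chest-movement trace. The function name and parameters are illustrative, not drawn from the review; a practical system would band-pass filter the signal and reject motion artefacts first.

```python
import numpy as np

def respiratory_rate(signal, fs):
    """Estimate breaths per minute from a respiratory-movement trace
    (e.g. chest strap or accelerometer) by counting local maxima above
    the baseline, one per breath cycle. Illustrative sketch only."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                                  # remove baseline offset
    # local maximum above baseline: rises into the sample, falls after it
    peaks = (x[1:-1] > x[:-2]) & (x[1:-1] >= x[2:]) & (x[1:-1] > 0)
    duration_min = len(x) / fs / 60.0
    return peaks.sum() / duration_min

# Synthetic 60 s trace: 0.25 Hz breathing (15 breaths/min), 10 Hz sampling
fs = 10.0
t = np.arange(600) / fs
rate = respiratory_rate(np.sin(2 * np.pi * 0.25 * t), fs)   # → 15.0
```

The same counting step applies to any of the three modalities once a clean respiration-modulated signal has been extracted; the modalities differ in how that signal is obtained.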