483 research outputs found

    Non-contact Multimodal Indoor Human Monitoring Systems: A Survey

    Full text link
    Indoor human monitoring systems leverage a wide range of sensors, including cameras, radio devices, and inertial measurement units, to collect extensive data from users and the environment. These sensors contribute diverse data modalities, such as video feeds from cameras, received signal strength indicators and channel state information from WiFi devices, and three-axis acceleration data from inertial measurement units. In this context, we present a comprehensive survey of multimodal approaches for indoor human monitoring systems, with a specific focus on their relevance to elderly care. Our survey primarily highlights non-contact technologies, particularly cameras and radio devices, as key components in the development of indoor human monitoring systems. Throughout this article, we explore well-established techniques for extracting features from multimodal data sources. Our exploration extends to methodologies for fusing these features and harnessing multiple modalities to improve the accuracy and robustness of machine learning models. Furthermore, we conduct a comparative analysis across different data modalities in diverse human monitoring tasks and undertake a comprehensive examination of existing multimodal datasets. This extensive survey not only highlights the significance of indoor human monitoring systems but also affirms their versatile applications. In particular, we emphasize their critical role in enhancing the quality of elderly care, offering valuable insights into the development of non-contact monitoring solutions suited to the needs of aging populations. (Comment: 19 pages, 5 figures)
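
    As a rough illustration of the feature-level fusion the survey discusses, the sketch below concatenates per-sample feature vectors from a camera stream and from WiFi channel state information before a single classifier. The feature extractors, array shapes, and the random-forest classifier are illustrative assumptions, not methods prescribed by the survey.

        # Minimal sketch of feature-level (early) fusion across two modalities.
        # Feature arrays, shapes, and labels are hypothetical placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def fuse_features(camera_feats, csi_feats):
            """Concatenate per-sample feature vectors from two modalities."""
            return np.concatenate([camera_feats, csi_feats], axis=1)

        # Hypothetical pre-extracted features: 100 samples, 64 camera dims, 30 CSI dims.
        camera_feats = np.random.rand(100, 64)
        csi_feats = np.random.rand(100, 30)
        labels = np.random.randint(0, 3, size=100)   # e.g. {walking, sitting, falling}

        X = fuse_features(camera_feats, csi_feats)
        clf = RandomForestClassifier(n_estimators=100).fit(X, labels)

    Decision-level (late) fusion, where each modality feeds its own classifier and the outputs are combined afterwards, is the usual alternative when the modalities are not always available together.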

    Sensing and Signal Processing in Smart Healthcare

    Get PDF
    In the last decade, we have witnessed the rapid development of electronic technologies that are transforming our daily lives. Such technologies are often integrated with various sensors that facilitate the collection of human motion and physiological data and are equipped with wireless communication modules such as Bluetooth, radio frequency identification, and near-field communication. In smart healthcare applications, designing ergonomic and intuitive human–computer interfaces is crucial, because a system that is not easy to use creates a major obstacle to adoption and may significantly reduce the efficacy of the solution. Signal and data processing is another important consideration in smart healthcare applications, because it must ensure high accuracy with a high level of confidence in order for the applications to be useful to clinicians in making diagnostic and treatment decisions. This Special Issue is a collection of 10 articles selected from a total of 26 contributions. These contributions span the areas of signal processing and smart healthcare systems, mostly contributed by authors from Europe, including Italy, Spain, France, Portugal, Romania, Sweden, and the Netherlands. Authors from China, Korea, Taiwan, Indonesia, and Ecuador are also included.

    Low-Cost Sensors and Biological Signals

    Get PDF
    Many sensors are currently available at prices lower than USD 100 and cover a wide range of biological signals: motion, muscle activity, heart rate, etc. Such low-cost sensors have metrological features that allow them to be used in everyday life and in clinical applications, where gold-standard equipment is often too expensive and too time-consuming to use. The selected papers present current applications of low-cost sensors in domains such as physiotherapy, rehabilitation, and affective technologies. The results cover various aspects of low-cost sensor technology, from hardware design to software optimization.

    Robust Signal Processing Techniques for Wearable Inertial Measurement Unit (IMU) Sensors

    Get PDF
    Activity and gesture recognition using wearable motion sensors, also known as inertial measurement units (IMUs), provides important context for many ubiquitous sensing applications, including healthcare monitoring, human–computer interfaces, and context-aware smart homes and offices. Such systems are gaining popularity due to their minimal cost and their ability to provide sensing functionality at any time and place. However, several factors can affect system performance, such as sensor location and orientation displacement, activity and gesture inconsistency, movement speed variation, and the lack of tiny-motion information. This research is focused on developing signal processing solutions to ensure system robustness with respect to these factors. Firstly, for existing systems that were designed to work with a particular sensor orientation and location, this research proposes opportunistic calibration algorithms that leverage camera information from the environment to ensure the system performs correctly despite location or orientation displacement of the sensors. The calibration algorithms do not require extra effort from the users, and calibration is done seamlessly when users are present in front of an environmental camera and perform arbitrary movements. Secondly, an orientation-independent and speed-independent approach is proposed and studied by exploring a novel orientation-independent feature set and by intelligently selecting only the relevant and consistent portions of various activities and gestures. Thirdly, to address the challenge that the IMU cannot capture the tiny motions that are important in some applications, a sensor fusion framework is proposed to fuse complementary sensor modalities in order to enhance system performance and robustness. For example, American Sign Language has a large vocabulary of signs, and a recognition system based solely on IMU sensors would not perform very well. To demonstrate the feasibility of sensor fusion techniques, a robust real-time American Sign Language recognition approach is developed using wrist-worn IMU and surface electromyography (EMG) sensors.
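
    A minimal sketch of one widely used orientation-invariant IMU feature: the per-sample acceleration magnitude, which is unchanged by sensor rotation and therefore tolerates orientation displacement. The windowing statistics below are illustrative assumptions and not necessarily the feature set proposed in this work.

        # Orientation-invariant IMU feature sketch: magnitude plus simple
        # sliding-window statistics. Window sizes are illustrative.
        import numpy as np

        def accel_magnitude(acc_xyz):
            """acc_xyz: (N, 3) array of tri-axial accelerometer samples."""
            return np.linalg.norm(acc_xyz, axis=1)

        def window_features(mag, win=128, step=64):
            """Mean/std/range statistics over sliding windows of the magnitude signal."""
            feats = []
            for start in range(0, len(mag) - win + 1, step):
                w = mag[start:start + win]
                feats.append([w.mean(), w.std(), w.max() - w.min()])
            return np.array(feats)

        # Example usage on a hypothetical 1000-sample recording.
        acc = np.random.rand(1000, 3)
        feats = window_features(accel_magnitude(acc))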

    Human action recognition and mobility assessment in smart environments with RGB-D sensors

    Get PDF
    This research activity is focused on the development of algorithms and solutions for smart environments exploiting RGB and depth sensors. In particular, the addressed topics refer to the mobility assessment of a subject and to human action recognition. Regarding the first topic, the goal is to implement algorithms for the extraction of objective parameters that can support the assessment of mobility tests performed by healthcare staff. The first proposed algorithm regards the extraction of six joints on the sagittal plane using depth data provided by the Kinect sensor. The accuracy in terms of estimation of torso and knee angles in the sit-to-stand phase is evaluated against a marker-based stereophotogrammetric system used as a reference. A second algorithm is proposed to simplify the execution of the test in a home environment and to allow the extraction of a greater number of parameters from the Timed Up and Go test. Kinect data are combined with those of an accelerometer through a synchronization algorithm, constituting a setup that can also be used for other applications that benefit from the joint usage of RGB, depth, and inertial data. Fall detection algorithms exploiting the same configuration as the Timed Up and Go test are then proposed. Regarding the second topic, the goal is to classify human actions that can be carried out in a home environment. Two algorithms for human action recognition are therefore proposed, which exploit the skeleton joints provided by Kinect and a multi-class SVM to recognize actions belonging to publicly available datasets, achieving results comparable with the state of the art on the CAD-60, KARD, and MSR Action3D datasets.
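
    A minimal sketch, under stated assumptions, of multi-class SVM action recognition from Kinect skeleton joints in the spirit of the abstract above. The feature construction (torso-centred joint coordinates averaged over a sequence) and the dataset shapes are hypothetical, not the thesis's exact pipeline.

        # Multi-class SVM on Kinect skeleton features. The torso joint index,
        # feature construction, and random data are illustrative assumptions.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        def skeleton_features(joints):
            """joints: (frames, 20, 3) Kinect joint positions -> fixed-length vector."""
            torso = joints[:, 2, :]                  # assume joint index 2 is the torso
            centred = joints - torso[:, None, :]     # torso-centred coordinates per frame
            return centred.mean(axis=0).ravel()      # average posture over the sequence

        # Hypothetical dataset: 200 sequences, 20 joints, 10 action classes.
        X = np.stack([skeleton_features(np.random.rand(50, 20, 3)) for _ in range(200)])
        y = np.random.randint(0, 10, size=200)

        clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10))  # SVC is multi-class (one-vs-one)
        clf.fit(X, y)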

    Human gait assessment using a 3D marker-less multimodal motion capture system

    Get PDF
    Gait analysis is the measurement, processing, and systematic interpretation of biomechanical parameters that characterize human locomotion. It supports the identification of movement limitations and the development of rehabilitation procedures. Accurate gait analysis is important in sports analysis, the medical field, and rehabilitation. Although gait analysis is performed in laboratories in many countries, it faces several issues: (i) the high cost of precise motion capture systems; (ii) the scarcity of qualified personnel to operate them; (iii) the expertise required to interpret their results; (iv) the space needed to install and store these systems; (v) difficulties related to the measurement protocols of each system; (vi) limited availability; and (vii) the use of markers, which can be a barrier for some clinical use cases (e.g. patients recovering from orthopedic surgery). In this work, we present a lower-cost and more accessible system based on the integration of multiple Microsoft Kinect sensors and multiple Shimmer inertial sensors to capture human gait. The novel multimodal system combines data from inertial sensors and 3D depth cameras and outputs spatiotemporal gait variables. A comparison of this system with the VICON system (the gold standard in motion capture) was performed. Our relatively low-cost, marker-less multimodal motion capture system generates a complete 360-degree skeleton view. We compare our system with VICON via spatiotemporal gait variables: gait cycle time, stride time, gait length (distance between two strides), stride length, and velocity. The system was also evaluated for knee and hip joint angle measurement accuracy. The results show high correlation for spatiotemporal variables and joint angles within the 95% bootstrap prediction interval when compared with VICON.
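
    A minimal sketch of how spatiotemporal gait variables such as gait cycle time, stride length, and velocity can be derived once heel-strike events have been detected from the fused skeleton stream. The event-detection step and the function below are assumptions for illustration, not the system's published implementation.

        # Spatiotemporal gait variables from heel-strike events of one foot.
        # Heel-strike detection itself is assumed to have been done upstream.
        import numpy as np

        def gait_variables(heel_strike_times, heel_strike_positions):
            """heel_strike_times: (N,) seconds; heel_strike_positions: (N, 3) metres,
            both for the same foot across consecutive gait cycles."""
            cycle_times = np.diff(heel_strike_times)                   # gait cycle / stride time
            stride_lengths = np.linalg.norm(
                np.diff(heel_strike_positions, axis=0), axis=1)        # distance between strides
            velocity = stride_lengths / cycle_times                    # mean progression speed per cycle
            return cycle_times, stride_lengths, velocity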

    Enhancing volleyball training: empowering athletes and coaches through advanced sensing and analysis

    Get PDF
    Modern sensing technologies and data analysis methods usher in a new era for sports training and practice. Hidden insights can be uncovered and interactive training environments can be created by means of data analysis. We present a system to support volleyball training which makes use of inertial measurement units, a pressure-sensitive display floor, and machine learning techniques to automatically detect relevant behaviours and provide the user with the appropriate information. While working with trainers and amateur athletes, we also explore potential applications driven by automatic action recognition, which contribute various requirements to the platform. The first application is an automatic video-tagging protocol that marks key events (captured on video) based on the automatic recognition of volleyball-specific actions, achieving an unweighted average recall of 78.71% in the 10-fold cross-validation setting with a convolutional neural network and 73.84% in the leave-one-subject-out cross-validation setting with the active data representation method using wearable sensors, as an exemplification of how dashboard and retrieval systems would work with the platform. In the context of action recognition, we evaluated statistical features and their transformation using the active data representation method, in addition to the raw IMU signals. The second application is the “bump-set-spike” trainer, which uses automatic action recognition to provide real-time feedback about performance to steer player behaviour in volleyball, as an example of the rich learning environments enabled by live action detection. In addition to describing these applications, we detail the system components and architecture and discuss the implications that our system might have for sports in general and for volleyball in particular.
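
    A minimal sketch of the leave-one-subject-out evaluation protocol mentioned above, using scikit-learn's LeaveOneGroupOut with subject IDs as groups and macro-averaged (unweighted average) recall as the metric. The classifier and feature arrays are placeholders, not the paper's models.

        # Leave-one-subject-out cross-validation with unweighted average recall.
        # Features, labels, subject IDs, and the classifier are hypothetical.
        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import recall_score

        X = np.random.rand(300, 40)                    # windowed IMU features
        y = np.random.randint(0, 5, size=300)          # volleyball action labels
        subjects = np.random.randint(0, 10, size=300)  # subject ID per window

        recalls = []
        for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
            clf = RandomForestClassifier().fit(X[train_idx], y[train_idx])
            pred = clf.predict(X[test_idx])
            recalls.append(recall_score(y[test_idx], pred, average='macro'))  # unweighted average recall
        print(f"LOSO unweighted average recall: {np.mean(recalls):.3f}")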