
    Local Positioning Systems in (Game) Sports

    Position data of players and athletes are widely used in sports performance analysis, both for measuring the amount of physical activity and for tactical assessments in game sports. However, position sensing systems are applied in sports as tools to gain objective information about sports behavior rather than as components of intelligent spaces (IS). The paper outlines the idea of IS for the sports context, with a special focus on game sports, and shows how intelligent sports feedback systems can benefit from IS. Subsequently, the most common location sensing techniques used in sports and their practical applications are reviewed, as location is among the most important enabling technologies for IS. Finally, the article illustrates the idea of IS in sports with two applications.

    A Human-Centric Metaverse Enabled by Brain-Computer Interface: A Survey

    The growing interest in the Metaverse has generated momentum for members of academia and industry to innovate toward realizing the Metaverse world. The Metaverse is a unique, continuous, and shared virtual world where humans embody a digital form within an online platform. Through a digital avatar, Metaverse users should have a perceptual presence within the environment and be able to interact with and control the virtual world around them. Thus, a human-centric design is a crucial element of the Metaverse. The human users are not only the central entity but also the source of multi-sensory data that can be used to enrich the Metaverse ecosystem. In this survey, we study the potential applications of Brain-Computer Interface (BCI) technologies that can enhance the experience of Metaverse users. By directly communicating with the human brain, the most complex organ in the human body, BCI technologies hold the potential for the most intuitive human-machine system, operating at the speed of thought. Through this neural pathway, BCI technologies can enable various innovative applications for the Metaverse, such as user cognitive state monitoring, digital avatar control, virtual interactions, and imagined speech communications. This survey first outlines the fundamental background of the Metaverse and BCI technologies. We then discuss the current challenges of the Metaverse that can potentially be addressed by BCI, such as motion sickness when users experience virtual environments or the negative emotional states of users in immersive virtual applications. After that, we propose and discuss a new research direction called Human Digital Twin, in which digital twins can create an intelligent and interactable avatar from the user's brain signals. We also present the challenges and potential solutions in synchronizing and communicating between virtual and physical entities in the Metaverse.

    PhysioGait: Context-Aware Physiological Context Modeling for Person Re-identification Attack on Wearable Sensing

    Person re-identification is a critical privacy breach in publicly shared healthcare data. We investigate the possibility of a new type of privacy threat on publicly shared, privacy-insensitive, large-scale wearable sensing data. In this paper, we investigate user-specific biometric signatures in terms of two contextual biometric traits: physiological (photoplethysmography and electrodermal activity) and physical (accelerometer) contexts. To this end, we propose PhysioGait, a context-aware physiological signal model that consists of a Multi-Modal Siamese Convolutional Neural Network (mmSNN), which learns the spatial and temporal information individually and performs sensor fusion with a Siamese cost, with the objective of predicting a person's identity. We evaluated the PhysioGait attack model using four datasets collected in real time (three collected under IRB #HP-00064387 and one publicly available dataset) as well as two combined datasets, achieving 89%-93% accuracy in re-identifying persons. (Comment: Accepted at IEEE MSN 2022. arXiv admin note: substantial text overlap with arXiv:2106.1190)
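    The abstract above describes a two-branch Siamese architecture with sensor fusion and an identity-matching objective. Below is a minimal sketch of such a multi-modal Siamese network in PyTorch; the branch layouts, embedding size, input window format and contrastive margin are illustrative assumptions, not the mmSNN configuration from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityBranch(nn.Module):
    """1-D CNN encoder for a single sensing modality (e.g. PPG/EDA or accelerometer)."""
    def __init__(self, in_channels, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, x):                      # x: (batch, channels, samples)
        return self.fc(self.net(x).squeeze(-1))

class MultiModalSiamese(nn.Module):
    """Encodes a physiological window and a motion window, then fuses the embeddings."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.physio = ModalityBranch(in_channels=2, embed_dim=embed_dim)  # PPG + EDA (assumed layout)
        self.motion = ModalityBranch(in_channels=3, embed_dim=embed_dim)  # 3-axis accelerometer
        self.fuse = nn.Linear(2 * embed_dim, embed_dim)

    def embed(self, physio_x, motion_x):
        z = torch.cat([self.physio(physio_x), self.motion(motion_x)], dim=1)
        return F.normalize(self.fuse(z), dim=1)

    def forward(self, pair_a, pair_b):
        # pair_a and pair_b are (physio, motion) tuples for the two arms of the Siamese pair
        return self.embed(*pair_a), self.embed(*pair_b)

def contrastive_loss(z1, z2, same_person, margin=1.0):
    """Siamese objective: pull embeddings of the same person together, push others apart.
    same_person is a float tensor of 1s (same identity) and 0s (different identity)."""
    d = F.pairwise_distance(z1, z2)
    return (same_person * d.pow(2) + (1 - same_person) * F.relu(margin - d).pow(2)).mean()
```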

    Wearable in-ear pulse oximetry: theory and applications

    Wearable health technology, most commonly in the form of the smart watch, is employed by millions of users worldwide. These devices generally exploit photoplethysmography (PPG), the non-invasive use of light to measure blood volume, in order to track physiological metrics such as pulse and respiration. Moreover, PPG is commonly used in hospitals in the form of pulse oximetry, which measures light absorbance by the blood at different wavelengths to estimate blood oxygen levels (SpO2). This thesis aims to demonstrate that despite its widespread usage over many decades, this sensor still possesses a wealth of untapped value. Through a combination of advanced signal processing and harnessing the ear as a location for wearable sensing, this thesis introduces several novel high-impact applications of in-ear pulse oximetry and photoplethysmography. The aims of this thesis are accomplished through a three-pronged approach: rapid detection of hypoxia, tracking of cognitive workload and fatigue, and detection of respiratory disease. By means of the simultaneous recording of in-ear and finger pulse oximetry at rest and during breath hold tests, it was found that in-ear SpO2 responds on average 12.4 seconds faster than finger SpO2. This is likely due in part to the ear being in close proximity to the brain, making it a priority for oxygenation and thus making wearable in-ear SpO2 a good proxy for core blood oxygen. Next, the low latency of in-ear SpO2 was further exploited in the novel application of classifying cognitive workload. It was found that in-ear pulse oximetry was able to robustly detect tiny decreases in blood oxygen during increased cognitive workload, likely caused by increased brain metabolism. This thesis demonstrates that in-ear SpO2 can be used to accurately distinguish between different levels of an N-back memory task, representing different levels of mental effort. This concept was further validated through its application to gaming and then extended to the detection of driver-related fatigue. It was found that features derived from SpO2 and PPG were predictive of absolute steering wheel angle, which acts as a proxy for fatigue. The strength of in-ear PPG for the monitoring of respiration was investigated with respect to the finger, with the conclusion that in-ear PPG exhibits far stronger respiration-induced intensity variations and pulse amplitude variations than the finger. All three respiratory modes were harnessed through multivariate empirical mode decomposition (MEMD) to produce spirometry-like respiratory waveforms from PPG. It was discovered that these PPG-derived respiratory waveforms can be used to detect obstruction to breathing, both through a novel apparatus for the simulation of breathing disorders and through the classification of chronic obstructive pulmonary disease (COPD) in the real world. This thesis establishes in-ear pulse oximetry as a wearable technology with the potential for immense societal impact, with applications from the classification of cognitive workload and the prediction of driver fatigue through to the detection of chronic obstructive pulmonary disease. The experiments and analysis in this thesis conclusively demonstrate that widely used pulse oximetry and photoplethysmography possess a wealth of untapped value, in essence teaching the old PPG sensor new tricks.
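    As background to the pulse oximetry principle the thesis builds on, the sketch below shows the textbook two-wavelength "ratio of ratios" SpO2 estimate from red and infrared PPG windows. The linear calibration (110 - 25R) is a commonly quoted approximation, not the calibration or the MEMD-based processing used in the thesis.

```python
import numpy as np

def spo2_ratio_of_ratios(red_ppg, ir_ppg):
    """Estimate SpO2 (%) from simultaneous red and infrared PPG windows."""
    def ac_dc(x):
        x = np.asarray(x, dtype=float)
        dc = np.mean(x)    # slowly varying component (tissue, venous blood)
        ac = np.ptp(x)     # pulsatile component (arterial blood)
        return ac, dc

    ac_r, dc_r = ac_dc(red_ppg)
    ac_ir, dc_ir = ac_dc(ir_ppg)
    R = (ac_r / dc_r) / (ac_ir / dc_ir)          # "ratio of ratios"
    return float(np.clip(110.0 - 25.0 * R, 0.0, 100.0))  # textbook linear calibration (assumption)
```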

    Predicting Driver Fatigue in Automated Driving with Explainability

    Research indicates that monotonous automated driving increases the incidence of fatigued driving. Although many prediction models based on advanced machine learning techniques have been proposed to monitor driver fatigue, especially in manual driving, little is known about how these black-box machine learning models work. In this paper, we propose a combination of eXtreme Gradient Boosting (XGBoost) and SHAP (SHapley Additive exPlanations) to predict driver fatigue with explanations, owing to their efficiency and accuracy. First, in order to obtain the ground truth of driver fatigue, PERCLOS (percentage of eyelid closure over the pupil over time), ranging between 0 and 100, was used as the response variable. Second, we built a driver fatigue regression model using both physiological and behavioral measures with XGBoost, and it outperformed other selected machine learning models with a root-mean-squared error (RMSE) of 3.847, a mean absolute error (MAE) of 1.768 and an adjusted R^2 of 0.996. Third, we employed SHAP to identify the most important predictor variables and opened up the black-box XGBoost model by showing the main effects of the most important predictor variables globally and explaining individual predictions locally. Such an explainable driver fatigue prediction model offers insights into how to intervene in automated driving when necessary, such as during the takeover transition period from automated driving to manual driving.
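    A minimal sketch of the XGBoost-plus-SHAP pipeline described above, run on synthetic data. The feature set, hyper-parameters and train/test split are illustrative assumptions rather than the paper's configuration; the PERCLOS-like target is simulated.

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Synthetic stand-in for physiological + behavioural features and a PERCLOS-like target in [0, 100]
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = np.clip(50 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=2, size=1000), 0, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Gradient-boosted regression model for fatigue prediction
model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("MAE:", mean_absolute_error(y_te, pred))

# SHAP explains each prediction as additive per-feature contributions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)               # shape: (n_samples, n_features)
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```

    Global importance comes from averaging absolute SHAP values over samples, while a single row of shap_values explains one prediction locally, mirroring the global/local explanation split described in the abstract.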

    Towards unravelling the relationship between on-body, environmental and emotion data using sensor information fusion approach

    Over the past few years, there has been a noticeable advancement in environmental models and information fusion systems, taking advantage of recent developments in sensor and mobile technologies. However, little attention has been paid so far to quantifying the relationship between environmental changes and their impact on our bodies in real-life settings. In this paper, we present a data-driven approach based on direct and continuous sensor data to assess the impact of the surrounding environment on physiological changes and emotion. We aim to investigate the potential of fusing on-body physiological signals, environmental sensory data and online self-reported emotion measures in order to achieve the following objectives: (1) model the short-term impact of the ambient environment on the human body, and (2) predict emotions based on on-body sensors and environmental data. To achieve this, we conducted a real-world study ‘in the wild’ with on-body and mobile sensors. Data were collected from participants walking around Nottingham city centre in order to develop analytical and predictive models. Multiple regression, after allowing for possible confounders, showed a noticeable correlation between noise exposure and heart rate. Similarly, UV and environmental noise were shown to have a noticeable effect on changes in ElectroDermal Activity (EDA). Air pressure demonstrated the greatest contribution to the detected changes in body temperature and motion. A significant correlation was also found between air pressure and heart rate. Finally, decision fusion of the classification results from the different modalities is performed. To the best of our knowledge, this work presents the first attempt at fusing and modelling data from environmental and physiological sources collected from sensors in a real-world setting.
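    The fusion step mentioned at the end of the abstract can be illustrated with a simple late (decision-level) fusion of per-modality classifiers. The classifier choices, the weighted-average rule and the synthetic data below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def fuse_decisions(proba_per_modality, weights=None):
    """Weighted average of per-modality class probabilities, then argmax."""
    probas = np.stack(proba_per_modality)            # (n_modalities, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(probas)) / len(probas)
    fused = np.tensordot(weights, probas, axes=1)    # (n_samples, n_classes)
    return fused.argmax(axis=1)

# Synthetic example: one classifier per modality (physiological vs. environmental features)
rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=300)                     # three self-reported emotion classes (assumed)
X_physio = rng.normal(size=(300, 8)) + y[:, None] * 0.3
X_env = rng.normal(size=(300, 5)) + y[:, None] * 0.2

clf_physio = RandomForestClassifier(random_state=0).fit(X_physio, y)
clf_env = LogisticRegression(max_iter=1000).fit(X_env, y)

pred = fuse_decisions([clf_physio.predict_proba(X_physio),
                       clf_env.predict_proba(X_env)],
                      weights=[0.6, 0.4])
print("Fused training accuracy:", (pred == y).mean())
```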

    Virtual Reality Adaptation Using Electrodermal Activity to Support the User Experience

    Virtual reality is increasingly used for tasks such as work and education. Thus, rendering scenarios that do not interfere with such goals or degrade the user experience is becoming progressively more relevant. We present a physiologically adaptive system that optimizes the virtual environment based on physiological arousal, i.e., electrodermal activity. We investigated the usability of the adaptive system in a simulated social virtual reality scenario. Participants completed an n-back task (primary) and a visual detection task (secondary). Here, we adapted the visual complexity of the secondary task, in the form of the number of non-player characters, so as not to interfere with the primary task. We show that an adaptive virtual reality can improve users' comfort by adapting the task complexity to physiological arousal. Our findings suggest that physiologically adaptive virtual reality systems can improve users' experience in a wide range of scenarios.
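    A minimal sketch of an arousal-driven adaptation loop of the kind described above, in which the number of non-player characters is adjusted from electrodermal activity. The thresholds, baseline handling and NPC bounds are illustrative assumptions, not the study's parameters.

```python
from dataclasses import dataclass

@dataclass
class ArousalAdaptation:
    baseline_scl: float          # skin conductance level from a resting baseline, in microsiemens
    npc_count: int = 10
    min_npcs: int = 2
    max_npcs: int = 20

    def update(self, current_scl: float) -> int:
        """Adjust visual complexity (NPC count) from the latest smoothed EDA reading."""
        delta = current_scl - self.baseline_scl
        if delta > 0.5:          # arousal well above baseline: simplify the scene
            self.npc_count = max(self.min_npcs, self.npc_count - 1)
        elif delta < 0.1:        # arousal near baseline: the scene can be richer
            self.npc_count = min(self.max_npcs, self.npc_count + 1)
        return self.npc_count

# Example: feed the controller a stream of smoothed skin conductance readings
controller = ArousalAdaptation(baseline_scl=4.0)
for scl in [4.1, 4.3, 4.8, 5.1, 4.9, 4.2]:
    print(controller.update(scl))
```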

    Usability of Upper Limb Electromyogram Features as Muscle Fatigue Indicators for Better Adaptation of Human-Robot Interactions

    Human-robot interaction (HRI) is the process of humans and robots working together to accomplish a goal, with the objective of making the interaction beneficial to humans. Closed-loop control and adaptability to individuals are some of the important acceptance criteria for human-robot interaction systems. While designing an HRI scheme, it is important to understand the users of the system and to evaluate the capabilities of humans and robots. An acceptable HRI solution is expected to be adaptable, detecting and responding to changes in the environment and in its users. Hence, an adaptive robotic interaction requires better sensing of human performance parameters. Human performance is influenced by the state of muscular and mental fatigue during active interactions. Researchers in the field of human-robot interaction have been trying to improve the adaptability of the environment according to the physical state of the human participants. Existing human-robot interactions and robot-assisted training are designed without sufficiently considering the implications of fatigue for the users. Given this, identifying whether a better outcome can be achieved during robot-assisted training by adapting to individual muscular status, i.e. with respect to fatigue, is a novel area of research. This has potential applications in scenarios such as rehabilitation robotics. Since robots have the potential to deliver a large number of repetitions, they can be used to train stroke patients, helping them recover muscular function through repetitive training exercises. The objective of this research is to explore a solution for a longer and less fatiguing robot-assisted interaction, which can adapt based on the muscular state of participants using fatigue indicators derived from electromyogram (EMG) measurements. In the initial part of this research, fatigue indicators from upper limb muscles of healthy participants were identified by analysing the electromyogram signals from the muscles as well as the kinematic data collected by the robot. The tasks were defined to involve point-to-point upper limb movements, requiring dynamic muscle contractions, while interacting with the HapticMaster robot. The study revealed quantitatively which muscles were involved in the exercise and which muscles were more fatigued. The results also indicated the potential of EMG and kinematic parameters to be used as fatigue indicators. A correlation analysis between EMG features and kinematic parameters revealed that the correlation coefficient was impacted by muscle fatigue. As an extension of this study, the EMG collected at the beginning of the task was also used to predict the type of point-to-point movement using a supervised machine learning algorithm based on Support Vector Machines. The results showed that the movement intention could be detected with reasonably good accuracy within the initial milliseconds of the task. The final part of the research implemented a fatigue-adaptive algorithm based on the identified EMG features. An experiment was conducted with thirty healthy participants to test the effectiveness of this adaptive algorithm. The participants interacted with the HapticMaster robot following a progressive muscle strength training protocol similar to a standard sports science protocol for muscle strengthening.
The robotic assistance was altered according to the muscular state of the participants, thus offering varying difficulty levels based on their state of fatigue or relaxation while performing the tasks. The results showed that the fatigue-based robotic adaptation resulted in a prolonged training interaction involving many repetitions of the task. This study showed that, using fatigue indicators, it is possible to alter the level of challenge and thus increase the interaction time. In summary, the research undertaken during this PhD has successfully enhanced the adaptability of human-robot interaction. Apart from its potential use for muscle strength training in healthy individuals, the work presented in this thesis is applicable to a wide range of human-machine interaction research, such as rehabilitation robotics. In particular, it has potential application in robot-assisted upper limb rehabilitation training of stroke patients.
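    A minimal sketch of a fatigue-adaptive assistance rule in the spirit of this work. The decline of EMG median frequency, a widely used fatigue indicator, stands in for the specific EMG features identified in the thesis; the thresholds, window length, sampling rate and assistance step are illustrative assumptions.

```python
import numpy as np

def median_frequency(emg_window, fs=1000.0):
    """Median frequency of the EMG power spectrum (Hz); it tends to drop with fatigue."""
    spectrum = np.abs(np.fft.rfft(emg_window - np.mean(emg_window))) ** 2
    freqs = np.fft.rfftfreq(len(emg_window), d=1.0 / fs)
    cumulative = np.cumsum(spectrum)
    return float(freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)])

def adapt_assistance(assistance, mdf_now, mdf_baseline, fatigue_drop=0.10, step=0.05):
    """Raise robotic assistance when median frequency drops (fatigue), lower it otherwise."""
    if mdf_now < (1.0 - fatigue_drop) * mdf_baseline:
        return min(1.0, assistance + step)   # fatigued: make the task easier
    return max(0.0, assistance - step)       # recovered: keep the participant challenged

# Example with synthetic EMG windows (1 s at an assumed 1 kHz sampling rate)
rng = np.random.default_rng(0)
mdf_baseline = median_frequency(rng.normal(size=1000))
assistance = 0.3
for _ in range(5):
    window = rng.normal(size=1000)
    assistance = adapt_assistance(assistance, median_frequency(window), mdf_baseline)
    print(round(assistance, 2))
```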

    Fatigue-Aware Gaming System for Motor Rehabilitation Using Biocybernetic Loops

    This thesis aims to propose a complementary rehabilitation therapy based on human-computer interaction (HCI) paradigms that explore i) virtual rehabilitation techniques, integrating sophisticated and (nowadays) accessible virtual reality (VR) technologies, ii) low-cost physiological sensors, namely surface electromyography (sEMG), and iii) an intelligent system, through biocybernetic adaptation, to provide a new virtual rehabilitation technique.
