
    Special Issue on Wearable Computing and Machine Learning for Applications in Sports, Health, and Medical Engineering

    Note: In lieu of an abstract, this is an excerpt from the first page. Recent advancements in digital technologies are driving a remarkable transformation in sports, health, and medical engineering, aiming to achieve accurate quantification of performance, well-being, and disease condition, and to optimize sports, clinical, and therapeutic training and treatment programs. Traditionally, the understanding and monitoring of functional performance and capacity have been performed in gait laboratories based on optoelectronic motion capture systems. However, gait laboratories are often not readily available in practical settings because the systems are costly and require trained experts to operate. Most importantly, when assessments are restricted to laboratory settings, they provide only a narrow snapshot of function and do not capture functionality in natural free-living settings, thus representing a severely under-sampled view of an individual’s condition. The use of mobile and wearable technologies has been explored in many sports, health, and medical research studies examining individuals in “in-the-wild” settings. Among the most important drivers of this transformation are (1) wearable sensors and (2) signal processing and machine learning algorithms. Wearable sensors can collect physical and/or physiological data continuously and seamlessly outside of laboratory settings. Signal processing and machine learning algorithms enable data-driven approaches for analyzing large amounts of multidimensional sensory data and for extracting information relevant to these application areas (e.g., validating the efficacy of sports training, health benefits, and chronic disease progression). Together, these technologies can help sports and clinical professionals understand and interpret individuals’ performance more objectively, and enable proactive, evidence-based, and personalized management systems.

    Online at Will: A Novel Protocol for Mutual Authentication in Peer-to-Peer Networks for Patient-Centered Health Care Information Systems

    Patient-centered health care information systems (PHSs) on peer-to-peer (P2P) networks promise the benefits of decentralization. P2P PHSs, such as decentralized personal health records or interoperable COVID-19 proximity trackers, can enhance data sovereignty and resilience to single points of failure, but the openness of P2P networks introduces new security issues. We propose a novel, simple, and secure mutual authentication protocol that supports offline access, leverages independent and stateless encryption services, and enables patients and medical professionals to establish secure connections when using P2P PHSs. Our protocol includes a virtual (software-based) smart card feature to ease the integration of authentication features of emerging national health-IT infrastructures. The security evaluation shows that our protocol resists most online and offline threats while exhibiting performance comparable to traditional, albeit less secure, password-based authentication methods. Our protocol serves as a foundation for the design and implementation of P2P PHSs, making their use more secure and trustworthy.
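    The challenge-response idea behind such mutual authentication can be sketched as follows. This is a generic illustration under assumed conditions (a pre-shared key established out of band, e.g., via the virtual smart card), not the paper's actual protocol:

    ```python
    import hashlib
    import hmac
    import secrets

    # Hypothetical shared secret established out of band; in the paper's
    # setting, key material could come from the virtual smart card.
    SHARED_KEY = secrets.token_bytes(32)

    def respond(key: bytes, challenge: bytes) -> bytes:
        """Prove knowledge of the key without revealing it."""
        return hmac.new(key, challenge, hashlib.sha256).digest()

    def mutual_authenticate(key_a: bytes, key_b: bytes) -> bool:
        """Both peers challenge each other; succeeds only if keys match."""
        challenge_a = secrets.token_bytes(16)   # A challenges B
        challenge_b = secrets.token_bytes(16)   # B challenges A
        proof_b = respond(key_b, challenge_a)   # B answers A's challenge
        proof_a = respond(key_a, challenge_b)   # A answers B's challenge
        ok_at_a = hmac.compare_digest(proof_b, respond(key_a, challenge_a))
        ok_at_b = hmac.compare_digest(proof_a, respond(key_b, challenge_b))
        return ok_at_a and ok_at_b

    print(mutual_authenticate(SHARED_KEY, SHARED_KEY))               # True
    print(mutual_authenticate(SHARED_KEY, secrets.token_bytes(32)))  # False
    ```

    Because each side issues a fresh random challenge, neither peer can replay an old proof, which is the property that makes the handshake mutual rather than one-sided.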

    How to Learn from Risk: Explicit Risk-Utility Reinforcement Learning for Efficient and Safe Driving Strategies

    Autonomous driving has the potential to revolutionize mobility and is hence an active area of research. In practice, the behavior of autonomous vehicles must be acceptable, i.e., efficient, safe, and interpretable. While vanilla reinforcement learning (RL) finds performant behavioral strategies, these are often unsafe and uninterpretable. Safe RL approaches introduce safety, but they mostly remain uninterpretable because the learned behavior is jointly optimized for safety and performance without modeling them separately. Interpretable machine learning is rarely applied to RL. This paper proposes SafeDQN, which makes the behavior of autonomous vehicles safe and interpretable while remaining efficient. SafeDQN offers an understandable, semantic trade-off between the expected risk and the utility of actions while being algorithmically transparent. We show that SafeDQN finds interpretable and safe driving policies for a variety of scenarios and demonstrate how state-of-the-art saliency techniques can help to assess both risk and utility.
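    The explicit risk-utility trade-off can be illustrated with a minimal sketch. The per-action estimates and the trade-off weight below are assumptions for illustration; in SafeDQN they would come from separately modeled utility and risk estimates of a Q-network:

    ```python
    import numpy as np

    # Hypothetical per-action estimates (illustrative only): in SafeDQN,
    # utility and risk are modeled separately rather than jointly.
    utility_q = np.array([1.0, 2.5, 2.0])  # expected return per action
    risk_q    = np.array([0.1, 0.9, 0.2])  # expected risk per action

    def select_action(utility_q, risk_q, lam):
        """Pick the action maximizing a semantic utility-risk trade-off:
        argmax over utility minus lam times risk."""
        return int(np.argmax(utility_q - lam * risk_q))

    print(select_action(utility_q, risk_q, lam=0.0))  # -> 1 (risk-neutral)
    print(select_action(utility_q, risk_q, lam=5.0))  # -> 2 (risk-averse)
    ```

    Making the weight `lam` explicit is what yields the understandable, semantic trade-off: the same learned estimates produce more or less cautious driving depending on one interpretable parameter.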

    Estimation of Gait Kinematics and Kinetics from Inertial Sensor Data Using Optimal Control of Musculoskeletal Models

    Inertial sensing enables field studies of human movement and ambulant assessment of patients. The challenge, however, is to obtain a comprehensive analysis from low-quality data and sparse measurements. In this paper, we present a method to estimate gait kinematics and kinetics directly from raw inertial sensor data by performing a single dynamic optimization. We formulated an optimal control problem to track accelerometer and gyroscope data with a planar musculoskeletal model. In addition, we minimized muscular effort to ensure a unique solution and to prevent the model from tracking noisy measurements too closely. For evaluation, we recorded data of ten subjects walking and running at six different speeds using seven inertial measurement units (IMUs). Results were compared to a conventional analysis using optical motion capture and a force plate. High correlations were achieved for gait kinematics (ρ ≥ 0.93) and kinetics (ρ ≥ 0.90). In contrast to existing IMU processing methods, a dynamically consistent simulation was obtained, and we were able to estimate running kinetics. Besides kinematics and kinetics, further metrics such as muscle activations and metabolic cost can be obtained directly from the simulated model movements. In summary, the method is insensitive to sensor noise and drift and provides a detailed analysis based solely on inertial sensor data.
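    The structure of such a tracking objective, with an effort term regularizing the solution, can be sketched as follows. Function names and the weight are hypothetical; the paper's actual formulation involves a full musculoskeletal optimal control problem, not this toy cost:

    ```python
    import numpy as np

    def tracking_cost(sim_imu, meas_imu, muscle_act, w_effort=0.1):
        """Illustrative objective: track measured IMU signals while
        penalizing muscular effort. The effort term ensures a unique
        solution and keeps the model from fitting measurement noise."""
        track = np.mean((sim_imu - meas_imu) ** 2)   # accel/gyro tracking error
        effort = np.mean(muscle_act ** 2)            # effort regularizer
        return track + w_effort * effort

    # Toy example: a noisy "measured" signal and a clean candidate simulation
    rng = np.random.default_rng(1)
    t = np.linspace(0, 2 * np.pi, 100)
    meas = np.sin(t) + 0.1 * rng.standard_normal(100)
    sim_good = np.sin(t)
    act = np.full(100, 0.2)
    print(tracking_cost(sim_good, meas, act))
    ```

    In the actual method, the decision variables are the model states and controls of a single dynamic optimization, so the resulting motion is dynamically consistent by construction rather than fitted segment by segment.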

    Contrastive Language-Image Pretrained Models are Zero-Shot Human Scanpath Predictors

    Understanding the mechanisms underlying human attention is a fundamental challenge for both vision science and artificial intelligence. While numerous computational models of free-viewing have been proposed, less is known about the mechanisms underlying task-driven image exploration. To address this gap, we present CapMIT1003, a database of captions and click-contingent image explorations collected during captioning tasks. CapMIT1003 is based on the same stimuli as the well-known MIT1003 benchmark, for which eye-tracking data under free-viewing conditions are available, offering a promising opportunity to concurrently study human attention under both tasks. We make this dataset publicly available to facilitate future research in this field. In addition, we introduce NevaClip, a novel zero-shot method for predicting visual scanpaths that combines contrastive language-image pretrained (CLIP) models with biologically inspired neural visual attention (NeVA) algorithms. NevaClip simulates human scanpaths by aligning the representation of the foveated visual stimulus with the representation of the associated caption, employing gradient-driven visual exploration to generate scanpaths. Our experimental results demonstrate that NevaClip outperforms existing unsupervised computational models of human visual attention in terms of scanpath plausibility, for both captioning and free-viewing tasks. Furthermore, we show that conditioning NevaClip with incorrect or misleading captions leads to random behavior, highlighting the significant impact of caption guidance on the decision-making process. These findings contribute to a better understanding of the mechanisms that guide human attention and pave the way for more sophisticated computational approaches to scanpath prediction that can integrate direct top-down guidance from downstream tasks.
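    The caption-conditioned selection of the next fixation can be mimicked with a toy greedy sketch. Everything here is an assumption for illustration: the real NevaClip aligns CLIP embeddings of the foveated stimulus and the caption via gradient-driven exploration, whereas this sketch scores crude patch features with a fixed "caption" weighting and searches over candidates:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for a caption embedding: a fixed linear scoring
    # that favors bright patches (e.g., a "bright object" caption).
    CAPTION_WEIGHTS = np.array([0.8, 0.2])

    def foveated_features(image, fixation, radius=2):
        """Crude foveation: features of the patch around the fixation only."""
        y, x = fixation
        patch = image[max(0, y - radius): y + radius + 1,
                      max(0, x - radius): x + radius + 1]
        return np.array([patch.mean(), patch.max()])

    def next_fixation(image, caption_weights, candidates):
        """Greedy step: pick the candidate whose foveated features best
        match the caption-conditioned scoring."""
        scores = [caption_weights @ foveated_features(image, c)
                  for c in candidates]
        return candidates[int(np.argmax(scores))]

    image = rng.random((16, 16))
    image[4:8, 4:8] += 5.0  # salient, caption-relevant region
    print(next_fixation(image, CAPTION_WEIGHTS, [(2, 2), (6, 6), (12, 12)]))
    # -> (6, 6): the fixation lands on the caption-relevant region
    ```

    The sketch also hints at the paper's negative result: scoring with weights that do not match the image content would make the chosen fixations effectively arbitrary, mirroring the random behavior observed under misleading captions.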

    Active Learning of Ordinal Embeddings: A User Study on Football Data

    Humans innately measure the distance between instances in an unlabeled dataset using an unknown similarity function. Distance metrics can only serve as a proxy for similarity in the information retrieval of similar instances. Learning a good similarity function from human annotations improves the quality of retrievals. This work uses deep metric learning to learn such user-defined similarity functions from few annotations for a large football trajectory dataset. We adapt an entropy-based active learning method with recent work on triplet mining to collect easy-to-answer but still informative annotations from human participants, and use them to train a deep convolutional network that generalizes to unseen samples. Our user study shows that our approach improves the quality of information retrieval compared to a previous deep metric learning approach that relies on a Siamese network. Specifically, we shed light on the strengths and weaknesses of passive sampling heuristics and active learners alike by analyzing the participants' response efficacy. To this end, we collect accuracy, algorithmic time complexity, the participants' fatigue and time-to-response, qualitative self-assessments and statements, as well as the effects of mixed-expertise annotators and their consistency on model performance and transfer learning.
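    The triplet objective that such annotations feed into can be sketched as follows. This shows the standard triplet loss from deep metric learning with hypothetical 2-D "trajectory embeddings"; the paper's entropy-based mining strategy is not reproduced here:

    ```python
    import numpy as np

    def triplet_loss(anchor, positive, negative, margin=1.0):
        """Standard triplet loss: the positive should be closer to the
        anchor than the negative, by at least `margin`."""
        d_pos = np.linalg.norm(anchor - positive)
        d_neg = np.linalg.norm(anchor - negative)
        return max(0.0, d_pos - d_neg + margin)

    # Hypothetical 2-D embeddings standing in for football trajectories
    anchor   = np.array([0.0, 0.0])
    positive = np.array([0.1, 0.0])   # annotated as similar to the anchor
    negative = np.array([3.0, 0.0])   # annotated as dissimilar

    print(triplet_loss(anchor, positive, negative))  # -> 0.0 (satisfied)
    print(triplet_loss(anchor, negative, positive))  # -> 3.9 (violated)
    ```

    Triplet mining then amounts to asking annotators only about triplets whose expected loss (or label entropy) is high, so each easy-to-answer question still moves the embedding.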

    An Overview of Smart Shoes in the Internet of Health Things: Gait and Mobility Assessment in Health Promotion and Disease Monitoring

    New smart technologies and the Internet of Things increasingly play a key role in healthcare and wellness, contributing to the development of novel healthcare concepts. These technologies enable a comprehensive view of an individual’s movement and mobility, potentially supporting healthy living as well as complementing medical diagnostics and the monitoring of therapeutic outcomes. This overview article specifically addresses smart shoes, which are becoming one such smart technology within the future Internet of Health Things, since the ability to walk defines large aspects of quality of life across a wide range of health and disease conditions. Smart shoes offer the possibility to support prevention, diagnostic work-up, therapeutic decisions, and individual disease monitoring with a continuous assessment of gait and mobility. This overview article covers the technological as well as the medical aspects of smart shoes within this rising area of digital health applications, and is designed especially for readers new to this specific field. It also stresses the need for closer interdisciplinary interaction between technological and medical experts to bridge the gap between research and practice. Smart shoes can be envisioned as pervasive wearable computing systems that enable innovative solutions and services for the promotion of healthy living and the transformation of health care.
