Gait Velocity Estimation using time interleaved between Consecutive Passive IR Sensor Activations
Gait velocity has been consistently shown to be an important indicator and
predictor of health status, especially in older adults. It is often assessed
clinically, but the assessments occur infrequently and do not allow optimal
detection of key health changes when they occur. In this paper, we show that
the time gap between activations of a pair of Passive Infrared (PIR) motion
sensors installed in two consecutively visited rooms carries rich latent
information about a person's gait velocity. We call this time gap the
transition time and show that, despite the sensors' six-second refractory period,
transition time can be used to obtain an accurate representation of gait
velocity.
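As a concrete illustration of the transition-time idea, the sketch below extracts room-to-room time gaps from a hypothetical PIR activation log. The room names, timestamps, and log format are invented for illustration and are not the authors' data.

```python
import datetime as dt

# Hypothetical PIR activation log: (timestamp, room) pairs, one entry per
# sensor firing. Values are illustrative only.
events = [
    (dt.datetime(2023, 1, 1, 8, 0, 0), "bedroom"),
    (dt.datetime(2023, 1, 1, 8, 0, 9), "hallway"),
    (dt.datetime(2023, 1, 1, 8, 0, 15), "hallway"),   # same-room re-fire
    (dt.datetime(2023, 1, 1, 8, 0, 21), "kitchen"),
]

def transition_times(events):
    """Time gaps between the last activation in one room and the first
    activation in the next room visited."""
    gaps = []
    for (t_prev, room_prev), (t_next, room_next) in zip(events, events[1:]):
        if room_prev != room_next:
            gaps.append(((room_prev, room_next),
                         (t_next - t_prev).total_seconds()))
    return gaps

print(transition_times(events))
# -> [(('bedroom', 'hallway'), 9.0), (('hallway', 'kitchen'), 6.0)]
```

Same-room re-activations are skipped, so the refractory period only matters when it straddles a room change.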
Using a Support Vector Regression (SVR) approach to model the relationship
between transition time and gait velocity, we show that gait velocity can be
estimated with an average error of less than 2.5 cm/s. This is demonstrated
with data collected over a 5-year period from 74 older adults monitored in their own
homes.
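The SVR mapping from transition time to gait velocity can be sketched as follows, assuming scikit-learn is available. The synthetic inverse-distance data, path length, noise level, and hyperparameters are assumptions, not the authors' data or setup.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in data: transition time (s) between two rooms vs. gait
# velocity (cm/s), generated from an assumed 400 cm room-to-room path.
rng = np.random.default_rng(0)
distance_cm = 400.0                       # assumed path length (illustrative)
t = rng.uniform(3.0, 12.0, size=200)      # transition times in seconds
v = distance_cm / t + rng.normal(0.0, 2.0, size=200)  # noisy velocities

# Fit an RBF-kernel SVR from transition time to velocity.
model = SVR(kernel="rbf", C=100.0, epsilon=1.0)
model.fit(t.reshape(-1, 1), v)

# Predict the velocity implied by a 5-second transition.
pred = model.predict(np.array([[5.0]]))
```

With these synthetic values the prediction lands near 400/5 = 80 cm/s; in the paper the model is trained on ground-truth gait velocity measurements rather than an assumed distance.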
This method is simple and cost-effective and has advantages over competing
approaches, such as obtaining 20 to 100 times more gait velocity measurements
per day and fusing location-specific information with time-stamped gait
estimates. These advantages allow stable estimates of gait parameters
(maximum or average speed, variability) at shorter time scales than current
approaches. The method also provides a pervasive, context-aware in-home
approach to gait velocity sensing that permits monitoring of gait trajectories
in space and time.
Robot Autonomy for Surgery
Autonomous surgery involves having surgical tasks performed by a robot
operating under its own control, with partial or no human involvement. There are
several important advantages of automation in surgery, which include increasing
precision of care due to sub-millimeter robot control, real-time utilization of
biosignals for interventional care, improvements to surgical efficiency and
execution, and computer-aided guidance under various medical imaging and
sensing modalities. While these methods may displace some tasks of surgical
teams and individual surgeons, they also enable new capabilities in
interventions that are too difficult for, or beyond the skills of, a human. In
this chapter, we provide an overview of robot autonomy in commercial use and in
research, and present some of the challenges faced in developing autonomous
surgical robots.
Towards retrieving force feedback in robotic-assisted surgery: a supervised neuro-recurrent-vision approach
Robotic-assisted minimally invasive surgeries have gained considerable popularity over conventional procedures, as they offer many benefits to both surgeons and patients. Nonetheless, they still suffer from limitations that affect their outcomes. One of these is the lack of force feedback, which restricts the surgeon's sense of touch and may reduce precision during a procedure. To overcome this limitation, we propose a novel force estimation approach that combines a vision-based solution with supervised learning to estimate the applied force and provide the surgeon with a suitable representation of it. The proposed solution starts by extracting the geometry of motion of the heart's surface, minimizing an energy functional to recover its 3D deformable structure. A deep network based on an LSTM-RNN architecture is then used to learn the relationship between the extracted visual-geometric information and the applied force, and to find an accurate mapping between the two. Our force estimation solution avoids the drawbacks usually associated with force sensing devices, such as biocompatibility and integration issues. We evaluate our approach on phantom and realistic tissues and report an average root-mean-square error of 0.02 N.
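To illustrate the sequence-to-force mapping described above, the following minimal NumPy sketch runs a single-layer LSTM forward pass over a sequence of per-frame geometric features and regresses a scalar force from the final hidden state. The feature dimension, hidden size, and random (untrained) weights are placeholders showing only the architecture, not the paper's trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_force(seq, params):
    """Single-layer LSTM over a feature sequence, then a linear readout."""
    W, U, b, w_out, b_out = params
    n_hid = W.shape[0] // 4
    h = np.zeros(n_hid)
    c = np.zeros(n_hid)
    for x in seq:                          # one step per video frame
        z = W @ x + U @ h + b              # all four gate pre-activations
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)         # cell-state update
        h = o * np.tanh(c)                 # hidden state
    return float(w_out @ h + b_out)        # regress force from final state

rng = np.random.default_rng(0)
n_in, n_hid = 3, 8                         # placeholder feature/hidden sizes
params = (rng.normal(0, 0.1, (4 * n_hid, n_in)),   # input weights W
          rng.normal(0, 0.1, (4 * n_hid, n_hid)),  # recurrent weights U
          np.zeros(4 * n_hid),                     # gate biases b
          rng.normal(0, 0.1, n_hid),               # readout weights
          0.0)                                     # readout bias
seq = rng.normal(0, 1, (20, n_in))         # 20 frames of geometric features
force = lstm_force(seq, params)            # scalar force estimate (N)
```

In practice the per-frame features would come from the recovered 3D surface geometry, and the weights would be learned from force-sensor ground truth.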
The 2023 wearable photoplethysmography roadmap
Photoplethysmography is a key sensing technology used in wearable devices such as smartwatches and fitness trackers. Currently, photoplethysmography sensors are used to monitor physiological parameters including heart rate and heart rhythm, and to track activities such as sleep and exercise. Yet wearable photoplethysmography has the potential to provide much more information on health and wellbeing, which could inform clinical decision making. This Roadmap outlines directions for research and development to realise the full potential of wearable photoplethysmography. Experts discuss key topics within the areas of sensor design, signal processing, clinical applications, and research directions. Their perspectives provide valuable guidance to researchers developing wearable photoplethysmography technology.
Sensors for Vital Signs Monitoring
Sensor technology for monitoring vital signs is an important topic for various service applications, such as entertainment and personalization platforms and Internet of Things (IoT) systems, as well as for traditional medical purposes, such as disease indication and prediction. Vital signs to monitor include respiration and heart rates, body temperature, blood pressure, oxygen saturation, the electrocardiogram, blood glucose concentration, brain waves, and others. Gait and stride length can also be regarded as vital signs because they indirectly indicate human activity and status. Sensing technologies include contact sensors such as the electrocardiogram (ECG), electroencephalogram (EEG), and photoplethysmogram (PPG); non-contact sensors such as ballistocardiography (BCG); and invasive/non-invasive sensors for diagnosing variations in blood characteristics or body fluids. Radar, vision, and infrared sensors can also be useful for detecting vital signs from the movement of humans or organs. Signal processing, extraction, and analysis techniques are important in industrial applications, along with hardware implementation techniques. Battery management and wireless power transmission technologies, the design and optimization of low-power circuits, and systems for continuous monitoring and data collection/transmission should also be considered alongside sensor technologies. In addition, machine-learning-based diagnostic technology can be used to extract meaningful information from continuously monitored data.
Edge-centric Optimization of Multi-modal ML-driven eHealth Applications
Smart eHealth applications deliver personalized and preventive digital
healthcare services to clients through remote sensing, continuous monitoring,
and data analytics. Smart eHealth applications sense input data from multiple
modalities, transmit the data to edge and/or cloud nodes, and process the data
with compute-intensive machine learning (ML) algorithms. Run-time variations,
including continuous streams of noisy input data, unreliable network
connections, the computational demands of ML algorithms, and the choice of
compute placement among sensor, edge, and cloud layers, affect the efficiency
of ML-driven eHealth
applications. In this chapter, we present edge-centric techniques for optimized
compute placement, exploration of accuracy-performance trade-offs, and
cross-layered sense-compute co-optimization for ML-driven eHealth applications.
We demonstrate the practical use cases of smart eHealth applications in
everyday settings, through a sensor-edge-cloud framework for an objective pain
assessment case study.
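A toy version of the compute-placement decision might look like the following; the placement options, accuracy figures, and latency budgets are illustrative assumptions, not values from the chapter.

```python
# Illustrative accuracy/latency profiles for running the ML model at each
# layer of the sensor-edge-cloud hierarchy (invented numbers).
placements = {
    "sensor": {"accuracy": 0.80, "latency_ms": 15},
    "edge":   {"accuracy": 0.90, "latency_ms": 60},
    "cloud":  {"accuracy": 0.95, "latency_ms": 250},
}

def choose_placement(budget_ms):
    """Most accurate placement whose end-to-end latency fits the budget."""
    feasible = {k: v for k, v in placements.items()
                if v["latency_ms"] <= budget_ms}
    if not feasible:
        return None
    return max(feasible, key=lambda k: feasible[k]["accuracy"])

print(choose_placement(100))   # -> 'edge'
print(choose_placement(10))    # -> None
```

A real optimizer would also account for the run-time variations the chapter lists, such as network reliability and input noise, rather than static per-layer numbers.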
Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions
Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the focus of most devices remains on improving end-effector dexterity and precision, as well as on improving access to minimally invasive surgeries. This paper provides a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of systems described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) whilst proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.