1,763 research outputs found
Multispectral Video Fusion for Non-contact Monitoring of Respiratory Rate and Apnea
Continuous monitoring of respiratory activity is desirable in many clinical
applications to detect respiratory events. Non-contact monitoring of
respiration can be achieved with near- and far-infrared spectrum cameras.
However, current technologies are not sufficiently robust to be used in
clinical applications. For example, they fail to estimate an accurate
respiratory rate (RR) during apnea. We present a novel algorithm based on
multispectral data fusion that aims at estimating RR also during apnea. The
algorithm independently addresses the RR estimation and apnea detection tasks.
Respiratory information is extracted from multiple sources and fed into an RR
estimator and an apnea detector whose results are fused into a final
respiratory activity estimation. We evaluated the system retrospectively using
data from 30 healthy adults who performed diverse controlled breathing tasks
while lying supine in a dark room and reproduced central and obstructive apneic
events. Combining multiple respiratory information from multispectral cameras
improved the root mean square error (RMSE) of the RR estimation from up to
4.64 breaths/min with monospectral data down to 1.60 breaths/min. The median F1 scores for
classifying obstructive (0.75 to 0.86) and central apnea (0.75 to 0.93) also
improved. Furthermore, the independent consideration of apnea detection led to
a more robust system (RMSE of 4.44 vs. 7.96 breaths/min). Our findings may
represent a step towards the use of cameras for vital sign monitoring in
medical applications.
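The RMSE and F1 figures quoted above can be reproduced from first principles. The sketch below uses purely illustrative numbers, not data from the study:

```python
import math

def rmse(reference, estimate):
    """Root mean square error between reference and estimated rates (breaths/min)."""
    n = len(reference)
    return math.sqrt(sum((r - e) ** 2 for r, e in zip(reference, estimate)) / n)

def f1_score(tp, fp, fn):
    """F1 score from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative values only (not from the paper)
ref = [12.0, 14.5, 16.0, 15.0]   # reference respiratory rates, breaths/min
est = [12.4, 14.0, 17.0, 14.2]   # estimated respiratory rates, breaths/min
print(round(rmse(ref, est), 2))     # RMSE in breaths/min
print(round(f1_score(9, 3, 1), 2))  # apnea-detection F1 from one confusion matrix
```

A lower RMSE means the estimated rate tracks the reference more closely; the F1 score balances missed apneic events (false negatives) against false alarms (false positives).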
Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges and Opportunities
The vast proliferation of sensor devices and Internet of Things enables the
applications of sensor-based activity recognition. However, there exist
substantial challenges that could influence the performance of the recognition
system in practical scenarios. Recently, as deep learning has demonstrated its
effectiveness in many areas, numerous deep learning methods have been
investigated to address the challenges in activity recognition. In this study, we present a
survey of the state-of-the-art deep learning methods for sensor-based human
activity recognition. We first introduce the multi-modality of the sensory data
and provide information for public datasets that can be used for evaluation in
different challenge tasks. We then propose a new taxonomy to structure the deep
methods by challenges. Challenges and challenge-related deep methods are
summarized and analyzed to form an overview of the current research progress.
At the end of this work, we discuss the open issues and provide some insights
for future directions.
Sensing with Earables: A Systematic Literature Review and Taxonomy of Phenomena
Earables have emerged as a unique platform for ubiquitous computing by augmenting ear-worn devices with state-of-the-art sensing. This new platform has spurred a wealth of new research exploring what can be detected with a small, wearable form factor. As a sensing platform, the ears are less susceptible to motion artifacts and are located in close proximity to a number of important anatomical structures, including the brain, blood vessels, and facial muscles, which reveal a wealth of information. They can be easily reached by the hands, and the ear canal itself is affected by mouth, face, and head movements. We have conducted a systematic literature review of 271 earable publications from the ACM and IEEE libraries. These were synthesized into an open-ended taxonomy of 47 different phenomena that can be sensed in, on, or around the ear. Through analysis, we identify 13 fundamental phenomena from which all other phenomena can be derived, and discuss the different sensors and sensing principles used to detect them. We comprehensively review the phenomena in four main areas of (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. This breadth highlights the potential that earables have to offer as a ubiquitous, general-purpose platform.
BodyScan: Enabling Radio-based Sensing on Wearable Devices for Contactless Activity and Vital Sign Monitoring
Wearable devices are increasingly becoming mainstream consumer products carried by millions of consumers. However, the potential impact of these devices is currently constrained by fundamental limitations of their built-in sensors. In this paper, we introduce radio as a new powerful sensing modality for wearable devices and propose to transform radio into a mobile sensor of human activities and vital signs. We present BodyScan, a wearable system that enables radio to act as a single modality capable of providing whole-body continuous sensing of the user. BodyScan overcomes key limitations of existing wearable devices by providing a contactless and privacy-preserving approach to capturing a rich variety of human activities and vital sign information. Our prototype design of BodyScan comprises two components, one worn on the hip and the other worn on the wrist, and is inspired by the increasingly prevalent scenario where a user carries a smartphone while also wearing a wristband/smartwatch. This prototype can support daily usage on a single charge per day. Experimental results show that in controlled settings, BodyScan can recognize a diverse set of human activities while also estimating the user's breathing rate with high accuracy. Even in very challenging real-world settings, BodyScan can still infer activities with an average accuracy above 60% and monitor breathing rate for a reasonable amount of time each day.
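Spectral peak picking is a common way to turn a quasi-periodic sensor stream into a breathing-rate estimate. The sketch below illustrates that generic idea, not BodyScan's actual pipeline; the sampling rate and band limits are assumptions:

```python
import numpy as np

def breathing_rate_bpm(signal, fs):
    """Estimate breathing rate as the dominant frequency in 0.1-0.7 Hz (6-42 breaths/min)."""
    signal = np.asarray(signal) - np.mean(signal)   # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)          # plausible adult breathing band
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

fs = 20.0                            # Hz, hypothetical sampling rate
t = np.arange(0, 60, 1.0 / fs)       # one minute of samples
sig = np.sin(2 * np.pi * 0.25 * t)   # synthetic 0.25 Hz breathing signal
print(breathing_rate_bpm(sig, fs))   # 15.0 breaths/min
```

Restricting the search to a physiologically plausible band is what makes the peak pick robust to low-frequency drift and high-frequency motion noise.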
MultiIoT: Towards Large-scale Multisensory Learning for the Internet of Things
The Internet of Things (IoT), the network integrating billions of smart
physical devices embedded with sensors, software, and communication
technologies for the purpose of connecting and exchanging data with other
devices and systems, is a critical and rapidly expanding component of our
modern world. The IoT ecosystem provides a rich source of real-world modalities
such as motion, thermal, geolocation, imaging, depth, video, and audio
for prediction tasks involving the pose, gaze, activities, and gestures of
humans, as well as the touch, contact, pose, and 3D structure of physical objects. Machine
learning presents a rich opportunity to automatically process IoT data at
scale, enabling efficient inference for impact in understanding human
wellbeing, controlling physical devices, and interconnecting smart cities. To
develop machine learning technologies for IoT, this paper proposes MultiIoT,
the most expansive IoT benchmark to date, encompassing over 1.15 million
samples from 12 modalities and 8 tasks. MultiIoT introduces unique challenges
involving (1) learning from many sensory modalities, (2) fine-grained
interactions across long temporal ranges, and (3) extreme heterogeneity due to
unique structure and noise topologies in real-world sensors. We also release a
set of strong modeling baselines, spanning modality and task-specific methods
to multisensory and multitask models to encourage future research in
multisensory representation learning for IoT.
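The multisensory baselines such a benchmark spans often use late fusion of per-modality embeddings. The sketch below illustrates that pattern; every name and tensor shape is an illustrative assumption, not the MultiIoT API:

```python
import numpy as np

def encode(x, w):
    """Per-modality encoder: a linear projection standing in for a learned network."""
    return x @ w

def late_fusion(embeddings):
    """Fuse per-modality embeddings into one representation by averaging."""
    return np.mean(np.stack(embeddings, axis=0), axis=0)

rng = np.random.default_rng(0)
motion = rng.normal(size=(4, 6))    # 4 samples, 6 motion features (hypothetical)
audio = rng.normal(size=(4, 10))    # 4 samples, 10 audio features (hypothetical)
w_motion = rng.normal(size=(6, 8))  # project each modality into a shared 8-d space
w_audio = rng.normal(size=(10, 8))

fused = late_fusion([encode(motion, w_motion), encode(audio, w_audio)])
print(fused.shape)  # (4, 8)
```

Mapping heterogeneous inputs into a shared embedding space before fusing is one simple way to cope with the differing dimensionality and noise characteristics of real-world sensors.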
Design of Novel Sensors and Instruments for Minimally Invasive Lung Tumour Localization via Palpation
Minimally Invasive Thoracoscopic Surgery (MITS) has become the treatment of choice for lung cancer. However, MITS prevents the surgeons from using manual palpation, thereby often making it challenging to reliably locate the tumours for resection. This thesis presents the design, analysis and validation of novel tactile sensors, a novel miniature force sensor, a robotic instrument, and a wireless hand-held instrument to address this limitation. The low-cost, disposable tactile sensors have been shown to easily detect a 5 mm tumour located 10 mm deep in soft tissue. The force sensor can measure six-degree-of-freedom forces and torques with temperature compensation using a single optical fiber. The robotic instrument is compatible with the da Vinci surgical robot and allows the use of tactile sensing, force sensing and ultrasound to localize the tumours. The wireless hand-held instrument allows the use of tactile sensing in procedures where a robot is not available.