DistancePPG: Robust non-contact vital signs monitoring using a camera
Vital signs such as pulse rate and breathing rate are currently measured
using contact probes, but non-contact methods for measuring vital signs are
desirable both in hospital settings (e.g., in the NICU) and for ubiquitous
in-situ health tracking (e.g., on mobile phones and computers with webcams).
Recently, camera-based non-contact vital sign monitoring has been shown to be
feasible. However, camera-based vital sign monitoring is challenging for
people with darker skin tones, under low lighting conditions, and/or when the
individual moves in front of the camera. In this paper, we propose
distancePPG, a new camera-based vital sign estimation algorithm that
addresses these challenges. DistancePPG combines skin-color change signals
from different tracked regions of the face using a weighted average, where
the weights depend on the blood perfusion and incident light intensity in
each region, to improve the signal-to-noise ratio (SNR) of the camera-based
estimate.
One of our key contributions is a new automatic method for determining the
weights based only on the video recording of the subject. The gains in SNR of
the camera-based PPG estimated using distancePPG translate into a reduction
of the error in vital sign estimation, and thus expand the scope of
camera-based vital sign monitoring to potentially challenging scenarios.
Further, we will release a dataset comprising synchronized video recordings
of the face and pulse-oximeter-based ground truth recordings from the earlobe
for people with different skin tones, under different lighting conditions,
and for various motion scenarios.
Comment: 24 pages, 11 figures
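The abstract's weighted-average combination can be sketched as follows. This is a minimal illustration only: the function name is invented, and the weights are assumed given, whereas distancePPG's key contribution is determining them automatically from the video based on per-region blood perfusion and incident light intensity.

```python
import numpy as np

def combine_region_signals(region_signals, weights):
    """Combine per-region skin-color change signals into one PPG estimate.

    region_signals: array of shape (n_regions, n_samples), each row the
      color-change signal from one tracked face region.
    weights: array of shape (n_regions,), reflecting each region's signal
      quality (assumed given here; distancePPG estimates these from video).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalize weights to sum to 1
    return w @ np.asarray(region_signals)    # weighted average over regions

# Toy example: two regions carry the same 1 Hz pulse with different noise
# levels; weighting the cleaner region more raises the SNR of the estimate.
t = np.linspace(0, 10, 300)
pulse = np.sin(2 * np.pi * 1.0 * t)
rng = np.random.default_rng(0)
regions = np.stack([pulse + 0.1 * rng.standard_normal(t.size),
                    pulse + 1.0 * rng.standard_normal(t.size)])
combined = combine_region_signals(regions, weights=[0.9, 0.1])
```

Giving the low-noise region most of the weight makes the combined signal track the underlying pulse more closely than the noisier region alone, which is the intuition behind weighting by perfusion and light intensity.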
Learning Articulated Motions From Visual Demonstration
Many functional elements of human homes and workplaces consist of rigid
components which are connected through one or more sliding or rotating
linkages. Examples include doors and drawers of cabinets and appliances;
laptops; and swivel office chairs. A robotic mobile manipulator would benefit
from the ability to acquire kinematic models of such objects from observation.
This paper describes a method by which a robot can acquire an object model by
capturing depth imagery of the object as a human moves it through its range of
motion. We envision that in the future, a machine newly introduced to an
environment could be shown the articulated objects particular to that
environment by its human user, inferring from these "visual demonstrations"
enough information to actuate each object independently of the user.
Our method employs sparse (markerless) feature tracking, motion segmentation,
component pose estimation, and articulation learning; it does not require prior
object models. Using the method, a robot can observe an object being exercised,
infer a kinematic model incorporating rigid, prismatic and revolute joints,
then use the model to predict the object's motion from a novel vantage point.
We evaluate the method's performance, and compare it to that of a previously
published technique, for a variety of household objects.
Comment: Published in Robotics: Science and Systems X, Berkeley, CA. ISBN:
978-0-9923747-0-
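The final step the abstract describes, using an inferred kinematic model to predict an object's motion, can be sketched for a revolute joint. This is an illustrative stand-in, not the paper's method: the function name and parameterization (axis point, unit axis, joint angle) are assumptions, and the rotation is implemented with the standard Rodrigues formula.

```python
import numpy as np

def revolute_predict(center, axis, angle, points):
    """Predict where rigid-body points move under a revolute joint.

    center: a 3-vector on the rotation axis; axis: 3-vector axis direction;
    angle: joint angle in radians; points: (n, 3) array of body points.
    (Illustrative parameterization; the paper's own model is not shown here.)
    """
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    # Rodrigues' rotation formula: R = I + sin(a) K + (1 - cos(a)) K^2,
    # where K is the cross-product (skew-symmetric) matrix of the axis.
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    p = np.asarray(points, dtype=float) - center
    return p @ R.T + center

# Toy example: a point on a "door" 1 m from a vertical hinge at the origin,
# swung 90 degrees about the z-axis.
hinge = np.zeros(3)
pred = revolute_predict(hinge, [0.0, 0.0, 1.0], np.pi / 2, [[1.0, 0.0, 0.0]])
```

A prismatic joint would be the analogous sketch with a translation along a unit axis instead of a rotation about it; a full model of the kind the paper infers combines such joints with the rigid components they connect.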