DistancePPG: Robust non-contact vital signs monitoring using a camera
Vital signs such as pulse rate and breathing rate are currently measured
using contact probes, but non-contact methods for measuring vital signs are
desirable both in hospital settings (e.g. in the NICU) and for ubiquitous
in-situ health tracking (e.g. on mobile phones and computers with webcams).
Recently, camera-based non-contact vital sign monitoring has been shown to be
feasible.
However, camera-based vital sign monitoring is challenging for people with
darker skin tone, under low lighting conditions, and/or during movement of an
individual in front of the camera. In this paper, we propose distancePPG, a new
camera-based vital sign estimation algorithm which addresses these challenges.
DistancePPG proposes a new method of combining skin-color change signals from
different tracked regions of the face using a weighted average, where the
weights depend on the blood perfusion and incident light intensity in each
region, to improve the signal-to-noise ratio (SNR) of the camera-based
estimate.
One of our key contributions is a new automatic method for determining the
weights based only on the video recording of the subject. The gains in SNR of
camera-based PPG estimated using distancePPG translate into a reduction of the
error in vital sign estimation, and thus expand the scope of camera-based
vital sign monitoring to potentially challenging scenarios. Further, a dataset
will be released comprising synchronized video recordings of the face and
pulse-oximeter ground-truth recordings from the earlobe, for people with
different skin tones, under different lighting conditions, and for various
motion scenarios.

Comment: 24 pages, 11 figures
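A minimal sketch of the combining step described above, assuming per-region skin-color signals have already been extracted and tracked. The weights here come from a crude in-band/out-of-band spectral power ratio, a stand-in for the paper's perfusion- and intensity-based weighting; all names and the pulse band are illustrative, not the paper's API.

```python
import numpy as np

def combine_region_signals(region_signals, fps, pulse_band=(0.5, 5.0)):
    """Weighted average of per-region PPG signals (sketch, not distancePPG).

    region_signals: array of shape (n_regions, n_samples), one temporal
    skin-color change signal per tracked face region.
    """
    n_regions, n_samples = region_signals.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fps)
    in_band = (freqs >= pulse_band[0]) & (freqs <= pulse_band[1])

    weights = np.empty(n_regions)
    for i, sig in enumerate(region_signals):
        spectrum = np.abs(np.fft.rfft(sig - sig.mean())) ** 2
        signal_power = spectrum[in_band].sum()
        noise_power = spectrum[~in_band].sum() + 1e-12
        weights[i] = signal_power / noise_power  # crude per-region SNR proxy

    weights /= weights.sum()
    return weights @ region_signals  # SNR-weighted average across regions
```

The key design point is that well-perfused, well-lit regions concentrate power in the pulse band and therefore dominate the average, while noisy regions are attenuated rather than discarded.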
An Efficient and Cost Effective FPGA Based Implementation of the Viola-Jones Face Detection Algorithm
We present a field-programmable gate array (FPGA) based implementation of the popular Viola-Jones face detection algorithm, which is an essential building block in many applications such as video surveillance and tracking. Our implementation is a complete system-level hardware design described in a hardware description language and validated on the affordable DE2-115 evaluation board. Our primary objective is to study the achievable performance with a low-end FPGA chip based implementation. In addition, we release the entire project to the public domain. We hope that this will enable other researchers to easily replicate and compare their results to ours, and that it will encourage and facilitate further research and educational ideas in the areas of image processing, computer vision, and advanced digital design and FPGA prototyping.
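When validating a hardware design like this, a software reference is useful for generating golden outputs to compare against the FPGA frame by frame. The sketch below uses OpenCV's stock Viola-Jones (Haar cascade) face detector as such a counterpart; it is not part of the released project, and the input file name is an assumed test frame.

```python
import cv2

# OpenCV ships pre-trained Haar cascades for Viola-Jones face detection;
# this serves as a software reference implementation of the same algorithm.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("frame.png")  # assumed test frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors trade detection rate against false positives.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("frame_detections.png", img)
```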
The H1 Forward Proton Spectrometer at HERA
The forward proton spectrometer is part of the H1 detector at the HERA
collider. Protons with energies above 500 GeV and polar angles below 1 mrad can
be detected by this spectrometer. The main detector components are
scintillating fiber detectors read out by position-sensitive photo-multipliers.
These detectors are housed in so-called Roman Pots which allow them to be moved
close to the circulating proton beam. Four Roman Pot stations are located at
distances between 60 m and 90 m from the interaction point.

Comment: 20 pages, 10 figures, submitted to Nucl. Instr. and Methods
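A back-of-the-envelope check using only the numbers quoted above shows why the detectors must be movable: at the 1 mrad acceptance limit, scattered protons are displaced just a few centimetres from the beam axis by the time they reach the Roman Pot stations.

```python
import math

# Transverse displacement of a scattered proton at the Roman Pot stations:
# x = z * tan(theta) ~ z * theta for small angles.
theta_max = 1e-3  # rad, upper polar-angle acceptance quoted above
for z in (60.0, 90.0):  # m, station distances from the interaction point
    x = z * math.tan(theta_max)
    print(f"z = {z:4.0f} m -> max displacement ~ {x * 100:.0f} cm")
# Protons stay within roughly 6-9 cm of the beam axis, hence the movable
# Roman Pots that bring the fiber detectors close to the circulating beam.
```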
Detect or Track: Towards Cost-Effective Video Object Detection/Tracking
State-of-the-art object detectors and trackers are developing fast. Trackers
are in general more efficient than detectors but bear the risk of drifting. A
question is hence raised -- how to improve the accuracy of video object
detection/tracking by utilizing the existing detectors and trackers within a
given time budget? A baseline is frame skipping -- detecting every N-th frame
and tracking for the frames in between. This baseline, however, is suboptimal
since the detection frequency should depend on the tracking quality. To this
end, we propose a scheduler network, which determines whether to detect or
track at a certain frame, as a generalization of Siamese trackers. Although
lightweight and simple in structure, the scheduler network is more effective
than the frame-skipping baselines and flow-based approaches, as validated on
the ImageNet VID dataset in video object detection/tracking.

Comment: Accepted to AAAI 2019
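A toy sketch contrasting the two scheduling policies above: fixed frame skipping versus detection triggered when tracking quality degrades. The threshold rule is only a stand-in for the learned scheduler network; `detector`, `tracker`, and all parameters are assumed callables and values, not the paper's API.

```python
def process_video(frames, detector, tracker, n=10, quality_floor=0.5):
    """Run detection every n-th frame OR when tracker confidence drops.

    detector(frame) -> boxes; tracker(frame, boxes) -> (boxes, quality).
    Pure frame skipping would use only the `t % n == 0` condition.
    """
    boxes = None
    results = []
    for t, frame in enumerate(frames):
        tracked, quality = (tracker(frame, boxes) if boxes is not None
                            else (None, 0.0))
        if t % n == 0 or quality < quality_floor:
            boxes = detector(frame)  # expensive, but resets tracker drift
        else:
            boxes = tracked          # cheap, but may drift over time
        results.append(boxes)
    return results
```

The point of the adaptive condition is that detection effort is spent where tracking is actually failing, instead of at a fixed rate regardless of difficulty.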
Interactive multiple object learning with scanty human supervision
© 2016. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/

We present a fast and online human-robot interaction approach that progressively learns multiple object classifiers using scanty human supervision. Given an input video stream recorded during the human-robot interaction, the user only needs to annotate a small fraction of frames to compute object-specific classifiers based on random ferns which share the same features. The resulting methodology is fast (complex object appearances can be learned in a few seconds), versatile (it can be applied to unconstrained scenarios), scalable (real experiments show we can model up to 30 different object classes), and minimizes the amount of human intervention by leveraging the uncertainty measures associated with each classifier. We thoroughly validate the approach on synthetic data and on real sequences acquired with a mobile platform in indoor and outdoor scenarios containing a multitude of different objects. We show that with little human assistance, we are able to build object classifiers robust to viewpoint changes, partial occlusions, varying lighting, and cluttered backgrounds. (C) 2016 Elsevier Inc. All rights reserved.
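A minimal sketch of the feature-sharing idea, assuming flattened fixed-size grayscale patches as input: every class reuses the same random binary pixel tests, so adding a new object class only adds one table of leaf counts that can be updated online. Class structure and parameters are illustrative, not the paper's implementation.

```python
import numpy as np

class SharedRandomFerns:
    """Online random-fern classifier with tests shared across classes."""

    def __init__(self, n_ferns=30, depth=8, patch_dim=32 * 32, seed=0):
        rng = np.random.default_rng(seed)
        # Each binary test compares two pixel positions within a patch;
        # the same tests are shared by every object class.
        self.pairs = rng.integers(0, patch_dim, size=(n_ferns, depth, 2))
        self.n_leaves = 2 ** depth
        self.counts = {}  # class label -> (n_ferns, 2**depth) leaf counts

    def _leaf_indices(self, patch):
        # Encode the depth binary test outcomes as one leaf index per fern.
        bits = patch[self.pairs[..., 0]] > patch[self.pairs[..., 1]]
        return (bits * (1 << np.arange(bits.shape[1]))).sum(axis=1)

    def update(self, patch, label):
        # Online update from a single annotated patch (Laplace-smoothed).
        if label not in self.counts:
            self.counts[label] = np.ones((len(self.pairs), self.n_leaves))
        self.counts[label][np.arange(len(self.pairs)),
                           self._leaf_indices(patch)] += 1

    def predict(self, patch):
        # Semi-naive Bayes: sum per-fern log posteriors over all ferns.
        idx = self._leaf_indices(patch)
        scores = {c: np.log(tab[np.arange(len(tab)), idx] /
                            tab.sum(axis=1)).sum()
                  for c, tab in self.counts.items()}
        return max(scores, key=scores.get)
```

Because only the count tables grow with the number of classes, the per-frame feature computation stays constant, which is what makes scaling to tens of object classes cheap.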