A standard-based Body Sensor Network system proposal
Body Area Networks are a solution for remote monitoring, acquiring the
vital signs of patients. Current sensors each expose their own interface,
which makes them difficult to integrate into a system. Using standardized
protocols and interfaces increases the usability and interoperability of
different sensors; the IEEE 1451 standard has been defined to achieve this
goal. This paper presents a proposal for a telemedicine system, built on an
open implementation of the IEEE 1451 standard, to be used in several
different situations.
Junta de Andalucía P08-TIC-363
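The integration benefit of IEEE 1451 comes from self-describing Transducer Electronic Data Sheets (TEDS): a gateway reads a sensor's TEDS instead of needing sensor-specific driver code. As a minimal sketch of that idea (the class and field names below are illustrative; the real standard defines compact binary TEDS layouts, not this Python structure):

```python
from dataclasses import dataclass

@dataclass
class TransducerTEDS:
    # Illustrative TEDS-like descriptor; actual IEEE 1451 TEDS are
    # binary structures with standard-defined field codes.
    manufacturer_id: int
    model_number: int
    serial_number: int
    physical_units: str      # e.g. "bpm" for a heart-rate sensor
    min_value: float
    max_value: float

class BodySensor:
    """A sensor that describes itself via its TEDS, so a gateway can
    integrate it without per-sensor driver code."""
    def __init__(self, teds: TransducerTEDS, read_fn):
        self.teds = teds
        self._read_fn = read_fn

    def read(self) -> float:
        value = self._read_fn()
        lo, hi = self.teds.min_value, self.teds.max_value
        return min(max(value, lo), hi)   # clamp to the declared range

# A hypothetical heart-rate sensor announcing its own metadata
hr_sensor = BodySensor(
    TransducerTEDS(0x1A2B, 7, 42, "bpm", 30.0, 220.0),
    read_fn=lambda: 72.0,
)
print(hr_sensor.teds.physical_units, hr_sensor.read())
```

A gateway can then treat every `BodySensor` uniformly, consulting the TEDS for units and valid ranges at run time.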
Wireless body sensor networks for health-monitoring applications
This is an author-created, un-copyedited version of an article accepted for publication in
Physiological Measurement. The publisher is
not responsible for any errors or omissions in this version of the manuscript or any version
derived from it. The Version of Record is available online at http://dx.doi.org/10.1088/0967-3334/29/11/R01
Deep Detection of People and their Mobility Aids for a Hospital Robot
Robots operating in populated environments encounter many different types of
people, some of whom might have an advanced need for cautious interaction,
because of physical impairments or their advanced age. Robots therefore need to
recognize such advanced demands to provide appropriate assistance, guidance or
other forms of support. In this paper, we propose a depth-based perception
pipeline that estimates the position and velocity of people in the environment
and categorizes them according to the mobility aids they use: pedestrian,
person in wheelchair, person in a wheelchair with a person pushing them, person
with crutches and person using a walker. We present a fast region proposal
method that feeds a Region-based Convolutional Network (Fast R-CNN). With this,
we speed up the object detection process by a factor of seven compared to a
dense sliding window approach. We furthermore propose a probabilistic position,
velocity and class estimator to smooth the CNN's detections and account for
occlusions and misclassifications. In addition, we introduce a new hospital
dataset with over 17,000 annotated RGB-D images. Extensive experiments confirm
that our pipeline successfully keeps track of people and their mobility aids,
even in challenging situations with multiple people from different categories
and frequent occlusions. Videos of our experiments and the dataset are
available at http://www2.informatik.uni-freiburg.de/~kollmitz/MobilityAids
Comment: 7 pages, ECMR 2017, dataset and videos: http://www2.informatik.uni-freiburg.de/~kollmitz/MobilityAids
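The abstract's probabilistic position and velocity estimator smooths noisy per-frame detections. The authors' exact filter is not reproduced here, but the core idea can be sketched with a standard constant-velocity Kalman filter (the 1-D setup and all noise parameters below are illustrative assumptions):

```python
import numpy as np

def kalman_track(detections, dt=0.1, q=0.5, r=0.2):
    """Smooth noisy 1-D position detections with a constant-velocity
    Kalman filter, estimating position and velocity jointly."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we only observe position
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])     # process noise
    R = np.array([[r]])                     # measurement noise
    x = np.array([[detections[0]], [0.0]])  # initial state [pos, vel]
    P = np.eye(2)
    out = []
    for z in detections:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new detection
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append((float(x[0, 0]), float(x[1, 0])))
    return out

# A person walking at roughly 1 m/s, observed with detection noise
track = kalman_track([0.0, 0.11, 0.19, 0.32, 0.41, 0.48, 0.61])
```

During occlusions the same filter can run its predict step alone, which is one common way such estimators bridge missed detections.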
Towards retrieving force feedback in robotic-assisted surgery: a supervised neuro-recurrent-vision approach
Robotic-assisted minimally invasive surgeries have gained popularity over conventional procedures, as they offer many benefits to both surgeons and patients. Nonetheless, they still suffer from limitations that affect their outcome. One of them is the lack of force feedback, which restricts the surgeon's sense of touch and might reduce precision during a procedure. To overcome this limitation, we propose a novel force estimation approach that combines a vision-based solution with supervised learning to estimate the applied force and provide the surgeon with a suitable representation of it. The proposed solution starts by extracting the geometry of motion of the heart's surface, minimizing an energy functional to recover its 3D deformable structure. A deep network, based on an LSTM-RNN architecture, is then used to learn the relationship between the extracted visual-geometric information and the applied force, and to find an accurate mapping between the two. Our proposed force estimation solution avoids the drawbacks usually associated with force-sensing devices, such as biocompatibility and integration issues. We evaluate our approach on phantom and realistic tissues, on which we report an average root-mean-square error of 0.02 N.
Peer Reviewed. Postprint (author's final draft)
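The LSTM-RNN in this abstract maps a sequence of visual-geometric features to an applied force. As a hedged sketch of that mapping (not the authors' architecture; dimensions, weights, and the linear readout are illustrative), one LSTM step in NumPy unrolled over a feature sequence:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates stacked as [input, forget, cell, output]."""
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2 * n]))     # forget gate
    g = np.tanh(z[2 * n:3 * n])           # candidate cell state
    o = 1 / (1 + np.exp(-z[3 * n:]))      # output gate
    c = f * c + i * g                     # new cell state
    h = o * np.tanh(c)                    # new hidden state
    return h, c

rng = np.random.default_rng(0)
d_in, d_h = 6, 8                      # illustrative feature/hidden sizes
W = rng.normal(0, 0.1, (4 * d_h, d_in))
U = rng.normal(0, 0.1, (4 * d_h, d_h))
b = np.zeros(4 * d_h)
w_out = rng.normal(0, 0.1, d_h)       # linear readout to a scalar force

h = np.zeros(d_h)
c = np.zeros(d_h)
for x in rng.normal(size=(20, d_in)): # a 20-frame feature sequence
    h, c = lstm_step(x, h, c, W, U, b)
force_estimate = float(w_out @ h)     # estimated applied force (in N)
```

In training, `W`, `U`, `b`, and `w_out` would be fit by regressing the readout against ground-truth force measurements; here they are random placeholders.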
Agile Data Offloading over Novel Fog Computing Infrastructure for CAVs
Future Connected and Automated Vehicles (CAVs) will be supervised by
cloud-based systems overseeing the overall security and orchestrating traffic
flows. Such systems rely on data collected from CAVs across the whole city
operational area. This paper develops a Fog Computing-based infrastructure for
future Intelligent Transportation Systems (ITSs) enabling an agile and reliable
off-load of CAV data. Since CAVs are expected to generate large quantities of
data, it is not feasible to assume data off-loading to be completed while a CAV
is in the proximity of a single Road-Side Unit (RSU). CAVs are expected to be
in the range of an RSU only for a limited amount of time, necessitating data
reconciliation across different RSUs, if traditional approaches to data
off-load were to be used. To this end, this paper proposes an agile Fog
Computing infrastructure, which interconnects all the RSUs so that the data
reconciliation is solved efficiently as a by-product of deploying the Random
Linear Network Coding (RLNC) technique. Our numerical results confirm the
feasibility of our solution and show its effectiveness when operated in a
large-scale urban testbed.
Comment: To appear in IEEE VTC-Spring 201
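With RLNC, each RSU forwards linear combinations of the data chunks it has received, and the cloud decodes the full data set once it holds enough independent combinations, regardless of which RSU delivered each one, which is how reconciliation falls out as a by-product. A sketch over GF(2), where combining is XOR (the systematic-plus-random scheme and chunk sizes are illustrative, not the paper's configuration):

```python
import random

def rlnc_encode(packets, n_extra, rng):
    """Tag each coded packet with its GF(2) coefficient vector. The
    systematic copies plus random XOR combinations model what a CAV
    and several RSUs would emit between them."""
    k = len(packets)
    coded = [([1 if j == i else 0 for j in range(k)], p)
             for i, p in enumerate(packets)]          # systematic part
    for _ in range(n_extra):                          # redundant part
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Gaussian elimination over GF(2): recover the k source packets
    from any set of combinations with full rank."""
    rows = [(list(c), p) for c, p in coded]
    for col in range(k):
        pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           rows[i][1] ^ rows[col][1])
    return [rows[i][1] for i in range(k)]

rng = random.Random(1)
source = [0x11, 0x22, 0x33, 0x44]         # CAV data chunks (as ints)
coded = rlnc_encode(source, n_extra=4, rng=rng)
rng.shuffle(coded)                        # arrival order across RSUs varies
recovered = rlnc_decode(coded, len(source))
```

Because the coefficient vectors travel with the payloads, the decoder never needs to know which RSU produced which packet; out-of-order and duplicated deliveries are handled by the same elimination step.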