3D Human Pose Estimation on a Configurable Bed from a Pressure Image
Robots have the potential to assist people in bed, such as in healthcare
settings, yet bedding materials like sheets and blankets can make observation
of the human body difficult for robots. A pressure-sensing mat on a bed can
provide pressure images that are relatively insensitive to bedding materials.
However, prior work on estimating human pose from pressure images has been
restricted to 2D pose estimates and flat beds. In this work, we present two
convolutional neural networks to estimate the 3D joint positions of a person in
a configurable bed from a single pressure image. The first network directly
outputs 3D joint positions, while the second outputs a kinematic model that
includes estimated joint angles and limb lengths. We evaluated our networks on
data from 17 human participants with two bed configurations: supine and seated.
Our networks achieved a mean joint position error of 77 mm when tested with
data from people outside the training set, outperforming several baselines. We
also present a simple mechanical model that provides insight into ambiguity
associated with limbs raised off of the pressure mat, and demonstrate that
Monte Carlo dropout can be used to estimate pose confidence in these
situations. Finally, we provide a demonstration in which a mobile manipulator
uses our network's estimated kinematic model to reach a location on a person's
body in spite of the person being seated in a bed and covered by a blanket.Comment: 8 pages, 10 figure
Multidimensional Capacitive Sensing for Robot-Assisted Dressing and Bathing
Robotic assistance presents an opportunity to benefit the lives of many
people with physical disabilities, yet accurately sensing the human body and
tracking human motion remain difficult for robots. We present a
multidimensional capacitive sensing technique that estimates the local pose of
a human limb in real time. A key benefit of this sensing method is that it can
sense the limb through opaque materials, including fabrics and wet cloth. Our
method uses a multielectrode capacitive sensor mounted to a robot's end
effector. A neural network model estimates the position of the closest point on
a person's limb and the orientation of the limb's central axis relative to the
sensor's frame of reference. These pose estimates enable the robot to move its
end effector with respect to the limb using feedback control. We demonstrate
that a PR2 robot can use this approach with a custom six electrode capacitive
sensor to assist with two activities of daily living: dressing and bathing. The
robot pulled the sleeve of a hospital gown onto able-bodied participants' right
arms, while tracking human motion. When assisting with bathing, the robot moved
a soft wet washcloth to follow the contours of able-bodied participants' limbs,
cleaning their surfaces. Overall, we found that multidimensional capacitive
sensing presents a promising approach for robots to sense and track the human
body during assistive tasks that require physical human-robot interaction.
Comment: 8 pages, 16 figures, International Conference on Rehabilitation
Robotics 201
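The feedback-control step described above can be sketched as a proportional servo: given the network's estimate of the closest point on the limb and the limb's central axis (both in the sensor frame), command the end effector to hold a standoff distance and stay aligned with the limb. The gains and the 5 cm standoff are assumptions for illustration, not values from the paper:

```python
import numpy as np

K_POS, K_YAW, STANDOFF = 2.0, 1.5, 0.05   # hypothetical gains; 5 cm standoff

def servo_command(closest_point, limb_axis):
    """closest_point: estimated limb point in the sensor frame (m);
    limb_axis: estimated direction of the limb's central axis."""
    desired = np.array([0.0, 0.0, -STANDOFF])          # limb 5 cm below sensor
    v = K_POS * (np.asarray(closest_point) - desired)  # translational command
    axis = np.asarray(limb_axis) / np.linalg.norm(limb_axis)
    yaw_rate = K_YAW * np.arctan2(axis[1], axis[0])    # align x-axis with limb
    return v, yaw_rate
```

Because the capacitive estimates update in real time, running this loop at the sensor rate lets the end effector track the limb as the person moves, which is what enables the dressing and bathing demonstrations.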
Under the Cover Infant Pose Estimation using Multimodal Data
Infant pose monitoring during sleep has multiple applications in both
healthcare and home settings. In a healthcare setting, pose detection can be
used for region-of-interest detection and movement detection for
noncontact-based monitoring systems. In a home setting, pose detection can be
used to detect sleep positions, which have been shown to have a strong
influence on multiple
health factors. However, pose monitoring during sleep is challenging due to
heavy occlusions from blanket coverings and low lighting. To address this, we
present a novel dataset, Simultaneously-collected multimodal Mannequin Lying
pose (SMaL) dataset, for under the cover infant pose estimation. We collect
depth and pressure imagery of an infant mannequin in different poses under
various cover conditions. We successfully infer full body pose under the cover
by training state-of-the-art pose estimation methods and leveraging existing
multimodal adult pose datasets for transfer learning. We demonstrate a
hierarchical pretraining strategy for transformer-based models to significantly
improve performance on our dataset. Our best performing model was able to
detect joints under the cover to within 25 mm 86% of the time, with an overall
mean error of 16.9 mm. Data, code, and models are publicly available at
https://github.com/DanielKyr/SMa
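The "within 25 mm 86% of the time" figure corresponds to a PCK-style metric: the fraction of predicted joints whose 3D error falls under a distance threshold, reported alongside the mean per-joint error. A minimal sketch (the joint count and values below are made up for illustration):

```python
import numpy as np

def joint_metrics(pred, gt, threshold_mm=25.0):
    """pred, gt: (J, 3) joint positions in millimetres.
    Returns mean per-joint error and fraction of joints within threshold."""
    err = np.linalg.norm(pred - gt, axis=1)   # Euclidean error per joint
    return err.mean(), (err <= threshold_mm).mean()

gt = np.zeros((4, 3))                          # toy ground-truth skeleton
pred = np.array([[10, 0, 0], [0, 20, 0], [0, 0, 30], [5, 5, 5]], dtype=float)
mean_err, pck = joint_metrics(pred, gt)
```

Here three of the four toy joints land within 25 mm, so `pck` is 0.75; the paper's reported numbers are the same two statistics computed over its test set.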
Modeling Humans at Rest with Applications to Robot Assistance
Humans spend a large part of their lives resting. Machine perception of this class of body poses would be beneficial to numerous applications, but it is complicated by line-of-sight occlusion from bedding. Pressure sensing mats are a promising alternative, but data is challenging to collect at scale. To overcome this, we use modern physics engines to simulate bodies resting on a soft bed with a pressure sensing mat. This method can efficiently generate data at scale for training deep neural networks. We present a deep model trained on this data that infers 3D human pose and body shape from a pressure image, and show that it transfers well to real world data. We also present a model that infers pose, shape and contact pressure from a depth image facing the person in bed, and it does so in the presence of blankets. This model similarly benefits from synthetic data, which is created by simulating blankets on the bodies in bed. We evaluate this model on real world data and compare it to an existing method that requires RGB, depth, thermal and pressure imagery in the input. Our model only requires an input depth image, yet it is 12% more accurate. Our methods are relevant to applications in healthcare, including patient acuity monitoring and pressure injury prevention. We demonstrate this work in the context of robotic caregiving assistance, by using it to control a robot to move to locations on a person’s body in bed. (Ph.D. thesis)
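The synthetic-data idea can be illustrated with a toy pressure-image renderer: each simulated contact point between the body and the mat deposits a Gaussian blob of pressure on a taxel grid. The 64x27 grid and blob width are assumptions for illustration, not the thesis's physics-engine pipeline:

```python
import numpy as np

def pressure_image(contacts, forces, shape=(64, 27), sigma=1.5):
    """contacts: list of (row, col) taxel coordinates of contact points;
    forces: peak pressure deposited by each contact (arbitrary units)."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    img = np.zeros(shape)
    for (r, c), f in zip(contacts, forces):
        # Each contact contributes a Gaussian pressure blob.
        img += f * np.exp(-((rows - r)**2 + (cols - c)**2) / (2 * sigma**2))
    return img

img = pressure_image([(10, 10), (40, 15)], [1.0, 0.5])
```

Rendering many such images from simulated resting poses, with ground-truth pose and shape attached for free, is what lets the deep models train at a scale that real mat recordings cannot match.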
LInKs "Lifting Independent Keypoints" -- Partial Pose Lifting for Occlusion Handling with Improved Accuracy in 2D-3D Human Pose Estimation
We present LInKs, a novel unsupervised learning method to recover 3D human
poses from 2D kinematic skeletons obtained from a single image, even when
occlusions are present. Our approach follows a unique two-step process, which
involves first lifting the occluded 2D pose to the 3D domain, followed by
filling in the occluded parts using the partially reconstructed 3D coordinates.
This lift-then-fill approach leads to significantly more accurate results
compared to models that complete the pose in 2D space alone. Additionally, we
improve the stability and likelihood estimation of normalising flows through a
custom sampling function replacing PCA dimensionality reduction previously used
in prior work. Furthermore, we are the first to investigate whether different
parts of the 2D kinematic skeleton can be lifted independently, which we find
by itself reduces the error of current lifting approaches. We attribute this to
the reduction of long-range keypoint correlations. In our detailed evaluation,
we quantify the error under various realistic occlusion scenarios, showcasing
the versatility and applicability of our model. Our results consistently
demonstrate the superiority of handling all types of occlusions in 3D space
when compared to others that complete the pose in 2D space. Our approach also
exhibits consistent accuracy in scenarios without occlusion, as evidenced by a
7.9% reduction in reconstruction error compared to prior works on the Human3.6M
dataset. Furthermore, our method excels in accurately retrieving complete 3D
poses even in the presence of occlusions, making it highly applicable in
situations where complete 2D pose information is unavailable.
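The lift-then-fill order can be sketched on a toy five-joint chain: occluded joints are left out of the 2D-to-3D lift and are instead filled in afterwards from their already-lifted 3D neighbours. The "lifter" below is a zero-depth stand-in and the fill is a neighbour mean, not the paper's normalising-flow model; the point is only the ordering of the two steps:

```python
import numpy as np

# Toy kinematic chain: joint j is connected to its listed neighbours.
NEIGHBOURS = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

def lift(kp2d):
    """Stand-in lifter: assign zero depth to each visible 2D keypoint."""
    return np.hstack([kp2d, np.zeros((len(kp2d), 1))])

def lift_then_fill(kp2d, visible):
    pose3d = np.full((len(kp2d), 3), np.nan)
    pose3d[visible] = lift(kp2d[visible])    # step 1: lift visible keypoints
    for j in np.where(~visible)[0]:          # step 2: fill occluded joints
        nbrs = [n for n in NEIGHBOURS[j] if visible[n]]
        if nbrs:
            pose3d[j] = pose3d[nbrs].mean(axis=0)  # fill in 3D, not 2D
    return pose3d
```

Completing the pose in 3D space, after the lift, is the design choice the paper's evaluation shows to be more accurate than completing it in 2D first.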
NASA Tech Briefs, February 2013
Topics covered include: Measurements of Ultra-Stable Oscillator (USO) Allan Deviations in Space; Gaseous Nitrogen Orifice Mass Flow Calculator; Validation of Proposed Metrics for Two-Body Abrasion Scratch Test Analysis Standards; Rover Low Gain Antenna Qualification for Deep Space Thermal Environments; Automated, Ultra-Sterile Solid Sample Handling and Analysis on a Chip; Measuring and Estimating Normalized Contrast in Infrared Flash Thermography; Spectrally and Radiometrically Stable, Wideband, Onboard Calibration Source; High-Reliability Waveguide Vacuum/Pressure Window; Methods of Fabricating Scintillators With Radioisotopes for Beta Battery Applications; Magnetic Shield for Adiabatic Demagnetization Refrigerators (ADR); CMOS-Compatible SOI MESFETS for Radiation-Hardened DC-to-DC Converters; Silicon Heat Pipe Array; Adaptive Phase Delay Generator; High-Temperature, Lightweight, Self-Healing Ceramic Composites for Aircraft Engine Applications; Treatment to Control Adhesion of Silicone-Based Elastomers; High-Temperature Adhesives for Thermally Stable Aero-Assist Technologies; Rockballer Sample Acquisition Tool; Rock Gripper for Sampling, Mobility, Anchoring, and Manipulation; Advanced Magnetic Materials Methods and Numerical Models for Fluidization in Microgravity and Hypogravity; Data Transfer for Multiple Sensor Networks Over a Broad Temperature Range; Using Combustion Synthesis to Reinforce Berms and Other Regolith Structures; Visible-Infrared Hyperspectral Image Projector; Three-Axis Attitude Estimation With a High-Bandwidth Angular Rate Sensor; Change_Detection.m; AGATE: Adversarial Game Analysis for Tactical Evaluation; Ionospheric Simulation System for Satellite Observations and Global Assimilative Modeling Experiments (ISOGAME); An Extensible, User-Modifiable Framework for Planning Activities; Mission Operations Center (MOC) - Precipitation Processing System (PPS) Interface Software System (MPISS); Automated 3D Damaged Cavity Model Builder for Lower Surface Acreage Tile on Orbiter; Mixed Linear/Square-Root Encoded Single-Slope Ramp Provides Low-Noise ADC with High Linearity for Focal Plane Arrays; RUSHMAPS: Real-Time Uploadable Spherical Harmonic Moment Analysis for Particle Spectrometers; Powered Descent Guidance with General Thrust-Pointing Constraints; X-Ray Detection and Processing Models for Spacecraft Navigation and Timing; and Extreme Ionizing-Radiation-Resistant Bacterium
Visibility in underwater robotics: Benchmarking and single image dehazing
Dealing with underwater visibility is one of the most important challenges in autonomous underwater robotics. Light transmission through the water medium degrades images, making interpretation of the scene difficult and consequently compromising the whole intervention. This thesis contributes by analysing the impact of underwater image degradation on commonly used vision algorithms through benchmarking. An online framework for underwater research that makes it possible to analyse results under different conditions is presented. Finally, motivated by the results of experimentation with the developed framework, a deep learning solution is proposed that is capable of dehazing a degraded image in real time, restoring the original colors of the image.
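The degradation this thesis targets is commonly described by the simplified image-formation model I = J*t + B*(1 - t), where J is the scene radiance, t the transmission through the water, and B the veiling backscatter. Given estimates of t and B, inverting this model recovers the colors; the sketch below illustrates that classic inversion only, not the thesis's deep-learning dehazer:

```python
import numpy as np

def restore(I, t, B, t_min=0.1):
    """Invert I = J * t + B * (1 - t) for the scene radiance J,
    given estimated transmission t and backscatter B (all in [0, 1])."""
    t = np.clip(t, t_min, 1.0)               # avoid amplifying sensor noise
    J = (np.asarray(I) - B * (1.0 - t)) / t  # model inversion
    return np.clip(J, 0.0, 1.0)              # keep radiance in valid range
```

A learned dehazer effectively estimates this inversion end to end from data, which is why it can run in real time without explicit t and B estimates.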