Reasoning About Liquids via Closed-Loop Simulation
Simulators are powerful tools for reasoning about a robot's interactions with
its environment. However, when simulations diverge from reality, that reasoning
becomes less useful. In this paper, we show how to close the loop between
liquid simulation and real-time perception. We use observations of liquids to
correct errors when tracking the liquid's state in a simulator. Our results
show that closed-loop simulation is an effective way to prevent large
divergence between the simulated and real liquid states. As a direct
consequence of this, our method can enable reasoning about liquids that would
otherwise be infeasible due to large divergences, such as reasoning about
occluded liquid.
Comment: Robotics: Science & Systems (RSS), July 12-16, 2017, Cambridge, MA, US
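As a rough illustration of the closed-loop correction described above, the sketch below blends a simulated liquid state toward a partial observation, keeping simulated values wherever the liquid is occluded. The grid representation, the `simulate_step` placeholder, and the blending weight `alpha` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def simulate_step(liquid_grid: np.ndarray) -> np.ndarray:
    """Placeholder for one step of a grid-based liquid simulator."""
    # A real simulator would apply fluid dynamics here; we return the
    # grid unchanged so the sketch runs end to end.
    return liquid_grid

def close_the_loop(sim_grid, observed_grid, alpha=0.3):
    """Blend the simulated liquid state toward the perceived one.

    alpha controls how strongly observations correct the simulator;
    unobserved (NaN) cells keep their simulated values, which is how
    reasoning about occluded liquid remains possible.
    """
    corrected = sim_grid.copy()
    visible = ~np.isnan(observed_grid)
    corrected[visible] = ((1 - alpha) * sim_grid[visible]
                          + alpha * observed_grid[visible])
    return corrected

# Toy usage: a 4x4 occupancy grid with a partially occluded observation.
sim = np.random.rand(4, 4)
obs = sim + 0.1 * np.random.randn(4, 4)
obs[:, 2:] = np.nan                        # right half occluded from the camera
sim = close_the_loop(simulate_step(sim), obs)
```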
Fast Monte-Carlo Localization on Aerial Vehicles using Approximate Continuous Belief Representations
Size, weight, and power constraints on aerial platforms limit the available
computational resources, introducing unique challenges in implementing
localization algorithms. We present a framework for fast localization on such
platforms, enabled by the compressive capabilities of Gaussian Mixture Model
representations of point cloud data. Given raw structural data from a depth
sensor and pitch and roll estimates from an on-board attitude reference
system, a multi-hypothesis particle filter localizes the vehicle by exploiting
the likelihood of the data originating from the mixture model. We analyze
this likelihood in the vicinity of the ground-truth pose, detail its use in a
particle filter-based vehicle localization strategy, and present results of
real-time implementations on a desktop system and an off-the-shelf embedded
platform that outperform a state-of-the-art algorithm running in the same
environment.
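The core of this approach can be sketched as a particle filter whose measurement model is the likelihood of sensor points under a Gaussian mixture map. Everything concrete below (an isotropic mixture, a planar pose with yaw only, the toy component values) is an assumption made to keep the sketch self-contained; the paper's formulation is richer.

```python
import numpy as np

# Hypothetical map: an isotropic Gaussian mixture fit offline to a point cloud.
gmm_means = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0]])  # component centers
gmm_sigma = 0.5                                             # shared std dev
gmm_weights = np.ones(len(gmm_means)) / len(gmm_means)

def gmm_loglik(points):
    """Log-likelihood of 2D points under the isotropic mixture map."""
    d2 = ((points[:, None, :] - gmm_means[None]) ** 2).sum(-1)      # (N, K)
    comp = (gmm_weights * np.exp(-0.5 * d2 / gmm_sigma**2)
            / (2 * np.pi * gmm_sigma**2))
    return np.log(comp.sum(-1) + 1e-12).sum()

def weight_particles(particles, scan):
    """Score each particle pose (x, y, yaw) by how well it explains the scan.

    Sensor-frame points are transformed by each particle's pose and
    evaluated under the compressed GMM map; pitch and roll are assumed
    already compensated by the attitude reference system.
    """
    logw = np.empty(len(particles))
    for i, (x, y, yaw) in enumerate(particles):
        c, s = np.cos(yaw), np.sin(yaw)
        world = scan @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
        logw[i] = gmm_loglik(world)
    w = np.exp(logw - logw.max())          # normalize in log space for stability
    return w / w.sum()

particles = np.random.uniform([-1, -1, -np.pi], [6, 6, np.pi], size=(200, 3))
scan = np.random.randn(50, 2) * 0.5                 # fake depth returns
weights = weight_particles(particles, scan)
estimate = (weights[:, None] * particles).sum(0)    # weighted mean pose
```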
Single camera pose estimation using Bayesian filtering and Kinect motion priors
Traditional approaches to upper body pose estimation using monocular vision
rely on complex body models and a large variety of geometric constraints. We
argue that this is not ideal and somewhat inelegant as it results in large
processing burdens, and instead attempt to incorporate these constraints
through priors obtained directly from training data. A prior distribution
covering the probability of a human pose occurring is used to incorporate
likely human poses. This distribution is obtained offline, by fitting a
Gaussian mixture model to a large dataset of recorded human body poses, tracked
using a Kinect sensor. We combine this prior information with a random walk
transition model to obtain an upper body model, suitable for use within a
recursive Bayesian filtering framework. Our model can be viewed as a mixture of
discrete Ornstein-Uhlenbeck processes, in that states behave as random walks,
but drift towards a set of typically observed poses. This model is combined
with measurements of the human head and hand positions, using recursive
Bayesian estimation to incorporate temporal information. Measurements are
obtained using face detection and a simple skin colour hand detector, trained
using the detected face. The suggested model is designed with analytical
tractability in mind and we show that the pose tracking can be
Rao-Blackwellised using the mixture Kalman filter, allowing for computational
efficiency while still incorporating bio-mechanical properties of the upper
body. In addition, the use of the proposed upper body model allows reliable
three-dimensional pose estimates to be obtained indirectly for a number of
joints that are often difficult to detect using traditional object recognition
strategies. Comparisons with Kinect sensor results and the state of the art in
2D pose estimation highlight the efficacy of the proposed approach.
Comment: 25 pages, technical report, related to Burke and Lasenby, AMDO 2014
conference paper. Code sample: https://github.com/mgb45/SignerBodyPose Video:
https://www.youtube.com/watch?v=dJMTSo7-uF
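A minimal sketch of the transition model described above, read as a mixture of discrete Ornstein-Uhlenbeck processes: the state takes random-walk steps but drifts toward typically observed poses. The pose dimensionality, the nearest-mean component selection, and the rates `lam` and `sigma` are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical pose prior: GMM means over upper-body pose vectors, standing
# in for the mixture fit offline to Kinect-tracked pose data.
pose_means = np.random.randn(5, 12)      # 5 typical poses, 12-D pose vector

def ou_mixture_step(x, lam=0.1, sigma=0.05, rng=np.random.default_rng()):
    """One transition of the mixture-of-discrete-OU-processes model.

    The state takes a random-walk step but drifts (rate `lam`, an assumed
    value) toward the nearest typically observed pose, giving the "random
    walk that drifts toward likely poses" behaviour the abstract describes.
    """
    nearest = pose_means[np.argmin(((pose_means - x) ** 2).sum(-1))]
    drift = lam * (nearest - x)                   # pull toward a typical pose
    noise = sigma * rng.standard_normal(x.shape)  # random-walk component
    return x + drift + noise

x = np.zeros(12)
for _ in range(100):            # propagate the pose prior forward in time
    x = ou_mixture_step(x)
```

Conditioned on the active mixture component the dynamics are linear-Gaussian, which is what makes the Rao-Blackwellised mixture Kalman filter mentioned in the abstract applicable.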
3D Tracking Using Multi-view Based Particle Filters
Visual surveillance and monitoring of indoor environments using multiple cameras has become a field of great activity in computer vision. Typical 3D tracking and positioning systems rely on several independent 2D tracking modules applied over individual camera streams, fused using geometrical relationships across cameras. Since 2D tracking systems suffer inherent difficulties due to point-of-view limitations (perceptually similar foreground and background regions causing fragmentation of moving objects, and occlusions), 3D tracking based on partially erroneous 2D tracks is likely to fail when handling multiple-people interaction. To overcome this problem, this paper proposes a Bayesian framework for combining 2D low-level cues from multiple cameras directly into the 3D world through 3D particle filters. This method makes it possible to estimate the probability of a certain volume being occupied by a moving object, and thus to segment and track multiple people across the monitored area. The proposed method builds on simple, binary 2D moving-region segmentation on each camera, with the segmentations treated as different state observations. In addition, the method proves well suited for integrating additional 2D low-level cues to increase system robustness to occlusions: along this line, a naïve color-based (HSI) appearance model has been integrated, resulting in clear performance improvements when dealing with complex scenarios.
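The heart of this framework is the likelihood of a 3D particle given the binary foreground masks: project the particle into every camera and check whether it lands on foreground. The sketch below assumes simple pinhole cameras and soft 0.9/0.1 per-view votes; these, like the toy setup, are assumptions rather than the authors' values.

```python
import numpy as np

def project(P, X):
    """Pinhole projection of 3D point X with a 3x4 camera matrix P."""
    u = P @ np.append(X, 1.0)
    return u[:2] / u[2]

def particle_weight(X, cameras, masks):
    """Likelihood that the volume around X is occupied, fused across views.

    Each camera contributes a binary observation: does the projected
    particle fall inside that view's foreground mask? Per-view votes
    multiply, mirroring the direct 2D-to-3D fusion in the paper.
    """
    w = 1.0
    for P, mask in zip(cameras, masks):
        u, v = project(P, X)
        h, wid = mask.shape
        inside = 0 <= int(v) < h and 0 <= int(u) < wid and mask[int(v), int(u)]
        w *= 0.9 if inside else 0.1
    return w

# Toy setup: two cameras looking along -z with offset positions.
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1.0]])
cams = [K @ np.hstack([np.eye(3), t]) for t in
        (np.array([[0.0], [0], [5]]), np.array([[1.0], [0], [5]]))]
masks = [np.ones((480, 640), bool)] * 2       # everything foreground
print(particle_weight(np.array([0.0, 0.0, 0.0]), cams, masks))
```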
Multi-camera Realtime 3D Tracking of Multiple Flying Animals
Automated tracking of animal movement allows analyses that would not
otherwise be possible by providing great quantities of data. The additional
capability of tracking in realtime - with minimal latency - opens up the
experimental possibility of manipulating sensory feedback, thus allowing
detailed explorations of the neural basis for control of behavior. Here we
describe a new system capable of tracking the position and body orientation of
animals such as flies and birds. The system operates with less than 40 msec
latency and can track multiple animals simultaneously. To achieve these
results, a multi target tracking algorithm was developed based on the Extended
Kalman Filter and the Nearest Neighbor Standard Filter data association
algorithm. In one implementation, an eleven camera system is capable of
tracking three flies simultaneously at 60 frames per second using a gigabit
network of nine standard Intel Pentium 4 and Core 2 Duo computers. This
manuscript presents the rationale and details of the algorithms employed and
shows three implementations of the system. An experiment was performed using
the tracking system to measure the effect of visual contrast on the flight
speed of Drosophila melanogaster. At low contrasts, speed is more variable and
faster on average than at high contrasts. Thus, the system is already a useful
tool to study the neurobiology and behavior of freely flying animals. If
combined with other techniques, such as `virtual reality'-type computer
graphics or genetic manipulation, the tracking system would offer a powerful
new way to investigate the biology of flying animals.
Comment: 18 pages with 9 figures
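The tracker described above pairs an Extended Kalman Filter with Nearest Neighbor Standard Filter data association. The sketch below shows only the association step plus a constant-velocity prediction; the greedy gating strategy, the gate radius, and the toy detections are assumptions, not the authors' parameters.

```python
import numpy as np

def nn_associate(predicted, detections, gate=1.0):
    """Nearest Neighbor Standard Filter association (greedy sketch).

    Pairs each predicted track position with the closest unused detection
    inside a gating radius; the gate value is illustrative.
    """
    pairs, used = {}, set()
    for ti, p in enumerate(predicted):
        candidates = [(np.linalg.norm(p - d), di)
                      for di, d in enumerate(detections) if di not in used]
        if candidates:
            dist, di = min(candidates)
            if dist < gate:
                pairs[ti] = di
                used.add(di)
    return pairs   # track index -> detection index

# Constant-velocity Kalman prediction for track states [x, y, vx, vy].
F = np.block([[np.eye(2), np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
tracks = np.array([[0.0, 0, 1, 0], [5, 5, 0, -1]])
predicted = (tracks @ F.T)[:, :2]                   # predicted positions
detections = np.array([[1.05, 0.02], [4.9, 3.95]])  # fake triangulated targets
print(nn_associate(predicted, detections))          # {0: 0, 1: 1}
```

In the full filter, each associated detection would then drive an EKF measurement update for its track, with unassociated tracks coasting on prediction alone.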
Markerless visual servoing on unknown objects for humanoid robot platforms
To precisely reach for an object with a humanoid robot, it is of central
importance to have good knowledge of both the end-effector pose and the
object's pose and shape. In this work we propose a framework for markerless
visual servoing on unknown objects, divided into four main parts: I) a
least-squares minimization problem is formulated to find the volume of the
object graspable by the robot's hand using its stereo vision; II) a recursive
Bayesian filtering technique, based on Sequential Monte Carlo (SMC) filtering,
estimates the 6D pose (position and orientation) of the robot's end-effector
without the use of markers; III) a nonlinear constrained optimization problem
is formulated to compute the desired graspable pose about the object; IV) an
image-based visual servo control commands the robot's end-effector toward the
desired pose. We demonstrate the effectiveness and robustness of our approach
with extensive experiments on the iCub humanoid robot platform, achieving
real-time computation, smooth trajectories, and sub-pixel precision.
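Part IV is an image-based visual servo controller. Below is a minimal sketch of the textbook control law it refers to, v = -λ L⁺ (s - s*), using the standard point-feature interaction matrix; the gain and feature depths are assumed values, and this is a generic sketch rather than the iCub implementation.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Classic image-based visual servo law: v = -lambda * L^+ (s - s*).

    `features`/`desired` are current and goal normalized image points;
    the gain `lam` is an assumed tuning value.
    """
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (features - desired).ravel()               # stacked feature error
    return -lam * np.linalg.pinv(L) @ e            # 6-D camera twist [v, w]

# Four point features to regulate to the image center at 0.5 m depth.
s = np.array([[0.10, 0.05], [-0.08, 0.12], [0.02, -0.07], [-0.1, -0.1]])
s_star = np.zeros_like(s)
v = ibvs_velocity(s, s_star, depths=[0.5] * 4)
print(v)    # commanded end-effector twist
```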