Realtime State Estimation with Tactile and Visual Sensing. Application to Planar Manipulation
Accurate and robust object state estimation enables successful object
manipulation. Visual sensing is widely used to estimate object poses. However,
in a cluttered scene or in a tight workspace, the robot's end-effector often
occludes the object from the visual sensor. The robot then loses visual
feedback and must fall back on open-loop execution.
In this paper, we integrate both tactile and visual input using a framework
for solving the SLAM problem, incremental smoothing and mapping (iSAM), to
provide a fast and flexible solution. Visual sensing provides global pose
information but is noisy in general, whereas contact sensing is local, but its
measurements are more accurate relative to the end-effector. By combining them,
we aim to exploit their advantages and overcome their limitations. We explore
the technique in the context of a pusher-slider system. We adapt iSAM's
measurement cost and motion cost to the pushing scenario, and use an
instrumented setup to evaluate the estimation quality with different object
shapes, on different surface materials, and under different contact modes.
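The core idea above is to fuse noisy global poses from vision with accurate relative constraints from contact inside a smoothing framework. Below is a minimal NumPy sketch of that fusion as a batch weighted least-squares problem over a short window of planar positions; it illustrates the principle only and is not the paper's iSAM formulation, and the noise levels, names, and simple pushing model are assumptions.

```python
import numpy as np

def fuse_window(vision_meas, contact_deltas, sigma_v=0.02, sigma_c=0.002):
    """vision_meas: (T, 2) noisy absolute object positions from the camera.
    contact_deltas: (T-1, 2) accurate relative motions implied by pushing.
    Returns the (T, 2) smoothed trajectory minimizing weighted residuals."""
    T = len(vision_meas)
    n = 2 * T                      # unknowns [x0, y0, ..., x_{T-1}, y_{T-1}]
    A, b, w = [], [], []
    # Vision factors: pose t should match the camera measurement (low weight).
    for t in range(T):
        for d in range(2):
            row = np.zeros(n)
            row[2 * t + d] = 1.0
            A.append(row); b.append(vision_meas[t, d]); w.append(1.0 / sigma_v)
    # Contact/motion factors: pose_{t+1} - pose_t should match the pushing
    # model (high weight, since contact is locally accurate).
    for t in range(T - 1):
        for d in range(2):
            row = np.zeros(n)
            row[2 * (t + 1) + d] = 1.0
            row[2 * t + d] = -1.0
            A.append(row); b.append(contact_deltas[t, d]); w.append(1.0 / sigma_c)
    A, b, w = np.array(A), np.array(b), np.array(w)
    x, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return x.reshape(T, 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.cumsum(np.full((10, 2), 0.01), axis=0)     # straight-line push
    vision = truth + rng.normal(0.0, 0.02, truth.shape)   # noisy global poses
    deltas = np.diff(truth, axis=0) + rng.normal(0.0, 0.002, (9, 2))
    est = fuse_window(vision, deltas)
    print("vision-only RMSE:", np.sqrt(((vision - truth) ** 2).mean()))
    print("fused RMSE:      ", np.sqrt(((est - truth) ** 2).mean()))
```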
Reasoning About Liquids via Closed-Loop Simulation
Simulators are powerful tools for reasoning about a robot's interactions with
its environment. However, when simulations diverge from reality, that reasoning
becomes less useful. In this paper, we show how to close the loop between
liquid simulation and real-time perception. We use observations of liquids to
correct errors when tracking the liquid's state in a simulator. Our results
show that closed-loop simulation is an effective way to prevent large
divergence between the simulated and real liquid states. As a direct
consequence of this, our method can enable reasoning about liquids that would
otherwise be infeasible due to large divergences, such as reasoning about
occluded liquid.
Comment: Robotics: Science & Systems (RSS), July 12-16, 2017. Cambridge, MA, US
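As a rough illustration of the closed-loop idea, the following Python sketch tracks a scalar "liquid volume" with a deliberately biased forward model and periodically corrects it toward noisy observations. The pouring model, gain, and noise values are assumptions made for the example, not the paper's simulator or perception pipeline.

```python
import random

def simulate_step(volume, pour_rate, dt=0.05):
    # Forward model: crude constant-rate pouring (stands in for a fluid sim).
    return volume + pour_rate * dt

def correct(sim_volume, observed_volume, gain=0.5):
    # Blend the simulated state toward the observation (closed-loop update).
    return sim_volume + gain * (observed_volume - sim_volume)

def run(steps=100, true_rate=1.0, model_rate=0.8, obs_every=5):
    true_v = sim_open = sim_closed = 0.0
    for t in range(steps):
        true_v = simulate_step(true_v, true_rate)
        sim_open = simulate_step(sim_open, model_rate)      # never corrected
        sim_closed = simulate_step(sim_closed, model_rate)  # corrected below
        if t % obs_every == 0:
            obs = true_v + random.gauss(0, 0.02)            # noisy perception
            sim_closed = correct(sim_closed, obs)
    print(f"open-loop error:   {abs(sim_open - true_v):.3f}")
    print(f"closed-loop error: {abs(sim_closed - true_v):.3f}")

if __name__ == "__main__":
    random.seed(0)
    run()
```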
Do-It-Yourself Single Camera 3D Pointer Input Device
We present a new algorithm for single camera 3D reconstruction, or 3D input
for human-computer interfaces, based on precise tracking of an elongated
object, such as a pen, having a pattern of colored bands. To configure the
system, the user provides no more than one labelled image of a handmade
pointer, measurements of its colored bands, and the camera's pinhole projection
matrix. Other systems are of much higher cost and complexity, requiring
combinations of multiple cameras, stereocameras, and pointers with sensors and
lights. Instead of relying on information from multiple devices, we examine our
single view more closely, integrating geometric and appearance constraints to
robustly track the pointer in the presence of occlusion and distractor objects.
By probing objects of known geometry with the pointer, we demonstrate
acceptable accuracy of 3D localization.
Comment: 8 pages, 6 figures, 2018 15th Conference on Computer and Robot Vision
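One way to see how a single calibrated view plus known band spacing can pin down the pointer in 3D is sketched below: three collinear band boundaries back-project to rays, collinearity fixes their depths up to a common scale, and the known metric spacing fixes that scale. This is a simplified geometric sketch under assumed intrinsics and band layout, not the tracking algorithm described in the abstract.

```python
import numpy as np

def backproject(K, px):
    """Unit ray through pixel px for pinhole intrinsics K."""
    r = np.linalg.solve(K, np.array([px[0], px[1], 1.0]))
    return r / np.linalg.norm(r)

def locate_pointer(K, px0, px1, px2, dist01, dist02):
    """px0, px1, px2: pixels of three collinear band boundaries (tip..tail).
    dist01, dist02: known metric distances from point 0 to points 1 and 2.
    Returns the three 3D points in the camera frame."""
    r0, r1, r2 = (backproject(K, p) for p in (px0, px1, px2))
    a = dist01 / dist02                      # known interpolation ratio
    # Collinearity: d1*r1 = (1-a)*d0*r0 + a*d2*r2  ->  M @ [d0, d1, d2] = 0
    M = np.column_stack([(1 - a) * r0, -r1, a * r2])
    _, _, Vt = np.linalg.svd(M)
    d = Vt[-1]                               # null-space direction, up to scale
    P = [d[i] * r for i, r in enumerate((r0, r1, r2))]
    scale = dist02 / np.linalg.norm(P[2] - P[0])   # fix the metric scale
    P = [scale * p for p in P]
    if P[0][2] < 0:                          # keep points in front of camera
        P = [-p for p in P]
    return P

if __name__ == "__main__":
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    # Synthetic ground truth: a short pointer held in front of the camera.
    Q0, Q2 = np.array([0.05, 0.02, 0.60]), np.array([0.12, 0.08, 0.72])
    Q1 = Q0 + 0.4 * (Q2 - Q0)
    proj = lambda Q: (K @ Q)[:2] / Q[2]
    P = locate_pointer(K, proj(Q0), proj(Q1), proj(Q2),
                       np.linalg.norm(Q1 - Q0), np.linalg.norm(Q2 - Q0))
    print("tip error (m):", np.linalg.norm(P[0] - Q0))
```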
Autonomy Infused Teleoperation with Application to BCI Manipulation
Robot teleoperation systems face a common set of challenges including
latency, low-dimensional user commands, and asymmetric control inputs. User
control with Brain-Computer Interfaces (BCIs) exacerbates these problems
through especially noisy and erratic low-dimensional motion commands due to the
difficulty in decoding neural activity. We introduce a general framework to
address these challenges through a combination of computer vision, user intent
inference, and arbitration between the human input and autonomous control
schemes. Adjustable levels of assistance allow the system to balance the
operator's capabilities and feelings of comfort and control while compensating
for a task's difficulty. We present experimental results demonstrating
significant performance improvement using the shared-control assistance
framework on adapted rehabilitation benchmarks with two subjects implanted with
intracortical brain-computer interfaces controlling a seven degree-of-freedom
robotic manipulator as a prosthetic. Our results further indicate that shared
assistance mitigates perceived user difficulty and even enables successful
performance on previously infeasible tasks. We showcase the extensibility of
our architecture with applications to quality-of-life tasks such as opening a
door, pouring liquids from containers, and manipulation with novel objects in
densely cluttered environments.
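The sketch below illustrates the arbitration idea in its simplest form: infer the intended goal from the user's noisy, low-dimensional command, compute an autonomous command toward that goal, and linearly blend the two according to the inference confidence and an assistance level. The goal set, intent model, and blending rule here are assumptions for illustration, not the framework evaluated in the paper.

```python
import numpy as np

def infer_intent(ee_pos, user_cmd, goals):
    """Score each candidate goal by how well the user's command points at it."""
    scores = []
    for g in goals:
        to_goal = g - ee_pos
        cos = np.dot(user_cmd, to_goal) / (
            np.linalg.norm(user_cmd) * np.linalg.norm(to_goal) + 1e-9)
        scores.append(max(cos, 0.0))
    scores = np.array(scores)
    probs = scores / (scores.sum() + 1e-9)
    best = int(np.argmax(probs))
    return goals[best], probs[best]          # predicted goal, confidence

def arbitrate(user_cmd, auto_cmd, confidence, assistance=0.8):
    """Linear blend: more assistance and higher confidence -> more autonomy."""
    alpha = assistance * confidence
    return (1.0 - alpha) * user_cmd + alpha * auto_cmd

if __name__ == "__main__":
    ee = np.array([0.0, 0.0, 0.0])
    goals = [np.array([0.5, 0.2, 0.1]), np.array([-0.3, 0.4, 0.2])]
    noisy_user = np.array([0.4, 0.25, 0.0])  # erratic, low-DOF user command
    goal, conf = infer_intent(ee, noisy_user, goals)
    auto = 0.5 * (goal - ee) / np.linalg.norm(goal - ee)
    print("blended command:", arbitrate(noisy_user, auto, conf))
```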
Reinforcement Learning With Temporal Logic Rewards
Reinforcement learning (RL) depends critically on the choice of reward
functions used to capture the desired behavior and constraints of a robot.
Usually, these are handcrafted by an expert designer and represent heuristics
for relatively simple tasks. Real-world applications typically involve more
complex tasks with rich temporal and logical structure. In this paper we take
advantage of the expressive power of temporal logic (TL) to specify complex
rules the robot should follow, and incorporate domain knowledge into learning.
We propose Truncated Linear Temporal Logic (TLTL) as a specification language
that is arguably well suited for robotics applications, together with
quantitative semantics, i.e., a robustness degree. We propose an RL approach to
learn tasks expressed as TLTL formulae that uses the associated robustness
degree as the reward function, instead of manually crafted heuristics that try
to capture the same specifications. We show in simulated trials that learning
is faster and policies obtained using the proposed approach outperform the ones
learned using heuristic rewards in terms of the robustness degree, i.e., how
well the tasks are satisfied. Furthermore, we demonstrate the proposed RL
approach in a toast-placing task learned by a Baxter robot.
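To make the "robustness degree as reward" idea concrete, the sketch below computes a quantitative score for a simple specification ("eventually reach the goal and always avoid the obstacle") over a finite trajectory and uses it as an episode reward; the formula, distances, and thresholds are illustrative assumptions rather than the TLTL semantics defined in the paper.

```python
import numpy as np

def robustness(traj, goal, obstacle, eps=0.1, d_safe=0.2):
    """traj: (T, 2) array of planar positions visited in one episode.
    Positive iff the trajectory satisfies the specification; larger is better."""
    dist_goal = np.linalg.norm(traj - goal, axis=1)
    dist_obs = np.linalg.norm(traj - obstacle, axis=1)
    eventually_goal = np.max(eps - dist_goal)    # F (reach): max over time
    always_safe = np.min(dist_obs - d_safe)      # G (avoid): min over time
    return min(eventually_goal, always_safe)     # conjunction: min

def episode_reward(traj, goal, obstacle):
    # The robustness degree replaces a hand-shaped reward: the learner is
    # rewarded exactly for how well the temporal-logic task is satisfied.
    return robustness(np.asarray(traj), np.asarray(goal), np.asarray(obstacle))

if __name__ == "__main__":
    goal, obstacle = [1.0, 1.0], [0.5, 0.5]
    good = [[0.0, 0.0], [0.2, 0.6], [0.7, 0.9], [0.98, 1.02]]
    bad = [[0.0, 0.0], [0.45, 0.5], [0.6, 0.6], [0.7, 0.7]]
    print("good trajectory reward:", episode_reward(good, goal, obstacle))
    print("bad trajectory reward: ", episode_reward(bad, goal, obstacle))
```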