314 research outputs found
Tactile Ergodic Control Using Diffusion and Geometric Algebra
Continuous physical interaction between robots and their environment is a
requirement in many industrial and household tasks, such as sanding and
cleaning. Because of the complexity of the tactile information involved, these
tasks are notoriously difficult to model and to sense. In this article, we
introduce a closed-loop
control method that is constrained to surfaces. The applications we target
share a common structure: each can be represented by a probability distribution
on the surface whose density corresponds to the time the robot should spend in
a region. These
surfaces can easily be captured jointly with the target distributions using
coloured point clouds. We present an extension of the ergodic control approach
based on heat equation-driven area coverage (HEDAC) that operates on point
clouds. Our method enables closed-loop exploration by measuring the actual
coverage using vision. Unlike existing approaches, we approximate the potential
field from non-stationary diffusion using spectral acceleration, which does not
require complex preprocessing steps and achieves real-time closed-loop control
frequencies. We exploit geometric algebra to stay in contact with the target
surface by tracking a line while simultaneously exerting a desired force along
that line. Our approach is suitable for fully autonomous and human-robot
interaction settings where the robot can either directly measure the coverage
of the target with its sensors or be guided online by markings or
annotations of a human expert. We tested the performance of the approach in
kinematic simulation using point clouds, ranging from the Stanford bunny to a
variety of kitchen utensils. Our real-world experiments demonstrate that the
proposed approach can successfully be used to wash kitchenware with curved
surfaces, by cleaning the dirt detected by vision in an online manner. Website:
https://geometric-algebra.tobiloew.ch/tactile_ergodic_control
Comment: Submitted to the special issue of IEEE Transactions on Robotics
(T-RO) on Tactile Robotics
Information Surfing for Model-driven Radiation Mapping
In this report, we develop a control scheme to coordinate a group of mobile
sensors for radiation mapping of a given planar polygon region. The control
algorithm is based on the concept of information surfing, where navigation is
performed by following information gradients while taking into account sensing
performance as well as inter-robot communication range limitations. The control
scheme provably steers mobile sensors to locations at which they maximize the
information content of their measurement data, and the asymptotic properties of
our information metric with respect to time ensure that no local extremum of
the information metric traps the sensors indefinitely. In addition, the
inherent synergy of the mobile sensor group facilitates the temporal erosion of
such extremum configurations. Information surfing allows for reactive mobile
sensor network behavior and adaptation to environmental changes, as well as
human retasking.
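The core gradient-following mechanism of information surfing can be sketched as below. The quadratic "information metric" and all parameters are toy assumptions, not the report's actual metric, which also reflects sensing performance and communication-range limits.

```python
# Gradient-following ("information surfing") toy: a sensor climbs the
# finite-difference gradient of an information metric over the plane.
# The peaked quadratic metric below is a stand-in assumption.

def info(x, y):
    # toy information metric, peaked at (3, 4)
    return -((x - 3.0) ** 2 + (y - 4.0) ** 2)

def surf(x, y, step=0.1, iters=200, h=1e-4):
    """Follow the numerical information gradient from (x, y)."""
    for _ in range(iters):
        gx = (info(x + h, y) - info(x - h, y)) / (2 * h)
        gy = (info(x, y + h) - info(x, y - h)) / (2 * h)
        x, y = x + step * gx, y + step * gy
    return x, y

x, y = surf(0.0, 0.0)
print(round(x, 2), round(y, 2))  # → 3.0 4.0, the information peak
```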
Decentralized Learning With Limited Communications for Multi-robot Coverage of Unknown Spatial Fields
This paper presents an algorithm for a team of mobile robots to
simultaneously learn a spatial field over a domain and spatially distribute
themselves to optimally cover it. Drawing from previous approaches that
estimate the spatial field through a centralized Gaussian process, this work
leverages the spatial structure of the coverage problem and presents a
decentralized strategy where samples are aggregated locally by establishing
communications through the boundaries of a Voronoi partition. We present an
algorithm whereby each robot runs a local Gaussian process calculated from its
own measurements and those provided by its Voronoi neighbors, which are
incorporated into the individual robot's Gaussian process only if they provide
sufficiently novel information. The performance of the algorithm is evaluated
in simulation and compared with centralized approaches.
Comment: Accepted IROS 202
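The novelty filter described above might look like the following in minimal form. An RBF-kernel correlation against stored sample inputs serves as a stand-in for the Gaussian-process posterior variance the paper uses, and the threshold and length scale are illustrative assumptions.

```python
# Sketch of a novelty filter for incorporating a Voronoi neighbor's samples:
# accept a sample only if it is sufficiently different from what the robot
# has already stored (kernel correlation as a variance proxy).

import math

def novelty(sample_x, own_xs, length_scale=1.0):
    """1 minus the largest RBF-kernel correlation with stored sample inputs."""
    if not own_xs:
        return 1.0
    k = max(math.exp(-(sample_x - x) ** 2 / (2 * length_scale ** 2))
            for x in own_xs)
    return 1.0 - k

def incorporate(own, neighbor_samples, threshold=0.3):
    """Add only sufficiently novel neighbor samples to the local dataset."""
    for x, y in neighbor_samples:
        if novelty(x, [p[0] for p in own]) > threshold:
            own.append((x, y))
    return own

own = [(0.0, 1.2), (1.0, 0.8)]              # (input location, measurement)
neighbors = [(0.1, 1.1),                    # redundant: close to (0.0, 1.2)
             (4.0, 2.5)]                    # novel: far from stored inputs
incorporate(own, neighbors)
print(len(own))  # → 3: only the novel sample was added
```

Filtering this way keeps each robot's local Gaussian process small, which is what makes the decentralized scheme tractable under limited communication.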
Task-driven multi-formation control for coordinated UAV/UGV ISR missions
The report describes the development of a theoretical framework for the
coordination and control of combined teams of UAVs and UGVs for coordinated ISR
missions. We consider the mission as a composition of an ordered sequence of
subtasks, each to be performed by a different team. We design continuous
cooperative controllers that enable each team to perform a given subtask, and
we develop a discrete strategy for interleaving the actions of teams on
different subtasks. The overall multi-agent coordination architecture is
captured by a hybrid automaton, stability is studied using Lyapunov tools, and
performance is evaluated through numerical simulations.
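The hybrid structure described above, continuous controllers switched by a discrete supervisor, can be sketched as follows. The 1-D go-to-goal laws and guards are toy stand-ins for the report's Lyapunov-analyzed cooperative controllers.

```python
# Sketch of a hybrid automaton over an ordered subtask sequence: each subtask
# pairs a continuous control law with a completion guard; a discrete
# supervisor switches modes when the guard fires. All gains are illustrative.

def make_goto(goal, gain=0.5, tol=1e-2):
    control = lambda x: gain * (goal - x)   # continuous control law
    done = lambda x: abs(x - goal) < tol    # discrete switching guard
    return control, done

subtasks = [make_goto(2.0), make_goto(5.0), make_goto(3.0)]

x, mode, trace = 0.0, 0, []
for _ in range(200):
    control, done = subtasks[mode]
    x += control(x)                          # integrate one step
    if done(x):
        trace.append(round(x, 1))            # record where the guard fired
        if mode + 1 < len(subtasks):
            mode += 1                        # transition to the next subtask
        else:
            break
print(trace)  # → [2.0, 5.0, 3.0]: subtask goals reached in order
```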
Advancing Robot Autonomy for Long-Horizon Tasks
Autonomous robots have real-world applications in diverse fields, such as
mobile manipulation and environmental exploration, and many such tasks benefit
from a hands-off approach in terms of human user involvement over a long task
horizon. However, the level of autonomy achievable by a deployment is limited
in part by the problem definition or task specification required by the system.
Task specifications often require technical, low-level information that is
unintuitive to describe and may result in generic solutions, burdening the user
technically both before and after task completion. In this thesis, we aim to
advance task specification abstraction toward the goal of increasing robot
autonomy in real-world scenarios. We do so by tackling problems that approach
this goal from several different angles. First, we develop a method for
automatically discovering optimal transition points between subtasks in the
context of constrained mobile manipulation, removing the need for the human to
hand-specify these in the task specification. We further propose a way to
describe constraints on robot motion automatically, using demonstrated data
rather than manually defined constraints. Then, within the context of
environmental exploration, we propose a flexible task specification framework
that requires only a set of quantiles of interest from the user and allows the
robot to directly suggest locations in the environment for the user to study.
We next systematically study the effect of including a robot team in the task
specification and show that multirobot teams can improve performance under
certain specification conditions, including when inter-robot communication is
enabled. Finally, we propose methods for a communication
protocol that autonomously selects useful but limited information to share with
the other robots.
Comment: PhD dissertation. 160 pages
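The quantile-based specification can be illustrated as follows. The field estimates are toy values standing in for a learned environmental model, and the simple nearest-rank selection rule is an assumption, not the thesis's exact method.

```python
# Sketch of a quantile-based task specification: the user supplies only the
# quantiles of interest, and the robot proposes the candidate locations whose
# estimated field values sit at those quantiles.

def quantile_locations(locations, estimates, quantiles):
    """Return one candidate location per requested quantile of the estimates."""
    order = sorted(range(len(estimates)), key=lambda i: estimates[i])
    picks = []
    for q in quantiles:
        rank = min(int(q * (len(order) - 1) + 0.5), len(order) - 1)
        picks.append(locations[order[rank]])
    return picks

locs = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)]
vals = [0.1, 0.9, 0.4, 0.7, 0.2]            # toy estimated field values
picks = quantile_locations(locs, vals, [0.0, 0.5, 1.0])
print(picks)  # → [(0, 0), (1, 0), (0, 1)]: min, median, and max locations
```

The appeal of the abstraction is visible even in this toy: the user never specifies coordinates, only which parts of the value distribution matter.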
Asynchronous Collaborative Autoscanning with Mode Switching for Multi-Robot Scene Reconstruction
When conducting autonomous scanning for the online reconstruction of unknown
indoor environments, robots have to be competent at exploring scene structure
and reconstructing objects with high quality. Our key observation is that
different tasks demand specialized scanning properties of robots: fast
movement and long-range vision for global exploration, and slow movement and
narrow vision for local object reconstruction. We refer to these as two
scanning modes: explorer and reconstructor, respectively. When requiring
multiple robots to collaborate for efficient exploration and fine-grained
reconstruction, the questions of when to generate and how to assign these
tasks must be carefully answered. Therefore, we propose a novel asynchronous
collaborative autoscanning method with mode switching, which generates two
kinds of scanning tasks with associated scanning modes, i.e., exploration task
with explorer mode and reconstruction task with reconstructor mode, and
assigns them to the robots for execution in an asynchronous collaborative
manner, greatly boosting scanning efficiency and reconstruction quality. The
task assignment
is optimized by solving a modified Multi-Depot Multiple Traveling Salesman
Problem (MDMTSP). Moreover, to further enhance the collaboration and increase
the efficiency, we propose a task-flow model that activates the task
generation and assignment process as soon as any robot finishes all of its
tasks, with no need to wait for all other robots to complete the tasks
assigned in the
previous iteration. Extensive experiments have been conducted to show the
importance of each key component of our method and its superiority over
previous methods in scanning efficiency and reconstruction quality.
Comment: 13 pages, 12 figures, Conference: SIGGRAPH Asia 202
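The task-flow idea can be sketched as below: assignment is triggered the moment a robot becomes idle, instead of waiting for a synchronized round. A greedy nearest-task rule stands in for the modified MDMTSP optimization in the paper; all positions and tasks are illustrative.

```python
# Sketch of asynchronous task flow: an idle robot is served immediately from
# the pool of open tasks, with no global synchronization barrier. Greedy
# nearest-task choice replaces the paper's MDMTSP solver.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def assign_on_idle(robot_pos, open_tasks):
    """Give an idle robot its nearest open task immediately (asynchronous)."""
    if not open_tasks:
        return None
    task = min(open_tasks, key=lambda t: dist(robot_pos, t))
    open_tasks.remove(task)
    return task

tasks = [(5.0, 0.0), (0.0, 4.0), (6.0, 6.0)]
r1, r2 = (0.0, 0.0), (6.0, 5.0)

t1 = assign_on_idle(r1, tasks)  # r1 idles first and is served at once
t2 = assign_on_idle(r2, tasks)  # r2 idles later; no global round needed
print(t1, t2)  # → (0.0, 4.0) (6.0, 6.0)
```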