Decentralization of Multiagent Policies by Learning What to Communicate
Effective communication is required for teams of robots to solve
sophisticated collaborative tasks. In practice it is typical for both the
encoding and semantics of communication to be manually defined by an expert;
this is true regardless of whether the behaviors themselves are bespoke,
optimization based, or learned. We present an agent architecture and training
methodology using neural networks to learn task-oriented communication
semantics based on the example of a communication-unaware expert policy. A
perimeter defense game illustrates the system's ability to handle dynamically
changing numbers of agents and its graceful degradation in performance as
communication constraints are tightened or the expert's observability
assumptions are broken.
Comment: 7 pages
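The core idea of the abstract, learning what to communicate by imitating a communication-unaware expert, can be sketched in a linear setting. Everything below is a hypothetical stand-in (the dimensions, the linear expert `W`, and the PCA-based encoder are our own illustration, not the paper's neural architecture): agent 1 compresses its observation into a k-dimensional message, and agent 2's policy is fit by least squares to reproduce the expert's actions from its own observation plus that message.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (ours, not the paper's): a centralized expert maps the
# full state [x1; x2] to an action via a linear policy W, but at run time
# agent 2 sees only its own observation x2 plus a k-dimensional message
# computed from agent 1's observation x1.
d1, d2, k, n = 6, 4, 2, 500
W = rng.normal(size=(3, d1 + d2))            # communication-unaware expert
X1 = rng.normal(size=(n, d1))
X2 = rng.normal(size=(n, d2))
A = np.hstack([X1, X2]) @ W.T                # expert actions used as labels

# Encoder (what to communicate): compress x1 to k numbers. As a stand-in
# for a learned neural encoder we use the top-k principal directions of x1.
_, _, Vt = np.linalg.svd(X1 - X1.mean(axis=0), full_matrices=False)
E = Vt[:k]                                   # k x d1 linear encoder
M = X1 @ E.T                                 # messages sent to agent 2

# Decoder/policy: least-squares imitation of the expert from [x2, m].
Z = np.hstack([X2, M])
D, *_ = np.linalg.lstsq(Z, A, rcond=None)
mse = float(np.mean((Z @ D - A) ** 2))
print(f"imitation MSE with a {k}-dim message: {mse:.3f}")
```

Shrinking k tightens the communication constraint, and the imitation error grows smoothly, mirroring the graceful degradation the abstract reports.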
Synergy-Based Hand Pose Sensing: Optimal Glove Design
In this paper we study the problem of improving human hand pose sensing
device performance by exploiting knowledge of how humans most frequently
use their hands in grasping tasks. In a companion paper we studied the problem
of maximizing the reconstruction accuracy of the hand pose from partial and
noisy data provided by any given pose sensing device (a sensorized "glove")
taking into account statistical a priori information. In this paper we consider
the dual problem of how to design pose sensing devices, i.e. how and where to
place sensors on a glove, to get maximum information about the actual hand
posture. We study the continuous case, in which individual sensing elements in
the glove measure a linear combination of joint angles; the discrete case, in
which each measurement corresponds to a single joint angle; and the most
general hybrid case, in which both continuous and discrete sensing elements are
available. The objective is to provide, for given a priori information and a
fixed number of measurements, the optimal design minimizing the average
reconstruction error. Solutions relying on the geometrical synergy definition
as well as gradient flow-based techniques are provided. Simulations of
reconstruction performance show the effectiveness of the proposed optimal
design.
Comment: Submitted to International Journal of Robotics Research 201
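In the linear-Gaussian setting the abstract describes, the quality of a glove design can be scored by the trace of the MMSE posterior covariance, and aligning the continuous sensing elements with the dominant eigenvectors of the prior (the postural synergies) beats a random placement. The numbers below are a hypothetical illustration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical numbers (ours, not the paper's): 15 joint angles with an
# a priori covariance P, and m = 3 continuous sensing elements, each
# measuring one noisy linear combination of joint angles.
d, m, r = 15, 3, 0.01                        # dims, sensors, noise variance
G = rng.normal(size=(d, d))
P = G @ G.T / d + 0.05 * np.eye(d)           # prior hand-pose covariance

def expected_error(H):
    """Trace of the MMSE posterior covariance for y = H x + noise."""
    S = np.linalg.inv(np.linalg.inv(P) + H.T @ H / r)
    return float(np.trace(S))

# Design 1: m random unit-norm linear measurements on the glove.
H_rand = rng.normal(size=(m, d))
H_rand /= np.linalg.norm(H_rand, axis=1, keepdims=True)

# Design 2: align the measurements with the top-m eigenvectors of P,
# i.e. the dominant postural synergies.
_, evecs = np.linalg.eigh(P)                 # eigh returns ascending order
H_syn = evecs[:, -m:].T

print(expected_error(H_rand), expected_error(H_syn))
```

The synergy-aligned design always scores at least as well here, which is the intuition behind the geometrical-synergy solutions the abstract mentions.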
Linear Dynamic Modeling of Parallel Kinematic Manipulators from Observable Kinematic Elements.
This paper presents a linear method for kinematic and dynamic modeling of parallel kinematic manipulators. The method is simple, compact, and clear: all the equations can be written from beginning to end with pen and paper. It is thus well suited to both mechanical understanding and computer implementation, and applies to many parallel robots. The method relies on a body-oriented representation of observable rectilinear kinematic structures (kinematic elements) which form the robot legs.
Voronoi-Based Coverage Control of Heterogeneous Disk-Shaped Robots
In distributed mobile sensing applications, networks of agents that are heterogeneous with respect to both actuation and body and sensory footprint are often modelled using power diagrams: generalized Voronoi diagrams with additive weights. In this paper we adapt the body power diagram to introduce its "free subdiagram," yielding a vector field planner that solves the combined sensory coverage and collision avoidance problem via continuous evaluation of an associated constrained optimization problem. We propose practical extensions that maintain the convergence and collision-avoidance guarantees: a heuristic congestion manager that speeds convergence, and a lift of the point-particle controller to the more practical differential drive kinematics.
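The power diagram underlying this line of work is easy to compute pointwise. The sketch below (our own toy example, not the paper's planner) assigns grid points to the site minimizing the power distance, showing how an additive weight lets a larger robot claim a larger cell:

```python
import numpy as np

# Minimal illustration (ours, not the paper's planner): a power diagram
# assigns each point p to the site s_i minimizing ||p - s_i||^2 - w_i,
# where the additive weight w_i can encode a robot's footprint radius.
sites = np.array([[0.2, 0.5], [0.8, 0.5]])
weights = np.array([0.0, 0.3 ** 2])          # second robot: radius 0.3

xs = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(xs, xs)
pts = np.stack([X.ravel(), Y.ravel()], axis=1)

# Power distance from every point to every site; argmin -> cell index.
power = ((pts[:, None, :] - sites[None, :, :]) ** 2).sum(-1) - weights
cell = power.argmin(axis=1)
share = np.bincount(cell, minlength=2) / len(pts)
print(share)                                 # weighted robot gets the bigger cell
```

With equal weights the bisector would sit midway between the sites; the additive weight shifts it toward the unweighted site, which is how body footprints enter the coverage objective.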
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map) and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper serves
simultaneously as a position paper and a tutorial for SLAM users. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
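The de-facto standard formulation the abstract refers to is maximum a posteriori estimation over a factor graph: poses are variables, and odometry and loop-closure measurements are factors whose squared residuals are jointly minimized. A toy 1-D instance (our own, for illustration) makes the structure concrete; note how the loop closure redistributes the accumulated odometry error:

```python
import numpy as np

# Toy factor-graph SLAM (ours, for illustration): poses x0..x3 on a line,
# odometry between consecutive poses, and one loop-closure measurement
# between x0 and x3, made slightly inconsistent on purpose.
# Each tuple is (i, j, z_ij) encoding the measurement x_j - x_i = z_ij.
edges = [(0, 1, 1.1), (1, 2, 1.0), (2, 3, 0.9), (0, 3, 3.3)]

# Stack one linear residual per factor; fix x0 = 0 as the gauge prior.
A = np.zeros((len(edges) + 1, 4))
b = np.zeros(len(edges) + 1)
for r, (i, j, z) in enumerate(edges):
    A[r, i], A[r, j], b[r] = -1.0, 1.0, z
A[-1, 0], b[-1] = 1.0, 0.0                   # gauge factor on x0

# MAP estimate = linear least squares over all factors jointly.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))                        # x ≈ [0, 1.175, 2.25, 3.225]
```

The chain of three odometry factors says x3 - x0 = 3.0 while the loop closure says 3.3; information-weighted fusion lands at 3.225, with the 0.3 discrepancy spread evenly across the three odometry edges, exactly the error-redistribution behavior that makes loop closures central to SLAM.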
Real-Time Dense Stereo Matching With ELAS on FPGA Accelerated Embedded Devices
For many applications in low-power real-time robotics, stereo cameras are the
sensors of choice for depth perception as they are typically cheaper and more
versatile than their active counterparts. Their biggest drawback, however, is
that they do not directly sense depth maps; instead, these must be estimated
through data-intensive processes. Therefore, appropriate algorithm selection
plays an important role in achieving the desired performance characteristics.
Motivated by applications in space and mobile robotics, we implement and
evaluate an FPGA-accelerated adaptation of the ELAS algorithm. Despite offering
one of the best trade-offs between efficiency and accuracy, ELAS has only been
shown to run at 1.5-3 fps on a high-end CPU. Our system preserves the
appealing properties of the original algorithm, such as the slanted-plane
priors, but achieves a frame rate of 47 fps whilst consuming under 4 W of
power. Unlike previous FPGA-based designs, we take advantage of both components
on the CPU/FPGA System-on-Chip to showcase the strategy necessary to accelerate
more complex and computationally diverse algorithms for such low power,
real-time systems.
Comment: 8 pages, 7 figures, 2 tables
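ELAS itself builds slanted-plane priors from sparse support points, but the reason stereo depth is "estimated through data-intensive processes" is visible even in the simplest method. The sketch below is a naive SAD block matcher on synthetic data (our own stand-in, not ELAS): every pixel requires a full 1-D search over candidate disparities, which is exactly the per-pixel workload that motivates FPGA acceleration.

```python
import numpy as np

# Naive stereo block matching (a stand-in for ELAS, for illustration only):
# for each left-image pixel, search over disparities d and pick the one
# whose right-image patch has the lowest sum-of-absolute-differences cost.
rng = np.random.default_rng(2)
H, W, max_d, win = 32, 64, 8, 3
right = rng.random((H, W))
true_d = 5
left = np.roll(right, true_d, axis=1)        # left image shifted by 5 px

pad = win // 2
disp = np.zeros((H, W), dtype=int)
for y in range(pad, H - pad):
    for x in range(pad + max_d, W - pad):
        patch = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
        costs = [np.abs(patch - right[y - pad:y + pad + 1,
                                      x - d - pad:x - d + pad + 1]).sum()
                 for d in range(max_d)]
        disp[y, x] = int(np.argmin(costs))

# Every valid pixel should recover the true 5-pixel disparity.
valid = disp[pad:H - pad, pad + max_d:W - pad]
print((valid == true_d).mean())
```

Even this tiny example performs height x width x disparities patch comparisons; ELAS reduces that cost with support-point priors, and the paper's contribution is mapping such a computationally diverse pipeline onto a CPU/FPGA System-on-Chip.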
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
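The working principle the survey describes, asynchronous per-pixel brightness-change events, follows a standard generative model: a pixel emits an event of polarity +1 or -1 whenever its log intensity moves by a contrast threshold C since the pixel's last event. A minimal single-pixel simulator (our own sketch of that standard model, not code from the survey):

```python
import numpy as np

# Minimal event-generation model for one pixel (standard in the event-camera
# literature): emit an event each time log intensity changes by the contrast
# threshold C relative to the level at the previous event.
def events_from_signal(t, log_I, C=0.2):
    """Return a list of (time, polarity) events for one pixel."""
    events, ref = [], log_I[0]
    for ti, li in zip(t[1:], log_I[1:]):
        while li - ref >= C:                 # brightness rose by C
            ref += C
            events.append((ti, +1))
        while ref - li >= C:                 # brightness fell by C
            ref -= C
            events.append((ti, -1))
    return events

t = np.linspace(0.0, 1.0, 1001)              # real sensors: microsecond stamps
log_I = np.sin(2 * np.pi * t)                # one brightening/dimming cycle
ev = events_from_signal(t, log_I)
ups = sum(1 for _, s in ev if s > 0)
print(len(ev), ups, len(ev) - ups)
```

For this signal the total variation of log intensity is 4, so with C = 0.2 the pixel fires roughly 20 events, split between positive and negative polarity; where the signal is flat, no events are produced at all, which is the source of the sensor's low power consumption and data rate.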