Reasoning on Robot Knowledge from Discrete and Asynchronous Observations
A robot's knowledge of the world is created from discrete and asynchronous events received from its perception components. Proper representation and maintenance of this knowledge is crucial to enable its use for planning, user interaction, etc. This paper identifies some of the main issues in representing, maintaining, and querying robot knowledge based on discrete asynchronous events, such as event-history management and synchronization, and introduces a language that simplifies the developer's job of building a suitable representation of robot knowledge.
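The core data-structure problem the abstract describes, maintaining per-source event histories that arrive out of order and answering time-consistent queries, can be sketched as follows. This is a minimal illustration, not the paper's language; all class and method names are invented for the example.

```python
from dataclasses import dataclass, field
import bisect

@dataclass(order=True)
class Event:
    # Only the timestamp participates in ordering, so bisect can
    # keep each history sorted by time.
    timestamp: float
    source: str = field(compare=False)
    payload: dict = field(compare=False)

class KnowledgeBase:
    """Keeps a per-source history of discrete events. Queries return the
    latest event from each source no later than a given time, so that
    asynchronous streams are synchronized at query time."""

    def __init__(self):
        self.histories = {}  # source name -> time-sorted list of Event

    def insert(self, event):
        history = self.histories.setdefault(event.source, [])
        bisect.insort(history, event)  # events may arrive out of order

    def snapshot(self, at_time):
        """Most recent event per source with timestamp <= at_time."""
        view = {}
        for source, history in self.histories.items():
            i = bisect.bisect_right(history, Event(at_time, "", {}))
            if i:
                view[source] = history[i - 1]
        return view
```

A `snapshot` at time t gives a consistent cross-sensor view even though each sensor reports at its own rate, which is one way to frame the synchronization issue the abstract raises.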
Deaf, Dumb, and Chatting Robots, Enabling Distributed Computation and Fault-Tolerance Among Stigmergic Robots
We investigate ways for deaf and dumb mobile robots scattered in the plane to exchange information (explicit communication). We introduce the use of movement signals (analogously to flight signals and bees' waggle dance) as a means of transferring messages, enabling the use of distributed algorithms among the robots. We propose one-to-one deterministic movement protocols that implement explicit communication. We first present protocols for synchronous robots, beginning with a very simple coding protocol for two robots. Based on this protocol, we provide one-to-one communication for any system of n ≥ 2 robots equipped with observable IDs that agree on a common direction (sense of direction). We then propose two solutions enabling one-to-one communication among anonymous robots. Since the robots are devoid of observable IDs, both protocols build recognition mechanisms using the (weak) capabilities offered to the robots. The first protocol assumes that the robots agree on a common direction and a common handedness (chirality), while the second assumes chirality only. Next, we show how the movements of robots can provide implicit acknowledgments in asynchronous systems. We use this result to design asynchronous one-to-one communication with only two robots. Finally, we combine this solution with the schemes developed in the synchronous setting to fit the general case of asynchronous one-to-one communication among any number of robots. Our protocols enable the use of distributed algorithms based on message exchanges among swarms of stigmergic robots. Furthermore, they provide robots equipped with communication devices a means to overcome faults of those devices.
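The basic idea of encoding messages as movement signals can be illustrated with a toy coding scheme. This is not the paper's protocol; it is a deliberately simplified sketch assuming two synchronous robots that agree on a common "north" direction, where a small step north encodes a 1 bit and a step south encodes a 0 bit.

```python
STEP = 0.1  # hypothetical unit displacement per synchronous round

def encode(bits, start_y=0.0):
    """Sender: produce its north/south position after each round,
    moving +STEP for a 1 bit and -STEP for a 0 bit."""
    positions, y = [], start_y
    for b in bits:
        y += STEP if b else -STEP
        positions.append(y)
    return positions

def decode(positions, start_y=0.0):
    """Receiver: reconstruct the bits from the sign of each
    observed displacement between consecutive rounds."""
    bits, prev = [], start_y
    for y in positions:
        bits.append(1 if y > prev else 0)
        prev = y
    return bits
```

The paper's actual protocols must do far more work (anonymity, chirality-only agreement, asynchrony, implicit acknowledgments); the sketch only shows why observable motion suffices as a message channel in principle.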
Using Bayesian Programming for Multisensor Multi-Target Tracking in Automotive Applications
A prerequisite to the design of future Advanced Driver Assistance Systems for cars is a sensing system providing all the information required for high-level driving assistance tasks. Carsense is a European project whose purpose is to develop such a new sensing system. It will combine different sensors (laser, radar and video) and will rely on the fusion of the information coming from these sensors in order to achieve better accuracy, robustness and an increase in information content. This paper demonstrates the value of using probabilistic reasoning techniques to address this challenging multi-sensor data fusion problem. The approach used is called Bayesian Programming. It is a general approach based on an implementation of Bayesian theory. It was introduced first to design robot control programs, but its scope of application is much broader, and it can be used whenever one has to deal with problems involving uncertain or incomplete knowledge.
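The simplest instance of the Bayesian fusion the abstract alludes to is combining two independent Gaussian measurements of the same quantity: the posterior precision is the sum of the measurement precisions. The sensor names and numbers below are illustrative only, not from the project.

```python
def fuse(mu1, var1, mu2, var2):
    """Bayesian fusion of two independent Gaussian measurements of the
    same quantity. The posterior is Gaussian: its precision is the sum
    of the two precisions, and its mean is the precision-weighted mean."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    var = 1.0 / (w1 + w2)
    mu = var * (w1 * mu1 + w2 * mu2)
    return mu, var

# e.g. radar reads 10.0 m (variance 4.0), laser reads 10.6 m (variance 1.0):
# the fused estimate is pulled toward the more precise laser reading.
mu, var = fuse(10.0, 4.0, 10.6, 1.0)
```

Note the fused variance (0.8) is smaller than either input variance, which is the "better accuracy" the abstract claims for multi-sensor fusion.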
Getting Close Without Touching: Near-Gathering for Autonomous Mobile Robots
In this paper we study the Near-Gathering problem for a finite set of
dimensionless, deterministic, asynchronous, anonymous, oblivious and autonomous
mobile robots with limited visibility moving in the Euclidean plane in
Look-Compute-Move (LCM) cycles. In this problem, the robots have to get close
enough to each other, so that every robot can see all the others, without
touching (i.e., colliding with) any other robot. The importance of solving the Near-Gathering problem is that it makes it possible to overcome the restriction of having robots with limited visibility, and hence to exploit all the studies (the majority, actually) done on this topic in the unlimited-visibility setting. Indeed, after the robots get close enough to each other, they are able
to see all the robots in the system, a scenario that is similar to the one
where the robots have unlimited visibility.
We present what is, to the best of our knowledge, the first (deterministic) algorithm for the Near-Gathering problem; it allows a set of autonomous mobile robots to nearly gather within finite time without ever colliding. Our
algorithm assumes some reasonable conditions on the input configuration (the
Near-Gathering problem is easily seen to be unsolvable in general). Further,
all the robots are assumed to have a compass (hence they agree on the "North"
direction), but they do not necessarily have the same handedness (hence they
may disagree on the clockwise direction).
We also show how the robots can detect termination, i.e., detect when the Near-Gathering problem has been solved. This is crucial when the robots have to perform a generic task after having nearly gathered. We show that termination detection can be obtained even if the total number of robots is unknown to the robots themselves (i.e., it is not a parameter of the algorithm), and the robots have no way to explicitly communicate.
Comment: 25 pages, 8 figures
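The Look-Compute-Move model the abstract works in can be made concrete with a toy round. This is not the paper's algorithm: it is a simplified fully synchronous sketch in which each robot moves halfway toward the centroid of the robots it sees within a hypothetical visibility radius V, which shrinks the configuration without any two robots ever overlapping.

```python
V = 5.0  # hypothetical visibility radius

def visible(me, robots):
    """Look: positions within distance V of this robot (itself included)."""
    return [p for p in robots if (p[0] - me[0])**2 + (p[1] - me[1])**2 <= V * V]

def compute(me, robots):
    """Compute: aim at the centroid of visible robots, but move only
    halfway, so robots heading toward the same point in the same round
    never reach it simultaneously and thus never collide."""
    seen = visible(me, robots)
    cx = sum(p[0] for p in seen) / len(seen)
    cy = sum(p[1] for p in seen) / len(seen)
    return ((me[0] + cx) / 2, (me[1] + cy) / 2)

def lcm_round(robots):
    """Move: one fully synchronous round; all robots Look on the same
    snapshot, then all move."""
    return [compute(me, robots) for me in robots]
```

The paper's setting is harder on every axis (asynchrony, obliviousness, disagreement on handedness, provable collision-freedom and termination detection); the sketch only fixes the LCM mechanics in the reader's mind.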
Certified Impossibility Results for Byzantine-Tolerant Mobile Robots
We propose a framework to build formal developments for robot networks using the Coq proof assistant, to state and to prove various properties formally. We focus in this paper on impossibility proofs, as it is natural to take advantage of Coq's higher-order calculus to reason about algorithms as abstract objects. We present in particular formal proofs of two impossibility results for convergence of oblivious mobile robots when, respectively, more than one half and more than one third of the robots exhibit Byzantine failures, starting from the original theorems by Bouzid et al. Thanks to our formalization, the corresponding Coq developments are quite compact. To our knowledge, these are the first certified (in the sense of formally proved) impossibility results for robot networks.
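The two thresholds the abstract states can be mirrored as a tiny predicate. This only restates the abstract's counting condition (impossibility once more than one half, respectively one third, of the robots are Byzantine); the model labels and function name are invented for the illustration, and the actual results depend on model assumptions detailed in the paper.

```python
def impossibility_applies(n, f, one_third_model=False):
    """True when the stated impossibility result covers a system of n
    robots with f Byzantine failures: f exceeds n/2 in one model,
    n/3 in the other (per the abstract; details are in the paper)."""
    bound = n / 3 if one_third_model else n / 2
    return f > bound
```

Formally certifying such results in Coq means the counting argument, and everything around it, is machine-checked rather than informally asserted.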
Temporal Relational Reasoning in Videos
Temporal relational reasoning, the ability to link meaningful transformations
of objects or entities over time, is a fundamental property of intelligent
species. In this paper, we introduce an effective and interpretable network
module, the Temporal Relation Network (TRN), designed to learn and reason about
temporal dependencies between video frames at multiple time scales. We evaluate
TRN-equipped networks on activity recognition tasks using three recent video
datasets - Something-Something, Jester, and Charades - which fundamentally
depend on temporal relational reasoning. Our results demonstrate that the
proposed TRN gives convolutional neural networks a remarkable capacity to
discover temporal relations in videos. Through only sparsely sampled video
frames, TRN-equipped networks can accurately predict human-object interactions
in the Something-Something dataset and identify various human gestures on the
Jester dataset with very competitive performance. TRN-equipped networks also
outperform two-stream networks and 3D convolution networks in recognizing daily
activities in the Charades dataset. Further analyses show that the models learn intuitive and interpretable visual common-sense knowledge in videos.
Comment: camera-ready version for ECCV'18
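The multi-scale sampling idea behind TRN, score sparsely sampled ordered k-frame tuples at several scales k and sum the per-scale averages, can be sketched without any deep-learning framework. The relation function below is a stand-in (a plain average) for the learned MLPs g_k in the paper; frame values here play the role of frame features.

```python
import itertools
import random

def relation_score(frames, scales=(2, 3, 4), samples_per_scale=3, g=None, seed=0):
    """Toy multi-scale temporal relation score: for each scale k, sample
    ordered k-frame tuples, score each with g (a stand-in for the paper's
    learned g_k), and sum the per-scale averages."""
    rng = random.Random(seed)
    g = g or (lambda tup: sum(tup) / len(tup))  # placeholder relation function
    total = 0.0
    for k in scales:
        # combinations of indices are emitted in increasing order,
        # so sampled tuples preserve temporal order
        tuples = list(itertools.combinations(range(len(frames)), k))
        chosen = rng.sample(tuples, min(samples_per_scale, len(tuples)))
        total += sum(g([frames[i] for i in idx]) for idx in chosen) / len(chosen)
    return total
```

The sparse sampling is the point: the score is built from a handful of frame tuples per scale rather than the whole clip, which is what lets TRN-equipped networks reason over long videos cheaply.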