53 research outputs found
Incremental vision-based topological SLAM
Published version
Real-time visual loop-closure detection
Published version
Artificial evolution of the morphology and kinematics in a flapping-wing mini-UAV
DREAM Architecture: a Developmental Approach to Open-Ended Learning in Robotics
Robots are still limited to controlled conditions that the robot designer
knows in enough detail to endow the robot with the appropriate models or
behaviors. Learning algorithms add some flexibility with the ability to
discover the appropriate behavior given either some demonstrations or a reward
to guide its exploration with a reinforcement learning algorithm. Reinforcement
learning algorithms rely on the definition of state and action spaces that
define reachable behaviors. Their adaptation capability critically depends on
the representations of these spaces: small and discrete spaces result in fast
learning while large and continuous spaces are challenging and either require a
long training period or prevent the robot from converging to an appropriate
behavior. Besides the operational cycle of policy execution and the learning
cycle, which works at a slower time scale to acquire new policies, we introduce
the redescription cycle, a third cycle working at an even slower time scale to
generate or adapt the required representations to the robot, its environment
and the task. We introduce the challenges raised by this cycle and we present
DREAM (Deferred Restructuring of Experience in Autonomous Machines), a
developmental cognitive architecture to bootstrap this redescription process
stage by stage, build new state representations with appropriate motivations,
and transfer the acquired knowledge across domains or tasks or even across
robots. We describe the results obtained so far with this approach and conclude
with a discussion of the questions it raises in Neuroscience.
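As an illustration of the three nested cycles described above, here is a minimal Python sketch. It is not taken from the DREAM implementation; every name in it (DummyEnv, Policy, Representation, learn_policy, redescribe) is hypothetical and only illustrates the relative time scales of the operational, learning and redescription cycles.

```python
# Minimal sketch of the three nested cycles described in the abstract.
# NOT the DREAM implementation: all names below are hypothetical stubs.

class DummyEnv:
    """Trivial stand-in environment so the sketch is runnable."""
    def reset(self):
        return 0.0
    def step(self, action):
        observation, reward, done = 0.0, 0.0, False
        return observation, reward, done


class Representation:
    """Placeholder for a state/action space description."""
    def encode(self, observation):
        return observation  # identity encoding in this sketch


class Policy:
    """Placeholder policy acting on encoded states."""
    def act(self, state):
        return 0  # dummy action


def learn_policy(representation, experience):
    """Learning cycle: acquire a new policy from experience (stub)."""
    return Policy()


def redescribe(representation, experience):
    """Redescription cycle: adapt the representation to the robot,
    its environment and the task (stub)."""
    return Representation()


def run(env, steps, learn_every=100, redescribe_every=10_000):
    representation, policy, experience = Representation(), Policy(), []
    observation = env.reset()
    for t in range(steps):
        # Operational cycle: execute the current policy at every time step.
        state = representation.encode(observation)
        action = policy.act(state)
        observation, reward, done = env.step(action)
        experience.append((state, action, reward))
        if done:
            observation = env.reset()
        # Learning cycle: slower time scale, acquires a new policy.
        if (t + 1) % learn_every == 0:
            policy = learn_policy(representation, experience)
        # Redescription cycle: even slower time scale, rebuilds the
        # representation and re-learns a policy on top of it.
        if (t + 1) % redescribe_every == 0:
            representation = redescribe(representation, experience)
            policy = learn_policy(representation, experience)


run(DummyEnv(), steps=20_000)
```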
Evolutionary optimisation of neural network models for fish collective behaviours in mixed groups of robots and zebrafish
Animal and robot social interactions are interesting both for ethological
studies and robotics. On the one hand, the robots can be tools and models to
analyse animal collective behaviours, on the other hand, the robots and their
artificial intelligence are directly confronted and compared to the natural
animal collective intelligence. The first step is to design robots and their
behavioural controllers that are capable of socially interacting with animals.
Designing such behavioural bio-mimetic controllers remains an important
challenge as they have to reproduce the animal behaviours and have to be
calibrated on experimental data. Most animal collective behavioural models are
designed by modellers based on experimental data. This process is long and
costly because it is difficult to identify the relevant behavioural features
that are then used as a priori knowledge in model building. Here, we want to
model the fish individual and collective behaviours in order to develop robot
controllers. We explore the use of optimised black-box models based on
artificial neural networks (ANNs) to model fish behaviours. While these ANNs
may be bio-inspired rather than biomimetic, they can be used to link perception
to motor responses. These models are designed to be implementable as robot
controllers to form mixed groups of fish and robots, using little a priori
knowledge of the fish behaviours. We present a methodology with multilayer
perceptron or echo state networks that are optimised through evolutionary
algorithms to accurately model the individual and collective behaviours of fish in
a bounded rectangular arena. We assess the biomimetism of the generated models
and compare them to the experimental fish behaviours. Comment: 10 pages, 4 figures
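As a rough illustration of the methodology summarised above, the following Python sketch optimises the weights of a small multilayer perceptron (one of the two model types mentioned, the other being echo state networks) with a simple (mu + lambda) evolution strategy so that it maps perceptions to motor responses. This is an assumption, not the authors' implementation: the dataset, network sizes and fitness function below are hypothetical.

```python
# Hedged sketch: evolutionary optimisation of an MLP controller that maps
# perceptions (e.g. neighbour/wall distances) to motor responses.
# The data, network sizes and hyperparameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: perceptions -> observed fish motor responses.
perceptions = rng.normal(size=(500, 6))   # e.g. neighbour/wall distances and angles
responses = rng.normal(size=(500, 2))     # e.g. linear speed, turning rate

N_IN, N_HID, N_OUT = 6, 8, 2
N_PARAMS = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT


def mlp_forward(params, x):
    """Two-layer perceptron with tanh hidden units, weights given as a flat vector."""
    i = 0
    w1 = params[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = params[i:i + N_HID]; i += N_HID
    w2 = params[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = params[i:]
    return np.tanh(x @ w1 + b1) @ w2 + b2


def fitness(params):
    """Negative mean squared error between model output and recorded responses."""
    pred = mlp_forward(params, perceptions)
    return -np.mean((pred - responses) ** 2)


# Simple (mu + lambda) evolution strategy over the flattened MLP weights.
mu, lam, sigma = 10, 40, 0.1
population = rng.normal(scale=0.5, size=(lam, N_PARAMS))
for generation in range(200):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-mu:]]          # keep the mu best
    offspring = (parents[rng.integers(mu, size=lam - mu)]
                 + sigma * rng.normal(size=(lam - mu, N_PARAMS)))
    population = np.vstack([parents, offspring])             # (mu + lambda) survival

best = population[np.argmax([fitness(ind) for ind in population])]
print("best fitness:", fitness(best))
```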
Accessible Cultural Heritage through Explainable Artificial Intelligence
Ethics Guidelines for Trustworthy AI advocate for AI technology that is, among other things, more inclusive. Explainable AI (XAI) aims at making state-of-the-art opaque models more transparent, and defends AI-based outcomes endorsed with a rationale explanation, i.e., an explanation targeted at non-technical users. XAI and Responsible AI principles defend the idea that the audience's expertise should be included in the evaluation of explainable AI systems. However, AI has not yet reached all publics and audiences, some of which may need it the most. One example of a domain where accessibility has not been much influenced by the latest AI advances is cultural heritage. We propose including minorities as special users and evaluators of the latest XAI techniques. In order to define catalytic scenarios for collaboration and improved user experience, we pose some challenges and research questions yet to be addressed by the latest AI models likely to be involved in such synergy.
- …