Exploiting semantic knowledge in swarm robotic systems for target searching
Robotic systems have long been used for search and rescue tasks in hazardous environments. The prevailing solutions, which rely on sophisticated units for sensing and positioning, depend on globally shared information when multiple robots are deployed. To employ multiple robots (in particular, the swarm robots considered in this thesis) in a search task, their limited perceptual abilities and local communication range demand a new strategy for recording and exchanging environmental information in order to improve the robots' search efficiency.
This thesis presents a semantic knowledge-based mechanism for environmental information storage and communication in swarm robotic systems. Human expert knowledge about the environment can be exploited by such a mechanism to improve search efficiency. Robots that are not provided with this knowledge in advance can learn it in a task-oriented way and, by sharing what they learn, help other robots in the swarm find the target faster.
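The abstract does not detail the mechanism itself; as a loose, hedged illustration (all class and method names below are hypothetical, not the thesis's data structures), the following Python sketch shows how robots with only local communication might store and merge semantic hints about where a target is likely to be, and use them to bias their search.

```python
# Hypothetical sketch (not the thesis's actual mechanism): robots keep a small
# store of semantic facts about the environment, merge what they hear from
# neighbours within local communication range, and bias their search with it.

import random

class SemanticStore:
    """Maps a semantic label (e.g. 'doorway') to a belief that the target is near it."""
    def __init__(self):
        self.belief = {}            # label -> belief weight in [0, 1]

    def update(self, label, weight):
        # Keep the strongest evidence seen so far for each label.
        self.belief[label] = max(self.belief.get(label, 0.0), weight)

    def merge(self, other):
        # Local exchange: adopt any stronger beliefs held by a neighbouring robot.
        for label, weight in other.belief.items():
            self.update(label, weight)

    def pick_direction(self, visible_labels):
        # Prefer the heading whose visible semantic label carries the highest belief.
        if not visible_labels:
            return random.choice(["N", "S", "E", "W"])
        return max(visible_labels, key=lambda lv: self.belief.get(lv[0], 0.0))[1]

# Example: expert prior says targets tend to be near doorways.
a, b = SemanticStore(), SemanticStore()
a.update("doorway", 0.9)            # knowledge provided in advance to robot a
b.merge(a)                          # shared during a local encounter
print(b.pick_direction([("doorway", "E"), ("wall", "N")]))   # -> "E"
```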
Robot Navigation in Unseen Spaces using an Abstract Map
Human navigation in built environments depends on symbolic spatial information which has unrealised potential to enhance robot navigation capabilities. Information sources such as labels, signs, maps, planners, spoken directions, and navigational gestures communicate a wealth of spatial information to the navigators of built environments; a wealth of information that robots typically ignore. We present a robot navigation system that uses the same symbolic spatial information employed by humans to purposefully navigate in unseen built environments with a level of performance comparable to humans. The navigation system uses a novel data structure called the abstract map to imagine malleable spatial models for unseen spaces from spatial symbols. Sensorimotor perceptions from a robot are then employed to provide purposeful navigation to symbolic goal locations in the unseen environment. We show how a dynamic system can be used to create malleable spatial models for the abstract map, and provide an open source implementation to encourage future work in the area of symbolic navigation. Symbolic navigation performance of humans and a robot is evaluated in a real-world built environment. The paper concludes with a qualitative analysis of human navigation strategies, providing further insights into how the symbolic navigation capabilities of robots in unseen built environments can be improved in the future.
Comment: 15 pages, published in IEEE Transactions on Cognitive and Developmental Systems (http://doi.org/10.1109/TCDS.2020.2993855); see https://btalb.github.io/abstract_map/ for access to software.
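The abstract does not spell out the dynamic system; as a rough, hedged sketch of the general idea of a malleable spatial model (a spring-style relaxation over symbolic place hints is my assumption, and all names and numbers are illustrative; the released software at https://btalb.github.io/abstract_map/ is the authoritative implementation), one could imagine:

```python
# Rough illustrative sketch (assumptions, not the paper's implementation): treat
# symbolic places as 2-D points joined by springs whose rest lengths encode
# spatial hints ("room 402 is next to room 403"), then relax the system so a
# plausible layout can be imagined before the space is ever seen.

import numpy as np

def relax(places, springs, iters=500, step=0.05):
    """places: dict name -> np.array([x, y]); springs: list of (a, b, rest_length)."""
    for _ in range(iters):
        forces = {name: np.zeros(2) for name in places}
        for a, b, rest in springs:
            d = places[b] - places[a]
            dist = np.linalg.norm(d) + 1e-9
            f = (dist - rest) * d / dist          # Hooke-style pull/push along the spring
            forces[a] += f
            forces[b] -= f
        for name in places:
            places[name] += step * forces[name]
    return places

# Hypothetical hints: rooms 401-403 sit roughly 3 m apart along a corridor.
rng = np.random.default_rng(0)
places = {r: rng.uniform(-1, 1, 2) for r in ["401", "402", "403"]}
springs = [("401", "402", 3.0), ("402", "403", 3.0), ("401", "403", 6.0)]
layout = relax(places, springs)
print({k: np.round(v, 1) for k, v in layout.items()})
```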
We Need New Scientific Languages to Harness the Complexity of Cognitive Development
To study the behavioral and cognitive structures that form in the childhood of animals, cognitive science has long organized its theories around identifying what is innate and what is learned, and how nature and nurture interact to influence each other. However, recent advances in the developmental sciences are strongly questioning the relevance of this scientific framing based on the nature/nurture divide, including arguments that rely on their interaction, because it puts the focus on states and causal explanations while key dimensions of development are processes of change and pattern formation. Yet, as John Spencer, Mark Blumberg and David Shenk argue in the dialogue featured in this newsletter, the nativist/empiricist perspective and its associated concepts still play a central role both in cognitive science and in the way it is explained and understood by the broader public. Responding to this dialogue, several key researchers from the cognitive and developmental sciences, as well as biology, offer their responses and views.
A competence-performance based model to develop a syntactic language for artificial agents
The hypothesis of language use is an attractive theory for explaining how natural languages evolve and develop in social populations. In this paper we present a model, partially based on the idea of language games, in which a group of artificial agents produce and share a symbolic language with syntactic structure. Grammatical structure is induced by grammatical evolution of stochastic regular grammars with learning capabilities, while language development is refined by means of language games in which the agents apply on-line probabilistic reinforcement learning. Within this framework, the model adapts the concepts of competence and performance in language, as they have been proposed in some linguistic theories. The first experiments in this article are organized around the linguistic description of visual scenes, with the possibility of changing the referential situations. A second, more complicated experimental setting is also analyzed, in which the linguistic descriptions are required to respect word-order constraints.
The second author has been supported by the Spanish Ministry of Science under contract ENE2014-56126-C2-2-R (AOPRIN-SOL).
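As a hedged illustration of the language-game ingredient only (the paper's grammatical evolution of stochastic regular grammars is not reproduced here, and the class names and update rule below are my assumptions), a minimal naming game with probabilistic reinforcement might look like this:

```python
# Minimal sketch of a language game with probabilistic reinforcement: speaker
# and hearer keep word-meaning scores and nudge them towards success.

import random

class Agent:
    def __init__(self):
        self.scores = {}            # (word, meaning) -> score

    def word_for(self, meaning):
        cands = [(w, s) for (w, m), s in self.scores.items() if m == meaning]
        if not cands:
            w = "w" + str(random.randrange(10**6))   # invent a new word
            self.scores[(w, meaning)] = 0.5
            return w
        return max(cands, key=lambda ws: ws[1])[0]

    def reinforce(self, word, meaning, success, rate=0.1):
        s = self.scores.get((word, meaning), 0.5)
        self.scores[(word, meaning)] = s + rate * ((1.0 if success else 0.0) - s)

agents = [Agent() for _ in range(5)]
meanings = ["red-ball", "blue-box"]
for _ in range(2000):
    speaker, hearer = random.sample(agents, 2)
    m = random.choice(meanings)
    w = speaker.word_for(m)
    guess = max(meanings, key=lambda mm: hearer.scores.get((w, mm), 0.0))
    ok = (guess == m)
    speaker.reinforce(w, m, ok)
    hearer.reinforce(w, m, True)    # hearer aligns with the speaker's usage
print(agents[0].scores)
```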
Simultaneous learning of a new task and of the interpretation of a human's social signals in robotics (Apprentissage simultané d'une tâche nouvelle et de l'interprétation de signaux sociaux d'un humain en robotique)
This thesis investigates how a machine can be taught a new task from unlabeled human instructions, that is, without knowing beforehand how to associate the human communicative signals with their meanings. The theoretical and empirical work presented in this thesis provides the means to create calibration-free interactive systems, which allow humans to interact with machines, from scratch, using their own preferred teaching signals. It therefore removes the need for an expert to tune the system for each specific user, which constitutes an important step towards flexible personalized teaching interfaces, a key for the future of personal robotics. Our approach assumes the robot has access to a limited set of task hypotheses, which includes the task the user wants to solve. Our method consists of generating interpretation hypotheses of the teaching signals with respect to each hypothetic task. By building a set of hypothetic interpretations, i.e. a set of signal-label pairs for each task, the task the user wants to solve is identified as the one that best explains the history of interaction. We consider different scenarios, including a pick-and-place robotics experiment with speech as the modality of interaction, and a navigation task in a brain-computer interaction scenario. In these scenarios, a teacher instructs a robot to perform a new task using initially unclassified signals, whose associated meaning can be a feedback (correct/incorrect) or a guidance (go left, right, up, ...). Our results show that a) it is possible to learn the meaning of unlabeled and noisy teaching signals, as well as a new task, at the same time, and b) it is possible to reuse the acquired knowledge about the teaching signals to learn new tasks faster. We further introduce a planning strategy that exploits the uncertainty about the task and the signals' meanings to allow more efficient learning sessions. We present a study in which several real human subjects successfully control a virtual device using their brain signals and without relying on a calibration phase. Our system identifies, from scratch, the target intended by the user as well as the decoder of brain signals. Based on this work, but from another perspective, we introduce a new experimental setup to study how humans behave in asymmetric collaborative tasks. In this setup, two humans have to collaborate to solve a task, but the channels of communication they can use are constrained and force them to invent and agree on a shared interaction protocol in order to solve the task. These constraints allow us to analyze how a communication protocol is progressively established through the interplay and history of individual actions.
This thesis addresses a logical problem with multiple theoretical and practical implications. Put simply, it can be presented as follows: imagine you are in a maze and know all the routes leading to each of the exit doors. Behind one of these doors lies a treasure, but you are allowed to open only one door. An old man who lives in the maze knows the right exit and offers to help you identify it. To do so, he will tell you which direction to take at each intersection. Unfortunately, this man does not speak your language, and the words he uses to say "right" or "left" are unknown to you. Is it possible to find the treasure while also working out the association between the old man's words and their meanings? Although seemingly abstract, this problem relates to concrete issues in the field of human-machine interaction. Replace the old man with a user who wants to guide a robot towards a specific exit of the maze. The robot does not know in advance which exit is the right one, but it knows where each of the doors is and how to reach them. Now imagine that this robot does not understand the human's language a priori; indeed, it is very difficult to build a robot capable of perfectly understanding every language, accent, and individual preference. The robot therefore has to learn the association between the user's words and their meanings while carrying out the task the human is indicating (i.e. finding the right door). Another way to describe this problem is in terms of self-calibration: solving it would amount to creating interfaces that require no calibration phase, because the machine could adapt, automatically and during the interaction, to different people who do not speak the same language or do not use the same words to say the same thing. It also means that other interaction modalities (for example gestures, facial expressions, or brain signals) could easily be considered. In this thesis, we present a solution to this problem. We apply our algorithms to two typical examples of human-robot and brain-computer interaction: a task of arranging a series of objects according to the preferences of a user who guides the robot by voice, and a navigation task on a grid guided by the user's brain signals. These latter experiments were carried out with real users. Our results demonstrate experimentally that our approach works and allows the practical use of an interface without prior calibration.
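As a minimal, hedged sketch of the central idea of interpreting the signals under each task hypothesis and keeping the hypothesis that best explains the interaction history (the Gaussian signal model, names, and numbers below are illustrative assumptions, not the thesis's algorithm):

```python
# Hedged sketch: for every task hypothesis (candidate door), interpret the
# unlabeled teaching signals as the feedback they *would* mean if that
# hypothesis were true, and keep the hypothesis that explains the history best.

import numpy as np

rng = np.random.default_rng(1)
doors = [0, 1, 2]
true_door = 2

# Simulated interaction history: (door the robot tried, scalar teaching signal).
# The user means "correct" (signal ~ +1) when the tried door is the true one,
# and "incorrect" (signal ~ -1) otherwise.
history = []
for _ in range(40):
    tried = rng.choice(doors)
    meaning = +1.0 if tried == true_door else -1.0
    history.append((tried, rng.normal(meaning, 0.7)))

def score(hypothesis):
    # Under this hypothesis, each past signal gets a label, and we score how
    # well a fixed two-Gaussian feedback model (centres +1 / -1) explains it.
    ll = 0.0
    for tried, sig in history:
        centre = 1.0 if tried == hypothesis else -1.0
        ll += -0.5 * ((sig - centre) / 0.7) ** 2
    return ll

best = max(doors, key=score)
print("estimated target door:", best, "| true door:", true_door)
```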
Beyond an Anthropomorphic Template
In our endeavours to explore all possible forms that non-terrestrial communication may take, we must eventually set aside our anthropomorphic bias and investigate the implications of post-biological intelligence for SETI search strategies. In the event a candidate signal is detected, our initial categorization and assessment will focus on analyzing its constituent constructs to ascertain whether information content, a fundamental signature of intelligence, is present. To ensure our systems are capable of encompassing such intelligent communicators, we need to investigate both the contrasts and similarities of such non-biological communication and how it extends the known spectrum. In this paper, we begin to investigate the likely signatures and contrasting structures such non-biological communicators may present to us, across a range of known machine communication phenomena, and discuss how such contrasting forms of information exchange can aid, extend and refine our detection and decipherment capabilities.
Communication between Lingodroids with different cognitive capabilities
Previous studies have shown how Lingodroids, language learning mobile robots, learn terms for space and time, connecting their personal maps of the world to a publicly shared language. One caveat of previous studies was that the robots shared the same cognitive architecture, identical in all respects from sensors to mapping systems. In this paper we investigate the question of how terms for space can be developed between robots that have fundamentally different sensors and spatial representations. In the real world, communication needs to occur between agents that have different embodiments and cognitive capabilities, including different sensors, different representations of the world, and different species (including humans). The novel aspect of these studies is that one robot uses a forward-facing camera to estimate appearance and a biologically inspired continuous attractor network to generate a topological map, while the other robot uses a laser scanner to estimate range and a probabilistic filter approach to generate an occupancy grid. The robots hold conversations in different locations to establish a shared language. Despite their different ways of sensing and mapping the world, the robots are able to create coherent lexicons for the space around them.
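As a loose, hedged illustration of such a location-naming conversation (the class, the word-coining rule, and the distance threshold below are assumptions for illustration, not the Lingodroids implementation), two robots can ground the same word in their own, different map frames:

```python
# Illustrative sketch of a "where-are-we" style game: two robots with different
# spatial representations meet, one names the current place, and each grounds
# that word in its *own* estimate of where it is.

import random

class Lingodroid:
    def __init__(self, name):
        self.name = name
        self.lexicon = {}                      # word -> location in this robot's own map frame

    def word_for_here(self, my_location, radius=2.0):
        # Reuse a nearby existing word, otherwise coin a new one.
        for word, loc in self.lexicon.items():
            if sum((a - b) ** 2 for a, b in zip(loc, my_location)) ** 0.5 < radius:
                return word
        word = "".join(random.choice("ptkbdg") + random.choice("aeiou") for _ in range(2))
        self.lexicon[word] = my_location
        return word

    def hear(self, word, my_location):
        # The hearer grounds the word in its own (possibly different) map.
        self.lexicon.setdefault(word, my_location)

cat_bot = Lingodroid("topological")            # e.g. camera + continuous attractor network
dog_bot = Lingodroid("occupancy-grid")         # e.g. laser + probabilistic grid map
# One conversation at a shared physical place, seen through each robot's own coordinates.
word = cat_bot.word_for_here((10.2, 4.1))
dog_bot.hear(word, (9.8, 4.5))
print(word, cat_bot.lexicon[word], dog_bot.lexicon[word])
```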