A Comparison of Different Cognitive Paradigms Using Simple Animats in a Virtual Laboratory, with Implications to the Notion of Cognition
In this thesis I present a virtual laboratory which implements five different models for controlling animats: a rule-based system, a behaviour-based system, a concept-based system, a neural network, and a Braitenberg architecture. Through different experiments, I compare the performance of the models and conclude that there is no best model, since different models are better suited to different tasks in different contexts. The models I chose, although quite simple, represent different approaches for studying cognition. Using the results as an empirical philosophical aid, I note that there is no best approach for studying cognition, since each approach has its own advantages and disadvantages, as each studies different aspects of cognition from different contexts. This has implications for current debates on proper approaches to cognition: every approach is proper to a degree, but none is proper enough on its own. I offer remarks on the notion of cognition, abstracting from all the approaches used to study it, and propose a simple classification of different types of cognition.
Using Reinforcement Learning to Attenuate for Stochasticity in Robot Navigation Controllers
Braitenberg vehicles are bio-inspired controllers for sensor-based local navigation of wheeled robots that have been used in multiple real-world robotic implementations. The common approach to implementing such non-linear control mechanisms is through neural networks connecting sensing to motor action, yet tuning the weights to obtain appropriate closed-loop navigation behaviours can be very challenging. Standard approaches used hand-tuned spiking or recurrent neural networks, or learnt the weights of feedforward networks using evolutionary approaches. Recently, Reinforcement Learning has been used to learn neural controllers for a simulated Braitenberg vehicle 3a (a bio-inspired model of target seeking for wheeled robots) under the assumption of noiseless sensors. Real sensors, however, are subject to different levels of noise, and multiple works have shown that Braitenberg vehicles work even on outdoor robots, demonstrating that these control mechanisms function in harsh and dynamic environments. This paper shows that a robust neural controller for Braitenberg vehicle 3a can be learnt using policy gradient reinforcement learning in scenarios where sensor noise plays a non-negligible role. The learnt controller is robust and attenuates the effects of noise on the closed-loop navigation behaviour of the simulated stochastic vehicle. We compare the neural controller learnt using Reinforcement Learning with a simple hand-tuned controller and show how the neural control mechanism outperforms a naïve controller. Results are illustrated through computer simulations of the closed-loop stochastic system.
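As a rough illustration of the control scheme this abstract builds on (the hand-wired vehicle, not the learnt policy), a Braitenberg vehicle 3a can be sketched as a differential-drive robot whose wheels are inhibited by the ipsilateral sensor, so the wheel nearer the stimulus slows and the vehicle turns toward it. The inverse-square-style intensity model, sensor geometry, and all constants below are illustrative assumptions, not the paper's setup:

```python
import math
import random

def sensor_reading(sx, sy, tx, ty, noise_std=0.0):
    """Stimulus intensity at a sensor: decays with squared distance to the
    target, plus additive Gaussian noise (clipped at zero)."""
    d2 = (tx - sx) ** 2 + (ty - sy) ** 2
    return max(0.0, 1.0 / (1.0 + d2) + random.gauss(0.0, noise_std))

def step(x, y, theta, tx, ty,
         v_max=1.0, k=0.8, axle=0.2, dt=0.1, noise_std=0.02):
    """One update of a Braitenberg 3a vehicle: ipsilateral inhibitory wiring,
    so the wheel on the side of the stronger stimulus slows down and the
    vehicle steers toward the target."""
    # Two sensors mounted at the front-left and front-right of the body.
    lx = x + 0.1 * math.cos(theta + 0.5)
    ly = y + 0.1 * math.sin(theta + 0.5)
    rx = x + 0.1 * math.cos(theta - 0.5)
    ry = y + 0.1 * math.sin(theta - 0.5)
    s_left = sensor_reading(lx, ly, tx, ty, noise_std)
    s_right = sensor_reading(rx, ry, tx, ty, noise_std)
    # Ipsilateral inhibition: left sensor slows the left wheel, right the right.
    v_left = v_max - k * s_left
    v_right = v_max - k * s_right
    # Differential-drive kinematics.
    v = 0.5 * (v_left + v_right)
    omega = (v_right - v_left) / axle
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)
```

Note the steering direction: if the target lies to the left, the left sensor reads higher, the left wheel slows, and the angular velocity becomes positive, turning the vehicle leftward toward the stimulus even when the readings are noisy.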
Neuroethology, Computational
Over the past decade, a number of neural network researchers have used the term computational neuroethology to describe a specific approach to neuroethology. Neuroethology is the study of the neural mechanisms underlying the generation of behavior in animals, and hence it lies at the intersection of neuroscience (the study of nervous systems) and ethology (the study of animal behavior); for an introduction to neuroethology, see Simmons and Young (1999). The definition of computational neuroethology is very similar, but is not quite so dependent on studying animals: animals just happen to be biological autonomous agents. But there are also non-biological autonomous agents, such as some types of robots and some types of simulated embodied agents operating in virtual worlds. In this context, autonomous agents are self-governing entities capable of operating (i.e., coordinating perception and action) for extended periods of time in environments that are complex, uncertain, and dynamic. Thus, computational neuroethology can be characterised as the attempt to analyze the computational principles underlying the generation of behavior in animals and in artificial autonomous agents.
Braitenberg Vehicles as Developmental Neurosimulation
The connection between brain and behavior is a longstanding issue in the areas of behavioral science, artificial intelligence, and neurobiology. Particularly in artificial intelligence research, behavior is generated by a black box approximating the brain. As is standard among models of artificial and biological neural networks, an analogue of the fully mature brain is presented as a blank slate. This model generates outputs and behaviors from a priori associations, yet this does not consider the realities of biological development and developmental learning. Our purpose is to model the development of an artificial organism that exhibits complex behaviors. We will introduce our approach, which is to use Braitenberg Vehicles (BVs) to model the development of an artificial nervous system. The resulting developmental BVs will generate behaviors that range from stimulus responses to group behavior that resembles collective motion. Next, we will situate this work in the domain of artificial brain networks. Then we will focus on broader themes such as embodied cognition, feedback, and emergence. Our perspective will then be exemplified by three software instantiations that demonstrate how a BV-genetic algorithm hybrid model, a multisensory Hebbian learning model, and multi-agent approaches can be used to approach BV development. We introduce use cases such as optimized spatial cognition (vehicle-genetic algorithm hybrid model), hinges connecting behavioral and neural models (multisensory Hebbian learning model), and cumulative classification (multi-agent approaches). In conclusion, we will revisit concepts related to our approach and how they might guide future development.

Comment: 32 pages, 8 figures, 2 tables
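The multisensory Hebbian learning model mentioned in this abstract rests on the classic Hebbian idea that a connection strengthens when its pre- and post-synaptic units are active together. A minimal sketch of that rule with passive weight decay follows; the learning rate, decay constant, and the toy two-channel stimuli are assumptions for illustration, not the paper's parameters:

```python
def hebbian_step(w, pre, post, eta=0.05, decay=0.001):
    """One Hebbian update with passive decay.

    w[i][j] is the weight from presynaptic unit i to postsynaptic unit j;
    co-active pairs (pre[i] * post[j] > 0) grow, idle weights slowly decay.
    """
    return [[w[i][j] + eta * pre[i] * post[j] - decay * w[i][j]
             for j in range(len(post))]
            for i in range(len(pre))]

def train_correlated(steps=40):
    """Repeatedly present a correlated cross-modal pattern: channel 0 in one
    modality always co-occurs with channel 0 in the other."""
    w = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(steps):
        w = hebbian_step(w, pre=[1.0, 0.0], post=[1.0, 0.0])
    return w
```

Under this toy stimulation only the weight linking the two co-active channels grows, which is the basic mechanism a multisensory BV could use to associate, say, a light cue with a co-occurring sound cue.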