
    Biologically inspired computational structures and processes for autonomous agents and robots

    Recent years have seen a proliferation of intelligent agent applications: from robots for space exploration to software agents for information filtering and electronic commerce on the Internet. Although the scope of these agent applications has blossomed tremendously since the advent of compact, affordable computing (and the recent emergence of the World Wide Web), the design of such agents for specific applications remains a daunting engineering problem. Rather than approach the design of artificial agents from a purely engineering standpoint, this dissertation views animals as biological agents, and considers artificial analogs of biological structures and processes in the design of effective agent behaviors. In particular, it explores behaviors generated by artificial neural structures appropriately shaped by the processes of evolution and spatial learning.

    The first part of this dissertation deals with the evolution of artificial neural controllers for a box-pushing robot task. We show that evolution discovers high-fitness structures using little domain-specific knowledge, even in feedback-impoverished environments. Through a careful analysis of the evolved designs we also show how evolution exploits environmental constraints and properties to produce designs of superior adaptive value. By modifying the task constraints in controlled ways, we also show the ability of evolution to quickly adapt to these changes and exploit them to obtain significant performance gains. We also use evolution to design the sensory systems of the box-pushing robots, particularly the number, placement, and ranges of their sensors. We find that evolution automatically discards unnecessary sensors, retaining only the ones that appear to significantly affect the performance of the robot. This optimization of design across multiple dimensions (performance, number of sensors, size of neural controller, etc.) is achieved implicitly by the evolutionary algorithm without any external pressure (e.g., a penalty on the use of more sensors or neurocontroller units). When used in the design of robots with limited battery capacities, evolution produces energy-efficient robot designs that use minimal numbers of components and yet perform reasonably well. The performance as well as the complexity of the robot designs increase when the robots have access to a spatial learning mechanism that allows them to learn, remember, and navigate to power sources in the environment.

    The second part of this dissertation develops a computational characterization of the hippocampal formation, which is known to play a significant role in animal spatial learning. The model is based on neuroscientific and behavioral data, and learns place maps based on interactions of sensory and dead-reckoning information streams. Using an estimation mechanism known as Kalman filtering, the model explicitly deals with uncertainties in the two information streams, allowing the robot to effectively learn and localize even in the presence of sensing and motion errors. Additionally, the model has mechanisms to handle perceptual aliasing problems (where multiple places in the environment appear sensorily identical), to incrementally learn and integrate local place maps, and to learn and remember multiple goal locations in the environment. We show a number of properties of this spatial learning model, including computational replication of several behavioral experiments performed with rodents. Not only does this model make significant contributions to robot localization, it also offers a number of predictions and suggestions that can be validated (or refuted) through systematic neurobiological and behavioral experiments with animals
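
    A minimal sketch of the kind of Kalman-filter update described above, fusing a noisy dead-reckoning prediction with a noisy sensory position fix. The state layout, noise magnitudes, and function names are illustrative assumptions, not the dissertation's actual formulation.

```python
import numpy as np

# Hypothetical 2-D position filter: state is (x, y). The process model is
# dead reckoning (add the odometry displacement); the measurement is a
# noisy sensory fix on position. All noise values are made up.

def predict(x, P, odom, Q):
    """Dead-reckoning step: shift the estimate by the odometry reading."""
    x = x + odom            # motion model: integrate the displacement
    P = P + Q               # motion uncertainty grows the covariance
    return x, P

def correct(x, P, z, R):
    """Sensory step: blend in a position measurement z (H = I)."""
    S = P + R                       # innovation covariance
    K = P @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - x)             # pull the estimate toward the sensor fix
    P = (np.eye(2) - K) @ P         # uncertainty shrinks after the update
    return x, P

if __name__ == "__main__":
    x, P = np.zeros(2), np.eye(2) * 0.1
    Q = np.eye(2) * 0.05            # assumed dead-reckoning noise
    R = np.eye(2) * 0.20            # assumed sensory noise
    for odom, z in [(np.array([1.0, 0.0]), np.array([1.10, -0.05])),
                    (np.array([1.0, 0.0]), np.array([2.05, 0.02]))]:
        x, P = predict(x, P, odom, Q)
        x, P = correct(x, P, z, R)
    print("estimated position:", x)
```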

    Simultaneous incremental neuroevolution of motor control, navigation and object manipulation in 3D virtual creatures

    There have been numerous attempts to develop 3D virtual agents by applying evolutionary processes to populations that exist in a realistic physical simulation. Whilst often contributing useful knowledge, no previous work has demonstrated the capacity to evolve a sequence of increasingly complex behaviours in a single, unified system. This thesis has this demonstration as its primary aim. A rigorous exploration of one aspect of incremental artificial evolution was carried out to understand how subtask presentations affect the whole-task generalisation performance of evolved, fixed-morphology 3D agents. Results from this work led to the design of an environment–body–control architecture that can be used as a base for evolving multiple behaviours incrementally. A simulation based on this architecture with a more complex environment was then developed and explored. This system was then adapted to include elements of physical manipulation as a first step toward a fully physical virtual creature environment demonstrating advanced evolved behaviours. The thesis demonstrates that incremental evolutionary systems can be subject to problems of forgetting and loss of gradient, and that different complexification strategies have a strong bearing on the management of these issues. Presenting successive generations of the population to a full range of objective functions (covering and revisiting the range of complexity) outperforms straightforward linear or direct presentations, establishing a more robust approach to the evolution of naturalistic embodied agents. When combining this approach with a bespoke control architecture in a problem requiring reactive and deliberative behaviours, we see results that not only demonstrate success at the tasks, but also show a variety of intricate behaviours being used. This is the first ever example of the simultaneous incremental evolution in 3D of composite behaviours more complex than simple locomotion. Finally, the architecture demonstrably supports extension to manipulation in a feedback control task. Given the problem-agnostic controller architecture, these results indicate a system with potential for discovering yet more advanced behaviours in yet more complex environments
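
    As a rough illustration of the "covering and revisiting" presentation the thesis finds most robust, the sketch below sweeps each generation's evaluation over the full set of objective functions rather than switching to a new subtask and abandoning earlier ones. The population model, operators, and dummy task functions are placeholders, not the thesis's actual system.

```python
import random

# Placeholder subtask objectives, ordered from simple to complex (dummies).
def locomotion_score(genome):   return -abs(sum(genome))
def navigation_score(genome):   return -abs(sum(genome) - 1.0)
def manipulation_score(genome): return -abs(sum(genome) - 2.0)

OBJECTIVES = [locomotion_score, navigation_score, manipulation_score]

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0, sigma) for g in genome]

def evolve(pop_size=50, genome_len=8, generations=300):
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for gen in range(generations):
        # "Covering and revisiting": cycle through every objective repeatedly
        # instead of presenting each subtask once in a fixed linear order,
        # which helps guard against forgetting and loss of gradient.
        objective = OBJECTIVES[gen % len(OBJECTIVES)]
        scored = sorted(pop, key=objective, reverse=True)
        parents = scored[:pop_size // 2]
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=lambda g: sum(f(g) for f in OBJECTIVES))

if __name__ == "__main__":
    best = evolve()
    print("best composite genome:", [round(g, 3) for g in best])
```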

    A Survey of Robotics Control Based on Learning-Inspired Spiking Neural Networks

    Biological intelligence processes information using impulses or spikes, which enables living creatures to perceive and act in the real world exceptionally well and to outperform state-of-the-art robots in almost every aspect of life. To close this gap, emerging hardware technologies and software knowledge in the fields of neuroscience, electronics, and computer science have made it possible to design biologically realistic robots controlled by spiking neural networks (SNNs), inspired by the mechanisms of the brain. However, a comprehensive review of robot control based on SNNs is still missing. In this paper, we survey the developments of the past decade in the field of spiking neural networks for control tasks, with particular focus on fast-emerging robotics-related applications. We first highlight the primary impetuses of SNN-based robotics tasks in terms of speed, energy efficiency, and computational capability. We then classify these SNN-based robotic applications according to their learning rules and explicate each learning rule together with its corresponding robotic applications. We also briefly present existing platforms that offer an interface between SNNs and robotics simulations for exploration and exploitation. Finally, we conclude our survey with a forecast of future challenges and some associated potential research topics in terms of controlling robots based on SNNs
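
    To make the control setting concrete, here is a minimal leaky integrate-and-fire neuron of the kind such SNN controllers are typically built from, mapping a sensor-driven input current to a spike train whose rate could drive a motor command. The parameters and the rate-to-velocity mapping are illustrative assumptions rather than anything from the surveyed systems.

```python
import numpy as np

def lif_spikes(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron; return spike times (s)."""
    v, spikes = v_rest, []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:           # threshold crossing emits a spike
            spikes.append(step * dt)
            v = v_reset             # reset after the spike
    return spikes

if __name__ == "__main__":
    t = np.arange(0, 0.5, 1e-3)
    sensor_drive = 1.5 + 0.5 * np.sin(2 * np.pi * 4 * t)   # toy sensor signal
    spikes = lif_spikes(sensor_drive)
    rate = len(spikes) / 0.5                                # spikes per second
    motor_velocity = 0.01 * rate    # assumed linear rate-to-velocity decoding
    print(f"{len(spikes)} spikes, ~{rate:.0f} Hz -> velocity {motor_velocity:.2f} m/s")
```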

    On microelectronic self-learning cognitive chip systems

    After a brief review of machine learning techniques and applications, this Ph.D. thesis examines several approaches developed in our laboratory for implementing machine learning architectures and algorithms in hardware. This interdisciplinary background motivates novel hardware implementations of dynamically self-reconfigurable logic for enhanced self-adaptive, self-(re)organizing and eventually self-assembling machine learning systems, and frames the development of this emerging area of research. After reviewing relevant background on robotic control methods and on the most recent advanced cognitive controllers, the thesis then argues that, among the many well-known ways of designing operational technologies, the design methodologies for leading-edge devices such as cognitive chips, which may well lead to intelligent machines exhibiting conscious phenomena, should crucially be restricted to extremely well-defined constraints. Roboticists also need such constraints as specifications to help decide up front on otherwise unbounded hardware/software design details. Most importantly, we propose these specifications as methodological guidelines tightly related to ethics and to the now well-identified workings of the human body and of its psyche

    Multiagent Learning Through Indirect Encoding

    Designing a system of multiple, heterogeneous agents that cooperate to achieve a common goal is a difficult task, but it is also a common real-world problem. Multiagent learning addresses this problem by training the team to cooperate through a learning algorithm. However, most traditional approaches treat multiagent learning as a combination of multiple single-agent learning problems. This perspective leads to many inefficiencies in learning such as the problem of reinvention, whereby fundamental skills and policies that all agents should possess must be rediscovered independently for each team member. For example, in soccer, all the players know how to pass and kick the ball, but a traditional algorithm has no way to share such vital information because it has no way to relate the policies of agents to each other. In this dissertation a new approach to multiagent learning that seeks to address these issues is presented. This approach, called multiagent HyperNEAT, represents teams as a pattern of policies rather than individual agents. The main idea is that an agent’s location within a canonical team layout (such as a soccer team at the start of a game) tends to dictate its role within that team, called the policy geometry. For example, as soccer positions move from goal to center they become more offensive and less defensive, a concept that is compactly represented as a pattern.

    The first major contribution of this dissertation is a new method for evolving neural network controllers called HyperNEAT, which forms the foundation of the second contribution and primary focus of this work, multiagent HyperNEAT. Multiagent learning in this dissertation is investigated in predator-prey, room-clearing, and patrol domains, providing a real-world context for the approach. Interestingly, because the teams in multiagent HyperNEAT are represented as patterns they can scale up to an infinite number of multiagent policies that can be sampled from the policy geometry as needed. Thus the third contribution is a method for teams trained with multiagent HyperNEAT to dynamically scale their size without further learning. Fourth, the capabilities to both learn and scale in multiagent HyperNEAT are compared to the traditional multiagent SARSA(λ) approach in a comprehensive study. The fifth contribution is a method for efficiently learning and encoding multiple policies for each agent on a team to facilitate learning in multi-task domains. Finally, because there is significant interest in practical applications of multiagent learning, multiagent HyperNEAT is tested in a real-world military patrolling application with actual Khepera III robots. The ultimate goal is to provide a new perspective on multiagent learning and to demonstrate the practical benefits of training heterogeneous, scalable multiagent teams through generative encoding
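
    A toy sketch of the policy-geometry idea: a single generative pattern maps each agent's position in a canonical team layout to that agent's controller weights, so a team of any size can be sampled without retraining. The pattern function below is a hand-written stand-in for an evolved CPPN, and the network shapes are invented for illustration only.

```python
import numpy as np

def pattern(team_x, in_x, out_x):
    """Stand-in for an evolved CPPN: weight as a smooth function of the
    agent's position in the team layout and the connection's endpoints."""
    return np.sin(3 * team_x) * np.cos(2 * (in_x - out_x)) + 0.1 * team_x

def sample_team(n_agents, n_inputs=4, n_outputs=2):
    """Generate one weight matrix per agent from its normalized team position."""
    team = []
    for a in range(n_agents):
        team_x = a / max(n_agents - 1, 1)          # position along the line-up
        w = np.array([[pattern(team_x, i / n_inputs, o / n_outputs)
                       for i in range(n_inputs)]
                      for o in range(n_outputs)])
        team.append(w)
    return team

if __name__ == "__main__":
    # The same pattern yields a 5-agent or a 20-agent team on demand, with
    # roles varying smoothly across the layout (e.g. defensive -> offensive).
    for size in (5, 20):
        team = sample_team(size)
        print(size, "agents, first-agent weights:\n", np.round(team[0], 2))
```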

    The 1995 Goddard Conference on Space Applications of Artificial Intelligence and Emerging Information Technologies

    This publication comprises the papers presented at the 1995 Goddard Conference on Space Applications of Artificial Intelligence and Emerging Information Technologies held at the NASA/Goddard Space Flight Center, Greenbelt, Maryland, on May 9-11, 1995. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed

    Evolutionary, developmental neural networks for robust robotic control

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 136-143).

    The use of artificial evolution to synthesize controllers for physical robots is still in its infancy. Most applications are on very simple robots in artificial environments, and even these examples struggle to span the "reality gap," a name given to the difference between the performance of a simulated robot and the performance of a real robot using the same evolved controller. This dissertation describes three methods for improving the use of artificial evolution as a tool for generating controllers for physical robots. First, the evolutionary process must incorporate testing on the physical robot. Second, repeated structure on the robot should be exploited. Finally, prior knowledge about the robot and task should be meaningfully incorporated. The impact of these three methods, both in simulation and on physical robots, is demonstrated, quantified, and compared to hand-designed controllers. By Bryan Adams, Ph.D
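
    A hedged sketch of the first of the three methods above: folding occasional physical-robot trials into an otherwise simulated evolutionary loop so that selection is periodically grounded in real-world performance. The fitness functions, schedule, and hardware stand-in below are placeholders, not the thesis's implementation.

```python
import random

def simulated_fitness(genome):
    """Placeholder simulated evaluation (cheap, run every generation)."""
    return -sum((g - 0.5) ** 2 for g in genome)

def hardware_fitness(genome):
    """Placeholder for a trial on the physical robot (expensive, run rarely).
    A real system would drive the robot and measure task performance."""
    return simulated_fitness(genome) + random.gauss(0, 0.05)  # "reality gap" noise

def evolve(pop_size=30, genome_len=6, generations=100, hw_every=10):
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for gen in range(generations):
        # Incorporate testing on the physical robot at a fixed interval so
        # fitness estimates cannot drift too far from reality.
        evaluate = hardware_fitness if gen % hw_every == 0 else simulated_fitness
        pop.sort(key=evaluate, reverse=True)
        survivors = pop[:pop_size // 2]
        pop = survivors + [[g + random.gauss(0, 0.05) for g in random.choice(survivors)]
                           for _ in survivors]
    return pop[0]

if __name__ == "__main__":
    best = evolve()
    print("best genome:", [round(g, 2) for g in best])
```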