
    Evolution of recollection and prediction in neural networks

    Abstract — A large number of neural network models are based on a feedforward topology (perceptrons, backpropagation networks, radial basis functions, support vector machines, etc.) and thus lack dynamics. In such networks, the order of input presentation is meaningless (i.e., it does not affect the behavior), since the behavior is largely reactive. That is, such neural networks can only operate in the present, having no access to the past or the future. However, biological neural networks are mostly constructed with a recurrent topology, and recurrent (artificial) neural network models are able to exhibit rich temporal dynamics, so time becomes an essential factor in their operation. In this paper, we will investigate the emergence of recollection and prediction in evolving neural networks. First, we will show how reactive, feedforward networks can evolve a memory-like function (recollection) by utilizing external markers dropped and detected in the environment. Second, we will investigate how recurrent networks with a more predictable internal state trajectory can emerge as the eventual winner in the evolutionary struggle when competing networks with less predictable trajectories show the same level of behavioral performance. We expect our results to help us better understand the evolutionary origin of recollection and prediction in neuronal networks, and to better appreciate the role of time in brain function.
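
    An illustrative sketch (not the paper's evolved networks, and with random placeholder weights) of the distinction drawn above: a purely reactive, feedforward mapping depends only on the current input, whereas a recurrent update carries a hidden state, so the order of input presentation changes the outcome.

        # Contrast a reactive (feedforward) mapping with a recurrent one.
        import numpy as np

        rng = np.random.default_rng(0)
        W_in, W_rec, W_ff = rng.normal(size=(4, 2)), rng.normal(size=(4, 4)), rng.normal(size=(1, 2))

        def feedforward_output(x_t):
            # Reactive mapping: depends only on the current input x_t.
            return np.tanh(W_ff @ x_t)

        def recurrent_states(inputs):
            # Hidden state h carries a trace of past inputs, so order matters.
            h = np.zeros(4)
            states = []
            for x_t in inputs:
                h = np.tanh(W_in @ x_t + W_rec @ h)
                states.append(h.copy())
            return states

        seq = [rng.normal(size=2) for _ in range(3)]
        print(recurrent_states(seq)[-1])        # depends on the whole sequence
        print(recurrent_states(seq[::-1])[-1])  # reversed order gives a different state
        print(feedforward_output(seq[-1]))      # identical regardless of earlier inputs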

    TacMMs: Tactile Mobile Manipulators for Warehouse Automation

    Multi-robot platforms are playing an increasingly important role in warehouse automation for efficient goods transport. This paper proposes a novel customization of a multi-robot system, called Tactile Mobile Manipulators (TacMMs). Each TacMM integrates a soft optical tactile sensor and a mobile robot with a load-lifting mechanism, enabling cooperative transportation in tasks requiring coordinated physical interaction. More specifically, we mount the TacTip (a biomimetic optical tactile sensor) on the Distributed Organisation and Transport System (DOTS) mobile robot. The tactile information then helps the mobile robots adjust the relative robot-object pose, thereby increasing the efficiency of load-lifting tasks. This study compares the performance of two TacMMs using tactile perception against traditional vision-based pose adjustment for load-lifting. The results show that the average success rate of the TacMMs (66%) is an improvement over a purely vision-based method (34%), with a larger improvement when the mass of the load is non-uniformly distributed. Although this initial study considers two TacMMs, we expect the benefits of tactile perception to extend to multiple mobile robots. Website: https://sites.google.com/view/tacmms
    Comment: 8 pages, accepted in IEEE Robotics and Automation Letters, 19 June 202
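
    A hypothetical sketch of the alignment idea described above: servo the robot-object pose using a tactile estimate of contact misalignment before lifting. The callbacks read_contact_angle and command_rotation, the gain, and the tolerance are all illustrative assumptions, not the TacMMs implementation.

        # Hypothetical tactile pose-adjustment loop before a cooperative lift.
        def tactile_pose_adjust(read_contact_angle, command_rotation,
                                tolerance=0.02, gain=0.5, max_steps=50):
            """Rotate in place until the tactile-estimated contact angle (rad) is near zero."""
            for _ in range(max_steps):
                angle_error = read_contact_angle()     # e.g. estimated from tactile image shear
                if abs(angle_error) < tolerance:
                    return True                        # aligned well enough to lift
                command_rotation(-gain * angle_error)  # proportional correction
            return False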

    Digital control networks for virtual creatures

    Robot control systems evolved with genetic algorithms traditionally take the form of floating-point neural network models. This thesis proposes that digital control systems, such as quantised neural networks and logical networks, may also be used for the task of robot control. The inspiration for this is the observation that the dynamics of discrete networks may contain cyclic attractors which generate rhythmic behaviour, and that rhythmic behaviour underlies the central pattern generators which drive low-level motor activity in the biological world. To investigate this, a series of experiments was carried out in a simulated, physically realistic 3D world. The performance of evolved controllers was evaluated on two well-known control tasks: pole balancing, and locomotion of evolved morphologies. The performance of evolved digital controllers was compared to that of evolved floating-point neural networks. The results show that the digital implementations are competitive with floating-point designs on both of the benchmark problems. In addition, the first reported evolution from scratch of a biped walker is presented, demonstrating that when all parameters are left open to evolutionary optimisation, complex behaviour can result from simple components.
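
    A minimal sketch (illustrative, not one of the thesis's evolved networks) of the kind of dynamics the thesis builds on: a tiny Boolean network whose state trajectory settles into a cyclic attractor, the sort of rhythm a central pattern generator needs.

        # Iterate a small Boolean network until its state repeats, then report the cycle.
        def step(state):
            a, b, c = state
            # Hand-picked update rules chosen so the network oscillates.
            return (not c, a, b)

        state = (True, False, False)
        seen = {}
        trajectory = []
        while state not in seen:
            seen[state] = len(trajectory)
            trajectory.append(state)
            state = step(state)

        cycle = trajectory[seen[state]:]
        print(f"cyclic attractor of length {len(cycle)}: {cycle}")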

    GPU Computing for Cognitive Robotics

    This thesis presents the first investigation of the impact of GPU computing on cognitive robotics by providing a series of novel experiments in the area of action and language acquisition in humanoid robots and computer vision. Cognitive robotics is concerned with endowing robots with high-level cognitive capabilities to enable the achievement of complex goals in complex environments. Reaching the ultimate goal of developing cognitive robots will require tremendous amounts of computational power, which was until recently provided mostly by standard CPU processors. CPU cores are optimised for serial code execution at the expense of parallel execution, which renders them relatively inefficient for high-performance computing applications. The ever-increasing market demand for high-performance, real-time 3D graphics has evolved the GPU into a highly parallel, multithreaded, many-core processor with extraordinary computational power and very high memory bandwidth. These vast computational resources of modern GPUs can now be exploited by most cognitive robotics models, as they tend to be inherently parallel. Various interesting and insightful cognitive models have been developed to address important scientific questions concerning action-language acquisition and computer vision. While they have provided us with important scientific insights, their complexity and scope have not advanced much in recent years. The experimental tasks, as well as the scale of these models, are often minimised to avoid excessive training times that grow exponentially with the number of neurons and the amount of training data. This impedes further progress and the development of complex neurocontrollers that would take cognitive robotics research a step closer to the ultimate goal of creating intelligent machines. This thesis presents several cases where the application of GPU computing to cognitive robotics algorithms resulted in large-scale neurocontrollers of previously unseen complexity, enabling the novel experiments described herein.

    European Commission Seventh Framework Programme
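
    A minimal sketch of the underlying point, assuming CuPy and a CUDA-capable GPU are available (this is not the thesis's software stack): a fully connected layer is a matrix product, so a batch of inputs can be pushed through it in parallel on the GPU.

        # Same forward pass on CPU (NumPy) and GPU (CuPy).
        import numpy as np
        import cupy as cp

        rng = np.random.default_rng(0)
        weights = rng.normal(size=(4096, 4096)).astype(np.float32)
        batch = rng.normal(size=(4096, 256)).astype(np.float32)

        # CPU forward pass.
        cpu_out = np.tanh(weights @ batch)

        # Same computation on the GPU: copy in, compute, copy back.
        gpu_out = cp.asnumpy(cp.tanh(cp.asarray(weights) @ cp.asarray(batch)))

        print(np.allclose(cpu_out, gpu_out, atol=1e-3))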

    A Novel Framework of Online, Task-Independent Cognitive State Transition Detection and Its Applications

    Complex reach, grasp, and object manipulation tasks require sequential, temporal coordination of a movement plan by neurons in the brain. Detecting cognitive state transitions associated with motor tasks from sequential neural data is pivotal in rehabilitation engineering. The cognitive state detectors proposed thus far rely on task-dependent (TD) models, i.e., the detection strategy exploits a priori knowledge of the movement goals to determine the actual states, regardless of whether these cognitive states actually depend on the movement tasks or not. This approach, however, is not viable when the tasks are not known a priori (e.g., the subject performs many different tasks) or when there is a paucity of neural data for each task. Moreover, some cognitive states (e.g., holding) are invariant to the task being performed. I first develop an offline, task-dependent cognitive state transition detector and a kinematics decoder to show the feasibility of distinguishing between cognitive states based on their inherent features, extracted via a hidden Markov model (HMM) based detection framework. The proposed framework is designed to decode both cognitive states and kinematics from ensemble neural activity. It is able to (a) automatically differentiate between baseline, plan, and movement, and (b) determine novel holding epochs of neural activity and estimate the epoch-dependent kinematics. Specifically, the framework is composed of an HMM state decoder and a switching linear dynamical system (S-LDS) kinematics decoder. I take a supervised approach and use a generative framework of neural activity and kinematics. I demonstrate the decoding framework using neural recordings from ventral premotor (PMv) and dorsal premotor (PMd) neurons of a non-human primate executing four complex reach-to-grasp tasks, along with the corresponding kinematics recordings. Using the HMM state decoder, I demonstrate that the transitions between neighboring epochs of neural activity, regardless of the existence of any external kinematic changes, can be detected with high accuracy (>85%) and short latencies (<150 ms). I further show that the joint-angle kinematics can be estimated reliably with high accuracy (mean = 88%) using the S-LDS kinematics decoder. In addition, I demonstrate that the use of multiple latent state variables to model the within-epoch variability of neural activity can improve decoder performance. This unified decoding framework, combining an HMM state decoder and an S-LDS kinematics decoder, may be useful for neural decoding of cognitive states and complex movements of prosthetic limbs in practical brain-computer interface implementations. I then develop a real-time (online) task-independent (TI) framework to detect cognitive state transitions from spike trains and kinematic measurements. I applied this framework to 226 single-unit recordings collected via multi-electrode arrays in the dorsal and ventral premotor (PMd and PMv) regions of the cortex of two non-human primates performing 3D multi-object reach-to-grasp tasks, and I used the detection latency and accuracy of state transitions to measure performance. I found that, in both online and offline detection modes, (i) TI models perform significantly better than TD models when using neuronal data alone; however, (ii) during movements, adding the kinematics history to the TI models further improves detection performance.
These findings suggest that TI models may detect cognitive state transitions more accurately than TD models under certain circumstances. The proposed framework could pave the way for TI control of a prosthesis from cortical neurons, a beneficial outcome when the range of possible tasks is vast but the basic movement-related cognitive states still need to be decoded. Building on the online cognitive state transition detector, I further construct an online, task-independent kinematics decoder. I constructed this framework using single-unit recordings from 452 neurons and synchronized kinematics recordings from two non-human primates performing 3D multi-object reach-to-grasp tasks. I find that (i) the proposed TI framework performs significantly better than current frameworks that rely on TD models (p = 0.03); and (ii) modeling cognitive state information further improves decoding performance. These findings suggest that TI models with cognitive-state-dependent parameters may more accurately decode kinematics and could pave the way for more clinically viable neural prosthetics.
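
    An illustrative sketch of the kind of HMM state decoding described above, with assumed transition probabilities and Poisson firing rates rather than the dissertation's fitted parameters: Viterbi decoding of a discrete cognitive-state sequence from binned spike counts of two hypothetical neurons.

        # Viterbi decoding of cognitive states from binned spike counts.
        import numpy as np
        from scipy.stats import poisson

        states = ["baseline", "plan", "movement", "hold"]
        A = np.array([[0.95, 0.05, 0.00, 0.00],    # assumed state-transition matrix
                      [0.00, 0.90, 0.10, 0.00],
                      [0.00, 0.00, 0.90, 0.10],
                      [0.05, 0.00, 0.00, 0.95]])
        rates = np.array([[2.0, 1.0], [5.0, 3.0], [9.0, 7.0], [4.0, 6.0]])  # spikes/bin per neuron

        def viterbi(counts, A, rates, pi):
            T, S = len(counts), len(rates)
            # Per-bin log-likelihood of the counts under each state's Poisson rates.
            log_b = np.array([poisson.logpmf(counts, rates[s]).sum(axis=1) for s in range(S)]).T
            delta = np.log(pi) + log_b[0]
            back = np.zeros((T, S), dtype=int)
            for t in range(1, T):
                scores = delta[:, None] + np.log(A + 1e-12)
                back[t] = scores.argmax(axis=0)
                delta = scores.max(axis=0) + log_b[t]
            path = [int(delta.argmax())]
            for t in range(T - 1, 0, -1):
                path.append(back[t, path[-1]])
            return path[::-1]

        counts = np.array([[2, 1], [1, 2], [6, 4], [5, 2], [10, 8], [8, 6], [4, 7]])
        print([states[s] for s in viterbi(counts, A, rates, pi=np.array([0.97, 0.01, 0.01, 0.01]))])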

    Final report key contents: main results accomplished by the EU-Funded project IM-CLeVeR - Intrinsically Motivated Cumulative Learning Versatile Robots

    This document has the goal of presenting the main scientific and technological achievements of the project IM-CLeVeR. The document is organised as follows: 1. Project executive summary: a brief overview of the project vision, objectives and keywords. 2. Beneficiaries of the project and contacts: a list of the Teams (partners) of the project, Team Leaders and contacts. 3. Project context and objectives: the vision of the project and its overall objectives. 4. Overview of work performed and main results achieved: a one-page overview of the main results of the project. 5. Overview of main results per partner: a bullet-point list of main results per partner. 6. Main achievements in detail, per partner: a thorough explanation of the main results per partner (including collaborative work), with references to the main publications supporting them.

    Engineering evolutionary control for real-world robotic systems

    Evolutionary Robotics (ER) is the field of study concerned with the application of evolutionary computation to the design of robotic systems. Two main issues have prevented ER from being applied to real-world tasks, namely scaling to complex tasks and the transfer of control to real-robot systems. Finding solutions to complex tasks is challenging for evolutionary approaches due to the bootstrap problem and deception. When the task goal is too difficult, the evolutionary process drifts in regions of the search space with equally low levels of performance and therefore fails to bootstrap. Furthermore, the search space tends to become rugged (deceptive) as task complexity increases, which can lead to premature convergence. Another prominent issue in ER is the reality gap. Behavioral control is typically evolved in simulation and only transferred to the real robotic hardware once a good solution has been found. Since simulation is an abstraction of the real world, the accuracy of the robot model and its interactions with the environment is limited. As a result, control evolved in a simulator tends to display lower performance in reality than in simulation. In this thesis, we present a hierarchical control synthesis approach that enables the use of ER techniques for complex tasks on real robotic hardware by mitigating the bootstrap problem, deception, and the reality gap. We recursively decompose a task into sub-tasks, and synthesize control for each sub-task. The individual behaviors are then composed hierarchically. The possibility of incrementally transferring control as the controller is composed allows transferability issues to be addressed locally in the controller hierarchy. Our approach features hybridity, allowing different control synthesis techniques to be combined. We demonstrate our approach in a series of tasks that go beyond the complexity of tasks where ER has previously been successfully applied. We further show that hierarchical control can be applied in single-robot systems and in multi-robot systems. Given our long-term goal of enabling the application of ER techniques to real-world tasks, we systematically validate our approach on real robotic hardware. For one of the demonstrations in this thesis, we have designed and built a swarm robotic platform, and we show the first successful transfer of evolved, hierarchical control to a swarm of robots outside of controlled laboratory conditions.

    This work has been supported by the Portuguese Foundation for Science and Technology (Fundação para a Ciência e Tecnologia) under grants SFRH/BD/76438/2011 and EXPL/EEI-AUT/0329/2013, and by Instituto de Telecomunicações under grant UID/EEA/50008/2013.
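
    An illustrative sketch of the hierarchical composition idea, not the thesis's controller representation: leaf behaviours map sensor readings to actuator commands, and a higher-level arbitrator (which could itself be evolved) selects which sub-behaviour runs at each control step. The foraging-style sub-tasks are hypothetical.

        # Compose sub-behaviours under one arbitrator into a single controller.
        from typing import Callable, Dict

        Behaviour = Callable[[dict], dict]   # sensors -> actuator commands

        def make_hierarchical_controller(arbitrator: Callable[[dict], str],
                                         sub_behaviours: Dict[str, Behaviour]) -> Behaviour:
            """Compose evolved or hand-written sub-behaviours under one arbitrator."""
            def controller(sensors: dict) -> dict:
                active = arbitrator(sensors)          # pick the sub-task to run this step
                return sub_behaviours[active](sensors)
            return controller

        # Hypothetical sub-tasks for a foraging-style task.
        go_to_item = lambda s: {"left": 0.8, "right": 0.5}
        return_home = lambda s: {"left": 0.5, "right": 0.8}
        controller = make_hierarchical_controller(
            arbitrator=lambda s: "go_to_item" if not s.get("carrying") else "return_home",
            sub_behaviours={"go_to_item": go_to_item, "return_home": return_home},
        )
        print(controller({"carrying": False}))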

    A bottom-up approach to emulating emotions using neuromodulation in agents

    A bottom-up approach to emulating emotions is expounded in this thesis. This is intended to be useful in research where a phenomenon is to be emulated but its nature cannot easily be defined. This approach not only advocates emulating the underlying mechanisms that are proposed to give rise to emotion in natural agents, but also advocates keeping an open mind as to what the phenomenon actually is. There is evidence to suggest that neuromodulation is inherently responsible for giving rise to emotions in natural agents and that emotions consequently modulate the behaviour of the agent. The functionality provided by neuromodulation, when applied to agents with self-organising, biologically plausible neural networks, is isolated and studied. In research efforts such as this, the definition should emerge from the evidence, rather than postulating that a definition derived from limited information is correct and should be implemented. An implementation of a working definition only tells us that the definition can be implemented; it does not tell us whether that working definition is itself correct and matches the phenomenon in the real world. If such a model of emotions were assumed to be true and implemented in an agent, there would be a danger of precluding implementations that could offer alternative theories as to the relevance of neuromodulation to emotions. By isolating and studying different mechanisms, such as neuromodulation, that are thought to give rise to emotions, theories can arise as to what emotions are and the functionality that they provide. The application of this approach concludes with a theory as to how some emotions can operate via the use of neuromodulators. The theory is explained using the concepts of dynamical systems, free energy and entropy.

    EPSRC; Stirling University, Computing Science departmental grant.
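
    A minimal sketch (illustrative, not the thesis's model) of how a neuromodulatory signal can change what a fixed circuit does: here a single modulator level scales both a neuron's response gain and its Hebbian learning rate, so the same circuit behaves and adapts differently as the modulator varies.

        # One update of a single neuron with neuromodulated gain and plasticity.
        import numpy as np

        def modulated_step(w, x, m, base_lr=0.01):
            """Return updated weights and output for modulator level m in [0, 1]."""
            gain = 1.0 + 2.0 * m                 # modulator raises response gain
            y = np.tanh(gain * w @ x)            # modulated activation
            w = w + (base_lr * m) * y * x        # Hebbian plasticity gated by the modulator
            return w, y

        w = np.array([0.1, -0.2, 0.3])
        x = np.array([1.0, 0.5, -1.0])
        for m in (0.0, 0.5, 1.0):                # low, medium, high modulator level
            _, y = modulated_step(w.copy(), x, m)
            print(f"modulator={m:.1f} -> output={y:+.3f}")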