17 research outputs found

    CPU-less robotics: distributed control of biomorphs

    Traditional robotics revolves around the microprocessor. All well-known demonstrations of sensory-guided motor control, such as jugglers and mobile robots, require at least one CPU. Recently, the availability of fast CPUs has made real-time sensory-motor control possible; however, problems with high power consumption and lack of autonomy remain. In fact, the best examples of real-time robotics are usually tethered or require large batteries. We present a new paradigm for robotic control that uses no explicit CPU. We use computational sensors that are directly interfaced with adaptive actuation units. The units perform motor control and have learning capabilities. This architecture distributes computation over the entire body of the robot, in every sensor and actuator. Clearly, this is similar to biological sensory-motor systems. Some researchers have tried to model the latter in software, again using CPUs. We demonstrate this idea with an adaptive locomotion controller chip. The locomotory controller for walking, running, swimming and flying animals is based on a Central Pattern Generator (CPG). CPGs are modeled as systems of coupled non-linear oscillators that control the muscles responsible for movement. Here we describe an adaptive CPG model, implemented in a custom VLSI chip, which is used to control an under-actuated and asymmetric robotic leg.
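
    As background, a CPG modelled as coupled non-linear oscillators can be sketched in a few lines of code. The snippet below is a minimal software caricature, not the paper's VLSI design: two Kuramoto-style phase oscillators coupled in antiphase, with all parameter values chosen purely for illustration.

        import numpy as np

        # Two coupled phase oscillators, a common abstract CPG model.
        # All constants are illustrative assumptions, not the chip's values.
        omega = np.array([2*np.pi, 2*np.pi])  # intrinsic frequencies (rad/s)
        K = 1.5                               # coupling strength
        offset = np.pi                        # antiphase relation between the units
        theta = np.array([0.0, 0.3])          # initial phases
        dt = 1e-3

        for _ in range(5000):
            coupling = np.array([
                K*np.sin(theta[1] - theta[0] - offset),
                K*np.sin(theta[0] - theta[1] + offset),
            ])
            theta = theta + dt*(omega + coupling)

        motor_drive = np.cos(theta)  # rhythmic commands for two antagonist actuators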

    Controlling underwater robots with electronic nervous systems

    We are developing robot controllers based on biomimetic design principles. The goal is to realise the adaptive capabilities of the animal models in natural environments. We report feasibility studies of a hybrid architecture that instantiates the command and coordinating level with computed discrete-time map-based (DTM) neuronal networks, and the central pattern generators with analogue VLSI (Very Large Scale Integration) electronic neuron (aVLSI) networks. DTM networks are realised using neurons based on a 1-D or 2-D map with two additional parameters that define silent, spiking and bursting regimes. Electronic neurons (ENs) based on Hindmarsh-Rose (HR) dynamics can be instantiated in analogue VLSI and exhibit behaviour similar to those based on discrete components. We have constructed locomotor central pattern generators (CPGs) with aVLSI networks that can be modulated to select different behaviours on the basis of selective command input. The two technologies can be fused by interfacing the signals from the DTM circuits directly to the aVLSI CPGs. Using DTMs, we have been able to simulate complex sensory fusion for rheotaxic behaviour based on both hydrodynamic and optical-flow senses. We also illustrate aspects of controllers for ambulatory biomimetic robots. These studies indicate that it is feasible to fabricate an electronic nervous system controller integrating both aVLSI CPGs and layered DTM exteroceptive reflexes.
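
    The abstract does not specify which map the DTM neurons use, but a well-known 2-D map with silent, spiking and bursting regimes is the Rulkov map; the sketch below iterates it with textbook parameter values as an assumed stand-in for the paper's model.

        # Rulkov 2-D map neuron: fast variable x, slow variable y.
        # alpha and sigma select silent, spiking or bursting regimes
        # (values here are common textbook choices, not the paper's).
        def rulkov(alpha=4.5, sigma=0.1, mu=0.001, n_steps=2000):
            x, y = -1.0, -3.0
            trace = []
            for _ in range(n_steps):
                x, y = alpha/(1.0 + x*x) + y, y - mu*(x - sigma)
                trace.append(x)
            return trace

        bursting = rulkov(sigma=0.1)   # bursting-like activity
        silent = rulkov(sigma=-1.5)    # quiescent regime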

    A Novel Analog CMOS Cellular Neural Network for Biologically-Inspired Walking Robot

    We propose a novel analog CMOS circuit that implements a class of cellular neural networks (CNNs) for biologically-inspired walking robots. Recently, a class of autonomous CNNs, the so-called reaction-diffusion (RD) CNN, has been applied to locomotion control in robotics. We introduce a novel RD-CNN and implement it as an analog CMOS circuit using multiple-input floating-gate (MIFG) MOSFETs. As a result, the circuit can operate in voltage mode. Computer simulations show that the circuit is capable of generating stable rhythmic patterns for locomotion control in a quadruped walking robot.
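
    To make the RD-CNN idea concrete, the following sketch simulates a software analogue, not the proposed CMOS circuit: a ring of four FitzHugh-Nagumo-type cells with nearest-neighbour diffusive coupling, whose phase-shifted oscillations could serve as leg drive signals for a quadruped. All parameters are assumptions.

        import numpy as np

        # Ring of four FitzHugh-Nagumo-like cells with diffusive coupling:
        # a software caricature of a reaction-diffusion CNN for gait signals.
        N = 4
        v = 0.1*np.random.rand(N)   # fast (activator) variables
        w = np.zeros(N)             # slow (recovery) variables
        D, dt, eps, a, b, I = 0.05, 0.05, 0.08, 0.7, 0.8, 0.5

        for _ in range(20000):
            lap = np.roll(v, 1) + np.roll(v, -1) - 2*v  # discrete diffusion on a ring
            dv = v - v**3/3 - w + I + D*lap
            dw = eps*(v + a - b*w)
            v, w = v + dt*dv, w + dt*dw
        # each v[i] now oscillates; the phase relations encode a gait pattern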

    Evolutionary Bits'n'Spikes

    We describe a model and implementation of evolutionary spiking neurons for embedded microcontrollers with only a few bytes of memory and very low power consumption. The approach is tested with an autonomous microrobot of less than 1 in^3 that evolves the ability to move in a small maze without human intervention or external computers. Considering the widespread use, small size, and low cost of embedded microcontrollers, the approach described here could find its way into many intelligent devices with sensors and/or actuators, as well as into smart credit cards.
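
    A spiking neuron that fits in a few bytes is easiest to picture as an integer leaky integrate-and-fire unit. The sketch below is a plain-Python illustration with assumed constants, not the authors' microcontroller code: the leak is a bit shift, so the whole neuron state is one small integer.

        # Integer leaky integrate-and-fire neuron: one byte-sized state value,
        # leak implemented as a right shift. Constants are illustrative.
        def lif_step(potential, weighted_input, threshold=100, leak_shift=3):
            potential += weighted_input - (potential >> leak_shift)  # integrate + leak
            if potential >= threshold:
                return 0, 1                    # reset and emit a spike
            return max(potential, 0), 0        # no spike

        p, spikes = 0, []
        for inp in [30, 40, 50, 10, 60, 70, 0, 0]:
            p, s = lif_step(p, inp)
            spikes.append(s)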

    Evolution of Spiking Neural Controllers for Autonomous Vision-based Robots

    We describe a set of preliminary experiments to evolve spiking neural controllers for a vision-based mobile robot. All the evolutionary experiments are carried out on physical robots without human intervention. After discussing how to implement and interface these neurons with a physical robot, we show that evolution relatively quickly finds functional spiking controllers capable of navigating in irregularly textured environments without hitting obstacles, using a very simple genetic encoding and fitness function. Neuroethological analysis of the network activity lets us understand how the evolved controllers function and assess the relative importance of single neurons independently of their observed firing rate. Finally, a number of systematic lesion experiments indicate that evolved spiking controllers are very robust to the synaptic strength decay that typically occurs in hardware implementations of spiking circuits.
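
    The paper's exact genetic encoding and fitness function are not reproduced here, but the outer evolutionary loop for such experiments typically looks like the sketch below: bit-string genomes, truncation selection and bit-flip mutation, with fitness assumed to reward collision-free motion (the evaluate stub is a placeholder for a run on the real robot).

        import random

        # Skeleton evolutionary loop; sizes and rates are assumptions.
        GENOME_LEN, POP, GENS, P_MUT = 64, 20, 30, 0.02

        def evaluate(genome):
            # Placeholder: in the real experiments this would decode the genome
            # into a spiking controller, run it on the robot, and reward
            # distance covered without hitting obstacles.
            return sum(genome) / GENOME_LEN

        pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
        for _ in range(GENS):
            parents = sorted(pop, key=evaluate, reverse=True)[:POP // 2]  # truncation selection
            children = [[bit ^ (random.random() < P_MUT) for bit in random.choice(parents)]
                        for _ in range(POP - len(parents))]               # bit-flip mutation
            pop = parents + children
        best = max(pop, key=evaluate)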

    The Development of Bio-Inspired Cortical Feature Maps for Robot Sensorimotor Controllers

    Full version unavailable due to 3rd party copyright restrictions. This project applies principles from the field of Computational Neuroscience to Robotics research, in particular to develop systems inspired by how nature solves sensorimotor coordination tasks. The overall aim has been to build a self-organising sensorimotor system using biologically inspired techniques based upon human cortical development, which can in the future be implemented in neuromorphic hardware. This would deliver the benefits of low power consumption and real-time operation together with flexible learning on board autonomous robots. A core principle is the Self-Organising Feature Map, which is based upon the theory of how 2D maps develop in real cortex to represent complex information from the environment. A framework for developing feature maps for both motor and visual directional selectivity, representing eight different directions of motion, is described, as well as how they can be coupled together to make a basic visuomotor system. In contrast to many previous works, which use artificially generated visual inputs (for example, image sequences of oriented moving bars or mathematically generated Gaussian bars), a novel feature of the current work is that the visual input is generated by a DVS128 silicon retina camera, a neuromorphic device that produces spike events in a frame-free way. One of the main contributions of this work has been to develop a method of autonomous regulation of the map development process which adapts the learning depending upon input activity. The main results show that distinct directionally selective maps for both the motor and visual modalities are produced under a range of experimental scenarios. The adaptive learning process successfully controls the rate of learning in both motor and visual map development and is used to indicate when sufficient patterns have been presented, thus avoiding the need to define in advance the quantity and range of training data. The coupling training experiments show that the visual input learns to modulate the original motor map response, creating a new visual-motor topological map. EPSRC, University of Plymouth Graduate School.
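
    At the heart of such systems is the classic Kohonen self-organising map update: move the best-matching unit and its neighbours towards each input. The sketch below shows that core rule on a small 2-D grid with assumed sizes and rates; it is a generic illustration, not the project's spiking, DVS-driven implementation.

        import numpy as np

        # Kohonen SOM update on a 10x10 grid of 2-D weight vectors.
        # Grid size, learning rate and neighbourhood radius are assumptions.
        rng = np.random.default_rng(0)
        grid = rng.random((10, 10, 2))

        def train_step(grid, x, lr=0.1, radius=2.0):
            dists = np.linalg.norm(grid - x, axis=2)
            bi, bj = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit
            ii, jj = np.indices(dists.shape)
            h = np.exp(-((ii - bi)**2 + (jj - bj)**2) / (2*radius**2))  # neighbourhood kernel
            return grid + lr*h[..., None]*(x - grid)

        for _ in range(1000):
            grid = train_step(grid, rng.random(2))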