264 research outputs found

    Embodied neuromorphic intelligence

    The design of robots that interact autonomously with the environment and exhibit complex behaviours is an open challenge that can benefit from understanding what makes living beings fit to act in the world. Neuromorphic engineering studies neural computational principles to develop technologies that can provide a computing substrate for building compact and low-power processing systems. We discuss why endowing robots with neuromorphic technologies – from perception to motor control – represents a promising approach for creating robots that can seamlessly integrate into society. We present initial attempts in this direction, highlight open challenges, and propose the actions required to overcome current limitations.

    Event-driven visual attention for the humanoid robot iCub.

    Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. Its performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend.
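    The listing does not reproduce the paper's algorithm, but the core idea of event-driven attention can be illustrated with a minimal sketch: maintain a saliency map that decays over time, add evidence at every incoming event, and report the current peak as the focus of attention. The resolution, decay constant and update rule below are assumptions for illustration, not the system described above.

```python
import numpy as np

# Minimal sketch of an event-driven saliency/attention loop (illustrative only;
# the resolution, decay constant and update rule are assumptions, not the
# model used in the paper).
class EventAttention:
    def __init__(self, width=304, height=240, tau_us=50_000.0):
        self.saliency = np.zeros((height, width))
        self.tau_us = tau_us          # decay time constant in microseconds
        self.last_t_us = 0.0

    def process_event(self, x, y, t_us, polarity):
        """Decay the saliency map, then accumulate evidence at the event pixel."""
        self.saliency *= np.exp(-(t_us - self.last_t_us) / self.tau_us)
        self.saliency[y, x] += 1.0
        self.last_t_us = t_us

    def focus_of_attention(self):
        """Return the pixel with the highest accumulated salience."""
        y, x = np.unravel_index(np.argmax(self.saliency), self.saliency.shape)
        return int(x), int(y)
```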

    Towards Real-World Neurorobotics: Integrated Neuromorphic Visual Attention

    Neural Information Processing: 21st International Conference, ICONIP 2014, Kuching, Malaysia, November 3-6, 2014. Proceedings, Part III
    Neuromorphic hardware and cognitive robots seem like an obvious fit, yet progress to date has been frustrated by the difficulty of achieving useful real-world behaviour. System limitations, in particular the simple and usually proprietary nature of neuromorphic and robotic platforms, have often been the fundamental barrier. Here we present an integration of a mature “neuromimetic” chip, SpiNNaker, with the humanoid iCub robot using a direct AER (address-event representation) interface that overcomes the need for complex proprietary protocols by sending information as UDP-encoded spikes over an Ethernet link. Using an existing neural model devised for visual object selection, we enable the robot to perform a real-world task: fixating attention upon a selected stimulus. Results demonstrate the effectiveness of the interface and model in controlling the robot towards stimulus-specific object selection. Using SpiNNaker as an embeddable neuromorphic device illustrates the importance of two design features in a prospective neurorobot: universal configurability that allows the chip to be conformed to the requirements of the robot rather than the other way around, and standard interfaces that eliminate difficult low-level issues of connectors, cabling, signal voltages, and protocols. While this study is only a building block towards that goal, the iCub-SpiNNaker system demonstrates a path towards meaningful behaviour in robots controlled by neural network chips.
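    The abstract highlights the key interface choice: address-event spikes are carried as UDP packets over an Ethernet link. As a rough illustration of that idea, the sketch below packs (neuron id, timestamp) pairs into a datagram and sends it to a board address. The packet layout, address and port are hypothetical and do not reproduce SpiNNaker's actual protocol.

```python
import socket
import struct

# Illustrative sketch of sending address-event spikes as UDP packets over
# Ethernet. The packet layout (32-bit neuron key + 32-bit microsecond
# timestamp) and the address/port are assumptions for demonstration; they do
# not reproduce the SpiNNaker board's actual packet format.
SPINNAKER_ADDR = ("192.168.1.1", 17893)   # hypothetical board address/port

def send_spikes(events, sock):
    """Pack (neuron_id, timestamp_us) pairs into one UDP datagram and send it."""
    payload = b"".join(struct.pack("<II", nid, ts) for nid, ts in events)
    sock.sendto(payload, SPINNAKER_ADDR)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Three example spikes: (neuron id, timestamp in microseconds)
    send_spikes([(42, 1000), (17, 1020), (42, 1100)], sock)
```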

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
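    Since each event encodes a timestamp, a pixel location and the sign of the brightness change, one of the simplest processing steps is to accumulate signed events over a short time window into a frame-like image. The sketch below illustrates that representation; the sensor resolution and window length are arbitrary example values, not tied to any particular camera.

```python
import numpy as np

# Minimal sketch of the event data model described above: each event carries
# a timestamp, a pixel location, and the polarity (sign) of the brightness
# change. Accumulating signed events over a short window is one of the
# simplest frame-like representations.
WIDTH, HEIGHT = 346, 260   # assumed sensor resolution

def events_to_frame(events, t_start_us, t_end_us):
    """Sum event polarities (+1/-1) per pixel within [t_start_us, t_end_us)."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
    for t_us, x, y, polarity in events:
        if t_start_us <= t_us < t_end_us:
            frame[y, x] += 1 if polarity else -1
    return frame

# Example: two ON events and one OFF event at the same pixel
events = [(100, 10, 20, 1), (150, 10, 20, 1), (900, 10, 20, 0)]
print(events_to_frame(events, 0, 1000)[20, 10])   # -> 1
```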

    An On-chip Spiking Neural Network for Estimation of the Head Pose of the iCub Robot

    In this work, we present a neuromorphic architecture for head pose estimation and scene representation for the humanoid iCub robot. The spiking neural network is fully realized in Intel's neuromorphic research chip, Loihi, and precisely integrates the issued motor commands to estimate the iCub's head pose in a neuronal path-integration process. The neuromorphic vision system of the iCub is used to correct for drift in the pose estimation. Positions of objects in front of the robot are memorized using on-chip synaptic plasticity. We present real-time robotic experiments using 2 degrees of freedom (DoF) of the robot's head and show precise path integration, visual reset, and object position learning on-chip. We discuss the requirements for integrating the robotic system and neuromorphic hardware with current technologies.
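    At its core, the described pipeline combines path integration of issued motor commands with an occasional visual reset that removes accumulated drift. The sketch below shows those two operations in plain arithmetic for a hypothetical 2-DoF (pan, tilt) head; the paper realises them with spiking neurons on Loihi, and all parameters here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of (1) integrating issued head-velocity commands to estimate
# pose (path integration) and (2) resetting the estimate from a visual
# measurement to remove accumulated drift. Time step, gain and the 2-DoF
# (pan, tilt) state are assumptions for illustration.
class HeadPoseIntegrator:
    def __init__(self, dt=0.01):
        self.pose = np.zeros(2)        # (pan, tilt) in degrees
        self.dt = dt                   # integration step in seconds

    def integrate(self, velocity_cmd):
        """Accumulate a (pan_vel, tilt_vel) motor command given in deg/s."""
        self.pose += np.asarray(velocity_cmd) * self.dt

    def visual_reset(self, measured_pose, gain=1.0):
        """Correct drift by pulling the estimate toward a visual measurement."""
        self.pose += gain * (np.asarray(measured_pose) - self.pose)

estimator = HeadPoseIntegrator()
for _ in range(100):                   # one second of constant panning
    estimator.integrate((10.0, 0.0))   # 10 deg/s pan command
estimator.visual_reset((9.5, 0.0))     # visual landmark says pan is 9.5 deg
print(estimator.pose)                  # approx. [9.5, 0.0]
```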

    CPU-less robotics: distributed control of biomorphs

    Traditional robotics revolves around the microprocessor. All well-known demonstrations of sensory-guided motor control, such as jugglers and mobile robots, require at least one CPU. Recently, the availability of fast CPUs has made real-time sensory-motor control possible; however, problems with high power consumption and lack of autonomy remain. In fact, the best examples of real-time robotics are usually tethered or require large batteries. We present a new paradigm for robotics control that uses no explicit CPU. We use computational sensors that are directly interfaced with adaptive actuation units. The units perform motor control and have learning capabilities. This architecture distributes computation over the entire body of the robot, in every sensor and actuator. Clearly, this is similar to biological sensory-motor systems. Some researchers have tried to model the latter in software, again using CPUs. We demonstrate this idea with an adaptive locomotion controller chip. The locomotory controller for walking, running, swimming and flying animals is based on a Central Pattern Generator (CPG). CPGs are modeled as systems of coupled non-linear oscillators that control the muscles responsible for movement. Here we describe an adaptive CPG model, implemented in a custom VLSI chip, which is used to control an under-actuated and asymmetric robotic leg.
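    A CPG modelled as a system of coupled non-linear oscillators can be sketched with a few lines of phase-oscillator arithmetic: each unit advances at its intrinsic frequency and is pulled toward a fixed phase lag relative to its neighbours, and the resulting rhythmic signals drive the actuators. The frequencies, coupling strength and phase lag below are illustrative assumptions, not the parameters of the VLSI chip described above.

```python
import math

# Minimal sketch of a CPG as a chain of coupled phase oscillators. All
# constants are illustration values, not those of the chip in the paper.
N = 4                    # number of oscillators (e.g. one per joint)
FREQ_HZ = 1.5            # intrinsic frequency of each oscillator
COUPLING = 2.0           # coupling strength between neighbours
PHASE_LAG = math.pi / 2  # desired phase difference between neighbours
DT = 0.001               # Euler integration step in seconds

phases = [0.0] * N

def step(phases):
    """One Euler step of the coupled phase-oscillator network."""
    new = []
    for i, phi in enumerate(phases):
        dphi = 2.0 * math.pi * FREQ_HZ
        for j in (i - 1, i + 1):               # nearest-neighbour coupling
            if 0 <= j < N:
                dphi += COUPLING * math.sin(phases[j] - phi - PHASE_LAG * (j - i))
        new.append(phi + dphi * DT)
    return new

for _ in range(1000):                          # simulate one second
    phases = step(phases)
motor_outputs = [math.sin(phi) for phi in phases]   # rhythmic drive per joint
print(motor_outputs)
```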

    Artificial Bio-inspired Tactile Receptive Fields for Edge Orientation Classification

    Robots and users of hand prostheses could easily manipulate objects if endowed with the sense of touch. Towards this goal, information about touched objects and surfaces has to be inferred from raw data coming from the sensors. An important cue for object discrimination is the orientation of edges, which is used as a pre-processing stage in both artificial vision and touch. We present a spiking neural network inspired by the encoding of edges in human first-order tactile afferents. The network uses three layers of Leaky Integrate-and-Fire neurons to distinguish different edge orientations of a bar pressed on the artificial skin of the iCub robot. The architecture successfully discriminates eight different orientations (from 0° to 180°) by implementing a structured model of overlapping receptive fields. We demonstrate that the network can learn the appropriate connectivity through unsupervised spike-based learning, and that the number and spatial distribution of sensitive areas within the receptive fields are important for edge orientation discrimination.
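    The building blocks named above, leaky integrate-and-fire neurons fed through overlapping receptive fields, can be sketched as follows: each neuron integrates the pressure summed over its receptive field and spikes when its membrane potential crosses a threshold, so only neurons whose field covers the pressed edge respond. The array size, receptive-field layout and neuron parameters are assumptions for illustration; the paper's three-layer network and learned connectivity are not reproduced.

```python
import numpy as np

# Minimal sketch of leaky integrate-and-fire (LIF) neurons driven through
# overlapping receptive fields over a small tactile array. All parameters are
# illustrative assumptions.
class LIFNeuron:
    def __init__(self, tau=0.02, threshold=0.5, dt=0.001):
        self.v, self.tau, self.threshold, self.dt = 0.0, tau, threshold, dt

    def step(self, input_current):
        """Leaky integration; returns True when the neuron fires (and resets)."""
        self.v += self.dt * (-self.v / self.tau + input_current)
        if self.v >= self.threshold:
            self.v = 0.0
            return True
        return False

# Two overlapping receptive fields over a 1-D strip of 6 taxels.
receptive_fields = [np.array([1, 1, 1, 1, 0, 0], dtype=float),
                    np.array([0, 0, 1, 1, 1, 1], dtype=float)]
neurons = [LIFNeuron() for _ in receptive_fields]

taxel_pressure = np.array([0.0, 0.0, 0.0, 0.0, 20.0, 20.0])  # edge on the right
for t in range(60):                                            # 60 ms of input
    for rf, neuron in zip(receptive_fields, neurons):
        if neuron.step(float(rf @ taxel_pressure)):
            print(f"t={t} ms: neuron with RF {rf.astype(int)} spiked")
```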