
    Integrated Circuitry to Detect Slippage Inspired by Human Skin and Artificial Retinas

    This paper presents a bioinspired integrated tactile coprocessor that is able to generate a warning in the case of slippage from the data provided by a tactile sensor. Some implementations use different layers of piezoresistive and piezoelectric materials to build upon the raw sensor and obtain the static (pressure) as well as the dynamic (slippage) information. In this paper, a simple raw sensor is used, and circuitry is implemented that is able to extract the dynamic information from a single piezoresistive layer. The circuitry was inspired by structures found in human skin and the retina, as both are biological systems made up of a dense network of receptors. It is largely based on an artificial retina, which is able to detect motion by using relatively simple spatiotemporal dynamics. The circuitry was adapted to respond in the bandwidth of the microvibrations produced by early slippage, resembling human skin. Experimental measurements from a chip implemented in a 0.35-µm four-metal two-poly standard CMOS process are presented to show both the performance of the building blocks included in each processing node and the operation of the whole system as a detector of early slippage.
    Ministerio de Economía y Competitividad TEC2006-12376-C02-01; Gobierno de España TEC2006-1572
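
    The abstract does not give the circuit equations, but the core idea it describes, splitting a single piezoresistive signal into a static (pressure) component and a dynamic (slippage) component, can be sketched in software as a band-pass filter tuned to the micro-vibration band. The sampling rate, band edges, and threshold below are illustrative assumptions, not values from the paper.

    # Illustrative sketch (not the paper's circuit): separate static pressure
    # from slippage micro-vibrations by band-pass filtering one channel.
    # FS, LOW, HIGH and the threshold are assumed demonstration values.
    import numpy as np
    from scipy.signal import butter, filtfilt

    FS = 2000.0               # assumed sampling rate (Hz)
    LOW, HIGH = 50.0, 400.0   # assumed micro-vibration band (Hz)

    def slip_warning(sensor_signal, threshold=0.02):
        """Return (warning, mean pressure) from one piezoresistive channel."""
        b, a = butter(2, [LOW / (FS / 2), HIGH / (FS / 2)], btype="band")
        vibration = filtfilt(b, a, sensor_signal)   # dynamic (slippage) part
        pressure = np.mean(sensor_signal)           # static (pressure) part
        return np.sqrt(np.mean(vibration ** 2)) > threshold, pressure

    # Example: constant grip force with a burst of 150 Hz micro-vibration.
    t = np.arange(0.0, 1.0, 1.0 / FS)
    signal = 0.8 + 0.1 * np.sin(2 * np.pi * 150 * t) * (t > 0.5)
    print(slip_warning(signal))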

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as low-latency, high-speed, and high-dynamic-range operation. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
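
    As a concrete illustration of the event representation described above, the sketch below accumulates events, each carrying a timestamp, pixel location, and polarity, into a crude frame over a time window. The sensor resolution and the synthetic events are assumptions for demonstration, not data from any particular camera.

    # Minimal sketch of an event stream: each event is (t, x, y, polarity)
    # with polarity in {-1, +1}. Accumulating events over a window yields a
    # crude frame. Resolution and window length are illustrative assumptions.
    import numpy as np

    WIDTH, HEIGHT = 346, 260          # assumed sensor resolution

    def accumulate(events, t_start, t_end):
        """Sum event polarities per pixel inside [t_start, t_end)."""
        frame = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
        for t, x, y, p in events:
            if t_start <= t < t_end:
                frame[int(y), int(x)] += int(p)
        return frame

    # Example: three synthetic events inside a 10 ms window.
    events = [(0.001, 10, 20, +1), (0.004, 10, 21, -1), (0.009, 11, 20, +1)]
    print(accumulate(events, 0.0, 0.010)[18:23, 8:13])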

    A novel approach to robot vision using a hexagonal grid and spiking neural networks

    Many robots use range data to obtain an almost 3-dimensional description of their environment. Feature-driven segmentation of range images has been primarily used for 3D object recognition, and hence the accuracy of the detected features is a prominent issue. Inspired by the structure and behaviour of the human visual system, we present an approach to feature extraction in range data using spiking neural networks and a biologically plausible hexagonal pixel arrangement. Standard digital images are converted into a hexagonal pixel representation and then processed using a spiking neural network with hexagonally shaped receptive fields; this approach is a step towards developing a robotic eye that closely mimics the human eye. The performance is compared with receptive fields implemented on standard rectangular images. Results illustrate that, using hexagonally shaped receptive fields, performance is improved over standard rectangular-shaped receptive fields.
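
    The two steps the abstract describes, resampling a rectangular image onto a hexagonal lattice and feeding receptive fields into spiking neurons, can be roughly sketched as below. The half-pixel row shift is only a crude approximation of a true hexagonal grid, and the neuron parameters are illustrative assumptions rather than values from the paper.

    # Rough sketch: (1) approximate hexagonal sampling by shifting alternate
    # rows half a pixel, (2) drive a leaky integrate-and-fire neuron from a
    # small receptive field. All parameters are illustrative assumptions.
    import numpy as np

    def to_hex_grid(img):
        """Approximate hexagonal lattice: odd rows averaged with a half-pixel shift."""
        hex_img = img.astype(float).copy()
        hex_img[1::2, 1:] = 0.5 * (img[1::2, 1:] + img[1::2, :-1])
        return hex_img

    def receptive_field_spikes(patch, threshold=2.0, leak=0.9, steps=20):
        """Count spikes of a leaky integrate-and-fire neuron driven by one patch."""
        v, spikes = 0.0, 0
        drive = patch.mean()
        for _ in range(steps):
            v = leak * v + drive
            if v >= threshold:
                spikes += 1
                v = 0.0
        return spikes

    img = np.random.rand(8, 8)
    print(receptive_field_spikes(to_hex_grid(img)[2:5, 2:5]))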

    Deep Spiking Neural Network model for time-variant signals classification: a real-time speech recognition approach

    Speech recognition has become an important task to improve the human-machine interface. Taking into account the limitations of current automatic speech recognition systems, such as non-real-time cloud-based solutions or power demand, recent interest in neural networks and bio-inspired systems has motivated the implementation of new techniques. Among them, a combination of spiking neural networks and neuromorphic auditory sensors offers an alternative way to carry out the human-like speech processing task. In this approach, a spiking convolutional neural network model was implemented, in which the weights of connections were calculated by training a convolutional neural network with specific activation functions, using firing-rate-based static images built from the spiking information obtained from a neuromorphic cochlea. The system was trained and tested with a large dataset that contains "left" and "right" speech commands, achieving 89.90% accuracy. A novel spiking neural network model has been proposed to adapt the network trained with static images to a non-static processing approach, making it possible to classify audio signals and time series in real time.
    Ministerio de Economía y Competitividad TEC2016-77785-
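
    The firing-rate "static image" idea mentioned above can be sketched by binning spikes from a neuromorphic cochlea, represented here simply as (channel, timestamp) pairs, into time windows so that each window yields a channels-by-bins rate image suitable for training a conventional CNN. The channel count and window size are assumptions for illustration, not the paper's settings.

    # Illustrative sketch: bin cochlea spikes (channel, timestamp) into time
    # windows to form a firing-rate image. N_CHANNELS and WINDOW are assumed.
    import numpy as np

    N_CHANNELS = 64      # assumed number of cochlea channels
    WINDOW = 0.010       # assumed 10 ms time bins

    def rate_image(spikes, duration):
        """spikes: iterable of (channel, timestamp); returns channels x bins rates."""
        n_bins = int(np.ceil(duration / WINDOW))
        img = np.zeros((N_CHANNELS, n_bins), dtype=np.float32)
        for ch, t in spikes:
            img[ch, min(int(t / WINDOW), n_bins - 1)] += 1.0
        return img / WINDOW   # counts per bin -> spikes per second

    # Example: a short synthetic spike train concentrated in channel 12.
    spikes = [(12, 0.002), (12, 0.004), (30, 0.015)]
    print(rate_image(spikes, 0.030).shape)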

    A Robust Analog VLSI Reichardt Motion Sensor

    Silicon imagers with integrated motion-detection circuitry have been developed and tested for the past 15 years. Many previous circuits estimate motion by identifying and tracking spatial or temporal features. These approaches are prone to failure at low SNR conditions, where feature detection becomes unreliable. An alternative approach to motion detection is an intensity-based spatiotemporal correlation algorithm, such as the one proposed by Hassenstein and Reichardt in 1956 to explain aspects of insect vision. We implemented a Reichardt motion sensor with integrated photodetectors in a standard CMOS process. Our circuit operates at sub-microwatt power levels, the lowest reported for any motion sensor. We measure the effects of device mismatch on these parallel, analog circuits to show they are suitable for constructing 2-D VLSI arrays. Traditional correlation-based sensors suffer from strong contrast dependence. We introduce a circuit architecture that lessens this dependence. We also demonstrate robust performance of our sensor to complex stimuli in the presence of spatial and temporal noise.
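
    The Hassenstein-Reichardt scheme the sensor builds on can be sketched in a few lines: the signal from one photoreceptor is delayed (here by a first-order low-pass) and multiplied with its neighbour's undelayed signal, and the mirror-image arm is subtracted to give a direction-selective output. The time constant and stimulus below are illustrative assumptions, not the chip's parameters.

    # Sketch of an opponent Reichardt correlator: delay-and-multiply between
    # two photoreceptor signals, minus the mirror arm. Parameters are assumed.
    import numpy as np

    def lowpass(x, tau=0.02, dt=0.001):
        """First-order low-pass filter acting as the delay element."""
        y = np.zeros_like(x)
        alpha = dt / (tau + dt)
        for i in range(1, len(x)):
            y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
        return y

    def reichardt(left, right):
        """Opponent correlator output; positive mean for left-to-right motion."""
        return lowpass(left) * right - lowpass(right) * left

    # Example: a 5 Hz sinusoid reaching the right receptor after the left one.
    t = np.arange(0.0, 1.0, 0.001)
    left = np.sin(2 * np.pi * 5 * t)
    right = np.sin(2 * np.pi * 5 * t - 0.5)   # phase lag => rightward motion
    print(np.mean(reichardt(left, right)))    # positive => preferred direction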

    Engineering derivatives from biological systems for advanced aerospace applications

    The present study consisted of a literature survey, a survey of researchers, and a workshop on bionics. These tasks produced an extensive annotated bibliography of bionics research (282 citations), a directory of bionics researchers, and a workshop report on specific bionics research topics applicable to space technology. These deliverables are included as Appendix A, Appendix B, and Section 5.0, respectively. To provide organization to this highly interdisciplinary field and to serve as a guide for interested researchers, we have also prepared a taxonomy or classification of the various subelements of natural engineering systems. Finally, we have synthesized the results of the various components of this study into a discussion of the most promising opportunities for accelerated research, seeking solutions which apply engineering principles from natural systems to advanced aerospace problems. A discussion of opportunities within the areas of materials, structures, sensors, information processing, robotics, autonomous systems, life support systems, and aeronautics is given. Following the conclusions are six discipline summaries that highlight the potential benefits of research in these areas for NASA's space technology programs.

    Nanotechnology for Humans and Humanoids: A vision of the use of nanotechnology in future robotics

    Humanoids will soon co-exist with humans, helping us at home and at work, assisting elderly people, replacing us in dangerous environments, and adding to our personal communication devices the capability to actuate motion. In order for humanoids to be compatible with our everyday tools and our lifestyle, it is, however, mandatory to reproduce (at least partially) the body-mind nexus that makes humans so superior to machines. This requires a totally new approach to humanoid technologies, combining new responsive and soft materials, bioinspired sensors, high-efficiency power sources, and cognition/intelligence of low computational cost: in other words, an unprecedented merge of nanotechnology, cognition, and mechatronics.