2,002 research outputs found

    Neuro-inspired system for real-time vision sensor tilt correction

    Neuromorphic engineering tries to mimic biological information processing. Address-Event Representation (AER) is an asynchronous protocol for transferring the information of spiking neuro-inspired systems. Current AER systems are able to sense visual and auditory stimuli, process information, learn, control robots, and more. In this paper we present an AER-based layer able to correct, in real time, the tilt of an AER vision sensor, using a high-speed algorithmic mapping layer. A co-design platform (the AER-Robot platform), with a Xilinx Spartan 3 FPGA and an 8051 USB microcontroller, has been used to implement the system, which was tested with the help of the USBAERmini2 board and the jAER software.
    Junta de Andalucía P06-TIC-01417; Ministerio de Educación y Ciencia TEC2006-11730-C03-02; Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
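The paper's mapping layer runs on an FPGA, but the address remapping it performs can be sketched in software. A minimal sketch, assuming the correction is a simple rotation of (x, y) event addresses around the sensor centre (the function names, event tuple layout, and lookup-table approach are assumptions for illustration, not the authors' implementation):

```python
import math

def make_tilt_mapper(width, height, tilt_deg):
    """Precompute a lookup table that rotates (x, y) event addresses
    by -tilt_deg around the sensor centre, undoing the sensor tilt."""
    t = math.radians(-tilt_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    table = {}
    for y in range(height):
        for x in range(width):
            xr = cos_t * (x - cx) - sin_t * (y - cy) + cx
            yr = sin_t * (x - cx) + cos_t * (y - cy) + cy
            xi, yi = round(xr), round(yr)
            if 0 <= xi < width and 0 <= yi < height:
                table[(x, y)] = (xi, yi)
    return table

def correct_events(events, table):
    """Remap a stream of (x, y, polarity, timestamp) events; events
    that rotate outside the pixel array are dropped."""
    out = []
    for x, y, pol, ts in events:
        if (x, y) in table:
            xc, yc = table[(x, y)]
            out.append((xc, yc, pol, ts))
    return out
```

Because each event is remapped independently by a single table lookup, this kind of correction fits the per-event, low-latency style of AER processing.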

    Live demonstration: Neuro-inspired system for real-time vision tilt correction

    Correcting the tilt of digital images requires large amounts of memory and high computational resources, and usually takes a considerable amount of time. This demonstration shows how the tilt of a spike-based silicon retina, a dynamic vision sensor (DVS), can be corrected in real time using a commercial accelerometer. The DVS output is a stream of spikes encoded using the Address-Event Representation (AER). Event-based processing operates on the DVS output addresses as they change in real time. Taking advantage of this DVS feature, we present an AER-based layer able to correct the DVS tilt in real time, using a high-speed algorithmic mapping layer that introduces minimal latency into the system. A co-design platform (the AER-Robot platform), based on a Xilinx Spartan 3 FPGA and an 8051 USB microcontroller, has been used to implement the system.
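The tilt angle itself comes from the accelerometer's gravity vector. A minimal software sketch of that step (the axis convention and the one-degree quantization step are assumptions, not details taken from the demonstration):

```python
import math

def tilt_from_accel(ax, ay):
    """Estimate the sensor roll angle (degrees) from the gravity
    vector measured by a 2-axis accelerometer rigidly attached to
    the DVS: with no tilt, gravity falls entirely on the y axis."""
    return math.degrees(math.atan2(ax, ay))

def quantize_angle(angle_deg, step_deg=1.0):
    """Quantize the measured angle so a correction lookup table only
    needs rebuilding when the tilt changes by more than one step."""
    return round(angle_deg / step_deg) * step_deg
```

Quantizing the angle keeps the mapping layer stable between accelerometer readings, so events keep flowing through an unchanged table while the tilt is steady.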

    Live Demonstration: On the distance estimation of moving targets with a Stereo-Vision AER system

    Distance calculation is one of the most important goals of any digital stereoscopic vision system. It is equally important in an AER system, but there it cannot be computed as accurately as we would like. This demonstration shows a first approximation in this field, using a disparity algorithm between the two retinas. The system can produce a distance approximation for a moving object; more specifically, a qualitative estimation. Taking into account the features of the stereo vision system, the prior positioning of the retinas, and the crucial Hold&Fire building block, we are able to correlate the spike rate of the disparity with the distance.
    Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
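The Hold&Fire correlation runs in hardware; as an illustrative software stand-in (not the authors' implementation), one can count near-coincident spikes between the two retina streams and map that count to a coarse distance label. The coincidence window and thresholds below are invented numbers:

```python
def coincidence_rate(left_ts, right_ts, window_us=500):
    """Count left/right spike pairs closer than window_us, using a
    two-pointer sweep over time-sorted timestamp lists."""
    i = j = matches = 0
    while i < len(left_ts) and j < len(right_ts):
        dt = left_ts[i] - right_ts[j]
        if abs(dt) <= window_us:
            matches += 1
            i += 1
            j += 1
        elif dt > 0:
            j += 1   # right spike too old: advance right pointer
        else:
            i += 1   # left spike too old: advance left pointer
    return matches

def qualitative_distance(matches, near_thr=50, far_thr=10):
    """Map a coincidence count to a coarse near/middle/far label,
    mirroring the qualitative estimation the demonstration produces."""
    if matches >= near_thr:
        return "near"
    if matches <= far_thr:
        return "far"
    return "middle"
```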

    Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make it difficult to apply visual information within robot control algorithms. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained, and the need to correct image distortion slows down image parameter computation, which decreases the performance of control algorithms. In this paper, a new approach is proposed that corrects several sources of visual distortion in a single computing step. The goal of this system is to compute the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) at the University Carlos III of Madrid.
    The research leading to these results has received funding from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos, fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and co-funded by Structural Funds of the EU.
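The trained filter itself is not given in the abstract, but the inference scheme it describes (a one-step mapping from a measured angle to a corrected one) can be sketched as a zero-order Takagi-Sugeno fuzzy system, a common target of Neuro-Fuzzy training. The rule base below is hypothetical, standing in for what learning from real experiments would produce:

```python
import math

def gauss(x, c, s):
    """Gaussian membership function with centre c and width s."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def sugeno_correct(measured_deg, rules):
    """Zero-order Takagi-Sugeno inference: each rule is a tuple
    (centre, sigma, corrected_output), and the result is the
    firing-strength-weighted average of the rule consequents."""
    ws = [gauss(measured_deg, c, s) for c, s, _ in rules]
    total = sum(ws)
    if total == 0.0:
        return measured_deg  # no rule fires: pass the input through
    return sum(w * out for w, (_, _, out) in zip(ws, rules)) / total

# Hypothetical rule base a Neuro-Fuzzy trainer might produce,
# bending measured angles slightly to undo systematic distortion:
rules = [(-45.0, 15.0, -43.0), (0.0, 15.0, 0.0), (45.0, 15.0, 43.5)]
```

Because all distortion corrections are folded into the rule consequents during training, a single weighted average replaces the usual chain of undistortion steps, which is where the speedup comes from.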

    Estimación de distancias mediante un sistema de estéreo-visión basado en retinas DVS (Distance estimation using a stereo-vision system based on DVS retinas)

    Distance estimation is one of the most important goals of any artificial vision system. To carry it out, more than one vision sensor is needed, so that objects can be observed from more than one point of view and the geometry of the scene can be applied to that end. The use of DVS sensors makes a notable difference, since the information received refers only to the objects moving within the scene. This aspect, together with the encoding of the information used, calls for a specialized processing system which, in pursuit of autonomy and parallelization, is integrated into an FPGA. This demonstration comprises a fixed scenario in which a mobile object moves continuously, approaching and receding from the stereo vision system; after processing this information, a qualitative estimation of the object's position is provided.
    Image processing in digital computer systems usually considers the visual information as a sequence of frames, and digital video processing has to process each frame in order to obtain a result or detect a feature. In stereo vision, existing algorithms for distance estimation take frames from two digital cameras and process them pixel by pixel to obtain the similarities and differences between both frames; after that, an estimate of the distance of the different objects in the scene is calculated. Spike-based processing instead manipulates spikes one by one at the time they are transmitted, as the human brain does. The mammalian nervous system is able to solve much more complex problems, such as visual recognition, by manipulating neurons' spikes. The spike-based philosophy for visual information processing, based on the neuro-inspired Address-Event Representation (AER), is nowadays achieving very high performance.
    In this work, we propose a framework connecting two DVS retinas to a Virtex-5 FPGA, which allows us to obtain a distance approximation for moving objects in a close environment. We also propose a Multi Hold&Fire algorithm in VHDL that obtains the differences between the two retinas' output spike streams, and a VHDL distance estimator.
    Plan Propio de la Universidad de Sevilla Proyecto 2017/00000962; Ministerio de Industria, Competitividad e Innovación (España) COFNET TEC2016-77785-
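The Multi Hold&Fire block is implemented in VHDL; a behavioural sketch in Python of one plausible hold-and-fire rule (a spike is held per address, an opposite-retina spike at the same address within the hold time cancels it, and unmatched spikes fire through as the difference) might look like this. The event tuple layout and hold time are assumptions:

```python
def hold_and_fire(events, hold_us=1000):
    """events: time-sorted (timestamp, address, source) tuples with
    source in {'L', 'R'}. Matched left/right spikes at the same
    address cancel; surviving spikes approximate the difference
    between the two retina streams."""
    held = {}    # address -> (timestamp, source) of the held spike
    fired = []
    for ts, addr, src in events:
        if addr in held:
            h_ts, h_src = held[addr]
            if src != h_src and ts - h_ts <= hold_us:
                del held[addr]   # opposite-source match: cancel both
                continue
            fired.append((h_ts, addr, h_src))  # unmatched: fire it
        held[addr] = (ts, src)
    # anything still held at the end of the stream also fires
    fired.extend((ts, addr, src) for addr, (ts, src) in held.items())
    return sorted(fired)
```

In this sketch, identical activity seen by both retinas cancels out, so the surviving spike rate grows with disparity, which is the quantity the distance estimator then correlates with distance.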

    Contributions to improve the technologies supporting unmanned aircraft operations

    Mención Internacional en el título de doctor
    Unmanned Aerial Vehicles (UAVs), in their smaller versions known as drones, are becoming increasingly important in today's societies. The systems that make them up present a multitude of challenges, of which error can be considered the common denominator. Perception of the environment is measured by sensors that have errors, and the models that interpret the information and/or define behaviors are approximations of the world and therefore also have errors. Explaining error allows extending the limits of deterministic models to address real-world problems. The performance of the technologies embedded in drones depends on our ability to understand, model, and control the error of the systems that integrate them, as well as of new technologies that may emerge. Flight controllers integrate various subsystems that are generally dependent on other systems. One example is the guidance system, which provides the engines' propulsion controller with the information necessary to accomplish a desired mission. For this purpose, the flight controller contains a guidance control law that reacts to the information perceived by the perception and navigation systems. The error of any of the subsystems propagates through the controller's ecosystem, so the study of each of them is essential. Among the strategies for error control are state-space estimators, where the Kalman filter has been a great ally of engineers since its appearance in the 1960s. Kalman filters are at the heart of information fusion systems, minimizing the error covariance of the system and allowing the measured states to be filtered, and to be estimated in the absence of observations. State Space Models (SSMs) are developed on the basis of a set of hypotheses for modeling the world: among them, that models of the world must be linear and Markovian, and that their error must be Gaussian.
    In general, systems are not linear, so linearizations are performed on models that are already approximations of the world. In other cases, the noise to be controlled is not Gaussian, but it is approximated by that distribution in order to deal with it. Moreover, many systems are not Markovian, i.e., their states do not depend only on the previous state; there are other dependencies that state-space models cannot handle. This thesis presents a collection of studies in which error is formulated and reduced. First, the error in a computer-vision-based precision landing system is studied; then, estimation and filtering problems are addressed from the deep learning approach; finally, classification concepts with deep learning over trajectories are studied. The first case of the collection studies the consequences of error propagation in a machine-vision-based precision landing system and proposes a set of strategies to reduce the impact on the guidance system and, ultimately, the error. The next two studies approach the estimation and filtering problem from the deep learning perspective, where error is a function to be minimized by learning. The last case of the collection deals with a trajectory classification problem with real data. This work completes the two main fields in deep learning, regression and classification, where the error is considered as a probability function of class membership.
    I would like to thank the Ministry of Science and Innovation for granting me the funding with reference PRE2018-086793, associated with the project TEC2017-88048-C2-2-R, which gave me the opportunity to carry out all my PhD activities, including completing an international research internship.
    Programa de Doctorado en Ciencia y Tecnología Informática por la Universidad Carlos III de Madrid. Presidente: Antonio Berlanga de Jesús. Secretario: Daniel Arias Medina. Vocal: Alejandro Martínez Cav
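The Kalman filter central to the state-space estimators discussed above is easiest to see in its scalar form. A minimal sketch of one predict/update cycle with a constant-state model (all noise values are illustrative, not taken from the thesis):

```python
def kalman_step(x, P, z, F=1.0, Q=1e-3, H=1.0, R=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: state estimate and its error covariance
    z:    new measurement, or None to predict without updating
          (the 'no observation' case the thesis mentions)"""
    # predict: propagate the state and grow its uncertainty
    x = F * x
    P = F * P * F + Q
    if z is not None:
        # update: the gain K weighs measurement against prediction
        K = P * H / (H * P * H + R)
        x = x + K * (z - H * x)
        P = (1 - K * H) * P
    return x, P
```

Fed a stream of noisy measurements, the estimate converges toward the true state while P, the error covariance being minimized, shrinks; with z=None the filter coasts on the model alone and P grows, exactly the filtering-versus-estimation distinction drawn in the abstract.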

    Intelligent Vision-Based Navigation System

    This thesis presents a complete vision-based navigation system that can plan and follow an obstacle-avoiding path to a desired destination on the basis of an internal map updated with information gathered from its visual sensor. For vision-based self-localization, the system uses new floor-edges-specific filters for detecting floor edges and their pose, a new algorithm for determining the orientation of the robot, and a new procedure for selecting the initial positions in the self-localization procedure. Self-localization is based on matching visually detected features with those stored in a prior map. For planning, the system demonstrates for the first time a real-world application of the neural-resistive grid method to robot navigation. The neural-resistive grid is modified with a new connectivity scheme that allows the representation of the collision-free space of a robot with finite dimensions via divergent connections between the spatial memory layer and the neuro-resistive grid layer. A new control system is proposed. It uses a Smith Predictor architecture that has been modified for navigation applications and for intermittent delayed feedback typical of artificial vision. A receding horizon control strategy is implemented using Normalised Radial Basis Function nets as path encoders, to ensure continuous motion during the delay between measurements. The system is tested in a simplified environment where an obstacle placed anywhere is detected visually and is integrated in the path planning process. The results show the validity of the control concept and the crucial importance of a robust vision-based self-localization process
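The thesis modifies the Smith Predictor for intermittent visual feedback; the conventional discrete-time Smith Predictor it starts from can be sketched around a proportional controller as below. The first-order plant model, delay, and gains are invented for illustration (and pure P control leaves the usual steady-state offset):

```python
class SmithPredictor:
    """Proportional controller wrapped in a Smith Predictor for a
    first-order plant y[k+1] = a*y[k] + b*u[k] whose measurement
    arrives delayed by `delay` steps, standing in for the slow,
    delayed feedback typical of artificial vision."""
    def __init__(self, a, b, delay, kp):
        self.a, self.b, self.kp = a, b, kp
        self.model_y = 0.0            # fast, undelayed internal model
        self.buffer = [0.0] * delay   # delayed copy of model output

    def control(self, setpoint, measured):
        # compare the delayed measurement against the equally delayed
        # model output, so only model/plant mismatch is fed back late
        model_delayed = self.buffer[0]
        feedback = self.model_y + (measured - model_delayed)
        u = self.kp * (setpoint - feedback)
        # advance the internal model and its delay line
        self.model_y = self.a * self.model_y + self.b * u
        self.buffer = self.buffer[1:] + [self.model_y]
        return u
```

When the model matches the plant, the correction term vanishes and the controller effectively acts on an undelayed system, which is what lets the robot keep moving between delayed vision measurements.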

    Digital design for neuromorphic bio-inspired vision processing

    Artificial Intelligence (AI) is an exciting technology that has flourished in this century. One of the goals of this technology is to give computers the ability to learn. Currently, machine intelligence surpasses human intelligence in specific domains. Besides some conventional machine learning algorithms, Artificial Neural Networks (ANNs) are arguably the most exciting technology used to bring this intelligence to the computer world. Due to the advanced performance of ANNs, an increasing number of applications that need this kind of intelligence are using them. Neuromorphic engineers are trying to introduce bio-inspired hardware for the efficient implementation of neural networks. This hardware should be able to simulate a vast number of neurons in real time, with complex synaptic connectivity, while consuming little power. The work done in this thesis is hardware-oriented, so it is necessary for the reader to have a good understanding of the hardware used for the developments in this thesis. In this chapter, we provide a brief overview of the hardware platforms used in this thesis. Afterward, we briefly explain the contributions of this thesis to the bio-inspired processing research line.
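The unit such bio-inspired hardware typically simulates is the leaky integrate-and-fire (LIF) neuron, and its behaviour can be sketched in a few lines. Units are normalized and all parameter values are illustrative:

```python
def lif_simulate(input_current, dt=1e-3, tau=20e-3,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks
    toward v_rest, integrates the input current, and emits a spike
    (then resets) whenever it crosses v_thresh.
    Returns the step indices at which spikes occur."""
    v = v_rest
    spikes = []
    for k, i_in in enumerate(input_current):
        # forward-Euler step of tau * dv/dt = (v_rest - v) + i_in
        v += (dt / tau) * ((v_rest - v) + i_in)
        if v >= v_thresh:
            spikes.append(k)
            v = v_reset
    return spikes
```

A constant supra-threshold input yields regular spiking, while sub-threshold input produces none; hardware platforms replicate this update, in parallel, for very large neuron counts per time step.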