
    Parallel Stereo Matching with FastFlow

    Stereo vision algorithms are algorithms that, similarly to what happens in the human visual system, take as input a pair of images of the same scene, captured from two adjacent points of view, and compute the correspondence between objects in those images in order to determine their depth. During this thesis a study of state-of-the-art stereo vision algorithms was made; we then chose one particular algorithm, ADCensus, studying the original sequential algorithm and considering data-parallel and stream-parallel alternatives. The algorithm was then implemented in C++ in both sequential and parallel versions, and the resulting programs were tested on multi-core (Intel Xeon E5) and many-core (Intel Xeon Phi coprocessor) architectures, comparing our results with the original work and with similar algorithms provided by OpenCV.
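    The correspondence search described above can be illustrated with a minimal block-matching sketch: for each pixel of the left image, search along the same row of the right image for the shift (disparity) that minimizes a patch-difference cost. This is a toy sum-of-absolute-differences cost, not the ADCensus cost or the FastFlow parallelization from the thesis; all names and parameters are illustrative.

    ```python
    def disparity_sad(left, right, window=1, max_disp=4):
        """Compute a per-pixel disparity for `left` by searching `right`
        along the same row, minimizing the sum of absolute differences
        (SAD) over a (2*window+1)^2 patch. Images are grayscale lists
        of lists; larger disparity means the object is closer."""
        h, w = len(left), len(left[0])
        disp = [[0] * w for _ in range(h)]
        for y in range(window, h - window):
            for x in range(window, w - window):
                best_cost, best_d = float("inf"), 0
                # Try every candidate shift that stays inside the image.
                for d in range(min(max_disp, x - window) + 1):
                    cost = 0
                    for dy in range(-window, window + 1):
                        for dx in range(-window, window + 1):
                            cost += abs(left[y + dy][x + dx] -
                                        right[y + dy][x + dx - d])
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disp[y][x] = best_d
        return disp
    ```

    For a right image that is the left image shifted by one pixel, interior pixels of the returned map are 1. ADCensus replaces the SAD cost with a combination of absolute intensity differences and Census transform distances, followed by cross-based aggregation and refinement, which is where the data-parallel structure exploited in the thesis comes from.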

    Running Large-Scale Simulations on the Neurorobotics Platform to Understand Vision – The Case of Visual Crowding

    Traditionally, human vision research has focused on specific paradigms and proposed models to explain very specific properties of visual perception. However, the complexity and scope of modern psychophysical paradigms undermine the success of this approach. For example, perception of an element strongly deteriorates when neighboring elements are presented in addition (visual crowding). As was shown recently, the magnitude of deterioration depends not only on the directly neighboring elements but on almost all elements and their specific configuration. Hence, to fully explain human visual perception, one needs to take large parts of the visual field into account and combine all the aspects of vision that become relevant at such a scale. These efforts require sophisticated and collaborative modeling. The Neurorobotics Platform (NRP) of the Human Brain Project offers a unique opportunity to connect models of all sorts of visual functions, even those developed by different research groups, into a coherently functioning system. Here, we describe how we used the NRP to connect and simulate a segmentation model, a retina model, and a saliency model to explain complex results about visual perception. The combination of models highlights the versatility of the NRP and provides novel explanations for inward-outward anisotropy in visual crowding.

    Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform

    Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models, which at the current stage cannot deal with real-time constraints, it is not possible to embed them into a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain–body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 “Neurorobotics” of the Human Brain Project (HBP). At its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models.
    We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform for robotic tasks as well as in neuroscientific experiments. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 604102 (Human Brain Project) and from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 720270 (HBP SGA1).
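    The Braitenberg task mentioned above can be sketched as a single control step of a classic Braitenberg vehicle. This is a self-contained illustration of the underlying idea, not the NRP transfer-function API; all names, the gain, and the sensor normalization are assumptions.

    ```python
    def braitenberg_step(left_light, right_light, base_speed=1.0, gain=2.0):
        """One control step of a Braitenberg vehicle with crossed
        excitatory wiring (Braitenberg's vehicle 2b, 'aggression'):
        each light sensor excites the motor on the opposite side, so
        the stronger-lit side's opposite wheel speeds up and the robot
        turns toward the stimulus. Sensor readings are assumed
        normalized to [0, 1]; returns (left_motor, right_motor)."""
        left_motor = base_speed + gain * right_light   # crossed connection
        right_motor = base_speed + gain * left_light   # crossed connection
        return left_motor, right_motor
    ```

    With light fully on the left (`braitenberg_step(1.0, 0.0)`), the right motor runs faster than the left, turning the robot toward the light. In the platform, this sensor-to-motor mapping is implemented through a spiking network and the brain–body connectors rather than a direct formula.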

    A visual tracking model implemented on the iCub robot as a use case for a novel neurorobotic toolkit integrating brain and physics simulation

    Developing neuro-inspired computing paradigms that mimic nervous system function is an emerging field of research that fosters our understanding of the biological system and targets technical applications in artificial systems. The computational power of simulated brain circuits makes them a very promising tool for the development of brain-controlled robots. Early phases of robotic controller development make extensive use of simulators, as they are easy, fast, and cheap tools. However, a tool for developing robotic controllers that encompass brain models, including both neural simulation and physics simulation, is missing. Such a tool would require the capability of orchestrating and synchronizing both simulations as well as managing the exchange of data between them. The Neurorobotics Platform (NRP) aims at filling this gap through an integrated software toolkit enabling an experimenter to design and execute a virtual experiment with a simulated robot using customized brain models. As a use case for the NRP, the iCub robot has been integrated into the platform and connected to a spiking neural network. In particular, experiments on visual tracking have been conducted in order to demonstrate the potential of such a platform.
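    The orchestration problem described above (stepping a neural simulation and a physics simulation in lockstep while exchanging data between them) can be sketched with two stub simulators. Everything here is illustrative: the stub dynamics, the names, and the fixed timestep are assumptions, not the NRP's actual scheduler or the NEST/Gazebo back ends.

    ```python
    class StubBrain:
        """Stand-in for a spiking-network simulator: integrates its
        sensory input into an output 'rate' (toy dynamics)."""
        def __init__(self):
            self.rate = 0.0

        def step(self, dt, sensor_value):
            self.rate += dt * sensor_value


    class StubRobot:
        """Stand-in for a physics simulator: a 1-D robot whose
        position advances with the commanded velocity."""
        def __init__(self):
            self.position = 0.0

        def step(self, dt, motor_command):
            self.position += dt * motor_command


    def run_closed_loop(steps, dt=0.02):
        """Advance both simulators in lockstep, exchanging data once
        per timestep: the robot's state feeds the brain, and the brain's
        output drives the robot -- the synchronization pattern a
        neurorobotics toolkit has to provide."""
        brain, robot = StubBrain(), StubRobot()
        for _ in range(steps):
            sensor = 1.0 - robot.position   # hypothetical sensor reading
            brain.step(dt, sensor)          # 1) brain consumes sensor data
            robot.step(dt, brain.rate)      # 2) brain output moves the robot
        return robot.position
    ```

    The design point this sketch captures is that neither simulator may run ahead of the other: data is exchanged at fixed synchronization points, which is what allows a slow brain simulation and a physics engine with different internal clocks to form one coherent closed loop.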