    TRACKSS approach to improving road safety through sensors collaboration on vehicle and in infrastructure

    Various technologies can be used to improve road safety, but they tend to be self-contained and interact little with one another. Each may provide adequate service on its own, but we believe better systems can be built if these technologies are allowed to collaborate and share information with each other. This paper outlines the work carried out within the EU-funded TRACKSS project to make two sensing technologies (a near-infrared camera and smart dust) work together, in particular to develop more robust V2V and I2V safety applications.
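
    As a rough illustration of the kind of collaboration meant here, the sketch below fuses hazard reports from two independent sensing technologies and boosts confidence when they corroborate each other. The sensor labels, positions, and confidence boost are hypothetical; the abstract does not specify the actual TRACKSS fusion logic.

        from dataclasses import dataclass

        @dataclass
        class Detection:
            """A hazard report from one sensing technology."""
            source: str        # e.g. "nir_camera" or "smart_dust" (hypothetical labels)
            position_m: float  # distance along the road in metres (assumed shared frame)
            confidence: float  # 0.0 .. 1.0

        def fuse(detections, max_gap_m=5.0):
            """Boost confidence when independent sensors agree on a hazard location."""
            fused = []
            for d in detections:
                corroborated = any(
                    o.source != d.source and abs(o.position_m - d.position_m) <= max_gap_m
                    for o in detections
                )
                boosted = min(1.0, d.confidence + 0.3) if corroborated else d.confidence
                fused.append(Detection(d.source, d.position_m, boosted))
            return fused

        reports = [
            Detection("nir_camera", 42.0, 0.6),  # camera sees an obstacle ahead
            Detection("smart_dust", 44.5, 0.5),  # roadside mote reports the same spot
        ]
        for d in fuse(reports):
            print(d)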

    Cooperative Road Sign and Traffic Light Using Near Infrared Identification and Zigbee Smartdust Technologies

    Vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I as well as I2V) applications are developing rapidly. They rely on telecommunication and localization technologies to detect, identify, and geo-localize the sources of information (such as vehicles, roadside objects, or pedestrians). This paper presents an original approach to how two different technologies (a near-infrared identification sensor and a Zigbee smartdust sensor) can work together to create an improved system. After an introduction to these two sensors, two concrete applications are presented: a road sign detection application and a cooperative traffic light application. These applications show how coupling the two sensors enables robust detection and how they complement each other to add dynamic information to roadside objects.
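
    A minimal sketch of the coupling described above: the near-infrared sensor identifies and geo-localizes a roadside object, and the Zigbee smartdust channel attaches dynamic state (such as a traffic light phase) to that identity. The shared identifier, field names, and values are invented for illustration.

        # NIR identification yields geo-localized object IDs; Zigbee carries dynamic state.
        nir_detections = {0x1A2B: {"bearing_deg": -3.5, "range_m": 60.0}}    # id -> position
        zigbee_state   = {0x1A2B: {"phase": "red", "time_to_green_s": 12.0}} # id -> state

        def annotate(nir, zigbee):
            """Attach the dynamic state broadcast over Zigbee to each optically
            identified object, keyed on the shared identifier."""
            return {obj_id: {**pos, **zigbee.get(obj_id, {})} for obj_id, pos in nir.items()}

        print(annotate(nir_detections, zigbee_state))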

    Dynamic Event-based Optical Identification and Communication

    Optical identification is often done with spatial or temporal visual pattern recognition and localization. Temporal pattern recognition, depending on the technology, involves a trade-off between communication frequency, range and accurate tracking. We propose a solution with light-emitting beacons that improves this trade-off by exploiting fast event-based cameras and, for tracking, sparse neuromorphic optical flow computed with spiking neurons. The system is embedded in a simulated drone and evaluated in an asset monitoring use case. It is robust to relative movements and enables simultaneous communication with, and tracking of, multiple moving beacons. Finally, in a hardware lab prototype, we demonstrate for the first time beacon tracking performed simultaneously with state-of-the-art frequency communication in the kHz range. Comment: 10 pages, 7 figures and 1 table.
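
    To give a feel for the temporal side of such a system, the sketch below estimates a blinking beacon's frequency from the timestamps of ON events collected at its tracked image location. The jittered 1 kHz beacon and the median-interval estimator are illustrative assumptions, not the paper's actual decoder.

        import numpy as np

        def beacon_frequency(event_times_s):
            """Estimate a beacon's blink frequency from ON-event timestamps."""
            intervals = np.diff(np.sort(event_times_s))
            intervals = intervals[intervals > 0]
            return 1.0 / np.median(intervals) if intervals.size else 0.0

        # Simulated ON events from a 1 kHz beacon with slight timing jitter.
        rng = np.random.default_rng(0)
        times = np.arange(0.0, 0.05, 1e-3) + rng.normal(0.0, 2e-5, 50)
        print(f"estimated frequency: {beacon_frequency(times):.0f} Hz")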

    Neuromorphic Optical Flow and Real-time Implementation with Event Cameras

    Optical flow provides information on relative motion, an important component in many computer vision pipelines. Neural networks provide high-accuracy optical flow, yet their complexity is often prohibitive for applications at the edge or in robots, where efficiency and latency play a crucial role. To address this challenge, we build on the latest developments in event-based vision and spiking neural networks. We propose a new network architecture, inspired by Timelens, that improves on state-of-the-art self-supervised optical flow accuracy when operated in both spiking and non-spiking modes. To implement a real-time pipeline with a physical event camera, we propose a methodology for principled model simplification based on activity and latency analysis. We demonstrate high-speed optical flow prediction with almost two orders of magnitude reduced complexity while maintaining accuracy, opening the path for real-time deployments. Comment: Accepted for IEEE CVPRW, Vancouver 2023. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media. Copyright 2023 IEEE.
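
    The abstract mentions model simplification based on activity analysis. One plausible reading, sketched below, is to measure mean activation per channel on a calibration batch and flag nearly silent channels as pruning candidates; the threshold and tensor layout are assumptions, not the paper's exact procedure.

        import numpy as np

        def low_activity_channels(activations, threshold=0.01):
            """Return indices of channels whose mean activity over a calibration
            batch falls below a threshold; these are pruning candidates."""
            mean_act = activations.mean(axis=(0, 2, 3))  # average over batch, H, W
            return np.flatnonzero(mean_act < threshold)

        # Toy calibration batch of feature maps: (batch, channels, height, width).
        rng = np.random.default_rng(1)
        feats = rng.random((8, 16, 32, 32))
        feats[:, [3, 11]] *= 0.001  # two nearly silent channels
        print("prune candidates:", low_activity_channels(feats))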

    Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform

    Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. Proper validation of these models requires embodiment in a dynamic, rich sensory environment in which the model is exposed to a realistic sensory-motor task. Because of their complexity, these brain models cannot currently meet real-time constraints, so they cannot be embedded in a real-world task; the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, so far no tool makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that allows them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain–body connectors. In addition, a variety of existing robots and environments is provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 “Neurorobotics” of the Human Brain Project (HBP). At its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 604102 (Human Brain Project) and from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 720270 (HBP SGA1).
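
    As a schematic of what a brain-body connector does, the toy function below maps the firing rates of two "visual" neuron populations to wheel speeds in a Braitenberg-style controller, echoing the first example experiment above. It only illustrates the idea of a transfer function between brain and body models; the function name, gain, and signal names are invented and do not use the platform's actual API.

        def braitenberg_transfer(left_rate_hz, right_rate_hz, gain=0.01):
            """Map firing rates of left/right sensory populations to wheel speeds;
            cross-coupling steers the robot toward the stronger stimulus."""
            return {
                "left_wheel_speed": gain * right_rate_hz,
                "right_wheel_speed": gain * left_rate_hz,
            }

        # Stimulus on the right drives the left wheel faster, turning the robot right.
        print(braitenberg_transfer(left_rate_hz=20.0, right_rate_hz=80.0))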

    Vehicle Identification Using Near Infrared Vision and Applications to Cooperative Perception

    Vehicles will in the near future be equipped with V2V telecommunication means to exchange data, such as the presence of an obstacle on the road or an emergency braking notification. Vehicles are also increasingly equipped with perception systems (cameras, laser scanners, radars) that enable them to explore the immediate environment, including other vehicles. In this paper we propose an on-board optical vehicle identification system that enables telecom and perception systems to cooperate. The optical identification determines which vehicle, in the scene captured by the perception system, is sending information via telecom.
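
    A minimal sketch of the cooperation the abstract describes: each V2V message carries a sender ID, and the perception system's optically read identifiers let the receiver attach the message to the right tracked vehicle. The message fields and track structure are invented for illustration.

        def link_messages_to_tracks(v2v_messages, perceived_vehicles):
            """Associate each V2V message with the perceived vehicle whose
            optically read identifier matches the message's sender ID."""
            by_id = {v["optical_id"]: v for v in perceived_vehicles}
            return [(m, by_id.get(m["sender_id"])) for m in v2v_messages]

        messages = [{"sender_id": 7, "event": "emergency_braking"}]
        tracks = [{"optical_id": 7, "range_m": 35.0, "bearing_deg": 1.2}]
        for msg, track in link_messages_to_tracks(messages, tracks):
            print(msg["event"], "->", track)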