    ITERL: A Wireless Adaptive System for Efficient Road Lighting

    This work presents the development and construction of an adaptive street lighting system that improves safety at intersections, the result of applying low-power Internet of Things (IoT) techniques to intelligent transportation systems. A set of wireless sensor nodes using the Institute of Electrical and Electronics Engineers (IEEE) 802.15.4 standard with additional internet protocol (IP) connectivity measures both ambient conditions and vehicle transit. These measurements are sent to a coordinator node, which collects them and passes them to a local controller; the controller then makes the decisions that turn the streetlight on and set its illumination level. Streetlights are autonomous, powered by photovoltaic energy, and wirelessly connected, achieving a high degree of energy efficiency. Relevant data are also sent to the highway conservation center, allowing it to maintain up-to-date information about the system and enabling preventive maintenance. Funding: Consejería de Fomento y Vivienda, Junta de Andalucía (G-GI3002/IDI); Fondo Europeo de Desarrollo Regional (G-GI3002/IDI)
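    The kind of dimming decision such a local controller might make can be pictured with a minimal sketch, assuming the coordinator delivers an ambient-light reading and a vehicle-transit flag. The thresholds, levels, and function names below are illustrative inventions, not taken from the paper.

        # Hypothetical controller logic: pick a streetlight output level
        # from an ambient-light reading and a vehicle-detection flag.
        DUSK_LUX = 50        # assumed threshold below which the lamp may switch on
        IDLE_LEVEL = 0.2     # assumed dim level held when no traffic is present
        ACTIVE_LEVEL = 1.0   # full brightness while a vehicle transits

        def dim_level(ambient_lux: float, vehicle_detected: bool) -> float:
            """Return the streetlight output as a fraction of full power."""
            if ambient_lux >= DUSK_LUX:   # daylight: lamp stays off
                return 0.0
            return ACTIVE_LEVEL if vehicle_detected else IDLE_LEVEL

        print(dim_level(ambient_lux=5.0, vehicle_detected=True))   # 1.0
        print(dim_level(ambient_lux=5.0, vehicle_detected=False))  # 0.2

    Keeping the decision local to the controller, as the abstract describes, lets a lamp react to traffic without a round trip to the conservation center.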

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
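    The event encoding the survey describes (time, pixel location, and sign of the brightness change) is easy to picture with a small sketch. The accumulation into a frame below is just one naive way to visualize a stream, not a method from the paper.

        from dataclasses import dataclass

        @dataclass
        class Event:
            t_us: int       # timestamp in microseconds (microsecond-order resolution)
            x: int          # pixel column
            y: int          # pixel row
            polarity: int   # +1 for a brightness increase, -1 for a decrease

        def accumulate(events, width, height):
            """Sum event polarities per pixel over a time window."""
            frame = [[0] * width for _ in range(height)]
            for e in events:
                frame[e.y][e.x] += e.polarity
            return frame

        stream = [Event(10, 1, 0, +1), Event(12, 1, 0, +1), Event(31, 0, 1, -1)]
        print(accumulate(stream, width=2, height=2))  # [[0, 2], [-1, 0]]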

    Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades

    Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labelling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present the conversion of two popular image datasets (MNIST and Caltech101), which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches. Comment: 10 pages, 6 figures; in Frontiers in Neuromorphic Engineering, special topic on Benchmarks and Challenges for Neuromorphic Engineering, 2015 (under review)
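    The conversion procedure, moving the sensor through a fixed saccade pattern over each static image, can be sketched as below. The waypoint angles and the move_to/record_events callables are hypothetical stand-ins for the paper's actual pan-tilt hardware interface.

        # Assumed saccade waypoints in (pan, tilt) degrees; illustrative only.
        SACCADE_PATTERN = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0), (0.0, 0.0)]

        def record_neuromorphic_sample(image_id, move_to, record_events):
            """Sweep the sensor through the saccade pattern, collecting events."""
            events = []
            for pan_deg, tilt_deg in SACCADE_PATTERN:
                move_to(pan_deg, tilt_deg)      # actuate the pan-tilt platform
                events.extend(record_events())  # events emitted by the motion
            return {"id": image_id, "events": events}

        # Dry run with stub hardware functions.
        moves = []
        sample = record_neuromorphic_sample(
            "mnist-0", lambda p, t: moves.append((p, t)), lambda: [])
        print(len(moves))  # 4 saccade waypoints visited

    Because the sensor itself moves, the event timing reflects real motion rather than the refresh artifacts of a simulated display, which is the point the authors make.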

    Wireless and Physical Security via Embedded Sensor Networks

    Wireless Intrusion Detection Systems (WIDS) monitor 802.11 wireless frames (Layer-2) in an attempt to detect misuse. What distinguishes a WIDS from a traditional Network IDS is the ability to exploit the broadcast nature of the medium to reconstruct the physical location of the offending party, as opposed to its possibly spoofed (MAC address) identity in cyber space. Traditional wireless network security systems are still heavily anchored in the digital plane of "cyber space" and hence cannot be used reliably or effectively to derive the physical identity of an intruder in order to prevent further malicious wireless broadcasts, for example by escorting an intruder off the premises based on physical evidence. In this paper, we argue that Embedded Sensor Networks could be used effectively to bridge the gap between the digital and physical security planes, and thus could be leveraged to provide reciprocal benefit to surveillance and security tasks on both planes. Toward that end, we present our recent experience integrating wireless networking security services into the SNBENCH (Sensor Network workBench). The SNBENCH provides an extensible framework that enables the rapid development and automated deployment of Sensor Network applications on a shared, embedded sensing and actuation infrastructure. The SNBENCH's extensible architecture allows an engineer to quickly integrate new sensing and response capabilities into the SNBENCH framework, while high-level languages and compilers allow novice SN programmers to compose SN service logic, unaware of the lower-level implementation details of the tools on which their services rely. In this paper we convey the simplicity of the service composition through concrete examples that illustrate the power and potential of Wireless Security Services that span both the physical and digital planes. National Science Foundation (CISE/CSR 0720604, ENG/EFRI 0735974, CIES/CNS 0520166, CNS/ITR 0205294, CISE/ERA RI 0202067)
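    The service-composition idea, wiring a sensing source to a detector and a physical response, can be loosely illustrated in Python. SNBENCH uses its own high-level language and runtime, so the pipeline and the sensor/actuator callables below are an analogy, not SNBENCH code.

        def compose(sensor, detector, responder):
            """Wire a sensing source to a detector and a physical response."""
            def service():
                for frame in sensor():
                    if detector(frame):
                        responder(frame)
            return service

        # Toy example: flag frames whose source MAC is not on an allow-list,
        # then point a camera at the estimated physical location.
        ALLOWED = {"aa:bb:cc:dd:ee:ff"}
        frames = lambda: iter([{"mac": "11:22:33:44:55:66", "pos": (3, 7)}])
        is_rogue = lambda f: f["mac"] not in ALLOWED
        point_camera = lambda f: print("camera ->", f["pos"])

        compose(frames, is_rogue, point_camera)()  # camera -> (3, 7)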

    A Comprehensive Experimental Comparison of Event Driven and Multi-Threaded Sensor Node Operating Systems

    The capabilities of a sensor network are strongly influenced by the operating system used on the sensor nodes. In general, two different sensor network operating system types are currently considered: event-driven and multi-threaded. It is commonly assumed that event-driven operating systems are better suited to sensor networks, as they use less memory and fewer processing resources. However, if factors other than resource usage are considered important, a multi-threaded system might be preferred. This paper compares the resource needs of multi-threaded and event-driven sensor network operating systems. The resources considered are memory usage and power consumption. Additionally, the event-handling capabilities of event-driven and multi-threaded operating systems are analyzed and compared. The results presented in this paper show that for a number of application areas a thread-based sensor network operating system is feasible and preferable.
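    The two models the paper compares can be contrasted in a few lines: an event-driven kernel runs handlers to completion from a queue on a single stack, while a threaded kernel gives each activity its own stack (and its own RAM cost). This is a conceptual sketch, not code from either operating-system family.

        from collections import deque
        import threading

        # Event-driven: one stack; handlers must not block.
        queue = deque()
        def post(handler, arg):
            queue.append((handler, arg))
        def run_event_loop():
            while queue:
                handler, arg = queue.popleft()
                handler(arg)  # runs to completion before the next event

        post(print, "sensor sample")
        post(print, "radio packet")
        run_event_loop()

        # Multi-threaded: a task may block without stalling the others,
        # at the cost of one stack per thread.
        t = threading.Thread(target=print, args=("sensor sample",))
        t.start(); t.join()

    The memory and power measurements in the paper essentially quantify this trade-off: per-thread stacks cost RAM, while run-to-completion handlers constrain how events may be processed.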

    An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors

    Event-Driven vision sensing is a new way of sensing visual reality in a frame-free manner. That is, the vision sensor (camera) does not capture a sequence of still frames, as in conventional video and computer vision systems. In Event-Driven sensors, each pixel autonomously and asynchronously decides when to send its address out. This way, the sensor output is a continuous stream of address events representing the scene dynamically and continuously, without being constrained to frames. In this paper we present an Event-Driven Convolution Module for computing 2D convolutions on such event streams. The Convolution Module has been designed so that many such modules can be assembled to build modular and hierarchical Convolutional Neural Networks for robust, shape- and pose-invariant object recognition. The Convolution Module has multi-kernel capability: it selects the convolution kernel depending on the origin of the event. A proof-of-concept test prototype has been fabricated in a 0.35 μm CMOS process, and extensive experimental results are provided. The Convolution Processor has also been combined with an Event-Driven Dynamic Vision Sensor (DVS) for high-speed recognition examples. The chip can discriminate propellers rotating at 2,000 revolutions per second, detect symbols on a 52-card deck while browsing all cards in 410 ms, or detect and follow the center of a phosphor oscilloscope trace rotating at 5 kHz. Unión Europea 216777 (NABAB); Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
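    The per-event operation the module implements can be pictured as follows: each incoming address event adds a kernel, centered at the event's pixel, into an accumulator array, with the kernel chosen by the event's origin. The kernel contents and sizes in this sketch are made up for illustration; the chip performs this in dedicated hardware.

        def process_event(acc, kernels, x, y, origin):
            """Accumulate the origin-selected kernel around event (x, y)."""
            k = kernels[origin]
            r = len(k) // 2  # kernel "radius" for a square, odd-sized kernel
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < len(acc) and 0 <= xx < len(acc[0]):
                        acc[yy][xx] += k[dy + r][dx + r]

        acc = [[0] * 5 for _ in range(5)]
        kernels = {0: [[0, 1, 0], [1, -4, 1], [0, 1, 0]]}  # toy Laplacian kernel
        process_event(acc, kernels, x=2, y=2, origin=0)
        print(acc[2])  # [0, 1, -4, 1, 0]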