4 research outputs found

    A sub-mW IoT-endnode for always-on visual monitoring and smart triggering

    Full text link
    This work presents a fully-programmable Internet of Things (IoT) visual sensing node that targets sub-mW power consumption in always-on monitoring scenarios. The system features a spatial-contrast 128x64 binary pixel imager with focal-plane processing. The sensor, when working in its lowest power mode (10 μW at 10 fps), provides as output the number of changed pixels. Based on this information, a dedicated camera interface, implemented on a low-power FPGA, wakes up an ultra-low-power parallel processing unit to extract context-aware visual information. We evaluate the smart sensor on three always-on visual triggering application scenarios. Triggering accuracy comparable to RGB image sensors is achieved under nominal lighting conditions, while consuming an average power between 193 μW and 277 μW, depending on context activity. The digital sub-system is extremely flexible, thanks to a fully-programmable digital signal processing engine, yet it achieves 19x lower power consumption than MCU-based cameras with significantly lower on-board computing capabilities.
    Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal
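    The triggering mechanism described above reduces to a simple polling loop. The following C sketch illustrates it under stated assumptions: the driver calls and the threshold value are hypothetical, and the paper implements this logic on a low-power FPGA rather than in software.

```c
/* Minimal sketch of the changed-pixel wake-up logic. All driver calls
 * and the threshold are hypothetical; the paper realizes this on an
 * FPGA-based camera interface. */
#include <stdint.h>

#define CHANGED_PX_THRESH 64U  /* activity threshold (assumed value) */

/* Hypothetical hooks into the sensor and the parallel processing unit. */
extern void     wait_for_next_frame(void);              /* sensor runs at 10 fps */
extern uint16_t sensor_read_changed_pixel_count(void);  /* sensor's only output  */
extern void     processor_wake_up(void);

void camera_interface_loop(void)
{
    for (;;) {
        wait_for_next_frame();

        /* In its lowest power mode the imager reports only how many
         * pixels changed since the previous frame. */
        uint16_t changed = sensor_read_changed_pixel_count();

        /* Wake the processing unit only on relevant activity, keeping
         * the node in the sub-mW regime the rest of the time. */
        if (changed > CHANGED_PX_THRESH)
            processor_wake_up();
    }
}
```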

    Video Surveillance Applications based on ultra-low power sensors

    Get PDF
    Power consumption is an important concern for many applications, especially when power is wasted doing nothing. Video surveillance is one such application: the camera can be on for long periods without "seeing" anything. For this reason, several power management techniques have been developed to reduce the activity of the camera when it is not needed. In this work we focus on surveillance applications performed through Video Surveillance Cameras (VSC) that are not permanently active, but need to be properly "woken up" by specific ultra-Low Power wireless Sensor Nodes (LPSN) able to monitor the area continuously. The LPSN are equipped with Pyroelectric "Passive" Infrared (PIR) sensors to detect movement; thus they have a specific transmission range (to wirelessly send the "wake-up" messages to the camera device) and a sensing range to detect events of interest (e.g., a person crossing a specific area). Different deployments may have a strong impact not only on the events that can be detected, but also on the number of VSC that can be woken up. In this work, we propose a neural/genetic algorithm that computes the best deployment of the LPSN based on two weight factors that "prioritize" either the first objective, the number of VSC that can be woken up, or the second objective, the events that can be detected. The two objectives can be in opposition, and different deployments are obtained depending on the values assigned to the weight factors. The performance evaluation is carried out through a simulation tool, and we show the effectiveness of our approach in reaching very effective deployments in different scenarios.
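    The core of the deployment optimization is a weighted combination of the two objectives. A minimal C sketch follows; the type, the simulation hooks, and their units are assumptions, since the paper evaluates candidates inside a neural/genetic optimizer backed by a simulation tool.

```c
#include <stddef.h>

/* Hypothetical representation of one candidate LPSN placement. */
typedef struct {
    double x, y;  /* node position in the monitored area */
} lpsn_t;

/* Hypothetical simulation hooks: how many cameras a deployment can wake
 * up, and how many events of interest it can detect. */
extern double count_wakeable_vsc(const lpsn_t *nodes, size_t n);
extern double count_detectable_events(const lpsn_t *nodes, size_t n);

/* Weighted fitness maximized by the genetic algorithm: w1 prioritizes
 * camera wake-up coverage, w2 event detection. Different (w1, w2)
 * pairs steer the search toward different "best" deployments. */
double deployment_fitness(const lpsn_t *nodes, size_t n,
                          double w1, double w2)
{
    return w1 * count_wakeable_vsc(nodes, n)
         + w2 * count_detectable_events(nodes, n);
}
```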

    Multimodal Video Analysis on Self-Powered Resource-Limited Wireless Smart Camera

    No full text
    Surveillance is one of the most promising applications for wireless sensor networks, stimulated by a confluence of simultaneous advances in key disciplines: computer vision, image sensors, embedded computing, energy harvesting, and sensor networks. However, computer vision typically requires notable amounts of computing performance, a considerable memory footprint, and high power consumption. Thus, wireless smart cameras pose a challenge to current hardware capabilities in terms of low power consumption and high imaging performance. For this reason, wireless surveillance systems still require a considerable amount of research in different areas such as mote architectures, video processing algorithms, power management, energy harvesting, and distributed processing. In this paper, we introduce a multimodal wireless smart camera equipped with a pyroelectric infrared sensor and a solar energy harvester. The aim of this work is to achieve the following goals: 1) combine local processing, low-power hardware design, power management, and energy harvesting to develop a low-power, low-cost, power-aware, and self-sustainable wireless video sensor node for on-board video processing; 2) develop an energy-efficient smart camera with highly accurate abandoned/removed object detection. The efficiency of our approach is demonstrated by experimental results in terms of power consumption and video processing accuracy, as well as in terms of self-sustainability. Finally, simulation results show how perpetual operation can be achieved in an outdoor scenario for a typical video surveillance application dealing with abandoned/removed object detection.
    Michele Magno; Federico Tombari; Davide Brunelli; Luigi Di Stefano; Luca Benini
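    The multimodal duty-cycling idea, sleep until the PIR sensor fires, then process video on board, can be sketched as a simple state loop. All driver calls below are hypothetical, and the real node additionally adapts its behavior to the harvested solar energy budget.

```c
#include <stdbool.h>

/* Hypothetical platform hooks. */
extern void mcu_deep_sleep(void);          /* blocks until a PIR interrupt */
extern bool pir_motion_pending(void);
extern void camera_power_on(void);
extern void camera_power_off(void);
extern void run_abandoned_object_detection(void);  /* on-board processing */

void smart_camera_main(void)
{
    for (;;) {
        mcu_deep_sleep();                  /* near-zero power while idle */

        if (!pir_motion_pending())
            continue;                      /* spurious wake-up           */

        camera_power_on();                 /* image only when triggered  */
        run_abandoned_object_detection();
        camera_power_off();
    }
}
```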

    Ultra-Low Power IoT Smart Visual Sensing Devices for Always-ON Applications

    Get PDF
    This work presents the design of a Smart Ultra-Low Power visual sensor architecture that couples an ultra-low power event-based image sensor with a parallel, power-optimized digital architecture for data processing. By means of mixed-signal circuits, the imager generates a stream of address events after the extraction and binarization of spatial gradients. When targeting monitoring applications, the sensing and processing energy costs can be reduced by two orders of magnitude thanks to the mixed-signal imaging technology, the event-based data compression, and the use of event-driven computing approaches. From a system-level point of view, a context-aware power management scheme is enabled by a power-optimized sensor peripheral block that requests processor activation only when relevant information is detected within the focal plane of the imager. When targeting a smart visual node for triggering purposes, the event-driven approach brings a 10x power reduction with respect to previously presented visual systems, while leading to comparable results in terms of detection accuracy. To further enhance the recognition capabilities of the smart camera system, this work introduces the concept of event-based binarized neural networks. By coupling the theory of binarized neural networks with focal-plane processing, a 17.8% energy reduction is demonstrated on a real-world data classification task, with a performance drop of 3% with respect to a baseline system featuring a commercial visual sensor and a Binary Neural Network engine. Moreover, when the BNN engine is coupled with the event-driven triggering detection flow, the average power consumption can be as low as the sleep power of 0.3 mW in the case of infrequent events, which is 8x lower than a smart camera system featuring a commercial RGB imager.
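    At the heart of binarized neural network inference is an XNOR/popcount kernel that replaces multiply-accumulate operations. The C sketch below shows this standard BNN building block (it is not code from the thesis): weights and activations are packed 32 per word, with bit value 1 encoding +1 and 0 encoding -1.

```c
#include <stdint.h>

/* Signed dot product of two 32*n_words-element binary vectors:
 * matching bits contribute +1, mismatching bits contribute -1. */
int32_t bnn_dot(const uint32_t *w, const uint32_t *a, int n_words)
{
    int32_t acc = 0;
    for (int i = 0; i < n_words; i++) {
        /* XNOR marks the positions where weight and activation agree. */
        uint32_t match = ~(w[i] ^ a[i]);
        /* matches - mismatches = 2 * popcount - 32 per 32-bit word
         * (__builtin_popcount is the GCC/Clang bit-count intrinsic). */
        acc += 2 * (int32_t)__builtin_popcount(match) - 32;
    }
    return acc;
}
```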