    A sub-mW IoT-endnode for always-on visual monitoring and smart triggering

    This work presents a fully-programmable Internet of Things (IoT) visual sensing node that targets sub-mW power consumption in always-on monitoring scenarios. The system features a spatial-contrast 128x64 binary pixel imager with focal-plane processing. The sensor, when working in its lowest power mode (10 µW at 10 fps), provides as output the number of changed pixels. Based on this information, a dedicated camera interface, implemented on a low-power FPGA, wakes up an ultra-low-power parallel processing unit to extract context-aware visual information. We evaluate the smart sensor on three always-on visual triggering application scenarios. Triggering accuracy comparable to RGB image sensors is achieved at nominal lighting conditions, while consuming an average power between 193 µW and 277 µW, depending on context activity. The digital sub-system is extremely flexible, thanks to a fully-programmable digital signal processing engine, but still achieves 19x lower power consumption compared to MCU-based cameras with significantly lower on-board computing capabilities. Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal
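
    A minimal sketch of the wake-up logic described above, assuming a hypothetical sensor read-out function and an illustrative pixel-change threshold; the paper implements this gating in the FPGA camera interface rather than in software.

        # Sketch of a changed-pixel wake-up trigger (illustrative values only).
        # read_changed_pixel_count() and wake_processing_unit() are placeholders
        # for the imager read-out and the processor wake-up, respectively.
        import random
        import time

        WAKE_THRESHOLD = 200   # changed pixels per frame, illustrative
        FRAME_PERIOD_S = 0.1   # 10 fps, the sensor's lowest power mode

        def read_changed_pixel_count() -> int:
            """Number of binary pixels that changed since the previous frame."""
            return random.randint(0, 128 * 64)   # stand-in for the real sensor

        def wake_processing_unit(changed: int) -> None:
            """Stand-in for waking the parallel processing unit."""
            print(f"wake-up: {changed} pixels changed, running visual analysis")

        def monitor(n_frames: int = 50) -> None:
            for _ in range(n_frames):
                changed = read_changed_pixel_count()
                if changed >= WAKE_THRESHOLD:
                    wake_processing_unit(changed)
                time.sleep(FRAME_PERIOD_S)

        monitor()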

    Intraframe Scene Capturing and Speed Measurement Based on Superimposed Image: New Sensor Concept for Vehicle Speed Measurement

    A vision-based vehicle speed measurement method is presented in this paper. The proposed intraframe method calculates speed estimates based on a single frame of a single camera. With a special double exposure, a superimposed image can be obtained, where motion blur appears significantly only in the bright regions of the otherwise sharp image. This motion blur contains information about the movement of bright objects during the exposure. Most papers in the field of motion blur aim at removing this image degradation effect; in this work, we utilize it for a novel speed measurement approach. An applicable sensor structure and exposure-control system are also shown, as well as the applied image processing methods and experimental results. © 2016 Mate Nemeth and Akos Zarandy
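
    As a rough illustration of the measurement principle, the streak left by a bright object during the exposure maps directly to a speed estimate once the camera geometry is known; the calibration constant and numbers below are assumptions for the sketch, not values from the paper.

        # Speed estimate from the motion-blur streak in a single superimposed
        # frame: streak length (pixels) -> distance on the road plane -> speed.
        def speed_from_blur(blur_length_px: float,
                            metres_per_px: float,
                            exposure_time_s: float) -> float:
            """Return the estimated speed in km/h."""
            distance_m = blur_length_px * metres_per_px
            return distance_m / exposure_time_s * 3.6

        # Example: a 40-pixel streak, 0.02 m/pixel on the road plane, 40 ms
        # exposure -> 20 m/s = 72 km/h.
        print(f"{speed_from_blur(40, 0.02, 0.040):.0f} km/h")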

    A Coarse imaging sensor for detecting embedded signals in infrared light

    Machine vision technology has become prevalent in touch technology; however, it is still limited by background noise. To reduce the background noise present in the images of interest, it is important to consider the imaging device and the signal source. The architecture, size, sampling scheme, programming, and technology of the imaging device must be considered, as well as the response characteristics of the signal source. Several pixel architectures are explained and implemented with discrete components. Their performance was measured through their ability to track a modulated signal source. Potentially, an imaging sensor paired with a system designed to modulate the light to be imaged could drastically reduce background noise. Further, with a less noisy image, the processing steps required for touch event detection may be simplified.
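
    A small sketch of the idea of locking onto a modulated source to reject steady background light; the sampling rate, modulation frequency and simulated trace are assumptions for illustration, not the discrete-component implementation described above.

        # Correlating a pixel's samples with the known modulation reference
        # recovers the modulated source while constant background averages out.
        import numpy as np

        FRAME_RATE_HZ = 240.0   # assumed pixel sampling rate
        MOD_FREQ_HZ = 30.0      # assumed source modulation frequency
        N = 480

        t = np.arange(N) / FRAME_RATE_HZ
        background = 0.8                                          # steady ambient light
        source = 0.1 * (np.sin(2 * np.pi * MOD_FREQ_HZ * t) > 0)  # modulated source
        samples = background + source + 0.02 * np.random.randn(N)

        ref = np.sin(2 * np.pi * MOD_FREQ_HZ * t)
        amplitude = 2 * abs(np.mean(samples * ref))   # background contributes ~0
        print(f"recovered modulation amplitude: {amplitude:.3f}")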

    Optofluidic ultrahigh-throughput detection of fluorescent drops

    This paper describes an optofluidic droplet interrogation device capable of counting fluorescent drops at a throughput of 254,000 drops per second. To our knowledge, this rate is the highest interrogation rate published thus far. Our device consists of 16 parallel microfluidic channels bonded directly to a filter-coated two-dimensional Complementary Metal-Oxide-Semiconductor (CMOS) sensor array. Fluorescence signals emitted from the drops are collected by the sensor that forms the bottom of the channel. The proximity of the drops to the sensor facilitates efficient collection of fluorescence emission from the drops, and overcomes the trade-off between light collection efficiency and field of view in conventional microscopy. The interrogation rate of our device is currently limited by the acquisition speed of the CMOS sensor, and is expected to increase further as high-speed sensors become increasingly available.
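
    A minimal sketch of the drop-counting step on one channel's fluorescence trace, assuming drops appear as bursts above an intensity threshold; the synthetic trace and threshold are illustrative, not data from the device.

        # Count drops as rising crossings of an intensity threshold.
        import numpy as np

        def count_drops(trace: np.ndarray, threshold: float) -> int:
            above = trace > threshold
            return int(np.count_nonzero(~above[:-1] & above[1:]))

        # Synthetic single-channel trace: baseline noise plus short bursts.
        rng = np.random.default_rng(0)
        trace = 0.05 * rng.standard_normal(100_000)
        for start in range(100, 100_000 - 5, 40):   # one burst every 40 samples
            trace[start:start + 5] += 1.0
        print(count_drops(trace, threshold=0.5), "drops detected")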

    CMOS Vision Sensors: Embedding Computer Vision at Imaging Front-Ends

    CMOS Image Sensors (CIS) are key for imaging technologies. These chips are conceived for capturing optical scenes focused on their surface, and for delivering electrical images, commonly in digital format. CISs may incorporate intelligence; however, their smartness basically concerns calibration, error correction and other similar tasks. The term CVISs (CMOS VIsion Sensors) defines another class of sensor front-ends which are aimed at performing vision tasks right at the focal plane. They have been running under names such as computational image sensors, vision sensors and silicon retinas, among others. CVISs and CISs are similar regarding physical implementation. However, while the inputs of both CISs and CVISs are images captured by photo-sensors placed at the focal plane, the primary outputs of CVISs may not be images but either image features or even decisions based on the spatial-temporal analysis of the scenes. We may hence state that CVISs are more “intelligent” than CISs as they focus on information instead of on raw data. Actually, CVIS architectures capable of extracting and interpreting the information contained in images, and prompting reaction commands thereof, have been explored for years in academia, and industrial applications are recently ramping up. One of the challenges for CVIS architects is incorporating computer vision concepts into the design flow. The endeavor is ambitious because the imaging and computer vision communities are rather disjoint groups talking different languages. The Cellular Nonlinear Network Universal Machine (CNNUM) paradigm, proposed by Profs. Chua and Roska, defined an adequate framework for such conciliation as it is particularly well suited for hardware-software co-design [1]-[4]. This paper overviews CVIS chips that were conceived and prototyped at the IMSE Vision Lab over the past twenty years. Some of them fit the CNNUM paradigm while others are tangential to it. All of them employ per-pixel mixed-signal processing circuitry to achieve sensor-processing concurrency in the quest for fast operation with a reduced energy budget. Funding: Junta de Andalucía TIC 2012-2338; Ministerio de Economía y Competitividad TEC 2015-66878-C3-1-R and TEC 2015-66878-C3-3-

    Ultra-Low Power IoT Smart Visual Sensing Devices for Always-ON Applications

    This work presents the design of a Smart Ultra-Low Power visual sensor architecture that couples together an ultra-low power event-based image sensor with a parallel and power-optimized digital architecture for data processing. By means of mixed-signal circuits, the imager generates a stream of address events after the extraction and binarization of spatial gradients. When targeting monitoring applications, the sensing and processing energy costs can be reduced by two orders of magnitude thanks to the mixed-signal imaging technology, the event-based data compression and the use of event-driven computing approaches. From a system-level point of view, a context-aware power management scheme is enabled by means of a power-optimized sensor peripheral block that requests the processor activation only when relevant information is detected within the focal plane of the imager. When targeting a smart visual node for triggering purposes, the event-driven approach brings a 10x power reduction with respect to other presented visual systems, while leading to comparable results in terms of detection accuracy. To further enhance the recognition capabilities of the smart camera system, this work introduces the concept of event-based binarized neural networks. By coupling together the theory of binarized neural networks and focal-plane processing, a 17.8% energy reduction is demonstrated on a real-world data classification task with a performance drop of 3% with respect to a baseline system featuring commercial visual sensors and a Binary Neural Network engine. Moreover, when coupling the BNN engine with the event-driven triggering detection flow, the average power consumption can be as low as the sleep power of 0.3 mW in case of infrequent events, which is 8x lower than a smart camera system featuring a commercial RGB imager.
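
    The binarized-network arithmetic referred to above reduces multiply-accumulates to XNOR and popcount once weights and activations are constrained to {-1, +1}; the sketch below is a generic illustration of that operation, not the paper's hardware engine.

        # Dot product of two n-element {-1,+1} vectors packed as bit masks
        # (bit = 1 encodes +1, bit = 0 encodes -1): XNOR, popcount, rescale.
        def bnn_dot(weight_bits: int, activation_bits: int, n: int) -> int:
            xnor = ~(weight_bits ^ activation_bits) & ((1 << n) - 1)
            matches = bin(xnor).count("1")
            return 2 * matches - n   # each match is +1, each mismatch is -1

        w = 0b10110010   # +1,-1,+1,+1,-1,-1,+1,-1
        a = 0b10010011   # +1,-1,-1,+1,-1,-1,+1,+1
        print(bnn_dot(w, a, 8))      # prints 4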

    Benchmarking Image Sensors Under Adverse Weather Conditions for Autonomous Driving

    Adverse weather conditions are very challenging for autonomous driving because most state-of-the-art sensors stop working reliably under these conditions. In order to develop robust sensors and algorithms, tests with current sensors in defined weather conditions are crucial for determining the impact of bad weather on each sensor. This work describes a testing and evaluation methodology that helps to benchmark novel sensor technologies and compare them to state-of-the-art sensors. As an example, gated imaging is compared to standard imaging under foggy conditions. It is shown that gated imaging outperforms state-of-the-art standard passive imaging due to its time-synchronized active illumination.

    Real-time image streaming over a low-bandwidth wireless camera network

    In this paper we describe the recent development of a low-bandwidth wireless camera sensor network. We propose a simple, yet effective, network architecture which allows multiple cameras to be connected to the network and synchronize their communication schedules. Image compression of greater than 90% is performed at each node running on a local DSP coprocessor, resulting in nodes using 1/8th the energy compared to streaming uncompressed images. We briefly introduce the Fleck wireless node and the DSP/camera sensor, and then outline the network architecture and compression algorithm. The system is able to stream color QVGA images over the network to a base station at up to 2 frames per second. © 2007 IEEE
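
    A back-of-the-envelope energy comparison that illustrates why compressing on the node can pay off when the radio dominates; the per-byte costs below are placeholder assumptions, not the measured figures behind the paper's 1/8th result.

        # Energy per frame: raw streaming vs compress-then-send.
        QVGA_BYTES = 320 * 240 * 2     # 16-bit colour QVGA frame
        E_TX_PER_BYTE = 2.0            # assumed radio cost, uJ/byte
        E_DSP_PER_BYTE = 0.1           # assumed compression cost, uJ/byte
        COMPRESSION_RATIO = 0.10       # ~90% size reduction

        raw = QVGA_BYTES * E_TX_PER_BYTE
        compressed = QVGA_BYTES * (E_DSP_PER_BYTE + COMPRESSION_RATIO * E_TX_PER_BYTE)
        print(f"raw: {raw:.0f} uJ, compressed: {compressed:.0f} uJ, "
              f"saving: {raw / compressed:.1f}x")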

    Airborne Infrared Target Tracking with the Nintendo Wii Remote Sensor

    Intelligence, surveillance, and reconnaissance unmanned aircraft systems (UAS) are the most common variety of UAS in use today and provide invaluable capabilities to both the military and civil services. Keeping the sensors centered on a point of interest for an extended period of time is a demanding task requiring the full attention and cooperation of the UAS pilot and sensor operator. There is great interest in developing technologies which allow an operator to designate a target and allow the aircraft to automatically maneuver and track the designated target without operator intervention. Presently, the barriers to entry for developing these technologies are high: expertise in aircraft dynamics and control as well as in real-time motion video analysis is required, and the cost of the systems required to flight test these technologies is prohibitive. However, if the research intent is purely to develop a vehicle maneuvering controller, then it is possible to obviate the video analysis problem entirely. This research presents a solution to the target tracking problem which reliably provides automatic target detection and tracking with low expense and computational overhead by making use of the infrared sensor from a Nintendo Wii Remote Controller.
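
    A small sketch of the control side of that idea: the Wii Remote's IR camera reports blob positions on a 1024x768 grid, and the offset of the tracked blob from the image centre can be turned into proportional steering cues. The gains are illustrative, and how the blob coordinates reach this function (Bluetooth stack, driver) is outside the sketch.

        # Map a reported IR blob position to yaw/pitch steering commands in [-1, 1]
        # that drive the blob toward the centre of the sensor's field of view.
        IR_WIDTH, IR_HEIGHT = 1024, 768
        K_YAW, K_PITCH = 0.5, 0.5        # illustrative proportional gains

        def steering_commands(blob_x: int, blob_y: int) -> tuple[float, float]:
            err_x = (blob_x - IR_WIDTH / 2) / (IR_WIDTH / 2)
            err_y = (blob_y - IR_HEIGHT / 2) / (IR_HEIGHT / 2)

            def clamp(v: float) -> float:
                return max(-1.0, min(1.0, v))

            return clamp(K_YAW * err_x), clamp(K_PITCH * err_y)

        print(steering_commands(700, 300))   # blob right of and above image centre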