
    A sub-mW IoT-endnode for always-on visual monitoring and smart triggering

    This work presents a fully-programmable Internet of Things (IoT) visual sensing node that targets sub-mW power consumption in always-on monitoring scenarios. The system features a spatial-contrast 128×64 binary-pixel imager with focal-plane processing. The sensor, when working in its lowest-power mode (10 µW at 10 fps), provides as output the number of changed pixels. Based on this information, a dedicated camera interface, implemented on a low-power FPGA, wakes up an ultra-low-power parallel processing unit to extract context-aware visual information. We evaluate the smart sensor on three always-on visual triggering application scenarios. Triggering accuracy comparable to RGB image sensors is achieved at nominal lighting conditions, while consuming an average power between 193 µW and 277 µW, depending on context activity. The digital sub-system is extremely flexible, thanks to a fully-programmable digital signal processing engine, yet still achieves 19x lower power consumption compared to MCU-based cameras with significantly lower on-board computing capabilities. Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal.
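
    The wake-up behaviour described above can be summarised in a few lines of code. The sketch below only illustrates the triggering idea, assuming a hypothetical wake-up threshold and callback; it is not the paper's firmware.

```python
# Illustrative sketch (not the paper's firmware) of the wake-up triggering idea:
# the imager reports only the count of changed binary pixels per frame, and the
# camera interface wakes the processing cluster when that count crosses a
# threshold.  WAKE_THRESHOLD and wake_processor are hypothetical.

WAKE_THRESHOLD = 64      # changed-pixel count that counts as "activity" (assumed)

def camera_interface_step(changed_pixel_count: int, wake_processor) -> bool:
    """Wake the parallel processing unit only when the frame shows activity."""
    if changed_pixel_count >= WAKE_THRESHOLD:
        wake_processor()             # power up the ultra-low-power cluster
        return True
    return False                     # remain in the always-on, sub-mW mode

# Example: a quiet frame keeps the node asleep, a busy one triggers processing.
camera_interface_step(3, lambda: print("woken"))
camera_interface_step(200, lambda: print("woken"))
```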

    Design Of Neural Network Circuit Inside High Speed Camera Using Analog CMOS 0.35 µm Technology

    Analog VLSI on-chip learning Neural Networks represent a mature technology for a large number of applications involving industrial as well as consumer appliances. This is particularly the case when low power consumption, small size and/or very high speed are required. This approach exploits the computational features of Neural Networks, the implementation efficiency of analog VLSI circuits and the adaptation capabilities of the on-chip learning feedback scheme. High-speed video cameras are powerful tools for investigating, for instance, biomechanics or the movement of mechanical parts in manufacturing processes. In recent years, the use of CMOS sensors instead of CCDs has enabled the development of high-speed video cameras offering digital outputs, readout flexibility, and lower manufacturing costs. In this paper, we propose a high-speed smart camera based on a CMOS sensor with an embedded analog Neural Network.
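
    As a rough software analogue of the on-chip learning feedback scheme mentioned above, the sketch below adapts a single-layer network with a delta-rule update. The network size, activation and learning rate are assumptions for illustration; the paper's analog circuit realises the equivalent behaviour in hardware.

```python
import numpy as np

# Software analogue (not the paper's analog design) of an on-chip learning
# feedback loop: a single-layer network adapted with the delta rule.

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=4)   # synaptic weights, illustrative size
LEARNING_RATE = 0.05

def forward(x: np.ndarray) -> float:
    return float(np.tanh(weights @ x))    # smooth activation, common in analog NNs

def learn_step(x: np.ndarray, target: float) -> float:
    """One feedback iteration: compare output with target and nudge the weights."""
    global weights
    error = target - forward(x)
    weights = weights + LEARNING_RATE * error * x   # delta-rule update
    return error

# Example: repeated presentations of one pattern drive the error toward zero.
pattern = np.array([0.5, -0.2, 0.1, 0.8])
for _ in range(20):
    residual = learn_step(pattern, target=0.7)
print(round(residual, 3))
```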

    Form Factor Improvement of Smart-Pixels for Vision Sensors through 3-D Vertically-Integrated Technologies

    While conventional CMOS active pixel sensors embed only the circuitry required for photo-detection, pixel addressing and voltage buffering, smart pixels also incorporate circuitry for data processing, data storage and control of data interchange. This additional circuitry enables data processing to be realized concurrently with the acquisition of images, which is instrumental in reducing the amount of data needed to convey the information contained in the images. This way, more efficient vision systems can be built, at the cost of a larger pixel pitch. Vertically-integrated 3-D technologies make it possible to keep the advantages of smart pixels while improving their form factor.
    Office of Naval Research N000141110312; Ministerio de Ciencia e Innovación IPT-2011-1625-43000
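
    The data-reduction benefit of in-pixel processing can be illustrated with a simple change-detection model: each pixel compares the current frame with a stored previous one and only significant changes are read out. The array size and threshold below are assumptions, and the code is a software analogy rather than the paper's circuitry.

```python
import numpy as np

# Software analogy of smart-pixel change detection: only pixels whose value
# changed significantly since the previous frame are read out, so far fewer
# values leave the array.  Array size and threshold are assumed for illustration.

def smart_pixel_readout(prev: np.ndarray, curr: np.ndarray, thresh: int = 16):
    """Return (row, col, value) triples for pixels that changed by more than thresh."""
    changed = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    rows, cols = np.nonzero(changed)
    return list(zip(rows.tolist(), cols.tolist(), curr[changed].tolist()))

# Example: an almost-static 64x64 scene yields a handful of events, not 4096 values.
rng = np.random.default_rng(1)
prev = rng.integers(0, 200, size=(64, 64), dtype=np.uint8)
curr = prev.copy()
curr[10:12, 20:22] = 255          # a small moving object
print(len(smart_pixel_readout(prev, curr)))
```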

    Image Sensors in Security and Medical Applications

    This paper briefly reviews CMOS image sensor technology and its utilization in security and medical applications. The role and future trends of image sensors in each of these applications are discussed. To give the reader a deeper understanding of the technology aspects, the paper concentrates on selected applications, namely surveillance, biometrics, capsule endoscopy and artificial retina. These applications were chosen because of their importance in our daily life and because they represent leading-edge applications for imaging systems research and development. In addition, a review of image sensor implementations in these applications allows the reader to examine image sensor technology from technical as well as other points of view.

    Spatially Smart Optical Sensing and Scanning

    Methods, devices and systems of an optical sensor for spatially smart 3-D object measurements that use variable focal length lenses to target both specular and diffuse objects by matching the transverse dimensions of the sampling optical beam to the transverse size of a flat target at a given axial target distance, providing instantaneous spatial mapping of the flat target zone. The sensor allows volumetric-data-compressed remote sensing of object transverse dimensions, including cross-sectional size, transverse motion displacement, inter-object transverse gap distance, 3-D animation data acquisition, laser-based 3-D machining, and 3-D inspection and testing. An embodiment provides a 2-D optical display using 2-D laser scanning and 3-D beam forming optics engaged with the sensor optics to measure the distance of the display screen from the laser source and scanning optics by adjusting its focus to produce the smallest focused beam spot on the display screen. With the screen distance known, the angular scan range for the scan mirrors can be computed to generate the desired number of scanned spots in the 2-D display.
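
    The display embodiment reduces to simple geometry once the screen distance is known. The sketch below shows one way the mirror scan angle and the number of addressable spots per line could be derived; the variable names, display width and focused spot size are assumptions for illustration only.

```python
import math

# Back-of-the-envelope sketch of the 2-D display embodiment: once the sensor
# optics report the screen distance, the scan angle and the number of
# addressable spots per line follow from simple geometry.  The spot size and
# display width used below are assumptions, not values from the source.

def scan_parameters(screen_distance_m: float, display_width_m: float,
                    focused_spot_diameter_m: float = 0.5e-3):
    """Return (full scan angle in degrees, addressable spots per scan line)."""
    half_angle = math.atan(display_width_m / (2.0 * screen_distance_m))
    full_scan_angle_deg = math.degrees(2.0 * half_angle)
    spots_per_line = int(display_width_m / focused_spot_diameter_m)
    return full_scan_angle_deg, spots_per_line

# Example: a 1 m wide display at 2 m needs roughly a 28 degree scan and offers
# about 2000 addressable spots per line under these assumptions.
print(scan_parameters(2.0, 1.0))
```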

    Experimental and simulation study results for video landmark acquisition and tracking technology

    A synopsis of related Earth observation technology is provided and includes surface-feature tracking, generic feature classification and landmark identification, and navigation by multicolor correlation. With the advent of the Space Shuttle era, the NASA role takes on new significance in that one can now conceive of dedicated Earth resources missions. The Space Shuttle also provides a unique test bed for evaluating advanced sensor technology like that described in this report. As a result of this rationale, the FILE OSTA-1 Shuttle experiment, which grew out of the Video Landmark Acquisition and Tracking (VILAT) activity, was developed and is described in this report along with the relevant tradeoffs. In addition, a synopsis of FILE computer simulation activity is included. This synopsis relates to future required capabilities such as landmark registration, reacquisition, and tracking.
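
    As a generic illustration of the landmark registration, reacquisition, and tracking capabilities mentioned above, the sketch below locates a stored landmark template in a new scene by normalized cross-correlation. This is a common software analogue of such feature tracking, not the FILE/VILAT algorithm itself.

```python
import numpy as np

# Generic template-correlation sketch for landmark reacquisition; not the
# FILE/VILAT algorithm, only an illustration of matching a stored landmark
# template against a newly acquired scene.

def locate_landmark(scene: np.ndarray, template: np.ndarray):
    """Return the (row, col) offset where the template correlates best, and its score."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(scene.shape[0] - th + 1):
        for c in range(scene.shape[1] - tw + 1):
            patch = scene[r:r + th, c:c + tw] - scene[r:r + th, c:c + tw].mean()
            denom = np.linalg.norm(patch) * np.linalg.norm(t)
            if denom == 0.0:
                continue
            score = float((patch * t).sum() / denom)   # normalized cross-correlation
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Example: a landmark cut from a scene is found again at its original position.
rng = np.random.default_rng(2)
scene = rng.normal(size=(40, 40))
template = scene[12:20, 25:33].copy()
print(locate_landmark(scene, template))   # expected position (12, 25)
```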