
    ENERGY-EFFICIENT LIGHTWEIGHT ALGORITHMS FOR EMBEDDED SMART CAMERAS: DESIGN, IMPLEMENTATION AND PERFORMANCE ANALYSIS

    An embedded smart camera is a stand-alone unit that not only captures images, but also includes a processor, memory and a communication interface. Battery-powered embedded smart cameras introduce many additional challenges, since they have very limited resources, such as energy, processing power and memory. When camera sensors are added to an embedded system, the problem of limited resources becomes even more pronounced. Hence, computer vision algorithms running on these camera boards should be lightweight and efficient. This thesis is about designing and developing computer vision algorithms that are aware of, and successfully overcome, the limitations of embedded platforms (in terms of power consumption and memory usage). In particular, we are interested in object detection and tracking methodologies and their impact on the performance and battery life of the CITRIC camera (the embedded smart camera employed in this research). This thesis aims to prolong the lifetime of the embedded smart platform without affecting the reliability of the system during surveillance tasks. Therefore, the reader is walked through the whole design process, from development and simulation, through implementation and optimization, to testing and performance analysis. The work presented in this thesis carries out not only software optimization, but also hardware-level operations during the stages of object detection and tracking. The performance of the algorithms introduced in this thesis is comparable to state-of-the-art object detection and tracking methods, such as Mixture of Gaussians, Eigen segmentation, and color and coordinate tracking. Unlike the traditional methods, the newly-designed algorithms notably reduce the memory requirements, as well as the number of memory accesses per pixel. 
To accomplish the proposed goals, this work attempts to interconnect different levels of the embedded system architecture to make the platform more efficient in terms of energy and resource savings. Thus, the proposed algorithms are optimized at the API, middleware, and hardware levels to access the pixel information of the CMOS sensor directly. Only the required pixels are acquired, in order to reduce unnecessary communication overhead. Experimental results show that, when exploiting the architecture capabilities of an embedded platform, a 41.24% decrease in energy consumption and a 107.2% increase in battery life can be accomplished. Compared to traditional object detection and tracking methods, the proposed work provides an additional 8 hours of continuous processing on 4 AA batteries, increasing the lifetime of the camera to 15.5 hours.
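    The reported battery-life figures are internally consistent, as a quick arithmetic check shows (a sketch; the baseline lifetime is inferred from the 15.5-hour total and the 8-hour gain, not stated explicitly in the abstract):

```python
# Sanity-check the reported battery-life figures.
# Assumption: baseline lifetime = optimized lifetime - reported gain.
optimized_hours = 15.5
extra_hours = 8.0
baseline_hours = optimized_hours - extra_hours  # 7.5 h on 4 AA batteries

increase_pct = 100 * extra_hours / baseline_hours
print(f"baseline: {baseline_hours} h, increase: {increase_pct:.1f}%")
# ~106.7%, in line with the reported 107.2% (rounding in the source figures)
```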

    A sub-mW IoT-endnode for always-on visual monitoring and smart triggering

    This work presents a fully-programmable Internet of Things (IoT) visual sensing node that targets sub-mW power consumption in always-on monitoring scenarios. The system features a spatial-contrast 128×64 binary pixel imager with focal-plane processing. The sensor, when working in its lowest power mode (10 µW at 10 fps), provides as output the number of changed pixels. Based on this information, a dedicated camera interface, implemented on a low-power FPGA, wakes up an ultra-low-power parallel processing unit to extract context-aware visual information. We evaluate the smart sensor on three always-on visual triggering application scenarios. Triggering accuracy comparable to RGB image sensors is achieved at nominal lighting conditions, while consuming an average power between 193 µW and 277 µW, depending on context activity. The digital sub-system is extremely flexible, thanks to a fully-programmable digital signal processing engine, yet still achieves 19x lower power consumption compared to MCU-based cameras with significantly lower on-board computing capabilities. Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal.
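    The smart-triggering scheme described above can be sketched as a simple threshold test on the changed-pixel count reported by the imager; the threshold value and function names below are illustrative assumptions, not details from the paper:

```python
# Minimal sketch of a changed-pixel wake-up trigger: the low-power imager
# reports only the number of changed pixels per frame, and the camera
# interface wakes the processing unit when that count is high enough.
# The threshold is an assumed value for illustration.

WAKE_THRESHOLD = 50  # changed pixels per frame (assumption)

def should_wake(changed_pixel_count: int, threshold: int = WAKE_THRESHOLD) -> bool:
    """Wake the parallel processing unit only when enough pixels changed."""
    return changed_pixel_count >= threshold

# While the scene is static, the node stays in its lowest power mode:
frame_activity = [3, 7, 2, 120, 95, 4]
wake_events = [should_wake(n) for n in frame_activity]
print(wake_events)  # [False, False, False, True, True, False]
```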

    Towards self-powered wireless sensor networks

    Ubiquitous computing aims at creating smart environments in which computational and communication capabilities permeate the world at all scales, improving the human experience and quality of life in a totally unobtrusive yet completely reliable manner. According to this vision, a huge variety of smart devices and products (e.g., wireless sensor nodes, mobile phones, cameras, sensors, home appliances and industrial machines) are interconnected to realize a network of distributed agents that continuously collect, process, share and transport information. The impact of such technologies on our everyday life is expected to be massive, as it will enable innovative applications that will profoundly change the world around us. Remotely monitoring the conditions of patients and elderly people inside hospitals and at home, preventing catastrophic failures of buildings and critical structures, realizing smart cities with sustainable management of traffic and automatic monitoring of pollution levels, early detection of earthquakes and forest fires, monitoring water quality and detecting water leakages, and preventing landslides and avalanches are just some examples of life-enhancing applications made possible by smart ubiquitous computing systems. To turn this vision into a reality, however, newly arising challenges have to be addressed, overcoming the limits that currently prevent the pervasive deployment of smart devices that are long-lasting, trusted, and fully autonomous. In particular, the most critical factor currently limiting the realization of ubiquitous computing is energy provisioning. In fact, embedded devices are typically powered by short-lived batteries that severely affect their lifespan and reliability, often requiring expensive and invasive maintenance. 
In this PhD thesis, we investigate the use of energy-harvesting techniques to overcome the energy bottleneck problem suffered by embedded devices, particularly focusing on Wireless Sensor Networks (WSNs), which are one of the key enablers of pervasive computing systems. Energy harvesting makes it possible to use energy readily available from the environment (e.g., from solar light, wind, body movements, etc.) to significantly extend the typical lifetime of low-power devices, enabling ubiquitous computing systems that can last virtually forever. However, energy-autonomous devices pose many design challenges at both the hardware and the software levels. This thesis addresses some of the most challenging problems of this emerging research area, such as devising mechanisms for energy prediction and management, improving the efficiency of the energy scavenging process, developing protocols for harvesting-aware resource allocation, and providing solutions that enable robust and reliable security support.
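    Energy prediction, one of the problems listed above, is often implemented in harvesting-aware WSN schedulers as an exponentially weighted moving average over past harvest observations. The sketch below illustrates that general idea only; the weighting factor and slot-based interface are assumptions, not the thesis's actual design:

```python
# Illustrative EWMA predictor for harvested energy (e.g. solar), a common
# baseline for energy prediction in harvesting-aware sensor nodes.
# Alpha and the per-slot interface are assumptions for this sketch.

class EWMAEnergyPredictor:
    def __init__(self, alpha: float = 0.5):
        self.alpha = alpha      # weight given to the newest observation
        self.estimate = 0.0     # predicted energy for the next slot (mJ)

    def update(self, observed_mj: float) -> float:
        """Fold the latest per-slot harvest observation into the estimate."""
        self.estimate = self.alpha * observed_mj + (1 - self.alpha) * self.estimate
        return self.estimate

predictor = EWMAEnergyPredictor(alpha=0.5)
for harvest in [10.0, 12.0, 8.0, 11.0]:  # mJ harvested in successive slots
    prediction = predictor.update(harvest)
print(f"next-slot prediction: {prediction:.2f} mJ")
```

A node can then budget its sensing and transmission duty cycle against the predicted intake instead of the instantaneous one.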

    E³Pose: Energy-Efficient Edge-assisted Multi-camera System for Multi-human 3D Pose Estimation

    Multi-human 3D pose estimation plays a key role in establishing a seamless connection between the real world and the virtual world. Recent efforts adopted a two-stage framework that first builds 2D pose estimations in multiple camera views from different perspectives and then synthesizes them into 3D poses. However, the focus has largely been on developing new computer vision algorithms on offline video datasets, without much consideration of the energy constraints in real-world systems with flexibly-deployed and battery-powered cameras. In this paper, we propose an energy-efficient edge-assisted multi-camera system, dubbed E³Pose, for real-time multi-human 3D pose estimation, based on the key idea of adaptive camera selection. Instead of always employing all available cameras to perform 2D pose estimation as in existing works, E³Pose adaptively selects only a subset of cameras, depending on their camera view quality in terms of occlusion and on their energy states, thereby reducing the energy consumption (which translates to extended battery lifetime) and improving the estimation accuracy. To achieve this goal, E³Pose incorporates an attention-based LSTM to predict the occlusion information of each camera view and guide camera selection before cameras are selected to process the images of a scene, and runs a camera selection algorithm based on the Lyapunov optimization framework to make long-term adaptive selection decisions. We build a prototype of E³Pose on a 5-camera testbed, demonstrate its feasibility and evaluate its performance. Our results show that a significant energy saving (up to 31.21%) can be achieved while maintaining 3D pose estimation accuracy comparable to state-of-the-art methods.
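    The core camera-selection idea can be illustrated with a greatly simplified greedy scorer: rank cameras by a combined view-quality/energy score and keep the best k. The actual system uses an attention-based LSTM for occlusion prediction and a Lyapunov-based controller for long-term decisions; the linear score, weights, and camera names below are illustrative assumptions only:

```python
# Simplified sketch of adaptive camera selection: choose the k cameras with
# the best combined view-quality / energy score. This replaces the paper's
# LSTM occlusion predictor and Lyapunov controller with a plain linear score
# for illustration; weights and values are assumed.

def select_cameras(occlusion, battery, k=3, w_view=0.7, w_energy=0.3):
    """occlusion: per-camera predicted occlusion in [0, 1] (lower is better);
    battery: per-camera remaining energy in [0, 1] (higher is better)."""
    scores = {cam: w_view * (1 - occ) + w_energy * battery[cam]
              for cam, occ in occlusion.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

occlusion = {"cam1": 0.1, "cam2": 0.8, "cam3": 0.3, "cam4": 0.2, "cam5": 0.6}
battery   = {"cam1": 0.9, "cam2": 0.9, "cam3": 0.2, "cam4": 0.7, "cam5": 0.8}
print(select_cameras(occlusion, battery))  # ['cam1', 'cam4', 'cam3']
```

Cameras left unselected skip 2D pose estimation for that scene, which is where the energy saving comes from.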

    Occupancy Estimation in Smart Building using Hybrid CO2/Light Wireless Sensor Network

    Smart building, which delivers useful services to residents at the lowest cost and with maximum comfort, has gained increasing attention in recent years. A variety of emerging information technologies have been adopted in modern buildings, such as wireless sensor networks, the Internet of Things, big data analytics, deep machine learning, etc. Most people agree that a smart building should be energy efficient and, consequently, much more affordable to building owners. Building operation accounts for a major portion of energy consumption in the United States. HVAC (heating, ventilating, and air conditioning) equipment is a particularly expensive and energy-consuming part of building operation. As a result, the concept of “demand-driven HVAC control” is currently a growing research topic for smart buildings. In this work, we investigated the issue of building occupancy estimation using a wireless CO2 sensor network. The concentration level of indoor CO2 is a good indicator of the number of room occupants, while protecting the personal privacy of building residents. Once the indoor CO2 level is observed, HVAC equipment is aware of the number of room occupants and can adjust its operation parameters to fit the demands of these occupants. Thus, the desired quality of service is guaranteed with minimum energy dissipation. Excessive running of HVAC fans or pumps is eliminated to conserve energy. Hence, the energy efficiency of the smart building is improved significantly and building operation becomes more intelligent. A wireless sensor network was selected for this study because it is tiny, cost-effective, non-intrusive, easy to install and flexible to configure. In this work, we integrated CO2 and light sensors with a wireless sensor platform from Texas Instruments. Compared with existing occupancy detection methods, our proposed hybrid scheme achieves higher accuracy, while remaining low-cost and non-intrusive. 
Experimental results in an office environment demonstrate full functionality and validate the benefits. This study paves the way for future research, in which a wireless CO2 sensor network is connected to HVAC systems to realize fine-grained, energy-efficient smart buildings.
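    A common way to turn an indoor CO2 reading into an occupant count, in the spirit of the approach above, is a steady-state mass balance: N ≈ Q · (C_in − C_out) / G, where Q is the ventilation rate, C the indoor/outdoor CO2 concentrations, and G the per-person CO2 generation rate. The constants below are typical textbook values, not measurements from this study:

```python
# Illustrative steady-state mass-balance occupancy estimate from CO2.
# N ~= Q * (C_in - C_out) / G. All constants are assumed typical values,
# not parameters from the paper's testbed.

def estimate_occupancy(co2_indoor_ppm,
                       co2_outdoor_ppm=420,        # ambient baseline (ppm)
                       ventilation_m3_per_h=120,   # room ventilation rate Q
                       gen_m3_per_h=0.02):         # CO2 generation G per person
    """Return an estimated occupant count from indoor CO2 concentration."""
    delta_frac = (co2_indoor_ppm - co2_outdoor_ppm) * 1e-6  # ppm -> fraction
    return max(0, round(ventilation_m3_per_h * delta_frac / gen_m3_per_h))

print(estimate_occupancy(920))  # 120 * 500e-6 / 0.02 -> 3 occupants
```

In practice the light sensor readings would be fused with this estimate to disambiguate an empty, well-ventilated room from a briefly occupied one.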

    Human behavioural analysis with self-organizing map for ambient assisted living

    This paper presents a system for automatically classifying the resting location of a moving object in an indoor environment. The system uses an unsupervised neural network (Self-Organising Feature Map, SOFM) fully implemented on a low-cost, low-power automated home-based surveillance system, capable of monitoring the activity levels of elders living independently. The proposed system runs on an embedded platform with a specialised ceiling-mounted video sensor for intelligent activity monitoring. The system has the ability to learn resting locations, to measure overall activity levels and to detect specific events such as potential falls. First-order motion information, including first-order moving-average smoothing, is generated from the 2D image coordinates (trajectories). A novel edge-based object detection algorithm capable of running at a reasonable speed on the embedded platform has been developed. The classification is dynamic and achieved in real time. The dynamic classifier is realised using a SOFM and a probabilistic model. Experimental results show less than 20% classification error, demonstrating the robustness of our approach over others in the literature, with minimal power consumption. The head location of the subject is also estimated by a novel approach capable of running on any resource-limited platform with power constraints.
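    The moving-average smoothing step mentioned above can be sketched as a sliding window over the 2D trajectory points; the window size and sample coordinates are assumptions for illustration:

```python
# Sketch of moving-average smoothing applied to a 2D trajectory of image
# coordinates, as used to generate first-order motion information.
# The window size is an assumed value.

def smooth_trajectory(points, window=3):
    """Causal simple moving average over (x, y) trajectory points."""
    smoothed = []
    for i in range(len(points)):
        lo = max(0, i - window + 1)
        xs = [p[0] for p in points[lo:i + 1]]
        ys = [p[1] for p in points[lo:i + 1]]
        smoothed.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return smoothed

track = [(10, 20), (12, 21), (20, 30), (22, 29)]
print(smooth_trajectory(track))
```

The smoothed trajectory suppresses pixel-level jitter before motion features are fed to the SOFM classifier.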

    Supporting UAVs with Edge Computing: A Review of Opportunities and Challenges

    Over recent years, Unmanned Aerial Vehicles (UAVs) have seen significant advancements in sensor capabilities and computational abilities, allowing for efficient autonomous navigation and visual tracking applications. However, the demand for computationally complex tasks has increased faster than advances in battery technology. This opens up possibilities for improvement through edge computing. In edge computing, edge servers can achieve lower-latency responses than traditional cloud servers through strategic geographic deployment. Furthermore, these servers can maintain superior computational performance compared to UAVs, as they are not limited by battery constraints. By combining these technologies and aiding UAVs with edge servers, research finds measurable improvements in task completion speed, energy efficiency, and reliability across multiple applications and industries. This systematic literature review aims to analyze the current state of research and to collect, select, and extract the key areas where UAV activities can be supported and improved through edge computing.