
    Computation-Communication Trade-offs and Sensor Selection in Real-time Estimation for Processing Networks

    Recent advances in electronics are enabling substantial processing to be performed at each node (robots, sensors) of a networked system. Local processing enables data compression and may mitigate measurement noise, but it is still slower than a central computer (it entails a larger computational delay). However, while nodes can process data in parallel, centralized computation is sequential in nature. On the other hand, if a node sends raw data to a central computer for processing, it incurs communication delay. This leads to a fundamental communication-computation trade-off, in which each node has to decide on the optimal amount of preprocessing in order to maximize the network performance. We consider a network in charge of estimating the state of a dynamical system and provide three contributions. First, we provide a rigorous problem formulation for optimal real-time estimation in processing networks in the presence of delays. Second, we show that, in the case of a homogeneous network (where all sensors have the same computation) monitoring a continuous-time scalar linear system, the optimal amount of local preprocessing maximizing the network estimation performance can be computed analytically. Third, we consider the realistic case of a heterogeneous network monitoring a discrete-time multivariate linear system and provide algorithms to decide on suitable preprocessing at each node, and to select a sensor subset when computational constraints make using all sensors suboptimal. Numerical simulations show that selecting the sensors is crucial. Moreover, we show that if the nodes apply the preprocessing policy suggested by our algorithms, they can largely improve the network estimation performance.
    Comment: 15 pages, 16 figures. Accepted journal version.
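
    The trade-off in the homogeneous scalar case can be illustrated with a small numerical sketch. This is not the paper's algorithm: the way preprocessing shrinks measurement noise and adds delay (the parameters a, q, r0, tau_comm, tau_cpu and the linear models below) is an illustrative assumption; only the scalar variance propagation and the Kalman measurement update are standard.

```python
# Minimal sketch of the delay/noise trade-off for a scalar continuous-time system
# dx = a*x dt + dw monitored through a delayed sensor. The preprocessing model
# (noise shrinks, delay grows with the local-processing fraction p) is assumed.
import numpy as np

def predicted_variance(p0, a, q, tau):
    """Error variance after propagating a posterior variance p0 over a delay tau.
    Solves dP/dt = 2*a*P + q (valid for a != 0)."""
    e = np.exp(2.0 * a * tau)
    return e * p0 + q * (e - 1.0) / (2.0 * a)

def posterior_variance(p_pred, r):
    """Scalar Kalman measurement update with measurement noise variance r."""
    return p_pred * r / (p_pred + r)

a, q = 0.5, 1.0                   # unstable scalar dynamics, process noise intensity
r0, tau_comm, tau_cpu = 4.0, 0.05, 0.4   # assumed sensor noise and delay constants
best = None
for p in np.linspace(0.0, 1.0, 101):
    r = r0 * (1.0 - 0.8 * p)                   # noise mitigated by preprocessing
    tau = tau_comm * (1.0 - p) + tau_cpu * p   # total delay before the data is fused
    p_post = posterior_variance(1.0, r)        # prior variance 1.0 at measurement time (arbitrary)
    p_now = predicted_variance(p_post, a, q, tau)  # variance once the stale update reaches "now"
    if best is None or p_now < best[1]:
        best = (p, p_now)
print("best preprocessing fraction ~ %.2f, error variance %.3f" % best)
```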

    PMU-Based ROCOF Measurements: Uncertainty Limits and Metrological Significance in Power System Applications

    In modern power systems, the Rate-of-Change-of-Frequency (ROCOF) may be widely employed in Wide Area Monitoring, Protection and Control (WAMPAC) applications. However, a standard approach towards ROCOF measurements is still missing. In this paper, we investigate the feasibility of deploying Phasor Measurement Units (PMUs) in ROCOF-based applications, with a specific focus on Under-Frequency Load-Shedding (UFLS). For this analysis, we select three state-of-the-art window-based synchrophasor estimation algorithms and compare different signal models, ROCOF estimation techniques and window lengths on datasets inspired by real-world acquisitions. In this way, we are able to carry out a sensitivity analysis of the behavior of a PMU-based UFLS control scheme. Based on the presented results, PMUs prove to be accurate ROCOF meters, as long as the harmonic and inter-harmonic distortion within the measurement pass-bandwidth is limited. In the presence of transient events, the synchrophasor model loses its appropriateness, as the signal energy spreads over the entire spectrum and cannot be approximated as a sequence of narrow-band components. Finally, we validate the actual feasibility of PMU-based UFLS in a real-time simulated scenario, where we compare two different ROCOF estimation techniques with a frequency-based control scheme and show their impact on successful grid restoration.
    Comment: Manuscript IM-18-20133R. Accepted for publication in IEEE Transactions on Instrumentation and Measurement (acceptance date: 9 March 2019).
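
    As a rough illustration of window-based ROCOF estimation (not one of the three estimators compared in the paper), the sketch below fits a quadratic to the unwrapped synchrophasor phase over a short window and reads off the frequency deviation and ROCOF at the window centre; the window length, reporting rate and test ramp are assumed values.

```python
# Quadratic least-squares fit of the synchrophasor phase: frequency is the first
# derivative of phase / 2*pi, ROCOF the second derivative / 2*pi.
import numpy as np

def freq_rocof(phase_rad, fs):
    """Fit phi(t) ~ c0 + c1*t + c2*t^2 over the window; return the frequency
    deviation (Hz) and ROCOF (Hz/s) evaluated at the window centre."""
    n = len(phase_rad)
    t = (np.arange(n) - (n - 1) / 2.0) / fs      # time axis centred on the window
    c2, c1, _ = np.polyfit(t, np.unwrap(phase_rad), 2)
    return c1 / (2.0 * np.pi), 2.0 * c2 / (2.0 * np.pi)

# Synthetic test: a 50 Hz system whose frequency ramps at 0.5 Hz/s, with phasors
# reported at 50 frames/s over a ~0.5 s window.
fs, ramp = 50.0, 0.5
t_abs = np.arange(25) / fs
phase = 2.0 * np.pi * (0.5 * ramp * t_abs**2)    # phase deviation = integral of the ramp
df, rocof = freq_rocof(phase, fs)
print("frequency deviation %.3f Hz, ROCOF %.3f Hz/s" % (df, rocof))  # ~0.12 Hz, ~0.5 Hz/s
```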

    Predictive Duty Cycle Adaptation for Wireless Camera Networks

    Wireless sensor networks (WSNs) typically employ dynamic duty cycle schemes to efficiently handle different patterns of communication traffic in the network. However, existing duty cycling approaches are not suitable for event-driven WSNs, in particular camera-based networks designed to track humans and objects. A characteristic feature of such networks is the spatially-correlated bursty traffic that occurs in the vicinity of potentially highly mobile objects. In this paper, we propose a concept of indirect sensing in the MAC layer of a wireless camera network and an active duty cycle adaptation scheme based on a Kalman filter that continuously predicts and updates the location of the object that triggers bursty communication traffic in the network. This prediction allows the camera nodes to alter their communication protocol parameters prior to the actual increase in communication traffic. Our simulations demonstrate that our active adaptation strategy outperforms TMAC not only in terms of energy efficiency and communication latency, but also in terms of TIBPEA, a QoS metric for event-driven WSNs.
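
    A minimal sketch of the underlying idea, not the paper's protocol: a constant-velocity Kalman filter predicts the tracked object's position one step ahead, and each camera node switches to a higher duty cycle when the prediction falls inside its coverage radius. The noise covariances, node positions, coverage radius and duty-cycle levels are assumed for illustration.

```python
# Constant-velocity Kalman filter driving a toy duty-cycle decision per camera node.
import numpy as np

dt = 0.5
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.05 * np.eye(4)                      # process noise (assumed)
R = 0.5 * np.eye(2)                       # measurement noise (assumed)
x, P = np.zeros(4), np.eye(4) * 10.0      # state [px, py, vx, vy] and covariance

def kf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                     # predict
    S = H @ P @ H.T + R                               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x = x + K @ (z - H @ x)                           # update with detected position z
    P = (np.eye(4) - K @ H) @ P
    return x, P

nodes = {"cam_A": np.array([2.0, 0.0]), "cam_B": np.array([8.0, 0.0])}
for k, z in enumerate(np.array([0.5 * i, 0.1]) for i in range(12)):
    x, P = kf_step(x, P, z)
    x_pred = F @ x                                    # one-step-ahead prediction
    duty = {name: ("HIGH" if np.linalg.norm(x_pred[:2] - pos) < 3.0 else "low")
            for name, pos in nodes.items()}           # assumed 3.0 coverage radius
    print("step", k, duty)
```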

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
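
    Two common ways of turning an event stream into something a frame-based algorithm can digest are accumulating polarity-signed events over a short time window and building a per-pixel time surface of the latest timestamps. The toy sketch below (with synthetic events, not taken from the survey) shows both representations.

```python
# Toy event-stream representations: an accumulated polarity frame and a time surface.
import numpy as np

H, W = 4, 6
# Each event is (t [s], x, y, polarity in {-1, +1}); these few events are synthetic.
events = [(0.001, 1, 2, +1), (0.002, 1, 2, +1), (0.004, 3, 0, -1), (0.009, 5, 3, +1)]

def accumulate(events, t0, t1):
    """Sum event polarities per pixel over the window [t0, t1)."""
    frame = np.zeros((H, W), dtype=np.int32)
    for t, x, y, p in events:
        if t0 <= t < t1:
            frame[y, x] += p
    return frame

def time_surface(events):
    """Timestamp of the most recent event at each pixel (NaN where none occurred)."""
    ts = np.full((H, W), np.nan)
    for t, x, y, _ in events:
        ts[y, x] = t
    return ts

print(accumulate(events, 0.0, 0.005))
print(time_surface(events))
```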

    Low latency vision-based control for robotics : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Mechatronics at Massey University, Manawatu, New Zealand

    In this work, the problem of controlling a high-speed dynamic tracking and interception system using computer vision as the measurement unit was explored. High-speed control systems alone present many challenges, and these challenges are compounded when combined with the high volume of data processing required by computer vision systems. A semi-automated foosball table was chosen as the test-bed system because it combines all the challenges associated with a vision-based control system into a single platform. While computer vision is extremely useful and can solve many problems, it can also introduce challenges such as latency, the need for lens and spatial calibration, potentially high power consumption, and high cost. The objective of this work was to explore how to implement computer vision as the measurement unit in a high-speed controller, while minimising latencies caused by the vision itself, communication interfaces, data processing/strategy, instruction execution, and actuator control. Another objective was to implement the solution in a single low-latency, low-power, low-cost embedded system. A field programmable gate array (FPGA) system on chip (SoC), which combines programmable digital logic with a dual-core ARM processor (HPS) on the same chip, was hypothesised to be capable of running the described vision-based control system. The FPGA was used to perform streamed image pre-processing, concurrent stepper motor control and provide communication channels for user input, while the HPS performed the lens distortion mapping, intercept calculation and "strategy" control tasks, as well as controlling the overall function of the system. Individual vision systems were compared for latency performance. Interception performance of the semi-automated foosball table was then tested for straight, moderate-speed shots with limited view time; latency was then artificially added to the system, and the interception results for the same centre-field shot were recorded for a variety of added latencies. The FPGA-based system performed the best in both steady-state latency and novel event detection latency tests. The developed stepper motor control modules performed well in terms of speed, smoothness, resource consumption, and versatility. They are capable of constant velocity, constant acceleration and variable acceleration profiles, as well as being completely parameterisable. The interception modules on the foosball table achieved a 100% interception rate, with a 95% confidence interval and a reliability of 98.4%. As artificial latency was added to the system, the performance dropped in terms of the overall number of successful intercepts. The decrease was roughly linear, with a 60% reduction in performance caused by 100 ms of added latency, and performance dropped to 0% successful intercepts when 166 ms of latency was added. The implications of this work are that FPGA SoC technology may, in future, enable computer vision to be used as a general-purpose, high-speed measurement system for a wide variety of control problems.
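
    For a straight, constant-velocity shot, the interception geometry reduces to a simple prediction. The sketch below (not the thesis implementation) computes where the ball crosses the defending rod and treats the known end-to-end latency as a deadline the command must beat; the rod position, ball state and 30 ms latency figure are illustrative numbers.

```python
# Latency-aware intercept prediction for a straight, constant-velocity shot.
def intercept_target(ball_pos, ball_vel, rod_x, latency_s):
    """Return the y position the rod player should move to, or None if the ball
    is not travelling towards the rod or the command cannot take effect in time."""
    px, py = ball_pos
    vx, vy = ball_vel
    if vx <= 0 or rod_x <= px:
        return None
    t_hit = (rod_x - px) / vx          # time until the ball reaches the rod
    if t_hit < latency_s:
        return None                    # total vision + comms + actuation delay too large
    return py + vy * t_hit             # y of the ball when it crosses the rod

# Example: ball at (0.2 m, 0.3 m) moving at (1.5, -0.2) m/s, rod at x = 0.9 m,
# 30 ms of assumed end-to-end latency.
print(intercept_target((0.2, 0.3), (1.5, -0.2), rod_x=0.9, latency_s=0.030))
```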

    A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning

    Automatic decision-making approaches, such as reinforcement learning (RL), have been applied to (partially) solve the resource allocation problem adaptively in cloud computing systems. However, a complete cloud resource allocation framework exhibits high dimensions in state and action spaces, which prohibit the usefulness of traditional RL techniques. In addition, high power consumption has become one of the critical concerns in the design and control of cloud computing systems, as it degrades system reliability and increases cooling cost. An effective dynamic power management (DPM) policy should minimize power consumption while keeping performance degradation within an acceptable level. Thus, a joint virtual machine (VM) resource allocation and power management framework is critical to the overall cloud computing system. Moreover, a novel solution framework is necessary to address the even higher dimensions in state and action spaces. In this paper, we propose a novel hierarchical framework for solving the overall resource allocation and power management problem in cloud computing systems. The proposed hierarchical framework comprises a global tier for VM resource allocation to the servers and a local tier for distributed power management of local servers. The emerging deep reinforcement learning (DRL) technique, which can deal with complicated control problems with large state spaces, is adopted to solve the global tier problem. Furthermore, an autoencoder and a novel weight sharing structure are adopted to handle the high-dimensional state space and accelerate the convergence speed. On the other hand, the local tier of distributed server power management comprises an LSTM-based workload predictor and a model-free RL-based power manager, operating in a distributed manner.
    Comment: Accepted by the 37th IEEE International Conference on Distributed Computing Systems (ICDCS 2017).
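
    The local-tier idea can be sketched far more simply than the paper's LSTM-plus-DRL design: below, a toy tabular Q-learner chooses between sleeping and staying active based on a discretized predicted-workload level. The reward weights, workload trace and discretization are assumptions made only for illustration.

```python
# Toy tabular Q-learning power manager: trade off power cost against latency cost.
import random

ACTIONS = ("sleep", "active")
LEVELS = 4                                   # discretized predicted-workload levels
Q = {(s, a): 0.0 for s in range(LEVELS) for a in ACTIONS}
alpha, gamma, eps = 0.2, 0.9, 0.1

def reward(level, action):
    # Penalise power when active and latency when sleeping under load (assumed weights).
    power_cost = 1.0 if action == "active" else 0.1
    latency_cost = 2.0 * level if action == "sleep" else 0.0
    return -(power_cost + latency_cost)

random.seed(0)
workload = [random.randrange(LEVELS) for _ in range(5000)]   # toy workload trace
for t in range(len(workload) - 1):
    s, s_next = workload[t], workload[t + 1]
    a = random.choice(ACTIONS) if random.random() < eps \
        else max(ACTIONS, key=lambda act: Q[(s, act)])       # epsilon-greedy choice
    target = reward(s, a) + gamma * max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += alpha * (target - Q[(s, a)])                # Q-learning update

for s in range(LEVELS):
    print("workload level", s, "->", max(ACTIONS, key=lambda act: Q[(s, act)]))
```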