122,560 research outputs found

    Event-based State Estimation: An Emulation-based Approach

    An event-based state estimation approach for reducing communication in a networked control system is proposed. Multiple distributed sensor agents observe a dynamic process and sporadically transmit their measurements to estimator agents over a shared bus network. Local event-triggering protocols ensure that data is transmitted only when necessary to meet a desired estimation accuracy. The event-based design is shown to emulate the performance of a centralised state observer design up to guaranteed bounds, but with reduced communication. The stability results for state estimation are extended to the distributed control system that results when the local estimates are used for feedback control. Results from numerical simulations and hardware experiments illustrate the effectiveness of the proposed approach in reducing network communication. (Comment: 21 pages, 8 figures; this article is based on the technical report arXiv:1511.05223 and has been accepted for publication in IET Control Theory & Applications.)
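    As an illustration of the kind of event-triggering logic the abstract describes, the sketch below shows a simple send-on-delta trigger for a linear system: the sensor agent transmits only when its measurement deviates from the model-based prediction by more than a threshold, and the estimator otherwise runs an open-loop prediction. The system matrices, observer gain, and threshold are placeholder assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical linear process and sensor model (x_{k+1} = A x_k, y_k = C x_k).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.1]])          # assumed observer gain
delta = 0.05                   # event-triggering threshold (design parameter)

def sensor_trigger(x_hat, y_k):
    """Transmit only when the local prediction error exceeds delta."""
    return np.linalg.norm(y_k - C @ x_hat) > delta

def estimator_step(x_hat, y_k=None):
    """Model-based prediction; correct with gain L only when data arrives."""
    x_hat = A @ x_hat
    if y_k is not None:
        x_hat = x_hat + L @ (y_k - C @ x_hat)
    return x_hat

# Illustrative use: one time step at a sensor/estimator pair.
x_hat = np.zeros(2)
y_k = np.array([0.2])
x_hat = estimator_step(x_hat, y_k if sensor_trigger(x_hat, y_k) else None)
```

    The threshold delta trades estimation accuracy against network traffic: a smaller delta tightens the guaranteed bound but triggers more transmissions.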

    Resource-aware IoT Control: Saving Communication through Predictive Triggering

    The Internet of Things (IoT) interconnects multiple physical devices in large-scale networks. When the 'things' coordinate decisions and act collectively on shared information, feedback is introduced between them. Multiple feedback loops are thus closed over a shared, general-purpose network. Traditional feedback control is unsuitable for the design of IoT control because it relies on high-rate periodic communication and is ignorant of the shared network resource. Therefore, recent event-based estimation methods are applied herein for resource-aware IoT control, allowing agents to decide online whether or not communication with other agents is needed. While this can reduce network traffic significantly, a severe limitation of typical event-based approaches is the need for instantaneous triggering decisions that leave no time to reallocate freed resources (e.g., communication slots), which hence remain unused. To address this problem, novel predictive and self-triggering protocols are proposed herein. From a unified Bayesian decision framework, two schemes are developed: self triggers that predict, at the current triggering instant, the next one; and predictive triggers that check, at every time step, whether communication will be needed at a given prediction horizon. The suitability of these triggers for feedback control is demonstrated in hardware experiments on a cart-pole, and scalability is discussed with a multi-vehicle simulation. (Comment: 16 pages, 15 figures; accepted article to appear in IEEE Internet of Things Journal. arXiv admin note: text overlap with arXiv:1609.0753.)
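    The sketch below illustrates, under assumed scalar dynamics and an assumed variance threshold that are not taken from the paper, the difference between the two schemes: a predictive trigger decides now whether communication will be needed M steps ahead, while a self trigger schedules the next communication instant at the current one.

```python
# Illustrative sketch, not the paper's exact algorithm: triggering decisions
# based on the predicted open-loop error variance of an assumed scalar system.
a, q = 0.98, 0.1       # assumed dynamics x_{k+1} = a x_k + noise (variance q)
threshold = 1.0        # allowed error variance before communication is needed

def predicted_variance(var0, m):
    """Open-loop error variance m steps ahead without any communication."""
    var = var0
    for _ in range(m):
        var = a**2 * var + q
    return var

def predictive_trigger(var0, M):
    """Predictive trigger: decide now whether communication will be needed
    M steps from now, leaving time to reallocate the freed slot."""
    return predicted_variance(var0, M) > threshold

def self_trigger(var0, max_horizon=100):
    """Self trigger: at the current communication instant, predict the next
    one as the first step at which the variance exceeds the threshold."""
    for m in range(1, max_horizon + 1):
        if predicted_variance(var0, m) > threshold:
            return m
    return max_horizon

print(predictive_trigger(var0=0.2, M=10), self_trigger(var0=0.2))
```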

    Detecting malicious data injections in event detection wireless sensor networks


    Event-triggered Learning

    The efficient exchange of information is an essential aspect of intelligent collective behavior. Event-triggered control and estimation achieve some efficiency by replacing continuous data exchange between agents with intermittent, or event-triggered, communication. Typically, model-based predictions are used at times of no data transmission, and updates are sent only when the prediction error grows too large. The effectiveness in reducing communication thus strongly depends on the quality of the prediction model. In this article, we propose event-triggered learning as a novel concept to reduce communication even further and to also adapt to changing dynamics. By monitoring the actual communication rate and comparing it to the one that is induced by the model, we detect a mismatch between model and reality and trigger model learning when needed. Specifically, for linear Gaussian dynamics, we derive different classes of learning triggers solely based on a statistical analysis of inter-communication times and formally prove their effectiveness with the aid of concentration inequalities.
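    A minimal sketch of the idea, with an assumed Hoeffding-style bound standing in for the paper's concentration-inequality analysis: compare the empirical mean inter-communication time against the one implied by the current model, and trigger model learning when the deviation exceeds the confidence bound. Parameter names and the form of the bound are illustrative assumptions.

```python
import numpy as np

def learning_trigger(observed_times, expected_time, t_max, confidence=0.95):
    """Return True if the empirical mean inter-communication time deviates
    from the model-induced one by more than a Hoeffding-style bound
    (inter-communication times assumed to lie in [0, t_max])."""
    n = len(observed_times)
    if n == 0:
        return False
    delta = 1.0 - confidence
    eps = t_max * np.sqrt(np.log(2.0 / delta) / (2.0 * n))
    return abs(np.mean(observed_times) - expected_time) > eps

# Hypothetical usage: times between transmissions observed online, versus the
# expectation implied by the current prediction model.
print(learning_trigger(observed_times=[3, 4, 2, 5, 3, 4],
                       expected_time=8.0, t_max=20))
```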

    First upper limits from LIGO on gravitational wave bursts

    We report on a search for gravitational wave bursts using data from the first science run of the LIGO detectors. Our search focuses on bursts with durations ranging from 4 ms to 100 ms, and with significant power in the LIGO sensitivity band of 150 to 3000 Hz. We bound the rate for such detected bursts at less than 1.6 events per day at 90% confidence level. This result is interpreted in terms of the detection efficiency for ad hoc waveforms (Gaussians and sine-Gaussians) as a function of their root-sum-square strain h_{rss}; typical sensitivities lie in the range h_{rss} ~ 10^{-19} - 10^{-17} strain/√Hz, depending on waveform. We discuss improvements in the search method that will be applied to future science data from LIGO and other gravitational wave detectors. (Comment: 21 pages, 15 figures; accepted by Phys. Rev. D. Fixed a few small typos and updated a few references.)
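    For context, the root-sum-square strain used above to characterise sensitivity is conventionally defined as follows (the standard convention in LIGO burst searches, stated here as background rather than quoted from the paper); its units of strain/√Hz match those given in the abstract.

```latex
% Root-sum-square strain amplitude of a burst waveform with polarisations h_+ and h_x:
h_{\mathrm{rss}} \;=\; \sqrt{\int \left( \left|h_{+}(t)\right|^{2} + \left|h_{\times}(t)\right|^{2} \right) \, dt}
```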