
    Knowledge-infused and Consistent Complex Event Processing over Real-time and Persistent Streams

    Emerging applications in the Internet of Things (IoT) and Cyber-Physical Systems (CPS) present novel challenges to Big Data platforms for performing online analytics. Ubiquitous sensors in IoT deployments generate data streams at high velocity that include information from a variety of domains and accumulate to large volumes on disk. Complex Event Processing (CEP) is recognized as an important real-time computing paradigm for analyzing continuous data streams. However, existing work on CEP is largely limited to relational query processing, exposing two distinctive gaps in query specification and execution: (1) infusing the relational query model with higher-level knowledge semantics, and (2) seamless query evaluation across temporal spaces that span past, present and future events. Addressing these gaps enables accessible analytics over data streams whose properties come from different disciplines, and helps span the velocity (real-time) and volume (persistent) dimensions. In this article, we introduce a Knowledge-infused CEP (X-CEP) framework that provides domain-aware knowledge query constructs along with temporal operators that allow end-to-end queries to span real-time and persistent streams. We translate this query model into efficient query execution over online and offline data streams, proposing several optimizations to mitigate the overheads introduced by evaluating semantic predicates and by accessing high-volume historic data streams. The proposed X-CEP query model and execution approaches are implemented in our prototype semantic CEP engine, SCEPter. We validate our query model using domain-aware CEP queries from a real-world Smart Power Grid application, and experimentally analyze the benefits of our optimizations for executing these queries, using event streams from a campus-microgrid IoT deployment.
    Comment: 34 pages, 16 figures, accepted in Future Generation Computer Systems, October 27, 201
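
    The abstract does not spell out X-CEP's query syntax, but the core idea of combining a knowledge (semantic) predicate with a temporal window over a stream can be illustrated with a minimal, self-contained Python sketch. Everything below (the SENSOR_ONTOLOGY mapping, the is_hvac predicate, match_overload, and the thresholds) is hypothetical and merely stands in for what SCEPter would resolve against a real knowledge base:

```python
from collections import deque
from datetime import datetime, timedelta

# Hypothetical domain knowledge: raw sensor IDs mapped to concepts, standing in
# for the semantic annotations that X-CEP would resolve against a knowledge base.
SENSOR_ONTOLOGY = {
    "s1": "HVAC/Chiller",
    "s2": "HVAC/AirHandler",
    "s3": "Lighting/Floor2",
}

def is_hvac(sensor_id):
    """Semantic predicate: does this sensor belong to the HVAC subsystem?"""
    return SENSOR_ONTOLOGY.get(sensor_id, "").startswith("HVAC/")

def match_overload(events, window=timedelta(minutes=15), threshold_kw=5.0):
    """Detect a complex event: two distinct HVAC sensors exceed threshold_kw
    within one sliding time window. `events` is an iterable of
    (timestamp, sensor_id, kW) tuples in timestamp order; it may interleave
    replayed historical tuples with live ones."""
    buffer = deque()
    for ts, sensor_id, kw in events:
        # Evict matches that have fallen outside the temporal window.
        while buffer and ts - buffer[0][0] > window:
            buffer.popleft()
        if is_hvac(sensor_id) and kw > threshold_kw:
            buffer.append((ts, sensor_id, kw))
        if len({s for _, s, _ in buffer}) >= 2:
            yield list(buffer)

# Usage: a replayed (historical) event followed by "live" events.
t0 = datetime(2016, 10, 1, 12, 0)
stream = [
    (t0, "s1", 6.2),                          # historical reading
    (t0 + timedelta(minutes=5), "s3", 7.0),   # non-HVAC, filtered by the predicate
    (t0 + timedelta(minutes=8), "s2", 5.5),   # live reading
]
for complex_event in match_overload(stream):
    print("overload pattern:", complex_event)
```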

    Combined Scheduling of Time-Triggered Plans and Priority Scheduled Task Sets

    © Owner/Author (2016). This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM SIGAda Ada Letters, 36(1), 68-76, http://dx.doi.org/10.1145/2971571.2971580.
    Preemptive, priority-based scheduling on the one hand, and time-triggered scheduling on the other, are the two major techniques used to develop real-time and embedded software. Each has its advantages and drawbacks with respect to the other, and they are commonly adopted to the exclusion of one another. In a previous paper, we proposed a software architecture that enables the combined and controlled execution of time-triggered plans and priority-scheduled tasks. The goal was to take advantage of the best of both approaches by providing deterministic, jitter-controlled execution of time-triggered tasks (e.g., control tasks) coexisting with a set of priority-scheduled tasks that have less demanding jitter requirements. In this paper, we briefly describe the approach, in which the time-triggered plan is executed at the highest priority level, controlled by scheduling decisions taken only at particular points in time, signalled by recurrent timing events. The remaining priority levels are used by a set of concurrent tasks scheduled by static or dynamic priorities. We also discuss several open issues, such as schedulability analysis, use of the approach in multiprocessor architectures, usability in mixed-criticality systems, and the changes needed to make this approach Ravenscar-compliant.
    This work has been partly supported by the Spanish Government’s project M2C2 (TIN2014-56158-C4-1-P-AR) and the European Commission’s project EMC2 (ARTEMIS-JU Call 2013 AIPP-5, Contract 621429).
    Real Sáez, J. V.; Sáez Barona, S.; Crespo Lorente, A. (2016). Combined Scheduling of Time-Triggered Plans and Priority Scheduled Task Sets. Ada Letters, 36(1), 68-76. https://doi.org/10.1145/2971571.2971580
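
    The paper's framework is built in Ada, but the scheduling idea it describes, a time-triggered plan executed at the highest priority with scheduling decisions taken only at recurrent timing events while fixed-priority tasks fill the remaining time, can be sketched as a tick-based simulation. This is an illustrative stand-in, not the authors' implementation; the plan and task parameters below are invented:

```python
# Time-triggered plan: (release time, duration) of slots that must run with
# minimal jitter, executed here at the highest priority.
TT_PLAN = [(0, 2), (10, 3), (20, 2)]

# Priority-scheduled tasks: a lower "prio" value means a higher priority.
TASKS = [
    {"id": "T1", "release": 1, "remaining": 5, "prio": 2},
    {"id": "T2", "release": 4, "remaining": 4, "prio": 1},
]

def simulate(horizon=30):
    """One processor, unit time slices: the TT plan preempts everything,
    otherwise the highest-priority released task runs."""
    plan = list(TT_PLAN)
    tt_left = 0
    trace = []
    for t in range(horizon):
        # Recurrent timing event: start the next time-triggered slot when due.
        if plan and tt_left == 0 and t >= plan[0][0]:
            tt_left = plan.pop(0)[1]
        if tt_left > 0:
            trace.append((t, "TT"))   # the plan runs at the highest priority level
            tt_left -= 1
            continue
        ready = [x for x in TASKS if x["release"] <= t and x["remaining"] > 0]
        if ready:
            job = min(ready, key=lambda x: x["prio"])   # fixed-priority choice
            job["remaining"] -= 1
            trace.append((t, job["id"]))
        else:
            trace.append((t, "idle"))
    return trace

if __name__ == "__main__":
    print(simulate())
```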

    On-line monitoring of water distribution networks

    This thesis is concerned with the development of a computer-based, real-time monitoring scheme which is a prerequisite of any form of on-line control. A new concept in the field of water distribution systems, water system state estimation, is introduced. Its function is to process redundant, noise-corrupted telemeasurements in order to supply a real-time database with reliable estimates of the current state and structure of the network. The information provided by the estimator can then be used in a number of on-line programs. In view of the strong nonlinearity of the network equations, two methods of state estimation, which have enhanced numerical stability, are examined in this thesis. The first method uses an augmented matrix formulation of a classical least-squares problem, and the second is based on a least absolute value solution of an overdetermined set of equations. Two water systems, one of which is a realistic 34-node network, are used to evaluate the performance of the proposed methods. The problem of bad data processing and its extension to the validation of network topology and leakage detection is also examined. It is shown that the method based on least absolute value estimation provides a more immediate indication of erroneous measurements. In addition, this method demonstrates the useful feature of eliminating the effects of gross errors on the final state estimate. The important question of water system observability is then studied. Two original combinatorial methods are proposed to check topological observability. The first one is an indirect technique which searches for a maximum measurement-to-branch matching and then attempts to build a spanning tree of the network graph using only the branches with a measurement assignment. The second method is a direct search for an observable spanning tree. A number of systems are used to test both techniques, including a 34-node water supply network and an IEEE 118-bus power system. The problem of minimisation of distributed leakages is solved efficiently using a state estimation technique. Comparison of the head profile achieved for the calculated optimal valve controls with the standard operating conditions for a 25-node network indicates a major reduction of the volume of leakages. In the final part of this thesis a software package, which simulates the real-time operation of a water distribution system, is described. The programs are designed in such a way that, by replacing simulated measurements with live telemetry data, they can be directly used for water network monitoring and control.
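
    The contrast drawn above between least-squares and least-absolute-value estimation for an overdetermined measurement model can be seen in a small numerical experiment. This is a toy stand-in for the thesis's water-network equations, not its actual solver: the measurement matrix, noise level, injected gross error, and the iteratively reweighted least-squares routine used here to approximate the L1 solution are all illustrative choices:

```python
import numpy as np

# Toy overdetermined measurement model z = H x + noise, standing in for the
# water-network equations; H, the noise level and the gross error are made up.
rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0])
H = rng.normal(size=(12, 2))                  # 12 measurements, 2 state variables
z = H @ x_true + 0.01 * rng.normal(size=12)
z[3] += 5.0                                   # inject one gross measurement error

# Classical least-squares estimate (pulled off course by the gross error).
x_ls, *_ = np.linalg.lstsq(H, z, rcond=None)

def lav(H, z, iters=50, eps=1e-6):
    """Least-absolute-value estimate via iteratively reweighted least squares,
    a simple stand-in for the LAV solver discussed in the thesis."""
    x = np.linalg.lstsq(H, z, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(z - H @ x), eps)    # weights ~ 1 / |residual|
        x = np.linalg.solve(H.T @ (w[:, None] * H), H.T @ (w * z))
    return x

x_lav = lav(H, z)
print("true  :", x_true)
print("L2 fit:", x_ls)     # distorted by the bad datum
print("L1 fit:", x_lav)    # largely rejects the gross error
```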

    Initial conditions, Discreteness and non-linear structure formation in cosmology

    In this lecture we address three different but related aspects of the initial continuous fluctuation field in standard cosmological models. Firstly we discuss the properties of the so-called Harrison-Zeldovich-like spectra. This power spectrum is a fundamental feature of all current standard cosmological models. In a simple classification of all stationary stochastic processes into three categories, we highlight with the name "super-homogeneous" the properties of the class to which models like this, with P(0)=0, belong. In statistical physics language they are well described as glass-like. Secondly, the initial continuous density field with such small-amplitude correlated Gaussian fluctuations must be discretised in order to set up the initial particle distribution used in gravitational N-body simulations. We discuss the main issues related to the effects of discretisation, particularly concerning the effect of particle-induced fluctuations on the statistical properties of the initial conditions and on the dynamical evolution of gravitational clustering.
    Comment: 28 pages, 1 figure, to appear in Proceedings of the 9th Course on Astrofundamental Physics, International School D. Chalonge, Kluwer, eds. N.G. Sanchez and Y.M. Pariiski, uses crckapb.st
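
    For readers unfamiliar with the term, the defining property of the "super-homogeneous" class mentioned above can be stated compactly; the notation for the mass variance below is chosen here rather than taken from the lecture:

```latex
% Defining property of a super-homogeneous distribution: the power spectrum
% vanishes at k = 0, equivalently the two-point correlation function
% integrates to zero over all space.
\[
  P(0) \;=\; \lim_{k \to 0} P(k) \;=\; \int \xi(\mathbf{r})\, \mathrm{d}^3 r \;=\; 0 .
\]
% Consequence: the normalized mass variance in spheres of radius R,
% \sigma^2(R) = \langle \Delta N(R)^2 \rangle / \langle N(R) \rangle^2,
% decays faster than the Poisson (uncorrelated) scaling \sigma^2(R) \propto R^{-3}.
```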

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
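
    The sensor-failure example above is exactly the kind of property a probabilistic model checker would evaluate. A minimal Python sketch, not tied to PRISM or any other tool, shows the underlying computation: the probability of eventually reaching a failure state in a small discrete-time Markov chain is obtained by solving a linear system and then compared against the required bound. All states and transition probabilities below are made up for illustration:

```python
import numpy as np

# States of a toy two-sensor model (all probabilities invented):
# 0 = both sensors OK, 1 = one sensor failed, 2 = both failed (absorbing),
# 3 = system retired/replaced before failing (absorbing).
P = np.array([
    [0.989, 0.010, 0.000, 0.001],
    [0.300, 0.649, 0.050, 0.001],
    [0.000, 0.000, 1.000, 0.000],
    [0.000, 0.000, 0.000, 1.000],
])
target = 2            # "both sensors failed"
transient = [0, 1]

# Reachability probabilities x over the transient states satisfy x = A x + b,
# where A is P restricted to the transient states and b holds the one-step
# probabilities of jumping straight into the target state.
A = P[np.ix_(transient, transient)]
b = P[np.ix_(transient, [target])].ravel()
x = np.linalg.solve(np.eye(len(transient)) - A, b)

p_fail = x[0]         # probability of eventually reaching "both failed" from OK
print(f"P(eventually both sensors fail | start OK) = {p_fail:.4f}")
print("meets the bound < 0.001:", p_fail < 0.001)
```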