
    StreamLearner: Distributed Incremental Machine Learning on Event Streams: Grand Challenge

    Today, massive amounts of streaming data from smart devices need to be analyzed automatically to realize the Internet of Things. The Complex Event Processing (CEP) paradigm promises low-latency pattern detection on event streams. However, CEP systems need to be extended with Machine Learning (ML) capabilities such as online training and inference in order to detect fuzzy patterns (e.g., outliers) and to improve pattern recognition accuracy at runtime using incremental model training. In this paper, we propose a distributed CEP system, denoted StreamLearner, for ML-enabled complex event detection. The proposed programming model and data-parallel system architecture enable a wide range of real-world applications and allow for dynamically scaling system resources up and out for low-latency, high-throughput event processing. We show that the DEBS Grand Challenge 2017 case study (i.e., anomaly detection in smart factories) integrates seamlessly into the StreamLearner API. Our experiments verify the scalability and high event throughput of StreamLearner.
    Comment: Christian Mayer, Ruben Mayer, and Majd Abdo. 2017. StreamLearner: Distributed Incremental Machine Learning on Event Streams: Grand Challenge. In Proceedings of the 11th ACM International Conference on Distributed and Event-based Systems (DEBS '17), 298-30
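    The StreamLearner API itself is not reproduced here, but the core idea of combining inference with incremental training on a stream can be sketched as follows; the detector below (a running mean/variance outlier check via Welford's algorithm) is an illustrative stand-in, not the paper's model.

```python
# Hedged sketch: scores each event against the model trained so far,
# then folds the event into the model (online training step).

class OnlineAnomalyDetector:
    """Incrementally trained detector; flags events far from the running mean."""

    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations (Welford)
        self.threshold = threshold

    def update_and_score(self, x):
        """Return True if x is anomalous under the current model, then train on x."""
        anomalous = False
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(x - self.mean) > self.threshold * std:
                anomalous = True
        # incremental model update
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

detector = OnlineAnomalyDetector()
stream = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 50.0]
flags = [detector.update_and_score(x) for x in stream]
```

    Because training and scoring are interleaved per event, the detector adapts during runtime without a separate batch-training phase, which is the property the abstract emphasizes.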

    Event Recognition Using Signal Spectrograms in Long Pulse Experiments

    As discharge duration increases, real-time complex analysis of the signal becomes more important. In this context, data acquisition and processing systems must provide models for designing experiments that use event-oriented plasma control. One example of advanced data analysis is signal classification. The off-line statistical analysis of a large number of discharges provides information to develop algorithms for determining plasma parameters from measurements of magnetohydrodynamic waves, for example, to detect density fluctuations induced by Alfvén cascades using morphological patterns. The need to apply different algorithms to the signals, and to select subsequent processing algorithms based on previous results, necessitates an event-based experiment. The Intelligent Test and Measurement System platform is an example of an architecture designed to implement distributed data acquisition and real-time processing systems. The processing algorithm sequence is modeled using an event-based paradigm. The adaptive capacity of this model is based on logic defined through state machines in SCXML. The Intelligent Test and Measurement System platform mixes a local multiprocessing model with a distributed deployment of services based on Jini.
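    The SCXML-driven sequencing described above amounts to an event-driven state machine. A minimal sketch of that idea is below; the states and events are invented for illustration and do not come from the platform itself.

```python
# Illustrative sketch: a table-driven state machine where incoming events
# select the next processing stage, in the spirit of SCXML-defined logic.

class EventStateMachine:
    def __init__(self, transitions, initial):
        self.transitions = transitions   # {(state, event): next_state}
        self.state = initial

    def dispatch(self, event):
        # unknown events leave the state unchanged
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Hypothetical experiment workflow: acquisition feeds classification,
# and a detected fluctuation routes control to an alarm stage.
machine = EventStateMachine(
    transitions={
        ("acquire", "signal_ready"): "classify",
        ("classify", "done"): "acquire",
        ("classify", "fluctuation_detected"): "alarm",
    },
    initial="acquire",
)
machine.dispatch("signal_ready")
state = machine.dispatch("fluctuation_detected")
```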

    Distributed Network Anomaly Detection on an Event Processing Framework

    Network Intrusion Detection Systems (NIDS) are an integral part of modern data centres to ensure high availability and compliance with Service Level Agreements (SLAs). Currently, NIDS are deployed on high-performance, high-cost middleboxes that are responsible for monitoring a limited section of the network. The fast-increasing size and aggregate throughput of modern data centre networks challenge this approach to anomaly detection, which struggles to satisfy the fast-growing compute demand. In this paper, we propose a novel approach to distributed intrusion detection systems based on the architecture of recently proposed event processing frameworks. We have designed and implemented a prototype system using Apache Storm to show the benefits of the proposed approach as well as the architectural differences with traditional systems. Our system distributes modules across the available devices within the network fabric and uses a centralised controller for orchestration, management and correlation. Following the Software Defined Networking (SDN) paradigm, the controller maintains a complete view of the network but distributes the processing logic for quick event processing while performing complex event correlation centrally. We have evaluated the proposed system using publicly available data centre traces and demonstrated that the system can scale with the network topology while providing high performance and minimal impact on packet latency.
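    The split between distributed local detection and centralised correlation can be sketched without Storm; plain Python stands in for the topology below. The SYN-scan check, thresholds, and addresses are invented for illustration.

```python
# Sketch: per-device modules raise partial alerts; a central controller
# correlates them, keeping only sources flagged by several devices.

from collections import Counter

def local_detector(packets, syn_threshold=3):
    """Per-device module: flag sources sending many SYNs (possible scan)."""
    syns = Counter(p["src"] for p in packets if p.get("flags") == "SYN")
    return {src for src, n in syns.items() if n >= syn_threshold}

def controller_correlate(partial_alerts, min_devices=2):
    """Central correlation: a source is suspect if enough devices flagged it."""
    votes = Counter()
    for alerts in partial_alerts:
        votes.update(alerts)
    return {src for src, n in votes.items() if n >= min_devices}

dev1 = [{"src": "10.0.0.9", "flags": "SYN"}] * 4
dev2 = ([{"src": "10.0.0.9", "flags": "SYN"}] * 3
        + [{"src": "10.0.0.7", "flags": "SYN"}] * 3)
suspects = controller_correlate([local_detector(dev1), local_detector(dev2)])
```

    In a Storm deployment, each `local_detector` would be a bolt placed near its network segment, while `controller_correlate` runs centrally, mirroring the SDN-style split the abstract describes.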

    Adaptive Energy-aware Scheduling of Dynamic Event Analytics across Edge and Cloud Resources

    The growing deployment of sensors as part of the Internet of Things (IoT) is generating thousands of event streams. Complex Event Processing (CEP) queries offer a useful paradigm for rapid decision-making over such data sources. While often centralized in the Cloud, the deployment of capable edge devices in the field motivates the need for cooperative event analytics that span Edge and Cloud computing. Here, we identify a novel problem of query placement on edge and Cloud resources for dynamically arriving and departing analytic dataflows. We define this as an optimization problem to minimize the total makespan for all event analytics, while meeting the energy and compute constraints of the resources. We propose 4 adaptive heuristics and 3 rebalancing strategies for such dynamic dataflows and validate them using detailed simulations for 100-1000 edge devices and VMs. The results show that our heuristics offer O(seconds) planning time, give a valid and high-quality solution in all cases, and reduce the number of query migrations. Furthermore, the rebalancing strategies, when applied within these heuristics, reduce the makespan by around 20-25%.
    Comment: 11 pages, 7 figures
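    The paper's four heuristics are not reproduced here, but one plausible greedy flavour of constrained placement can be sketched: each arriving query goes to the feasible resource that stays least loaded, keeping finish times balanced. Resource names, costs, and the capacity model are invented for illustration.

```python
# Hedged sketch of a greedy placement heuristic for dynamically arriving
# queries across edge and Cloud resources, under simple capacity constraints.

def place_queries(queries, resources):
    """queries: [(name, cpu_cost)]; resources: {name: {"capacity": c, "load": l}}."""
    placement = {}
    for qname, cost in queries:
        feasible = [(r, info) for r, info in resources.items()
                    if info["load"] + cost <= info["capacity"]]
        if not feasible:
            raise RuntimeError(f"no feasible resource for {qname}")
        # pick the resource whose load after placement is smallest
        best, info = min(feasible, key=lambda ri: ri[1]["load"] + cost)
        info["load"] += cost
        placement[qname] = best
    return placement

resources = {"edge1": {"capacity": 4, "load": 0},
             "cloud_vm": {"capacity": 10, "load": 0}}
placement = place_queries([("q1", 3), ("q2", 3), ("q3", 2)], resources)
```

    An adaptive variant would additionally track per-device energy budgets and periodically rebalance placements, which is where the paper's rebalancing strategies come in.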

    Towards a Smarter organization for a Self-servicing Society

    Traditional social organizations, such as those for the management of healthcare, are the result of designs that matched well with an operational context considerably different from the one we are experiencing today. The new context reveals all the fragility of our societies. In this paper, we introduce SELFSERV, a platform that combines social-oriented communities with complex-event-processing concepts. Its aim is to complement the "old recipes" with smarter forms of social organization based on the self-service paradigm, exploring culture-specific aspects and technological challenges.
    Comment: Final version of a paper published in the Proceedings of the International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion (DSAI'16), special track on Emergent Technologies for Ambient Assisted Living (ETAAL)

    Reducing the Gap Between Business and Information Systems Through Complex Event Processing

    According to the Object Management Group, a rule is a proposition that is a claim of obligation or of necessity. The concept of a rule is usually employed in the context of business processes to manage company operations. While a workflow is an explicit specification of the tasks' execution flow, business rules only impose restrictions on the tasks' execution. This provides a great deal of flexibility for process execution, since the stakeholders are free to choose any execution flow which does not violate the rules. The execution of a task in a process can be seen as the occurrence of an event, which may enable or disable the execution of other tasks in the process. Event-driven programming is a paradigm in which the program control-flow is determined by the occurrence of events. The capacity to handle processes that are unpredictably non-linear and dynamic makes the event-driven paradigm an effective solution for the implementation of business rules. However, the connection between business rules and their implementation through event-driven programming has been made in an ad-hoc and unstructured manner. This paper proposes a methodology to tackle this problem by systematically moving from business rules described in natural language toward a concrete implementation of a business process. We use complex event processing (CEP) to implement the process. CEP relies on the event-driven paradigm for monitoring and processing events. The methodology allows for the active participation of business people at all stages of the refinement process. Throughout the paper, we show how our methodology was employed to implement the operations of the World Bank.
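    The mapping from a natural-language rule to an event-driven check can be sketched in a few lines; the rule, event names, and order IDs below are invented for illustration and are not taken from the paper's World Bank case study.

```python
# Illustrative sketch: the rule "an order may only ship after it has been
# approved" restricts task ordering without prescribing a full workflow.

class RuleEngine:
    def __init__(self):
        self.approved = set()
        self.violations = []

    def on_event(self, event, order_id):
        if event == "order_approved":
            self.approved.add(order_id)
        elif event == "order_shipped" and order_id not in self.approved:
            # rule violated: shipping occurred before approval
            self.violations.append(order_id)

engine = RuleEngine()
for event, oid in [("order_approved", 1),
                   ("order_shipped", 1),
                   ("order_shipped", 2)]:
    engine.on_event(event, oid)
```

    Note that any event order satisfying the rule is accepted, which is exactly the flexibility the abstract contrasts with a fixed workflow.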

    CEP-DTHP : A Complex Event Processing using the Dual-Tier Hybrid Paradigm Over the Stream Mining Process

    CEP is a widely used technique for the reliable recognition of arbitrarily complex patterns in enormous data streams with high performance in real time. Real-time detection of crucial events and rapid response to them are the key goals of complex event processing. The performance of event processing systems can be improved by parallelizing CEP evaluation procedures. Running CEP in parallel on a multi-core or distributed environment is one of the most popular and widely recognized approaches to accomplish this goal. This paper demonstrates the ability to use an unusual parallelization strategy to effectively process complex events over streams of data. The method depends on a dual-tier hybrid paradigm that combines several levels of parallelism: thread-level or task-level parallelism (TLP) and data-level parallelism (DLP). Under the TLP paradigm, many threads or instruction sequences from the same application can run concurrently. In the DLP paradigm, instructions from a single stream operate on several data streams at the same time. Our proposed model has four major stages: data mining, pre-processing, load shedding, and optimization. The first phase is online data mining, after which the data is materialized into a publicly available solution that combines a CEP engine with a library. Next, data pre-processing encompasses the efficient adaptation of the content or format of raw data from many, possibly diverse, sources. Finally, parallelization approaches have been created to reduce CEP processing time. By providing these two types of parallelism, our proposed solution combines the benefits of DLP and TLP while addressing their constraints. The suggested technique is implemented and assessed in Java, and its performance is compared to that of other existing approaches to determine its efficacy and efficiency.
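    The dual-tier idea can be sketched compactly: data-level parallelism partitions the stream, and thread-level parallelism runs the same pattern matcher over the partitions concurrently. The "spike" pattern and partitioning scheme below are invented for illustration, not the paper's.

```python
# Sketch of hybrid DLP + TLP evaluation of a simple CEP pattern.

from concurrent.futures import ThreadPoolExecutor

def match_pattern(partition):
    """Detect a simple 'spike' pattern: a value more than double its predecessor."""
    return [(a, b) for a, b in zip(partition, partition[1:]) if b > 2 * a]

def parallel_cep(stream, n_partitions=2):
    size = max(1, len(stream) // n_partitions)
    # DLP: split the stream; overlap partitions by one event so that
    # pattern instances straddling a boundary are not lost
    parts = [stream[i:i + size + 1] for i in range(0, len(stream), size)]
    with ThreadPoolExecutor(max_workers=n_partitions) as pool:  # TLP
        results = pool.map(match_pattern, parts)
    return [m for part_matches in results for m in part_matches]

matches = parallel_cep([1, 1, 3, 1, 1, 5, 1], n_partitions=2)
```

    The one-event overlap is a common trick for window-free pairwise patterns; stateful patterns with longer windows need larger overlaps or a merge step, which is where load shedding and optimization stages would intervene.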

    Predictive intelligence to the edge through approximate collaborative context reasoning

    We focus on Internet of Things (IoT) environments where a network of sensing and computing devices is responsible for locally processing contextual data, reasoning, and collaboratively inferring the appearance of a specific phenomenon (event). Pushing processing and knowledge inference to the edge of the IoT network allows the complexity of the event reasoning process to be distributed into many manageable pieces and to be physically located at the source of the contextual information. This enables huge volumes of rich data streams to be processed in real time that would be prohibitively complex and costly to deliver to a traditional centralized Cloud system. We propose a lightweight, energy-efficient, distributed, adaptive, multiple-context-perspective event reasoning model under uncertainty on each IoT device (sensor/actuator). Each device senses and processes context data and infers events based on different local context perspectives: (i) expert knowledge on event representation, (ii) outlier inference, and (iii) deviation from locally predicted context. This novel approximate reasoning paradigm is achieved through a contextualized, collaborative, belief-driven clustering process, where clusters of devices are formed according to their belief in the presence of events. Our distributed and federated intelligence model efficiently identifies any localized abnormality in the contextual data in light of event reasoning by aggregating local degrees of belief, and updates and adjusts its knowledge to contextual data outliers and novelty detection. We provide a comprehensive experimental and comparative assessment of our model over real contextual data against other localized and centralized event detection models, and show the benefits stemming from its adoption by achieving up to three orders of magnitude less energy consumption and high quality of inference.
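    A toy version of belief aggregation makes the collaborative scheme concrete: each device maps its local context reading to a degree of belief, devices with high belief form a cluster, and the event fires when that cluster reaches a quorum. The belief function, thresholds, and readings are all invented for illustration.

```python
# Minimal sketch of collaborative, belief-driven event inference.

def local_belief(reading, expected=20.0, scale=10.0):
    """Map deviation from the locally expected context value into [0, 1]."""
    return min(1.0, abs(reading - expected) / scale)

def collaborative_event(readings, belief_cut=0.5, quorum=0.5):
    beliefs = [local_belief(r) for r in readings]
    high = [b for b in beliefs if b >= belief_cut]   # belief-driven cluster
    # the event is inferred when enough devices share a strong belief
    return len(high) >= quorum * len(beliefs)

# Three of four devices see strongly deviating context -> event inferred;
# a single outlier device alone would not suffice.
event = collaborative_event([20.5, 31.0, 29.0, 33.0])
```

    Requiring agreement across a cluster is what makes the scheme robust to a single noisy sensor, at the cost of exchanging only scalar beliefs rather than raw context streams, which is where the energy savings come from.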