
    Complex Event Processing (CEP)

    Event-driven information systems demand systematic and automatic processing of events. Complex Event Processing (CEP) encompasses methods, techniques, and tools for processing events while they occur, i.e., in a continuous and timely fashion. CEP derives valuable higher-level knowledge from lower-level events; this knowledge takes the form of so-called complex events, that is, situations that can only be recognized as a combination of several events.

    1 Application Areas
    Service-Oriented Architecture (SOA), Event-Driven Architecture (EDA), cost reductions in sensor technology, and the monitoring of IT systems due to legal, contractual, or operational concerns have led to a significantly increased generation of events in computer systems in recent years. This development is accompanied by a demand to manage and process these events in an automatic, systematic, and timely fashion. Important application areas for Complex Event Processing (CEP) are the following
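
The idea of a complex event as "a situation recognized only as a combination of several events" can be illustrated with a small sketch (illustrative Python, not any particular CEP engine; the event names and the 10-second window are invented for the example):

```python
from collections import deque

WINDOW = 10.0  # hypothetical correlation window, in seconds

def detect_overheat(events):
    """Recognize the complex event 'overheat': a 'temp_high' event
    followed by a 'fan_fail' event within WINDOW seconds.
    events: iterable of (timestamp, event_type), in time order."""
    pending = deque()       # recent temp_high timestamps still in the window
    complex_events = []
    for ts, etype in events:
        # drop temp_high events that have fallen out of the window
        while pending and ts - pending[0] > WINDOW:
            pending.popleft()
        if etype == "temp_high":
            pending.append(ts)
        elif etype == "fan_fail" and pending:
            # two lower-level events combine into one higher-level event
            complex_events.append(("overheat", pending.popleft(), ts))
    return complex_events

stream = [(0.0, "temp_high"), (4.0, "fan_fail"), (20.0, "fan_fail")]
print(detect_overheat(stream))  # the second fan_fail has no temp_high in range
```

Neither event alone signals a problem; only their combination within the window is reported, which is the "higher-level knowledge" the abstract refers to.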

    Aktuelles Schlagwort ("Current Keyword"): "Complex Event Processing (CEP)"


    Fuzzy Dynamic Discrimination Algorithms for Distributed Knowledge Management Systems

    A reduction of the algorithmic complexity of the fuzzy inference engine rests on the following property: the inputs (the fuzzy rules and the fuzzy facts) can be divided into two parts, one remaining relatively constant over a long time (the fuzzy rules, or the knowledge model) compared to the second part (the fuzzy facts), which changes with every inference cycle. It therefore makes sense to apply certain transformations to the constant part in advance, in order to decrease the solution procurement time when the second part varies but is known only at certain moments in time. The transformations performed in advance are called pre-processing or knowledge compilation. The use of variables in a Business Rule Management System knowledge representation allows knowledge to be factorized, as in classical knowledge-based systems. The language of first-degree predicates facilitates the rigorous formulation of complex knowledge, imposing appropriate reasoning techniques. It is thus necessary to define the description method for fuzzy knowledge, to justify the efficiency of knowledge exploitation when the compiling technique is used, to present the inference engine, and to highlight the functional features of the pattern-matching and state-space processes. This paper presents the main results of our project PR356 for designing a compiler for fuzzy knowledge, similar to a Rete compiler, that comprises two main components: a static fuzzy discrimination structure (the Fuzzy Unification Tree) and the Fuzzy Variables Linking Network. We also present the features of the elementary pattern-matching process based on the compiled structure of fuzzy knowledge. We developed fuzzy discrimination algorithms for Distributed Knowledge Management Systems (DKMSs). The implementations have been elaborated in a prototype system, FRCOM (Fuzzy Rule COMpiler). Keywords: Fuzzy Unification Tree, Dynamic Discrimination of Fuzzy Sets, DKMS, FRCOM
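
The split between a constant part compiled once and a varying part matched every cycle can be sketched as follows (a minimal illustration of knowledge compilation in general; the rule and fact shapes are invented and are not FRCOM's actual structures):

```python
def compile_rules(rules):
    """Pre-processing (done once): index the relatively constant part,
    grouping rules by the predicate they test."""
    index = {}
    for predicate, threshold, conclusion in rules:
        index.setdefault(predicate, []).append((threshold, conclusion))
    return index

def infer(index, facts):
    """Per-cycle step: only rules indexed under a fact's predicate are
    tried, instead of scanning the whole rule base."""
    conclusions = []
    for predicate, value in facts:
        for threshold, conclusion in index.get(predicate, []):
            # crude stand-in for a fuzzy membership / unification test
            if value >= threshold:
                conclusions.append(conclusion)
    return conclusions

rules = [("temperature", 0.7, "hot"), ("humidity", 0.8, "humid")]
compiled = compile_rules(rules)                  # ahead of time
print(infer(compiled, [("temperature", 0.9)]))   # per inference cycle
```

The pre-computed index plays the role the abstract assigns to the compiled discrimination structure: the cost of organizing the rules is paid once, while each cycle touches only the rules relevant to the incoming facts.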

    Evidence of Log Integrity in Policy-based Security Monitoring

    Monitoring systems are commonly used by many organizations to collect information about their system and network operations. Typically, SNMP, IDS, or software agents generate log data and store it in a centralized monitoring system for analysis. However, malicious employees, attackers, or even organizations themselves can modify such data to hide malicious activities or to avoid expensive non-compliance fines. This paper proposes a cloud-based framework for verifying the trustworthiness of logs based on a small amount of evidence data. A simple Cloud Security Monitoring (CSM) API, made available on the cloud services, allows organizations operating in the cloud to collect additional "evidence" about their systems. Such evidence is used to verify system compliance against the policies set by security managers or regulatory authorities. We present a strategy for randomly auditing and verifying resource compliance, and propose an architecture that allows organizations to prove compliance to an external auditing agency.
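
The general shape of random auditing against evidence data can be sketched as below (an illustrative toy, not the paper's CSM API; the hash choice and sampling scheme are assumptions):

```python
import hashlib
import random

def evidence_for(log_lines):
    """Independently collected 'evidence': one digest per log entry."""
    return [hashlib.sha256(line.encode()).hexdigest() for line in log_lines]

def audit(log_lines, evidence, sample_size, rng=random):
    """Verify a random sample of log entries against the evidence digests.
    A larger sample raises the chance of catching a tampered entry."""
    for i in rng.sample(range(len(log_lines)), sample_size):
        if hashlib.sha256(log_lines[i].encode()).hexdigest() != evidence[i]:
            return False  # modification detected
    return True

logs = ["login alice", "sudo bob", "logout alice"]
ev = evidence_for(logs)
print(audit(logs, ev, sample_size=2))   # untampered logs pass
logs[1] = "nothing to see"              # a malicious modification
print(audit(logs, ev, sample_size=3))   # sampling every entry catches it
```

The point mirrors the abstract: the evidence is small relative to the logs, and randomized spot checks make undetected tampering a gamble rather than a certainty.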

    A Survey on IT-Techniques for a Dynamic Emergency Management in Large Infrastructures

    This deliverable is a survey of the IT techniques relevant to the three use cases of the project EMILI. It describes the state of the art in four complementary IT areas: data cleansing, supervisory control and data acquisition, wireless sensor networks, and complex event processing. Even though the deliverable's authors have tried to avoid overly technical language and to explain every concept referred to, the deliverable may still seem rather technical to readers as yet little familiar with the techniques it describes.

    Enhancing performance and expressibility of complex event processing using binary tree-based directed graph

    In various domains, applications are required to detect and react to complex situations. In response to the demand for matching incoming events against complex patterns, several event processing systems have been developed. However, only a few of them consider both the performance and the expressibility of event matching, since focusing only on performance can negatively affect expressibility, and vice versa. This research develops a fast adaptive event matching system (FAEM), a new event matching system that improves expressibility as well as the performance measures of throughput and end-to-end latency. The system is designed and developed around a novel binary tree-based directed graph (BTDG) as a unified basis for event matching. The proposed system transforms a user-defined query into a set of system objects, including buffers, conditions on buffers, cursors, and join operators (non-Kleene and Kleene), and arranges these objects on a BTDG. Given the BTDG, the performance of non-Kleene operators is enhanced through a batch removal method that discards events falling outside the time window, and through an actual time window (ATW) that further improves event-matching performance. To improve the performance of Kleene operators, this research introduces twin Kleene algorithms matched to the BTDG; these two algorithms group events, reduce the number of intermediate results, and apply a combination algorithm in the final stage. Transforming queries containing join operators into a BTDG enhances the expressibility of the proposed CEP system.
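
Batch removal of expired events, one of the performance ideas described above, can be sketched like this (the buffer layout is invented for illustration; FAEM's BTDG buffers are more elaborate):

```python
from collections import deque

class WindowBuffer:
    """A time-windowed event buffer that evicts expired events in one
    batch on insertion, instead of re-checking every stored event
    during each join evaluation."""

    def __init__(self, window):
        self.window = window
        self.events = deque()   # (timestamp, payload), oldest first

    def insert(self, ts, payload):
        # all expired events sit at the front, so one sweep removes them
        while self.events and ts - self.events[0][0] > self.window:
            self.events.popleft()
        self.events.append((ts, payload))

buf = WindowBuffer(window=5.0)
for ts in [0.0, 1.0, 4.0, 8.0]:
    buf.insert(ts, "e%d" % int(ts))
print([p for _, p in buf.events])  # only events within 5s of t=8 remain
```

Because time-ordered events expire in arrival order, eviction is a cheap pass over the head of the queue, which is the amortized saving a batch removal method targets.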

    Mobile Spatial Subscriptions for Location-Aware Services

    Spatial subscriptions have been used to specify locations of interest in Distributed Event-Based Systems (DEBSs). However, the representations supporting spatial subscriptions in current DEBSs are not expressive enough to describe some forms of subscriptions in mobile settings. For instance, users cannot specify a spatial subscription that refers to other, better-known locations when they are unfamiliar with the names of their current locations. In addition, the middleware in existing DEBSs does not support changes at runtime, and modifications to these middleware systems to support spatial subscriptions are highly coupled with specific DEBS infrastructures. In this thesis, I argue that by enhancing the expressiveness of spatial subscriptions, a new model of mobile spatial subscriptions for location-aware services can be defined, and a reusable plug-in implementation approach that supports existing DEBSs can be developed. This thesis first summarizes the essential abstractions needed to specify mobile spatial subscriptions and analyzes the expressiveness of existing DEBSs in supporting these abstractions. Second, it proposes a three-level mobile spatial subscription model that supports these essential abstractions. The first level of the model handles subscriptions consisting of geometric coordinates; the second level supports subscriptions with location labels; the third level interprets subscriptions that specify locations by stating their dynamic properties. Next, a plug-in implementation approach is introduced so that the three-level model can be integrated with different DEBSs with minimal modification to the middleware: the subscription model is implemented as a subscriber/publisher component instead of by directly modifying the existing DEBS. Finally, I develop a prototype system, the Dynamic Mobile Subscription System (DMSS), and illustrate the usefulness and applicability of the three-level model and the plug-in implementation approach.
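
The three subscription levels can be sketched with toy matchers (the location data, crowd feed, and function names are invented for illustration and are not DMSS's actual interfaces):

```python
LABELS = {"library": (2.0, 3.0)}        # level-2 lookup: label -> coordinates
CROWD = {"library": 40, "cafe": 5}      # level-3 feed of a dynamic property

def match_coords(event_pos, sub_pos, radius):
    """Level 1: a subscription given directly as geometric coordinates."""
    (x1, y1), (x2, y2) = event_pos, sub_pos
    return (x1 - x2) ** 2 + (y1 - y2) ** 2 <= radius ** 2

def match_label(event_pos, label, radius):
    """Level 2: a subscription naming a well-known location label,
    resolved to coordinates and then matched as in level 1."""
    return match_coords(event_pos, LABELS[label], radius)

def match_property(place, predicate):
    """Level 3: a subscription stating a dynamic property of locations."""
    return predicate(CROWD[place])

print(match_coords((2.5, 3.0), (2.0, 3.0), radius=1.0))
print(match_label((2.5, 3.0), "library", radius=1.0))
print(match_property("cafe", lambda people: people < 10))  # "a quiet place"
```

Each level builds on the one below it, which is why a user unfamiliar with their current location's name can still subscribe via a better-known label or a dynamic property.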

    Design and implementation of a multi-agent opportunistic grid computing platform

    Opportunistic grid computing involves joining idle computing resources in enterprises into a converged high-performance commodity infrastructure. The research described in this dissertation investigates the viability of public-resource computing in offering a wealth of possibilities through seamless access to shared compute and storage resources. The research proposes and conceptualizes the Multi-Agent Opportunistic Grid (MAOG) solution in an Information and Communication Technologies for Development (ICT4D) initiative to address some limitations prevalent in traditional distributed system implementations. Proof-of-concept software components based on JADE (Java Agent Development Framework) validated Multi-Agent Systems (MAS) as an important tool for the provisioning of opportunistic grid computing platforms. Exploration of agent technologies within the research context identified two key components which improve access to extended computing capabilities. The first is a Mobile Agent (MA) compute component in which a group of agents interact to pool shared processor cycles; it integrates dynamic resource identification and allocation strategies by incorporating the Contract Net Protocol (CNP) and rule-based reasoning concepts. The second is a MAS-based storage component realized through disk mirroring and the Google File System's chunking with atomic-append storage techniques. This research provides a candidate opportunistic grid computing platform design and implementation through the use of MAS. Experiments validated the design and implementation of the compute and storage services: the results confirmed the MA compute component's support for processing user applications, resource identification and allocation, and rule-based reasoning, and evaluations showed the MAS-based file system with chunking optimizations to perform best. The findings also validated the functional adequacy of the implementation and show the suitability of MAS for provisioning robust, autonomous, and intelligent platforms. The context of this research, ICT4D, offers a way to increase the utilization of computing resources that are usually idle in such settings.
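
The Contract Net Protocol round that underlies the resource-allocation strategy can be sketched in a few lines (a generic CNP illustration, not the dissertation's JADE implementation; agent and message shapes are invented):

```python
class Agent:
    """A hypothetical compute node that bids its current load."""

    def __init__(self, name, load):
        self.name, self.load = name, load

    def bid(self, task):
        # an overloaded node declines the call for proposals;
        # otherwise its bid is its current load (lower = better)
        return None if self.load > 0.9 else self.load

def contract_net(task, agents):
    """One CNP round: announce the task, gather bids, award the
    contract to the lowest (cheapest) valid bidder."""
    bids = {a.name: a.bid(task) for a in agents}
    valid = {name: b for name, b in bids.items() if b is not None}
    if not valid:
        return None  # no agent accepted the announcement
    return min(valid, key=valid.get)

agents = [Agent("a1", 0.95), Agent("a2", 0.30), Agent("a3", 0.10)]
print(contract_net("render-frame", agents))  # the least-loaded agent wins
```

This announce/bid/award cycle is how idle cycles get matched to work without a central scheduler holding global state.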

    Relative temporal constraints in the Rete algorithm for complex event detection

    Complex Event Processing is an important technology for information systems, with a broad application space ranging from supply chain management, systems monitoring, and stock market analysis to news services. Its purpose is the identification of event patterns with logical, temporal, or causal relationships within multiple occurring events. The Rete algorithm is commonly used in rule-based systems to trigger certain actions if a corresponding rule holds. Its good performance for a high number of rules in the rule base makes it ideally suited for complex event detection. However, the traditional Rete algorithm is limited to operations such as unification and the extraction of predicates from a knowledge base; there is no support for temporal operators. We propose an extension of the Rete algorithm to support the detection of relative temporal constraints. Further, we propose an efficient means of garbage collection in the Rete algorithm, discarding events once they can no longer fulfill their temporal constraints. Finally, we present an extension of Allen's thirteen operators for time intervals with quantitative constraints, dealing with too-restrictive or too-permissive operators by introducing tolerance limits or restrictive conditions for them
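
Extending an Allen interval operator with a tolerance limit can be sketched as follows (illustrative only; the epsilon parameterization is an assumption, not the paper's exact formulation):

```python
def meets(a, b, eps=0.0):
    """Allen's 'meets': interval a ends exactly where b starts.
    With eps > 0 the strict equality is relaxed to a tolerance band,
    so a small gap or overlap still counts as 'meets'."""
    return abs(a[1] - b[0]) <= eps

def before(a, b, eps=0.0):
    """Allen's 'before': a ends earlier than b starts; a positive eps
    makes the operator *more* restrictive by demanding a minimum gap."""
    return a[1] < b[0] - eps

a, b = (0.0, 5.0), (5.2, 9.0)       # intervals as (start, end)
print(meets(a, b))                  # strict operator rejects the 0.2s gap
print(meets(a, b, eps=0.5))         # tolerant operator accepts it
```

The strict thirteen relations partition all interval configurations, so real-world jitter of fractions of a second flips results between adjacent relations; tolerance limits of this kind absorb that jitter.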