
    Combining application layer and network layer filtering in Pub/Sub systems

    Content-based Publish/Subscribe systems deliver notifications from publishers to subscribers based on the content of the notifications. Low latency and bandwidth efficiency are two major concerns in these systems. Hybrid Publish/Subscribe systems were developed to take advantage of the new trend of Software Defined Networking (SDN). In these systems, notifications can be filtered on two layers, namely the application layer and the network layer. Since each of these two layers has advantages over the other, a selection algorithm is needed to decide on which layer a given notification should be filtered. When a notification is filtered on the application layer, it has to be sent to a software filter (server) located in the topology. The placement of these servers is important for the overall performance of the system. In this thesis, we propose three placement algorithms, each of which considers a different aspect of the system and places the servers so as to improve that aspect. The K-Center placement algorithm aims at minimizing the maximum distance between the sending node and the destination server, which gives an upper bound on the worst-case latency. The K-Median placement algorithm is designed to minimize the average distance that the sent notifications have to travel to reach a server. The Utilitarian placement algorithm focuses on providing faster service to the packets intended for many subscribers, giving better overall service to the majority of the users at the expense of worse service for some others. We have evaluated the proposed placement algorithms and compared them to each other across different categories, using two different topologies and two different distributions for the published notifications. As expected, the K-Center algorithm proved to be better at minimizing the maximum distance required for a notification to reach a server, while the K-Median and the Utilitarian algorithms showed similar results in most cases and were better at minimizing the average distance.
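
    To make the placement problem concrete, the following Python sketch shows the classic greedy 2-approximation for K-Center over a precomputed shortest-path distance matrix; the topology, node names, and data structures are invented for illustration and are not taken from the thesis.

        # Greedy 2-approximation for K-Center placement, assuming an
        # all-pairs shortest-path distance matrix over the topology.
        def k_center(dist, nodes, k):
            """Pick k server locations so that the maximum distance from any
            node to its nearest server is (approximately) minimized."""
            centers = [nodes[0]]                      # arbitrary first server
            while len(centers) < k:
                # node that is currently worst served by the chosen servers
                farthest = max(nodes,
                               key=lambda v: min(dist[v][c] for c in centers))
                centers.append(farthest)
            return centers

        # toy example: 4-node line topology a-b-c-d with unit edge weights
        nodes = ["a", "b", "c", "d"]
        dist = {
            "a": {"a": 0, "b": 1, "c": 2, "d": 3},
            "b": {"a": 1, "b": 0, "c": 1, "d": 2},
            "c": {"a": 2, "b": 1, "c": 0, "d": 1},
            "d": {"a": 3, "b": 2, "c": 1, "d": 0},
        }
        print(k_center(dist, nodes, 2))   # ['a', 'd'] -> worst-case distance 1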

    Addressing TCAM limitations in an SDN-based pub/sub system

    Content-based publish/subscribe is a popular paradigm that enables asynchronous exchange of events between decoupled applications and is used in a wide range of domains. Hence, extensive research has been conducted in the area of efficient large-scale pub/sub systems. A more recent development is content-based pub/sub systems that utilize software-defined networking (SDN) in order to implement event filtering in the network layer. By installing content filters in the ternary content-addressable memory (TCAM) of switches, these systems are able to achieve event filtering and forwarding at line-rate performance. While offering great performance, TCAM is also expensive, power-hungry, and limited in size. However, current SDN-based pub/sub systems do not address these limitations and thus use TCAM excessively. Therefore, this thesis provides techniques for constraining TCAM usage in such systems. The proposed methods enforce concrete flow limits without dropping any events by selectively merging content filters into more coarse-grained filters. The proposed algorithms leverage information about filter properties, traffic statistics, event distribution, and global filter state in order to minimize the increase in unnecessary traffic introduced by merges. The proposed approach is twofold. A local enforcement algorithm ensures that the flow limit of a particular switch is never violated. This local approach is complemented by a periodically executed global optimization algorithm that tries to find a flow configuration on all switches which minimizes the increase in unnecessary traffic, given the current set of advertisements and subscriptions. For both classes, two algorithms with different properties are outlined. The proposed algorithms are integrated into the PLEROMA middleware and evaluated thoroughly in a real SDN testbed as well as in a large-scale network emulation. The evaluations demonstrate the effectiveness of the approaches under diverse and realistic workloads; in some cases, the number of flows can be reduced by more than 70% while increasing the false-positive rate by less than 1%.
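
    As a rough illustration of the local enforcement idea (not the thesis's actual algorithm), the following Python sketch models content filters as binary prefixes and greedily merges the pair whose merge is estimated to introduce the least unnecessary traffic, until the switch's flow limit is respected; the prefix representation and the per-filter traffic estimates are assumptions made for the example.

        from itertools import combinations

        def common_prefix(a, b):
            """Coarser filter (shared prefix) that covers both a and b."""
            i = 0
            while i < min(len(a), len(b)) and a[i] == b[i]:
                i += 1
            return a[:i]

        def extra_traffic(a, b, traffic):
            """Estimated unnecessary traffic added by replacing a and b
            with their common prefix (traffic estimates are hypothetical)."""
            merged = common_prefix(a, b)
            return traffic.get(merged, 0) - traffic.get(a, 0) - traffic.get(b, 0)

        def enforce_flow_limit(filters, traffic, limit):
            """Greedily merge the cheapest filter pair until the switch's
            flow limit holds; no events are dropped, only over-matched."""
            filters = set(filters)
            while len(filters) > limit:
                a, b = min(combinations(filters, 2),
                           key=lambda p: extra_traffic(p[0], p[1], traffic))
                filters -= {a, b}
                filters.add(common_prefix(a, b))
            return filters

        # toy example: three filters on a switch with a flow limit of two
        traffic = {"000": 5, "001": 3, "01": 10, "00": 9, "0": 20, "": 40}
        print(enforce_flow_limit({"000", "001", "01"}, traffic, 2))  # {'00', '01'}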

    Automating Industrial Event Stream Analytics: Methods, Models, and Tools

    Industrial event streams are an important cornerstone of Industrial Internet of Things (IIoT) applications. For instance, in the manufacturing domain, such streams are typically produced by distributed industrial assets at high frequency on the shop floor. To add business value and extract the full potential of the data (e.g., through predictive quality assessment or maintenance), industrial event stream analytics is an essential building block. One major challenge is the distribution of the required technical and domain knowledge across several roles, which makes the realization of analytics projects time-consuming and error-prone. For instance, accessing industrial data sources requires a high level of technical skill due to the large heterogeneity of protocols and formats. To reduce the technical overhead of current approaches, several problems must be addressed. The goal is to enable so-called "citizen technologists" to evaluate event streams through a self-service approach. This requires new methods and models that cover the entire data analytics cycle. This thesis answers the research question of how citizen technologists can be enabled to independently perform industrial event stream analytics. The first step is to investigate how the technical complexity of modeling and connecting industrial data sources can be reduced. Subsequently, it is analyzed how event streams can be automatically adapted (directly at the edge) to meet the requirements of data consumers and the infrastructure. Finally, this thesis examines how machine learning models for industrial event streams can be trained in an automated way to evaluate previously integrated data. The main research contributions of this work are: 1. a semantics-based adapter model to describe industrial data sources and to automatically generate adapter instances on edge nodes; 2. an extension for publish/subscribe systems that dynamically reduces event streams while considering the requirements of downstream algorithms; 3. a novel AutoML approach that enables citizen data scientists to train and deploy supervised ML models for industrial event streams. The developed approaches are fully implemented in various high-quality software artifacts, which have been integrated into a large open-source project that enables rapid adoption of the novel concepts in real-world environments. For the evaluation, two user studies investigating usability as well as performance and accuracy tests of the individual components were performed.
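
    As a minimal sketch of what edge-side event stream reduction could look like (the class, field names, and rate-based reduction policy are assumptions for illustration, not part of the cited work), the following Python snippet downsamples a high-frequency sensor stream to the maximum rate requested by downstream consumers before publishing.

        class EdgeReducer:
            """Drops events at the edge so that the forwarded stream does not
            exceed the rate actually required downstream (illustrative only)."""

            def __init__(self, required_hz):
                self.min_interval = 1.0 / required_hz   # required output rate
                self.last_emit = float("-inf")

            def on_event(self, event, publish):
                # forward the event only if enough time has passed since the
                # last forwarded event; otherwise discard it at the edge
                now = event["timestamp"]
                if now - self.last_emit >= self.min_interval:
                    self.last_emit = now
                    publish(event)

        # usage: a 100 Hz machine sensor reduced to the 10 Hz that a downstream
        # quality-prediction model actually needs
        reducer = EdgeReducer(required_hz=10)
        for i in range(100):
            reducer.on_event({"timestamp": i / 100.0, "value": i},
                             publish=lambda e: print(e["timestamp"]))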