490 research outputs found

    Towards More Data-Aware Application Integration (extended version)

    Although most business application data is stored in relational databases, the programming languages and wire formats in integration middleware systems are not table-centric. To avoid costly format conversions and data shipments, and to benefit from faster computation close to the data, the trend is to "push down" integration operations closer to the storage representation. We address the alternative case of defining declarative, table-centric integration semantics within standard integration systems. For that, we replace the current operator implementations for the well-known Enterprise Integration Patterns by equivalent "in-memory" table processing, and show a practical realization in a conventional integration system for a non-reliable, "data-intensive" messaging example. The results of the runtime analysis show that table-centric processing is already promising for standard "single-record" message routing and transformations, and can potentially improve message throughput for "multi-record" table messages. Comment: 18 pages; extended version of the contribution to the British International Conference on Databases (BICOD), 2015, Edinburgh, Scotland.
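    To make the table-centric operator idea concrete, here is a minimal sketch (not the authors' implementation; all names are illustrative assumptions) of a content-based router over a "multi-record" table message, expressed as a relational selection per target channel instead of a per-message predicate check:

        # Hypothetical sketch: a content-based router over an in-memory "table message".
        # A multi-record message is a list of rows (dicts); routing becomes a selection
        # per target channel instead of a predicate check on each individual message.

        from collections import defaultdict

        def route_table(rows, routing_rules):
            """Split a table message into per-channel tables.

            rows          -- list of dicts, e.g. [{"type": "order", "amount": 120}, ...]
            routing_rules -- mapping channel name -> predicate over a row
            """
            channels = defaultdict(list)
            for row in rows:
                for channel, predicate in routing_rules.items():
                    if predicate(row):
                        channels[channel].append(row)
                        break  # first matching channel wins, as in a content-based router
            return channels

        # Example: route "large" orders to a priority channel, everything else to default.
        message = [{"type": "order", "amount": 120}, {"type": "order", "amount": 15}]
        rules = {"priority": lambda r: r["amount"] >= 100, "default": lambda r: True}
        print(route_table(message, rules))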

    Integration of Event Processing with Service-oriented Architectures and Business Processes

    Data sources like the Internet of Things or cyber-physical systems provide enormous amounts of real-time information in the form of streams of events. The use of such event streams enables reactive software components as building blocks in a new generation of systems. Businesses, for example, can benefit from the integration of event streams: new services can be provided to customers, or existing business processes can be improved. The development of reactive systems and their integration with existing application landscapes, however, is challenging. While traditional system components follow a pull-based request/reply interaction style, event-based systems follow a push-based interaction scheme: events arrive continuously and application logic is triggered implicitly. To benefit from push-based and pull-based interactions together, an intuitive software abstraction is necessary to integrate push-based application logic with existing systems.
In this work we introduce such an abstraction: Event Stream Processing Units (SPUs), a container model for the encapsulation of event-processing application logic at the technical layer as well as at the business process layer. At the technical layer, SPUs provide a service-like abstraction and simplify the development of scalable reactive applications. At the business process layer, SPUs make event processing explicitly representable. SPUs have a managed lifecycle and are instantiated implicitly, upon arrival of appropriate events, or explicitly upon request. At the business process layer, SPUs encapsulate application logic for event stream processing and enable a seamless transition between process models, executable process representations, and components at the IT layer.
Throughout this work, we focus on different aspects of the SPU container model. We first introduce the container model and its execution semantics. Since SPUs rely on a publish/subscribe system for event dissemination, we discuss quality-of-service requirements in the context of event processing. SPUs rely on input in the form of events; in event-based systems, however, event production is logically decoupled, i.e., event producers are not aware of the event consumers. This influences the system development process and requires an appropriate methodology; for this purpose we present a requirements engineering approach that takes the specifics of event-based applications into account. The integration of events with business processes leads to new business opportunities. SPUs can encapsulate event processing at the abstraction level of business functions and enable a seamless integration with business processes. For this integration, we introduce extensions to the business process modeling notations BPMN and EPCs to model SPUs, and we present a model-to-execute workflow for SPU-containing process models and its implementation with business process modeling software. The SPU container model itself is language-agnostic; we therefore present Eventlets, an SPU implementation based on Java Enterprise technology. Eventlets are executed inside a distributed middleware and follow a managed lifecycle. They reduce the development effort of scalable event processing applications, as we show in our evaluation. Since the SPU container model introduces an additional layer of abstraction, we analyze its overhead and show that Eventlets can compete with traditional event processing approaches in terms of performance.
SPUs can be used to process sensitive data, e.g., in health care environments; privacy protection is therefore an important requirement for certain use cases. We sketch the application of a privacy-preserving event dissemination scheme to protect event consumers and producers from curious brokers, and we quantify the overhead introduced by such a privacy-preserving brokering scheme in an evaluation.
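    As an illustration only (the thesis realizes SPUs as Eventlets on Java Enterprise technology), the following sketch mimics the described lifecycle in Python: an SPU instance is created implicitly when the first matching event arrives, processes subsequent events, and can be destroyed explicitly. All class and attribute names are assumptions:

        # Hypothetical sketch of the SPU lifecycle described above: implicit instantiation
        # on the first matching event, stream processing while active, explicit teardown.

        class StreamProcessingUnit:
            def __init__(self, key):
                self.key = key            # e.g. a machine id or patient id
                self.count = 0
                self.state = "RUNNING"

            def on_event(self, event):
                # Encapsulated application logic; here we just count events per instance.
                self.count += 1

            def destroy(self):
                self.state = "DESTROYED"

        class SpuContainer:
            """Manages SPU instances keyed by a correlation attribute of the event."""
            def __init__(self, correlation_attr):
                self.correlation_attr = correlation_attr
                self.instances = {}

            def dispatch(self, event):
                key = event[self.correlation_attr]
                if key not in self.instances:              # implicit instantiation
                    self.instances[key] = StreamProcessingUnit(key)
                self.instances[key].on_event(event)

        container = SpuContainer(correlation_attr="machine")
        for e in [{"machine": "m1", "temp": 71}, {"machine": "m2", "temp": 64}, {"machine": "m1", "temp": 75}]:
            container.dispatch(e)
        print({k: spu.count for k, spu in container.instances.items()})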

    Distributing Real Time Data From a Multi-Node Large Scale Contact Center Using Corba

    This thesis researches and evaluates the technologies currently available for developing a system that propagates real-time data from a large-scale enterprise server to large numbers of registered clients on the network. The large-scale enterprise server in question is a Contact Centre Server, which can be a standalone system or part of a multi-node system. The thesis makes three contributions to the study of scalable real-time notification services. First, it surveys the different technologies for distributed objects available today and their implementations. Second, it explains how we addressed key design challenges faced when implementing a Notification Service for TAO, our CORBA-compliant real-time Object Request Broker (ORB), and shows how to integrate and configure CORBA features to provide real-time event communication. Finally, it analyzes the results of the implementation and compares them with existing technologies used for the propagation of real-time data.
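    The notification service described above follows the push model of event communication: suppliers push events into a channel, which in turn pushes them to every registered consumer instead of having consumers poll. The toy sketch below illustrates only this decoupling; TAO itself is a C++ ORB, and all names here are assumptions:

        # Illustrative sketch of a push-model event channel: suppliers push events to
        # the channel, which pushes them on to all registered consumers. (TAO's
        # Notification Service is CORBA/C++; names here are assumptions.)

        class EventChannel:
            def __init__(self):
                self.consumers = []

            def subscribe(self, consumer):
                self.consumers.append(consumer)

            def push(self, event):
                for consumer in self.consumers:
                    consumer(event)   # push-based: consumers are invoked, they do not poll

        channel = EventChannel()
        channel.subscribe(lambda e: print("wallboard:", e))
        channel.subscribe(lambda e: print("supervisor view:", e))
        channel.push({"queue": "sales", "calls_waiting": 7})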

    Towards an internet-scale stream processing service with loosely-coupled entities

    Master's thesis (Master of Science)

    Methods and Tools for Management of Distributed Event Processing Applications

    Capturing and processing events from cyber-physical systems enables users to be informed continuously about performance data and emerging problems (situational awareness) or to optimize maintenance processes based on equipment condition (condition-based maintenance). Because of the volume and frequency of the data, and the requirement of near-real-time evaluation, such scenarios demand suitable technologies. Under the name event processing, technologies have become established that can process data streams in real time and detect complex event patterns based on spatial, temporal, or causal relationships. At the same time, the systems available in this area today are still characterized by the high technical complexity of the underlying declarative languages, which leads to slow development cycles for real-time applications because of the technical expertise required. Yet precisely these applications frequently face changing requirements, both in the situations to be detected and in the syntax and semantics of the underlying sensor data. The primary contribution of this thesis enables domain experts, through the abstraction of technical details, to independently create, modify, and execute distributed real-time applications in the form of so-called real-time processing pipelines. The contributions of this work can be summarized as follows: 1. A methodology for developing real-time applications that accounts for extensibility and accessibility for domain experts. 2. Models for the semantic description of the characteristics of event producers, event processing units, and event consumers. 3. A system for executing processing pipelines consisting of geographically distributed event processing units. 4. A software artifact for the graphical modeling of processing pipelines and their automated execution. The contributions are presented, applied, and evaluated in several scenarios from the production and logistics domains.
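    A minimal sketch of the pipeline idea, assuming a simplified description format (the thesis defines richer semantic models for producers, processing units, and consumers); the element names and parameters below are illustrative only:

        # Minimal sketch of a processing pipeline assembled from declaratively
        # described elements (producer -> processor -> consumer). The description
        # format is an assumption for illustration only.

        pipeline_description = [
            {"role": "producer",  "name": "vibration-sensor"},
            {"role": "processor", "name": "threshold-filter", "param": {"max": 0.8}},
            {"role": "consumer",  "name": "maintenance-dashboard"},
        ]

        def run_pipeline(description, events):
            threshold = next(e["param"]["max"] for e in description if e["name"] == "threshold-filter")
            for event in events:                                        # producer output
                if event["value"] > threshold:                          # processor logic
                    print("alert to", description[-1]["name"], event)   # consumer

        run_pipeline(pipeline_description, [{"value": 0.5}, {"value": 0.93}])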

    A decentralized framework for cross administrative domain data sharing

    Federation of messaging and storage platforms located in remote datacenters is an essential functionality for sharing data among geographically distributed platforms. When the systems are administered by the same owner, data replication reduces data access latency by bringing data closer to applications and enables fault tolerance for disaster recovery of an entire location. When the storage platforms are administered by different owners, data replication across administrative domains is essential for enterprise application data integration: contents and services managed by different software platforms need to be integrated to provide richer contents and services, and clients may need to share subsets of data to enable collaborative analysis and service integration. Platforms usually include proprietary federation functionalities and specific APIs that let external software and platforms access their internal data, but these techniques may not be applicable to all environments and networks due to security and technological restrictions. Moreover, the federation of dispersed nodes under a decentralized administration scheme is still a research issue. This thesis is a contribution along this research direction: it introduces and describes a framework, called "WideGroups", directed towards the creation and management of an automatic federation and integration of widely dispersed platform nodes. The framework is based on groups for exchanging messages among distributed applications located in different remote datacenters. Groups are created and managed using client-side programmatic configuration, without touching servers. WideGroups extends software platform services to nodes belonging to different administrative domains in a wide area network environment. It lets nodes form ad-hoc overlay networks on the fly depending on message destinations located in distinct administrative domains, and it supports multiple dynamic overlay networks based on message groups, dynamic discovery of nodes, and automatic setup of overlay networks among nodes with no server-side configuration. I designed and implemented platform connectors to integrate the framework as the federation module of Message Oriented Middleware and Key Value Store platforms, which are among the most widespread paradigms supporting data sharing in distributed systems.
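    A toy sketch of the group concept, under the assumption that groups are plain named sets joined through client-side calls (the actual framework builds dynamic overlay networks); all identifiers are illustrative:

        # Toy sketch of group-based federation: nodes join named groups client-side,
        # and a message published to a group reaches every member node, regardless of
        # its administrative domain. All names are illustrative assumptions.

        from collections import defaultdict

        class GroupFederation:
            def __init__(self):
                self.members = defaultdict(set)     # group name -> set of node ids

            def join(self, group, node):
                self.members[group].add(node)       # client-side configuration only

            def publish(self, group, message):
                return {node: message for node in self.members[group]}

        fed = GroupFederation()
        fed.join("orders", "datacenter-a/broker1")
        fed.join("orders", "datacenter-b/kvstore2")   # different administrative domain
        print(fed.publish("orders", {"id": 42, "status": "shipped"}))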

    Adaptive UxV Routing Based on Network Performance

    Robotics and the Internet of Things (IoT) are experiencing rapid growth. IoT nodes have been enhanced with many different capabilities, the most important being mobility, enabled by the equally significant growth of the UxV area (the "x" stands for the type of environment: "s" for sea, "a" for air, "g" for ground). An unmanned vehicle can serve an experimenter as a mobile sensor (e.g., for temperature or water pressure) and be deployed at any reachable location. Further characteristics that make unmanned vehicles a tempting choice as IoT nodes are their ability to make decisions without human interaction, their endurance, their re-programmability, and their capability for live multimedia streaming. These characteristics make them suitable for use cases such as area and border surveillance, security monitoring, and support of crisis management activities.
For instance, a UGV equipped with a high-definition camera and running an object recognition algorithm can serve the purpose of border surveillance. In this thesis, a framework that implements a network-quality-based decision-making process is developed. The framework adapts the information flow between the UxV and the Ground Control Station (GCS) based on network quality metrics (such as the packet error rate) and the principles of Optimal Stopping Theory (OST), with the goal of ensuring the optimal delivery of critical information from the UxV to the GCS and vice versa. If the network behaves optimally, there is no limitation on the information flow; if the network is saturated or overloaded, restriction rules are applied. The proposed model introduces two optimal stopping time mechanisms based on change detection theory and a discounted reward process. To support the implemented framework, an experimental environment was set up and a series of experiments was conducted, with very promising results. As the mobile IoT node, a TurtleBot was used, together with an XBOX Kinect sensor (RGB camera and depth sensor) and a Raspberry Pi running the Robot Operating System (ROS) and the Apache Kafka pub/sub system to bridge the communication between the TurtleBot and the GCS.
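    As a rough illustration of the change-detection part of such a decision process (the thesis combines Optimal Stopping Theory with change detection and a discounted reward; the simple CUSUM-style rule below is only an assumed stand-in), the sketch restricts the information flow once the observed packet error rate drifts persistently above a target:

        # Illustrative CUSUM-style change detector on the observed packet error rate:
        # when the cumulative drift above a target rate exceeds a threshold, the
        # framework would "stop" and switch the UxV-to-GCS flow to a restricted mode.
        # Parameters and names are assumptions; the thesis' OST mechanisms are richer.

        def detect_degradation(error_rates, target=0.02, slack=0.01, threshold=0.05):
            cusum = 0.0
            for t, rate in enumerate(error_rates):
                cusum = max(0.0, cusum + (rate - target - slack))
                if cusum > threshold:
                    return t          # time at which to restrict the information flow
            return None               # network behaved well, no restriction needed

        samples = [0.01, 0.02, 0.05, 0.09, 0.12]     # packet error rate per interval
        print(detect_degradation(samples))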

    The 11th Conference of PhD Students in Computer Science
