
    Distributing Real-Time Data From a Multi-Node Large-Scale Contact Center Using CORBA

    This thesis researches and evaluates the current technologies available for developing a system that propagates real-time data from a large-scale enterprise server to large numbers of registered clients on the network. The enterprise server being implemented is a Contact Centre Server, which can be a standalone system or part of a multi-nodal system. This paper makes three contributions to the study of scalable real-time notification services. First, it surveys the different technologies for distributed objects and their implementation in today's world of computing. Second, it explains how we addressed key design challenges faced when implementing a Notification Service for TAO, our CORBA-compliant real-time Object Request Broker (ORB), and shows how to integrate and configure CORBA features to provide real-time event communication. Finally, the paper analyzes the results of the implementation and compares them to existing technologies used for the propagation of real-time data.
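The event-channel pattern underlying such a notification service can be illustrated with a minimal sketch. This is plain Python rather than TAO's actual CORBA API, and all class and method names are illustrative only: a channel decouples the supplier from its consumers and pushes each event to every registered consumer.

```python
class EventChannel:
    """Decouples event suppliers from consumers, as in a notification service."""
    def __init__(self):
        self._consumers = []

    def connect_push_consumer(self, consumer):
        # Register a consumer; a real ORB would hand back a proxy object.
        self._consumers.append(consumer)

    def push(self, event):
        # Fan the event out to every registered consumer.
        for consumer in self._consumers:
            consumer.push(event)

class StatsConsumer:
    """A client interested in real-time contact-centre statistics."""
    def __init__(self):
        self.received = []

    def push(self, event):
        self.received.append(event)

channel = EventChannel()
consumer = StatsConsumer()
channel.connect_push_consumer(consumer)
channel.push({"agent": 42, "calls_waiting": 7})  # supplier side
```

In the push model sketched here the supplier drives delivery; scaling this to large numbers of clients is exactly the problem the thesis studies.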

    A Pervasive Computational Intelligence based Cognitive Security Co-design Framework for Hype-connected Embedded Industrial IoT

    The amplified connectivity of routine IoT entities can expose various attack trajectories for cybercriminals to execute malevolent attacks. These dangers are further amplified by the resource limitations and heterogeneity of low-budget IoT/IIoT nodes, which render existing centralized, fixed-perimeter security tools inappropriate for dynamic IoT settings. The emulation assessment presented here demonstrates the benefits of implementing a context-aware, co-design-oriented cognitive security method in integrated IIoT settings and delivers useful insights into its execution to drive future study. The innovative feature of our system is its capability to cope with irregular network connectivity as well as node limitations in terms of scarce computational ability, limited buffering (at the edge node), and finite energy. Based on real-time analytical data, the proposed scheme selects the most suitable end-to-end security option that fits an agreed set of node constraints. The paper achieves its goals by identifying gaps in the security specific to the node subclass that is vital to the system's operations.
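The constraint-driven selection step described above can be sketched as follows. All candidate names, ranks and cost figures below are hypothetical, invented purely to illustrate picking the strongest security option that fits a node's resource budget:

```python
# Hypothetical candidate security configurations:
# (name, security_rank, cpu_cost, buffer_cost, energy_cost)
CANDIDATES = [
    ("aes256-dtls", 3, 8, 6, 7),
    ("aes128-dtls", 2, 5, 4, 4),
    ("psk-lightweight", 1, 2, 1, 2),
]

def select_security(cpu_budget, buffer_budget, energy_budget):
    """Pick the strongest option whose costs fit the node's constraints."""
    feasible = [c for c in CANDIDATES
                if c[2] <= cpu_budget and c[3] <= buffer_budget and c[4] <= energy_budget]
    if not feasible:
        return None  # no end-to-end option fits this node
    return max(feasible, key=lambda c: c[1])[0]

choice = select_security(6, 5, 5)  # a mid-range edge node
```

For the budgets shown, the strongest option is infeasible and the scheme falls back to the next rank; a severely constrained node may have no feasible option at all, which is the gap the paper highlights.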

    A decentralized framework for cross administrative domain data sharing

    Federation of messaging and storage platforms located in remote datacenters is an essential functionality for sharing data among geographically distributed platforms. When systems are administered by the same owner, data replication reduces data-access latency by bringing data closer to applications, and enables fault tolerance for disaster recovery of an entire location. When storage platforms are administered by different owners, data replication across administrative domains is essential for enterprise application data integration. Contents and services managed by different software platforms need to be integrated to provide richer contents and services, and clients may need to share subsets of data in order to enable collaborative analysis and service integration. Platforms usually include proprietary federation functionalities and specific APIs to let external software and platforms access their internal data, but these techniques may not be applicable to all environments and networks due to security and technological restrictions. Moreover, the federation of dispersed nodes under a decentralized administration scheme is still a research issue. This thesis is a contribution along this research direction: it introduces and describes a framework, called "WideGroups", directed towards the creation and management of an automatic federation and integration of widely dispersed platform nodes. It is based on groups that exchange messages among distributed applications located in different remote datacenters. Groups are created and managed using client-side programmatic configuration, without touching servers. WideGroups enables the extension of software platform services to nodes belonging to different administrative domains in a wide-area network environment. It lets different nodes form ad-hoc overlay networks on the fly, depending on message destinations located in distinct administrative domains.
It supports multiple dynamic overlay networks based on message groups, dynamic discovery of nodes, and automatic setup of overlay networks among nodes with no server-side configuration. I designed and implemented platform connectors to integrate the framework as the federation module of Message Oriented Middleware and Key-Value Store platforms, which are among the most widespread paradigms supporting data sharing in distributed systems.
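The group abstraction at the heart of this design can be sketched in a few lines. This is a single-process toy, not WideGroups itself (which federates nodes across remote datacenters and administrative domains), and the names are illustrative: nodes join named groups purely through client-side calls, and a message sent to a group reaches every member.

```python
from collections import defaultdict

class GroupBus:
    """Client-side group messaging sketch: members join named groups and
    a message addressed to a group is delivered to every member's inbox."""
    def __init__(self):
        self._groups = defaultdict(list)  # group name -> member inboxes

    def join(self, group, inbox):
        # Purely client-side configuration: no server is touched.
        self._groups[group].append(inbox)

    def send(self, group, message):
        # In WideGroups the overlay network for a group forms on the fly.
        for inbox in self._groups[group]:
            inbox.append(message)

bus = GroupBus()
dc1, dc2 = [], []           # stand-ins for nodes in two datacenters
bus.join("orders", dc1)
bus.join("orders", dc2)
bus.send("orders", "replicate:42")
```

The point of the sketch is the API shape: membership and delivery are expressed entirely from the client side, which is what lets the real framework span administrative domains without server-side configuration.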

    The Data Acquisition in Smart Substation of China


    Modelling a Distributed Data Acquisition System

    This thesis discusses the formal modelling and verification of certain non-real-time aspects of correctness of a mission-critical distributed software system known as the ALICE Data Point Service (ADAPOS). The domain of this distributed system is data acquisition from a particle detector control system in experimental high-energy particle physics research. ADAPOS is part of the upgrade effort of A Large Ion Collider Experiment (ALICE) at the European Organisation for Nuclear Research (CERN), near Geneva in France/Switzerland, for the third run of the Large Hadron Collider (LHC). ADAPOS is based on the publicly available ALICE Data Point Processing (ADAPRO) C++14 framework and works within the free and open-source GNU/Linux ecosystem. The model checker Spin was chosen for modelling and verifying ADAPOS. The model focuses on the general specification of ADAPOS. It includes ADAPOS processes, a load-generator process, and rudimentary interpretations of the network protocols used between the processes. For experimenting with different interpretations of the underlying network protocols, and also for coping with the state-space explosion problem, eight variants of the model were developed and studied. Nine Linear Temporal Logic (LTL) properties were defined for all those variants. Large numbers of states were covered during model checking, even though the model turned out to have a reachable state space too large to exhaust fully. No counter-examples to safety properties were found, and a significant amount of evidence was obtained hinting that ADAPOS is safe. Liveness properties and implementation-level verification, among other possible research directions, remain open.
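The explicit-state exploration that a model checker like Spin performs for safety properties can be illustrated with a minimal sketch (plain Python over a toy transition system, not the ADAPOS model or Spin's Promela input): enumerate every reachable state and report the first one, if any, that violates an invariant.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Breadth-first exploration of the reachable state space, returning the
    first state that violates a safety invariant, or None if none is found."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state          # counter-example found
        for nxt in successors(state):
            if nxt not in seen:   # visit each state once (state-space explosion
                seen.add(nxt)     # is exactly the growth of this set)
                queue.append(nxt)
    return None                   # invariant holds on all reachable states

# Toy model: a counter that must never exceed its capacity of 5.
step = lambda n: [n + 1] if n < 5 else []
violation = check_invariant(0, step, lambda n: n <= 5)
```

"No counter-examples were found to safety properties" corresponds to this search returning `None`; the thesis's difficulty is that the real model's `seen` set is too large to enumerate exhaustively.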

    A Hierarchical Filtering-Based Monitoring Architecture for Large-scale Distributed Systems

    On-line monitoring is essential for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, large numbers of events are generated by system components during their execution and interaction with external objects (e.g. users or processes). These events must be monitored to accurately determine the run-time behavior of an LSD system and to obtain status information that is required for debugging and steering applications. However, the manner in which events are generated in an LSD system is complex and presents a number of challenges for an on-line monitoring system. Correlated events are generated concurrently and can occur at multiple locations distributed throughout the environment. This makes monitoring an intricate task and complicates the management decision process. Furthermore, the large number of entities and the geographical distribution inherent in LSD systems increase the difficulty of addressing traditional issues, such as performance bottlenecks, scalability, and application perturbation. This dissertation proposes a scalable, high-performance, dynamic, flexible and non-intrusive monitoring architecture for LSD systems. The resulting architecture detects and classifies interesting primitive and composite events and performs either a corrective or a steering action. When appropriate, information is disseminated to management applications, such as reactive control and debugging tools. The monitoring architecture employs a novel hierarchical event-filtering approach that distributes the monitoring load and limits event propagation. This significantly improves scalability and performance while minimizing monitoring intrusiveness.
The architecture provides dynamic monitoring capabilities through subscription policies that enable application developers to add, delete and modify monitoring demands on the fly; an adaptable configuration that accommodates environmental changes; and a programmable environment that facilitates the development of self-directed monitoring tasks. Increased flexibility is achieved through a declarative and comprehensive monitoring language, a simple code-instrumentation process, and automated monitoring administration. These elements substantially relieve the burden imposed by using on-line distributed monitoring systems. In addition, the monitoring system provides techniques to manage the trade-offs between various monitoring objectives. The proposed solution improves on related work by presenting a comprehensive architecture that considers the requirements and implied objectives of monitoring large-scale distributed systems. This architecture is referred to as the HiFi monitoring system. To demonstrate its effectiveness at debugging and steering LSD systems, the HiFi monitoring system has been implemented at Old Dominion University for monitoring the Interactive Remote Instruction (IRI) system. The results from this case study validate that the HiFi system achieves the objectives outlined in this thesis.
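The core idea of hierarchical event filtering, dropping uninteresting events as close to their source as possible so that only matching events propagate upward, can be sketched as follows (an illustration of the idea only, not HiFi's implementation; the predicates are hypothetical):

```python
class FilterNode:
    """One level of a hierarchical filtering tree: events matching the local
    filter are forwarded to the parent, the rest are dropped at this level,
    which limits event propagation and distributes the monitoring load."""
    def __init__(self, predicate, parent=None):
        self.predicate = predicate
        self.parent = parent
        self.delivered = []       # events reaching this level's consumers

    def observe(self, event):
        if not self.predicate(event):
            return                      # filtered out close to the source
        if self.parent is not None:
            self.parent.observe(event)  # propagate one level up
        else:
            self.delivered.append(event)

root = FilterNode(lambda e: e["severity"] >= 2)            # management level
leaf = FilterNode(lambda e: e["source"] == "nodeA", root)  # local level

leaf.observe({"source": "nodeA", "severity": 3})  # passes both filters
leaf.observe({"source": "nodeA", "severity": 1})  # dropped at the root
leaf.observe({"source": "nodeB", "severity": 5})  # dropped at the leaf
```

Only the first event reaches the management level; the other two never cross the network beyond the level at which they fail a filter, which is the source of the scalability gain the dissertation claims.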

    Flexibility in distributed systems: a holistic perspective

    Doctoral thesis in Informatics Engineering. In distributed systems the communication paradigm used for inter-task interaction is message exchange. Several approaches have been proposed that allow the specification of the data flow between tasks, but in real-time systems a more accurate definition of these data flows is mandatory. Namely, it must be possible to specify the required task and message parameters and to derive the unspecified ones. Such an approach could allow automatic scheduling and dispatching of tasks and messages or, at least, could reduce the number of iterations during the system's design. Data streams offer a possible approach to holistic scheduling and dispatching in real-time distributed systems, where different types of analysis that correlate the various parameters are performed. The results can be used to define the level of buffering that is required at each node of the distributed system. In FTT-based distributed systems it is possible to implement centralized holistic scheduling that takes into consideration the interdependences between producer/consumer tasks and messages. A set of constraints that guarantees the system's feasibility can then be derived from task and message parameters such as periods and execution/transmission times.
In this thesis two perspectives are studied: a net-centric perspective, in which the scheduling of messages is done prior to the scheduling of tasks, and a node-centric perspective. A simple mechanism to dispatch tasks and messages for CAN-based distributed systems is also proposed; it extends the one that exists in FTT for the dispatching of messages. The study of the implementation of this mechanism in the nodes gave rise to the specification of an operating-system kernel, designed for minimal overhead so that it can be included in nodes with low processing power. Finally, a simulator, SimHol, is presented to predict the timeliness of message transmission and task execution in a distributed system. The inputs to the simulator are the so-called data streams, which include the producer tasks, the corresponding messages, and the tasks that use the transmitted data. Using worst-case execution and transmission times, the simulator verifies whether deadlines are met at every node of the system and on the network.
Escola Superior de Tecnologia de Castelo Branco; PRODEP III, eixo 3, medida 5, acção 5.3; FCT; SAPIENS99 - POSI/SRI/34244/99; IEETA da Universidade de Aveiro; ARTIST - European Union Advanced Real Time System
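A feasibility check derived from periods and execution/transmission times can be sketched with a simple utilization test (a necessary condition only; the thesis's holistic analysis also correlates producer/consumer precedence and per-node buffering, and all numbers here are hypothetical):

```python
def feasible(tasks, bus_messages):
    """Necessary schedulability condition: on each node and on the shared
    bus, total utilization (cost / period) must not exceed 1.
    tasks: {node: [(wcet, period), ...]}
    bus_messages: [(transmission_time, period), ...]"""
    for node, params in tasks.items():
        if sum(c / t for c, t in params) > 1.0:
            return False          # this node is overloaded
    # The bus is a shared resource: all message streams compete for it.
    return sum(c / t for c, t in bus_messages) <= 1.0

ok = feasible(
    {"n1": [(2, 10), (3, 20)],    # two tasks on node n1
     "n2": [(1, 5)]},             # one task on node n2
    [(1, 10), (2, 20)],           # two periodic message streams on the bus
)
```

In a net-centric analysis the bus check would be performed first and its results (message response times) fed into the per-node task analysis; the node-centric perspective inverts that order.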

    Towards quality programming in the automated testing of distributed applications

    PhD Thesis. Software testing is a very time-consuming and tedious activity and accounts for over 25% of the cost of software development. In addition to its high cost, manual testing is unpopular and often inconsistently executed. Software Testing Environments (STEs) overcome the deficiencies of manual testing by automating the test process and integrating testing tools to support a wide range of test capabilities. Most prior work on testing addresses single-threaded applications; this thesis is a contribution to the testing of distributed applications, which has not been well explored. To address two crucial issues in testing, when to stop testing and how good the software is after testing, a statistics-based integrated test environment is presented, extending the testing concept of Quality Programming to distributed applications. It provides automatic support for test execution by the Test Driver; test development by the SMAD Tree Editor and the Test Data Generator; test failure analysis by the Test Results Validator and the Test Paths Tracer; test measurement by the Quality Analyst; test management by the Test Manager; and test planning by the Modeller. These tools are integrated around a public, shared data model describing the data entities and relationships manipulable by these tools. The environment enables early entry of the test process into the life cycle thanks to the definition of quality planning and message-flow routings during modelling. Once modelling and requirements specification are complete, the test process and the software design and implementation can proceed concurrently. A simple banking application written using Java Remote Method Invocation (RMI) and Java DataBase Connectivity (JDBC) demonstrates how an application fits into the integrated test environment.
The concept of automated test execution through mobile agents across multiple platforms is also illustrated on this 3-tier client/server application.
The National Science Council, Taiwan; The Ministry of National Defense, Taiwan

    Advancing Protocol Diversity in Network Security Monitoring

    With information technology entering new fields and levels of deployment, e.g., in areas of energy, mobility, and production, network security monitoring needs to be able to cope with those environments and their evolution. However, state-of-the-art Network Security Monitors (NSMs) typically lack the necessary flexibility to handle the diversity of the packet-oriented layers below the abstraction of TCP/IP connections. In this work, we advance the software architecture of a network security monitor to facilitate the flexible integration of lower-layer protocol dissectors while maintaining required performance levels. We proceed in three steps: First, we identify the challenges for modular packet-level analysis, present a refined NSM architecture to address them, and specify requirements for its implementation. Second, we evaluate the performance of data structures to be used for protocol dispatching, implement the proposed design in the popular open-source NSM Zeek, and assess its impact on monitor performance. Our experiments show that hash-based data structures for dispatching introduce a significant overhead, while array-based approaches qualify for practical application. Finally, we demonstrate the benefits of the proposed architecture and implementation by migrating Zeek's previously hard-coded stack of link- and internet-layer protocols to the new interface. Furthermore, we implement dissectors for non-IP-based industrial communication protocols and leverage them to realize attack-detection strategies from recent applied research. We integrate the proposed architecture into the Zeek open-source project and publish the implementation to support the scientific community as well as practitioners, promoting the transfer of research into practice.
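The array-based dispatching favoured by the experiments can be sketched as follows (an illustration of the design choice only, not Zeek's actual internal API): because lower-layer protocol identifiers live in small fixed ranges (e.g., 8-bit IP protocol numbers), they can index directly into a table, avoiding per-packet hashing.

```python
class Dispatcher:
    """Array-based protocol dispatch: the next-layer protocol identifier
    indexes directly into a preallocated table of analyzers."""
    def __init__(self, id_space=256):
        # One slot per possible identifier; unregistered slots stay None.
        self._table = [None] * id_space

    def register(self, proto_id, analyzer):
        self._table[proto_id] = analyzer

    def dispatch(self, proto_id, packet):
        # A single array index on the hot per-packet path, no hashing.
        analyzer = self._table[proto_id]
        return analyzer(packet) if analyzer else None

d = Dispatcher()
d.register(6, lambda pkt: ("tcp", pkt))    # IP protocol number 6 = TCP
d.register(17, lambda pkt: ("udp", pkt))   # IP protocol number 17 = UDP
result = d.dispatch(6, b"payload")
```

The trade-off is memory for speed: the table reserves a slot for every identifier in the space, which is cheap for 8- or 16-bit identifier spaces but would favour a hash map for sparse, wide identifier spaces.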