208 research outputs found

    A mid-level framework for independent network services configuration management

    Doctoral thesis of the Programa Doutoral em Telecomunicações. Decades of evolution in communication networks have resulted in a high diversity of solutions, not only in terms of network elements but also in terms of the way they are managed. From a management perspective, having heterogeneous elements was a feasible scenario over the last decades, when management activities were mostly considered additional features. However, the most recent advances in network technology, which include proposals for the future Internet as well as requirements for automation, scale and efficiency, demand new management methods, and integrated network management has become an essential issue. Most recent solutions aiming to integrate the management of heterogeneous network elements rely on semantic data translations to obtain a common representation between heterogeneous managed elements, thus enabling their management integration. However, semantic translations are very complex to achieve effectively, requiring extensive data processing to find equivalent representations, besides requiring the administrator's intervention to create and validate conversions, since contemporary data models lack a formal semantic representation. From these constraints a research question arose: is it possible to integrate the configuration management of heterogeneous network elements without resorting to management translations? In this thesis the author uses a network service abstraction to propose a framework for network service management, which comprises the two essential management operations: monitoring and configuring. The thesis focuses on describing and experimenting with the subsystem responsible for network service configuration management, named Mid-level Network Service Configuration (MiNSC), which is the thesis's most important contribution. 
The MiNSC subsystem proposes a new configuration management interface for integrated network service management, based on standard technologies, that includes a universal information model implemented on unique data models. This avoids management translations while providing advanced management functionalities otherwise only available in more advanced research projects, including scalability and resilience improvement methods. Such functionalities are provided by a two-layer distributed architecture and by over-provisioning of network elements. To demonstrate MiNSC's management capabilities, a group of experiments was conducted, including configuration deployment, instance migration and expansion, using a DNS management system as a test bed. Since MiNSC represents a new architectural approach, with no direct reference for a quantitative evaluation, a theoretical analysis was conducted to evaluate it against important integrated network management perspectives. It was concluded that there is a tendency to apply management translations, as they are the most straightforward solution when integrating the management of heterogeneous management interfaces and/or data models. However, management translations are very complex to realize, and their effectiveness is questionable for highly heterogeneous environments. The implementation of MiNSC's standard configuration management interface provides a simplified perspective that, by using universal configurations, removes translations from the management system. Its distributed architecture uses independent/universal configurations and over-provisioning of network elements to improve the service's resilience and scalability, and enables more efficient resource management by dynamically allocating resources as needed.
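As a rough illustration of the universal-configuration idea described above (this is not MiNSC's actual data model; the record and backend names are hypothetical), a single element-independent record can be rendered natively by each heterogeneous backend, so no backend-to-backend semantic translation is ever needed:

```python
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class DnsRecord:
    """Universal, element-independent representation of a DNS record."""
    name: str
    rtype: str
    value: str
    ttl: int = 3600

class Bind9Backend:
    """Maps the universal model to a BIND-style zone-file line."""
    def render(self, r: DnsRecord) -> str:
        return f"{r.name} {r.ttl} IN {r.rtype} {r.value}"

class RestApiBackend:
    """Maps the same universal model to a JSON payload for an API-driven server."""
    def render(self, r: DnsRecord) -> str:
        return json.dumps({"name": r.name, "type": r.rtype,
                           "data": r.value, "ttl": r.ttl})

# One configuration, two heterogeneous renderings; translations between
# the two backends are never required.
record = DnsRecord("www.example.org.", "A", "192.0.2.10")
for backend in (Bind9Backend(), RestApiBackend()):
    print(backend.render(record))
```

Each backend owns only its mapping from the universal model to its native format, which is the property that lets the management layer stay translation-free.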

    A Hierarchical Filtering-Based Monitoring Architecture for Large-scale Distributed Systems

    On-line monitoring is essential for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, large numbers of events are generated by system components during their execution and interaction with external objects (e.g. users or processes). These events must be monitored to accurately determine the run-time behavior of an LSD system and to obtain status information that is required for debugging and steering applications. However, the manner in which events are generated in an LSD system is complex and presents a number of challenges for an on-line monitoring system. Correlated events are generated concurrently and can occur at multiple locations distributed throughout the environment. This makes monitoring an intricate task and complicates the management decision process. Furthermore, the large number of entities and the geographical distribution inherent in LSD systems increase the difficulty of addressing traditional issues, such as performance bottlenecks, scalability, and application perturbation. This dissertation proposes a scalable, high-performance, dynamic, flexible and non-intrusive monitoring architecture for LSD systems. The resulting architecture detects and classifies interesting primitive and composite events and performs either a corrective or a steering action. When appropriate, information is disseminated to management applications, such as reactive control and debugging tools. The monitoring architecture employs a novel hierarchical event filtering approach that distributes the monitoring load and limits event propagation. This significantly improves scalability and performance while minimizing monitoring intrusiveness. 
The architecture provides dynamic monitoring capabilities through: subscription policies that enable application developers to add, delete and modify monitoring demands on the fly; an adaptable configuration that accommodates environmental changes; and a programmable environment that facilitates the development of self-directed monitoring tasks. Increased flexibility is achieved through a declarative and comprehensive monitoring language, a simple code instrumentation process, and automated monitoring administration. These elements substantially relieve the burden imposed by using on-line distributed monitoring systems. In addition, the monitoring system provides techniques to manage the trade-offs between various monitoring objectives. The proposed solution offers improvements over related work by presenting a comprehensive architecture that considers the requirements and implied objectives for monitoring large-scale distributed systems. This architecture is referred to as the HiFi monitoring system. To demonstrate effectiveness at debugging and steering LSD systems, the HiFi monitoring system has been implemented at Old Dominion University for monitoring the Interactive Remote Instruction (IRI) system. The results from this case study validate that the HiFi system achieves the objectives outlined in this thesis.
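The hierarchical filtering approach can be sketched in miniature as follows; the agent classes, event fields, and thresholds are illustrative assumptions, not HiFi's actual interfaces. Local agents discard uninteresting primitive events near their source, and a higher-level agent correlates the survivors into a composite event, so only a fraction of raw events ever crosses the hierarchy:

```python
from typing import Callable

Event = dict  # a primitive event is a plain attribute dictionary here

class LocalAgent:
    """Filters primitive events at the monitored node (hypothetical API)."""
    def __init__(self, predicate: Callable[[Event], bool]):
        self.predicate = predicate

    def filter(self, events):
        # Drop events locally; only matches propagate upward.
        return [e for e in events if self.predicate(e)]

class DomainAgent:
    """Correlates filtered events from several local agents."""
    def composite(self, streams, needed: set) -> bool:
        seen = {e["type"] for stream in streams for e in stream}
        return needed <= seen  # composite event fires if all parts were seen

cpu_agent = LocalAgent(lambda e: e["type"] == "cpu_high" and e["load"] > 0.9)
net_agent = LocalAgent(lambda e: e["type"] == "pkt_loss" and e["rate"] > 0.05)

node_a = cpu_agent.filter([{"type": "cpu_high", "load": 0.95},
                           {"type": "cpu_high", "load": 0.40}])
node_b = net_agent.filter([{"type": "pkt_loss", "rate": 0.08}])

# Two events cross the hierarchy instead of three; the composite result
# could then trigger a corrective or steering action.
alarm = DomainAgent().composite([node_a, node_b], {"cpu_high", "pkt_loss"})
print(alarm)
```

The point of the sketch is the load distribution: filtering happens where events are produced, which is what limits propagation and intrusiveness in the full architecture.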

    Supporting distributed computation over wide area gigabit networks

    The advent of high bandwidth fibre optic links that may be used over very large distances has led to much research and development in the field of wide area gigabit networking. One problem that needs to be addressed is how loosely coupled distributed systems may be built over these links, allowing many computers worldwide to take part in complex calculations in order to solve "Grand Challenge" problems. The research conducted as part of this PhD has looked at the practicality of implementing a communication mechanism proposed by Craig Partridge called Late-binding Remote Procedure Calls (LbRPC). LbRPC is intended to export both code and data over the network to remote machines for evaluation, as opposed to traditional RPC mechanisms that only send parameters to pre-existing remote procedures. The ability to send code as well as data means that LbRPC requests can overcome one of the biggest problems in Wide Area Distributed Computer Systems (WADCS): the fixed latency due to the speed of light. As machines get faster, the fixed multi-millisecond round trip delay equates to ever increasing numbers of CPU cycles. For a WADCS to be efficient, programs should minimise the number of network transits they incur. By allowing the application programmer to export arbitrary code to the remote machine, this may be achieved. This research has looked at the feasibility of supporting secure exportation of arbitrary code and data in heterogeneous, loosely coupled, distributed computing environments. It has investigated techniques for making placement decisions for the code in cases where there are a large number of widely dispersed remote servers that could be used. The latter has resulted in the development of a novel prototype LbRPC using multicast IP for implicit placement and a sequenced, multi-packet saturation multicast transport protocol. 
These prototypes show that it is possible to export code and data to multiple remote hosts, thereby removing the need to perform complex and error-prone explicit process placement decisions.
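The core late-binding idea, shipping code together with data so a multi-step computation costs one round trip instead of many, can be sketched as below. This is an illustrative toy, not the thesis's prototype: the "network" is a local function call, the payload format is invented, and the code is trusted source text, whereas a real LbRPC would marshal code portably and sandbox its evaluation:

```python
def remote_server(payload: dict):
    """Evaluates exported code against exported data in a single visit."""
    scope: dict = {}
    exec(payload["code"], scope)            # bind the procedure late, on arrival
    return scope[payload["entry"]](payload["data"])

# A classic RPC would pay one round trip per call; LbRPC ships the whole
# loop, so the fixed speed-of-light latency is paid exactly once.
request = {
    "entry": "task",
    "data": [3, 1, 4, 1, 5],
    "code": "def task(xs):\n    return sum(x * x for x in xs)\n",
}
print(remote_server(request))  # 52
```

The saving grows with machine speed: each avoided transit is a fixed multi-millisecond delay that would otherwise translate into ever more wasted CPU cycles.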

    Internet Traffic Engineering : An Artificial Intelligence Approach

    Master's dissertation in Computer Science (Ciência de Computadores), presented to the Faculdade de Ciências da Universidade do Porto.

    Application of policy-based techniques to process-oriented IT Service Management


    Intrusion-Tolerant Middleware: the MAFTIA approach

    The pervasive interconnection of systems all over the world has given computer services a significant socio-economic value, which can be affected both by accidental faults and by malicious activity. It would be appealing to address both problems in a seamless manner, through a common approach to security and dependability. This is the proposal of intrusion tolerance, where it is assumed that systems remain to some extent faulty and/or vulnerable and subject to attacks that can be successful, the idea being to ensure that the overall system nevertheless remains secure and operational. In this paper, we report some of the advances made in the European project MAFTIA, namely in what concerns a basis of concepts unifying security and dependability, and a modular and versatile architecture featuring several intrusion-tolerant middleware building blocks. We describe new architectural constructs and algorithmic strategies, such as: the use of trusted components at several levels of abstraction; new randomization techniques; new replica control and access control algorithms. The paper concludes by exemplifying the construction of intrusion-tolerant applications on the MAFTIA middleware, through a transaction support service.
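A minimal way to see how a system can stay correct while partly compromised is the generic replication-with-voting construction (this is a textbook illustration of intrusion masking, not MAFTIA's specific replica control or access control algorithms): with 2f + 1 replicas, a majority vote masks up to f faulty or intruded replies.

```python
from collections import Counter

def vote(replies):
    """Returns the majority reply, masking a faulty or malicious minority."""
    value, count = Counter(replies).most_common(1)[0]
    if count <= len(replies) // 2:
        raise RuntimeError("no majority: too many faulty replicas")
    return value

# f = 1 intruded replica out of 2f + 1 = 3 cannot alter the outcome.
print(vote(["commit", "commit", "abort"]))  # commit
```

The intrusion-tolerance stance is visible here: the compromised replica is not detected or removed, yet the service's output remains correct as long as the fault assumption (at most f intrusions) holds.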

    Topology-Aware Vulnerability Mitigation Worms

    In very dynamic Information and Communication Technology (ICT) infrastructures, with rapidly growing applications, malicious intrusions have become very sophisticated, effective, and fast. Industries have suffered billions of US dollars in losses due to malicious worm outbreaks alone. Several calls have been issued by governments and industries to the research community to propose innovative solutions that would help prevent malicious breaches, especially with enterprise networks becoming more complex, large, and volatile. In this thesis we approach self-replicating, self-propagating, and self-contained network programs (i.e. worms) as vulnerability mitigation mechanisms to eliminate threats to networks. These programs provide distinctive features, including: short distance communication with network nodes, intermittent network node vulnerability probing, and network topology discovery. Such features become necessary, especially for networks with frequent node association and disassociation, dynamically connected links, and where hosts concurrently run multiple operating systems. We propose -- to the best of our knowledge -- the first computer worm that utilizes the second layer of the OSI model (the Data Link Layer) as its main propagation medium. We name our defensive worm Seawave: a controlled, interactive, self-replicating, self-propagating, and self-contained vulnerability mitigation mechanism. We develop, experiment with, and evaluate Seawave under different simulation environments that mimic, to a large extent, enterprise networks. We also propose a threat analysis model to help identify weaknesses, strengths, and threats within and towards our vulnerability mitigation mechanism, followed by a mathematical propagation model to observe Seawave's performance under large-scale enterprise networks. 
We also propose, in preliminary form, another vulnerability mitigation worm that utilizes the Link Layer Discovery Protocol (LLDP) for its propagation, along with an evaluation of its performance. In addition, we describe a preliminary taxonomy that rediscovers the relationship between different types of self-replicating programs (i.e. viruses, worms, and botnets) and redefines these programs based on their properties. The taxonomy provides a classification that can be easily applied within industry and the research community and paves the way for a promising research direction that considers the defensive side of self-replicating programs.
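Mathematical propagation models of the kind mentioned above are commonly built on epidemic dynamics. The sketch below is a textbook discrete-time logistic spread model, not Seawave's actual model (which the thesis develops for Layer-2 enterprise topologies); the contact rate `beta` and population size are hypothetical:

```python
def spread(n_hosts: int, infected0: int, beta: float, steps: int):
    """Returns the infected count per step under discrete logistic growth."""
    i = float(infected0)
    history = [i]
    for _ in range(steps):
        # New infections are proportional to contacts between infected
        # hosts and the still-susceptible fraction of the population.
        i = min(i + beta * i * (n_hosts - i) / n_hosts, n_hosts)
        history.append(i)
    return history

# Slow start, explosive middle phase, saturation near the full population:
# the classic S-curve used to reason about worm (and counter-worm) speed.
curve = spread(n_hosts=10_000, infected0=10, beta=0.8, steps=30)
print(round(curve[-1]))
```

For a defensive worm, the same curve reads as coverage over time: the steeper the middle phase, the faster vulnerable hosts get patched ahead of a malicious outbreak.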