
    Formal Analysis of Network Protocols

    Today’s Internet is becoming increasingly complex and fragile. Current performance-centric techniques for network analysis and runtime verification have become inadequate for the development of robust networks. To cope with these challenges, there is growing interest in using formal analysis techniques to reason about network protocol correctness throughout the network development cycle. This talk surveys recent work on the use of formal analysis techniques to aid the design, implementation, and analysis of network protocols. We first present a general framework that covers a majority of existing formal analysis techniques on both the control and routing planes of networks, and present a classification and taxonomy of techniques according to the proposed framework. Using four representative case studies (Metarouting, rcc, axiomatic formulation, and Alloy-based analysis), we discuss various aspects of formal network analysis, including formal specification, formal verification, and system validation. Their strengths and limitations are evaluated and compared in detail.

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, so that every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design---especially the software defined networking (SDN) paradigm---offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking. (30 pages; submitted to IEEE Communications Surveys and Tutorials.)

    Fault diagnosis for IP-based network with real-time conditions

    BACKGROUND: Fault diagnosis techniques have been based on many paradigms, which derive from diverse areas and have different purposes: obtaining a representation model of the network for fault localization, selecting optimal probe sets for monitoring network devices, reducing fault detection time, and detecting faulty components in the network. Although there are several solutions for diagnosing network faults, there are still challenges to be faced: a fault diagnosis solution needs to be always available and able to process data in a timely manner, because stale results inhibit the quality and speed of informed decision-making. Also, there is no non-invasive technique that continuously diagnoses network symptoms without leaving the system vulnerable to failures, nor one that is resilient to the network's dynamic changes, which can cause new failures with different symptoms.
    AIMS: This thesis aims to propose a model for the continuous and timely diagnosis of IP-based network faults, independent of the network structure and based on data analytics techniques.
    METHODS: This research's point of departure was the hypothesis of a fault propagation phenomenon that allows the observation of failure symptoms at a higher network level than the fault origin. Thus, for the model's construction, monitoring data was collected from an extensive campus network in which impact link failures were induced at different instants of time and with different durations. These data correspond to parameters widely used in the actual management of a network. The collected data allowed us to understand the faults' behavior and how they manifest at a peripheral level. Based on this understanding and a data analytics process, the first three modules of our model, named PALADIN, were proposed (Identify, Collection and Structuring); they define the peripheral data collection and the pre-processing necessary to obtain a description of the network's state at a given moment. These modules give the model the ability to structure the data while accounting for the delays of the multiple responses that the network delivers to a single monitoring probe and for the multiple network interfaces that a peripheral device may have. A structured data stream is thus obtained, ready to be analyzed. For this analysis, it was necessary to implement an incremental learning framework that respects the dynamic nature of networks. It comprises three elements: an incremental learning algorithm, a data rebalancing strategy, and a concept-drift detector. This framework is the fourth module of the PALADIN model, named Diagnosis. In order to evaluate the PALADIN model, the Diagnosis module was implemented with 25 different incremental algorithms, ADWIN as the concept-drift detector, and SMOTE (adapted to the streaming scenario) as the rebalancing strategy. In addition, a dataset was built using the first modules of the PALADIN model (the SOFI dataset); these data form the incoming data stream of the Diagnosis module and were used to evaluate its performance. The PALADIN Diagnosis module performs online classification of network failures, so it is a learning model that must be evaluated in a streaming context. Prequential evaluation is the most widely used method for this task, so we adopt it to evaluate the model's performance over time through several stream evaluation metrics.
    RESULTS: This research first evidences the phenomenon of impact fault propagation, making it possible to detect fault symptoms at the peripheral level of a monitored network, which translates into non-invasive monitoring of the network. Second, the PALADIN model is the major contribution in the fault detection context because it covers two aspects: an online learning model to continuously process the network symptoms and detect internal failures, and the concept-drift detection and data-stream rebalancing components, which make resilience to dynamic network changes possible. Third, it is well known that the number of real-world datasets available for imbalanced stream classification is still too small, and that number is further reduced in the networking context. The SOFI dataset obtained with the first modules of the PALADIN model contributes to that number and encourages work on imbalanced data streams and on network fault diagnosis.
    CONCLUSIONS: The proposed model contains the necessary elements for the continuous and timely diagnosis of IP-based network faults; it introduces the idea of periodically monitoring peripheral network elements and uses data analytics techniques to process the collected data. Based on the analysis, processing, and classification of peripherally collected data, it can be concluded that PALADIN achieves its objective. The results indicate that peripheral monitoring allows faults in the internal network to be diagnosed; in addition, the diagnosis process needs an incremental learning process, concept-drift detection elements, and a rebalancing strategy. The results of the experiments showed that PALADIN makes it possible to learn from the network's manifestations and diagnose internal network failures. The latter was verified with 25 different incremental algorithms, ADWIN as the concept-drift detector, and SMOTE (adapted to the streaming scenario) as the rebalancing strategy. This research clearly illustrates that it is unnecessary to monitor all the internal network elements to detect a network's failures; instead, it is enough to choose the peripheral elements to be monitored. Furthermore, with proper processing of the collected status and traffic descriptors, it is possible to learn from the arriving data using incremental learning in cooperation with data rebalancing and concept-drift approaches. This proposal continuously diagnoses the network symptoms without leaving the system vulnerable to failures, while being resilient to the network's dynamic changes.
    Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Committee: Chair: José Manuel Molina López; Secretary: Juan Carlos Dueñas López; Member: Juan Manuel Corchado Rodríguez.
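    The Diagnosis module described above combines an incremental classifier, ADWIN for concept-drift detection, a streaming-adapted SMOTE rebalancer, and prequential (test-then-train) evaluation. The sketch below illustrates that loop using the river library as one possible vehicle; it is a minimal illustration only, assuming a recent river release, omitting the streaming SMOTE step, and using made-up feature names rather than the PALADIN code.

```python
# Minimal prequential (test-then-train) loop in the spirit of the Diagnosis
# module: an incremental classifier plus ADWIN drift detection on the error
# signal. Illustrative sketch only; the thesis's streaming-adapted SMOTE
# rebalancer is omitted here for brevity.
from river import drift, metrics, tree

model = tree.HoeffdingTreeClassifier()   # one of many possible incremental learners
detector = drift.ADWIN()                 # concept-drift detector
metric = metrics.BalancedAccuracy()      # suited to imbalanced fault/no-fault labels

def diagnose(stream):
    """stream yields (features, label) pairs, e.g. ({"rtt_ms": 41.2, "loss_rate": 0.02}, "link_down")."""
    for x, y in stream:
        y_pred = model.predict_one(x)                 # test ...
        if y_pred is not None:
            metric.update(y, y_pred)
            detector.update(0 if y_pred == y else 1)  # feed the error indicator
            if detector.drift_detected:
                print("ADWIN signalled concept drift; the model keeps adapting")
        model.learn_one(x, y)                         # ... then train
    return metric
```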

    Unifying Static And Runtime Analysis In Declarative Distributed Systems

    Today’s distributed systems are becoming increasingly complex, due to the ever-growing number and variety of network devices. This complexity makes it hard for system administrators to configure distributed systems correctly, which motivates the need for effective analytic tools that can help ensure their correctness. One challenge in ensuring correctness is that no single solution works for all properties. Some properties, such as security properties, are so critical that they demand pre-deployment verification (i.e., static analysis) which, though time-consuming, explores the whole execution space. However, due to the potential problem of state explosion, static verification of all properties is neither practical nor necessary. Violation of non-critical properties, such as correct routing with shortest paths, is tolerable during execution and can be diagnosed after errors occur (i.e., runtime analysis), a more lightweight approach than verification. This dissertation presents STRANDS, a declarative framework that enables users to perform both pre-deployment verification and post-deployment diagnostics on top of declarative specifications of distributed systems. STRANDS uses Network Datalog (NDlog), a distributed variant of the Datalog query language, to specify network protocols and services. STRANDS has two components: a system verifier and a system debugger. The verifier allows the user to rigorously prove safety properties of network protocols and services, using either the program logic or the symbolic execution we develop for NDlog programs. The debugger, on the other hand, facilitates diagnosis of system errors by allowing for querying of the structured history of network execution (i.e., network provenance) that is maintained in a storage-efficient manner. We show the effectiveness of STRANDS by evaluating both the verifier and the debugger. Using the verifier, we prove path authenticity of secure routing protocols, and verify a number of safety properties in software-defined networking (SDN). Also, we demonstrate that our provenance maintenance algorithm achieves significant storage reduction, while incurring negligible network overhead.
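    NDlog expresses protocols as recursive rules over distributed tables, for example deriving multi-hop reachability from link facts. As a language-neutral illustration of that declarative style (not the STRANDS or NDlog implementation itself), the sketch below evaluates two path rules to a fixpoint over a toy link table in Python.

```python
# Illustration of the declarative rule style used by NDlog-like languages:
#   path(S, D) :- link(S, D).
#   path(S, D) :- link(S, Z), path(Z, D).
# Evaluated here by naive bottom-up iteration to a fixpoint (toy example only).
links = {("a", "b"), ("b", "c"), ("c", "d")}

def reachable(links):
    paths = set(links)                  # rule 1: every link is a path
    while True:
        new = {(s, d) for (s, z) in links for (z2, d) in paths if z == z2}
        if new <= paths:                # fixpoint reached: no new facts derived
            return paths
        paths |= new                    # rule 2: extend paths through neighbours

print(sorted(reachable(links)))         # [('a', 'b'), ('a', 'c'), ('a', 'd'), ...]
```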

    Data center resilience assessment : storage, networking and security.

    Data centers (DC) are the core of the national cyber infrastructure. With the incredible growth of critical data volumes in financial institutions, government organizations, and global companies, data centers are becoming larger and more distributed, posing more challenges for operational continuity in the presence of experienced cyber attackers and occasional natural disasters. The main objective of this research work is to present a new methodology for data center resilience assessment, which consists of:
    • Defining data center resilience requirements.
    • Devising a high-level metric for data center resilience.
    • Designing and developing a tool to validate the metric.
    Since computer networks are an important component of the data center architecture, this research work was extended to investigate opportunities for enhancing computer network resilience in the areas of routing protocols, redundancy, and server load, in order to minimize network downtime and increase the time for which attacks can be resisted. Data center resilience assessment is a complex process, as it involves several aspects such as policies for emergencies, recovery plans, variation in data center operational roles, hosted/processed data types, and data center architectures. In this dissertation, however, storage, networking, and security are emphasized. The need for resilience assessment emerged due to the gap in existing reliability, availability, and serviceability (RAS) measures. Resilience as an evaluation metric leads to a better proactive perspective in system design and management. The proposed Data Center Resilience Assessment Portal (DC-RAP) is designed to easily integrate various operational scenarios. DC-RAP features a user-friendly interface to assess resilience in terms of performance analysis and speed of recovery by collecting the following information: time to detect attacks, time to resist, time to fail, and recovery time. Several sets of experiments were performed. Results obtained from investigating the impact of routing protocols and server load balancing algorithms on network resilience showed that using a particular routing protocol or server load balancing algorithm can enhance the network resilience level by minimizing downtime and ensuring speedy recovery. Experimental results from investigating the use of social network analysis (SNA) for identifying important routers in a computer network showed that SNA was successful in identifying them; this list of important routers can be used to add redundancy for those routers and so ensure a high level of resilience. Finally, experimental results from testing and validating the data center resilience assessment methodology using DC-RAP showed the ability of the methodology to quantify data center resilience in terms of providing steady performance, minimal recovery time, and maximum attack-resistance time. The main contributions of this work can be summarized as follows:
    • A methodology for evaluating data center resilience has been developed.
    • A Data Center Resilience Assessment Portal (DC-RAP) has been implemented for resilience evaluations.
    • The use of social network analysis to improve computer network resilience has been investigated.
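    The abstract lists the four timings DC-RAP collects (time to detect attacks, time to resist, time to fail, and recovery time) but not the scoring formula. The sketch below is a purely hypothetical illustration of how such timings could be folded into a single resilience indicator; the ratio used is an assumption, not the dissertation's metric.

```python
# Hypothetical aggregation of the four DC-RAP timings into one indicator.
# The formula (fraction of the incident window spent resisting/operating,
# penalised by slow detection and recovery) is an illustrative assumption only.
from dataclasses import dataclass

@dataclass
class IncidentTimings:
    time_to_detect: float   # seconds until the attack/failure is detected
    time_to_resist: float   # seconds the system keeps serving under attack
    time_to_fail: float     # seconds until service is actually lost
    recovery_time: float    # seconds to restore normal operation

def resilience_score(t: IncidentTimings) -> float:
    """Return a value in (0, 1]; higher means more resilient (hypothetical)."""
    uptime_under_attack = t.time_to_resist + t.time_to_fail
    total = t.time_to_detect + uptime_under_attack + t.recovery_time
    return uptime_under_attack / total if total > 0 else 1.0

print(resilience_score(IncidentTimings(30, 600, 120, 300)))  # ~0.686
```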

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    Fault-tolerant future Internet architectures


    Abstracting network policies

    Almost every human activity in recent years relies either directly or indirectly on the smooth and efficient operation of the Internet. The Internet is an interconnection of multiple autonomous networks that operate based on policies agreed upon between various institutions across the world. The network policies guiding an institution’s computer infrastructure, both internally (such as firewall relationships) and externally (such as routing relationships), are developed by a diverse group of lawyers, accountants, network administrators, and managers, amongst others. Network policies developed by this group of individuals are usually drawn up on a whiteboard in a graph-like format. It is, however, the responsibility of network administrators to translate and configure the various network policies that have been agreed upon. The configuration of these network policies is generally done on physical devices such as routers, domain name servers, firewalls, and other middleboxes. The manual configuration process for such network policies is known to be tedious, time-consuming, and prone to human error, which can lead to various anomalies in the configuration commands. In recent years, many research projects and corporate organisations have to some extent abstracted the network management process, with emphasis on network devices (such as Cisco VIRL) or individual network policies (such as Propane). [Continues.]
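    To make the whiteboard-graph-to-device-configuration step concrete, the sketch below turns a tiny, hypothetical policy graph into firewall-style rule strings. The zone names, prefixes, and rule syntax are illustrative assumptions, not the toolchains (e.g. Propane or Cisco VIRL) mentioned above.

```python
# A miniature version of the "whiteboard graph -> device configuration" step:
# each edge of a hypothetical policy graph (src zone, dst zone, action) is
# rendered as a firewall-style rule string. Purely illustrative.
policy_graph = [
    ("finance", "internet", "deny"),
    ("staff",   "internet", "allow"),
    ("staff",   "finance",  "deny"),
]

zone_subnets = {            # hypothetical zone-to-prefix mapping
    "finance":  "10.0.1.0/24",
    "staff":    "10.0.2.0/24",
    "internet": "0.0.0.0/0",
}

def render_rules(graph, subnets):
    """Translate policy edges into ordered, firewall-like rule strings."""
    return [
        f"{action} ip from {subnets[src]} to {subnets[dst]}"
        for src, dst, action in graph
    ]

for rule in render_rules(policy_graph, zone_subnets):
    print(rule)   # e.g. "deny ip from 10.0.1.0/24 to 0.0.0.0/0"
```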

    Flexible network management in software defined wireless sensor networks for monitoring application systems

    Wireless Sensor Networks (WSNs) are commonly applied information technologies in modern networking and computing platforms for application-specific systems. Today’s network computing applications face a high demand for reliable and powerful network functionality. Hence, efficient network performance is central to the entire ecosystem, especially where human life is at stake. However, effective management of WSNs remains a challenge due to the problems inherent to them. As a result, WSN application systems, such as those in monitored environments, surveillance, aeronautics, medicine, processing, and control, tend to suffer in their capacity to support compute-intensive services because of these limitations. A recent technology shift proposes Software Defined Networking (SDN) for improving computing networks as well as enhancing network resource management, especially for life-guarding systems. As an optimization strategy, a software-oriented approach for WSNs, known as Software Defined Wireless Sensor Network (SDWSN), is implemented to evolve, enhance, and provide computing capacity to these resource-constrained technologies. Software development strategies are applied with the aim of ensuring efficient network management, introducing network flexibility, and advancing network innovation towards the maximum operating potential of WSN application systems. The need to develop WSN application systems that are powerful and scalable has grown tremendously due to their simplicity of implementation and application. Their design serves as a potential direction for the much anticipated and resource-abundant IoT networks. Information systems such as data analytics, shared computing resources, control systems, big data support, visualizations, system audits, and artificial intelligence (AI) are a necessity in consumers' everyday lives. Such systems can greatly benefit from the SDN programmability strategy in terms of improving how data is mined, analysed, and committed to other parts of the system for greater functionality. This work proposes and implements SDN strategies for enhancing WSN application systems, especially life-critical systems. It also highlights implementation considerations for designing powerful WSN application systems, focusing on system-critical aspects that should not be disregarded when planning to improve core network functionality. Due to their inherent challenges, WSN application systems lack the robustness, reliability, and scalability to support high computing demands. Anticipated systems must have greater capabilities to ubiquitously support many applications with flexible resources that can be easily accessed. To achieve this, such systems must incorporate powerful strategies for efficient data aggregation, query computation, communication, and information presentation. The notion of applying machine learning methods to WSN systems is fairly new, though it carries the potential to enhance WSN application technologies. This technological direction seeks to bring intelligent functionality to WSN systems, given the characteristics of wireless sensor nodes in terms of cooperative data transmission. With these technological aspects in mind, a technical study is conducted, focused on WSN application systems and on how SDN strategies coupled with machine learning methods can contribute viable solutions for monitoring application systems that support and provide various applications and services with greater performance.
    To realize this, this work further proposes and implements machine learning (ML) methods coupled with SDN strategies to enhance sensor data aggregation, introduce network flexibility, and improve resource management, query processing, and sensor information presentation. Hence, this work directly contributes to SDWSN strategies for monitoring application systems.
    Thesis (PhD)--University of Pretoria, 2018. National Research Foundation (NRF). Telkom Centre of Excellence. Electrical, Electronic and Computer Engineering. PhD. Unrestricted.

    Transport API development and validation for the Netphony environment

    This final degree project contributes to the development and implementation of the connectivity service and the topology service of the Transport API (T-API) in the set of Netphony modules developed by Telefónica I+D, and to their subsequent validation. Netphony follows an Application-Based Network Operations (ABNO) architecture, which can be defined as an architecture based on collaboration between different elements for automating network management processes, such as the configuration of LSP routes, making the network more scalable and dynamic; its main elements are the ABNO controller and the PCE. This is also the aim of software defined networking (SDN): to achieve a fully programmable network with the ability to satisfy any network demand automatically. Validation and performance tests with the Netphony controller have been performed in an emulated GMPLS node environment, with OSPF and RSVP used to configure the LSPs between the nodes, thus forming the emulated Netphony GMPLS environment. The T-API standard meets the requirements to become the NBI (North Bound Interface) of Netphony. The main characteristics of this standard are its simplicity and its usability for extension to different types of transport networks. This work focuses on the implementation for optical networks. The first part of this work documents the technologies used and their current state; it then presents how the T-API has been integrated into the Netphony controller, along with use cases and the definition of the validation tests. Finally, the implemented code has been compiled, the creation of LSPs has been configured, and a performance evaluation has been carried out.
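    As an illustration of what exercising the T-API connectivity service over the controller's NBI might look like, the sketch below issues a RESTCONF request with Python's requests library. The controller URL, the exact RESTCONF path, and the payload fields follow the general shape of the ONF T-API model but are assumptions here; they depend on the T-API version and on the Netphony deployment.

```python
# Sketch of requesting a T-API connectivity service over RESTCONF.
# The endpoint URL, path and payload layout are assumptions for illustration,
# not the exact Netphony interface.
import json
import requests

CONTROLLER = "http://localhost:8080"                  # hypothetical NBI endpoint
PATH = ("/restconf/data/tapi-common:context/"
        "tapi-connectivity:connectivity-context")

service = {
    "tapi-connectivity:connectivity-service": [{
        "uuid": "cs-0001",
        "end-point": [
            {"local-id": "ep1",
             "service-interface-point": {"service-interface-point-uuid": "sip-A"}},
            {"local-id": "ep2",
             "service-interface-point": {"service-interface-point-uuid": "sip-B"}},
        ],
    }]
}

resp = requests.post(CONTROLLER + PATH,
                     data=json.dumps(service),
                     headers={"Content-Type": "application/yang-data+json"},
                     timeout=10)
print(resp.status_code)   # e.g. 201 if the connectivity service was created
```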