18 research outputs found

    Cooperative Optimization QoS Cloud Routing Protocol Based on Bacterial Opportunistic Foraging and Chemotaxis Perception for Mobile Internet

    Get PDF
    To strengthen mobility management in the mobile Internet and improve the utilization of cloud platform resources, this work optimizes cloud routing efficiency using opportunistic bacterial foraging bionics and proposes a chemotaxis-perception cooperative optimization QoS (Quality of Service) cloud routing mechanism. The mechanism models data transmission and forwarding on the population behavior of bacterial colonies, drawing on bacterial opportunistic feeding and motility. Exploiting the drug-resistance characteristics of bacteria and the structure of the foraging field, repeated iterations of individual and population behavior let the bacteria spread toward food-gathering areas with a certain probability. Finally, the QoS cloud routing path is selected and optimized through bacterial bionic optimization and a hedge mapping between mobile Internet nodes and the evolution iterations of the bacterial population. Experimental results show that, compared with standard dynamic routing schemes, the proposed scheme achieves shorter transmission delay, a lower packet error ratio, lower QoS cloud routing load, and lower QoS cloud route request overhead.
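    The abstract gives no algorithmic detail, so the following is only a minimal, self-contained sketch of how chemotaxis-style foraging might drive QoS path selection: each "bacterium" holds a candidate path, a tumble draws a fresh random path, and the bacterium swims (adopts the new path) when the QoS cost improves. The toy graph, cost weights, and population parameters are assumptions, not values from the paper.

```python
import random
from math import prod

# Toy graph: (u, v) -> (delay_ms, loss_probability). Illustrative only.
EDGES = {
    ("s", "a"): (10, 0.01), ("a", "t"): (12, 0.02),
    ("s", "b"): (8, 0.05),  ("b", "t"): (9, 0.04),
    ("a", "b"): (3, 0.01),
}
GRAPH = {}
for u, v in EDGES:
    GRAPH.setdefault(u, []).append(v)

def tumble(src="s", dst="t", max_hops=6):
    """Reorient: draw a random loop-free path from src to dst."""
    path, node = [src], src
    while node != dst and len(path) <= max_hops:
        nxt = [n for n in GRAPH.get(node, []) if n not in path]
        if not nxt:
            return None
        node = random.choice(nxt)
        path.append(node)
    return path if node == dst else None

def qos_cost(path, w_loss=1000.0):
    """'Nutrient' signal: weighted delay plus end-to-end loss (lower is better)."""
    hops = list(zip(path, path[1:]))
    delay = sum(EDGES[h][0] for h in hops)
    loss = 1.0 - prod(1.0 - EDGES[h][1] for h in hops)
    return delay + w_loss * loss

def forage(pop_size=8, generations=30, p_disperse=0.1):
    """A population of 'bacteria' (candidate paths) improves by chemotaxis."""
    pop = [p for p in (tumble() for _ in range(pop_size)) if p]
    for _ in range(generations):
        for i, p in enumerate(pop):
            q = tumble()  # chemotactic step: try a new orientation
            if q and (qos_cost(q) < qos_cost(p) or random.random() < p_disperse):
                pop[i] = q  # swim: keep the better (or randomly dispersed) path
        pop.sort(key=qos_cost)
        pop[len(pop) // 2:] = pop[:len(pop) // 2]  # reproduction: best half splits
    return pop[0]

print(forage())  # typically ['s', 'a', 't'] under these weights
```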

    Inferring hidden features in the Internet (PhD thesis)

    Full text link
    The Internet is a large-scale decentralized system composed of thousands of independent networks. In this system, two main components, interdomain routing and traffic, are vital inputs for many tasks such as traffic engineering, security, and business intelligence. However, due to the decentralized structure of the Internet, global knowledge of both interdomain routing and traffic is hard to come by. In this dissertation, we address a set of statistical inference problems with the goal of extending knowledge of the interdomain-level Internet. In the first part of this dissertation we investigate the relationship between the interdomain topology and an individual network’s inference ability. We first frame the questions through abstract analysis of idealized topologies, and then use actual routing measurements and topologies to study the ability of real networks to infer traffic flows. In the second part, we study the ability of a network to identify which paths flow through it. We first show that answering this question is surprisingly hard due to the design of interdomain routing, in which each network learns only a limited set of routes. Network operators therefore have to rely on observed traffic; however, observed traffic can confirm that a particular route passes through a network, but not that a route does not. To solve this routing inference problem, we propose a nonparametric inference technique that is quite accurate. The key idea behind our technique is measuring the distances between destinations. To accomplish this, we define a metric called Routing State Distance (RSD) that measures distances in terms of routing similarity. Finally, in the third part, we study our new metric, RSD, in detail. Using RSD we address the important and difficult problem of characterizing the set of paths between networks. The collection of paths across networks is a rich source for understanding important phenomena in the Internet, since path selection is driven by the economic and performance considerations of the networks. We show that RSD has a number of appealing properties that can reveal these hidden phenomena.
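    The abstract does not define RSD formally; in the published Routing State Distance work, the distance between two destinations is the number of vantage points that route toward them differently. Below is a minimal sketch under that reading, with an invented next-hop table.

```python
# Hedged sketch of Routing State Distance (RSD): the distance between two
# destinations is how many vantage points route toward them differently.
# The next-hop matrix is invented for illustration; the thesis's exact
# definition may differ in detail.

NEXT_HOP = {
    # vantage AS -> {destination: next-hop AS on the chosen route}
    "AS1": {"d1": "AS7", "d2": "AS7", "d3": "AS9"},
    "AS2": {"d1": "AS7", "d2": "AS8", "d3": "AS8"},
    "AS3": {"d1": "AS5", "d2": "AS5", "d3": "AS5"},
}

def rsd(dst_a, dst_b, next_hop=NEXT_HOP):
    """Count vantage points whose routes toward dst_a and dst_b diverge."""
    return sum(
        1
        for routes in next_hop.values()
        if dst_a in routes and dst_b in routes and routes[dst_a] != routes[dst_b]
    )

print(rsd("d1", "d2"))  # 1: only AS2 uses different next hops for d1 and d2
```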

    Interdomain Route Leak Mitigation: A Pragmatic Approach

    Get PDF
    The Internet has grown to support many vital functions, but it is not administered by any central authority. Rather, the many smaller networks that make up the Internet - called Autonomous Systems (ASes) - independently manage their own distinct host address space and routing policy. Routers at the borders between ASes exchange information about how to reach remote IP prefixes with neighboring networks over the control plane with the Border Gateway Protocol (BGP). This inter-AS communication connects hosts across AS boundaries to build the illusion of one large, unified global network - the Internet. Unfortunately, BGP is a dated protocol that allows ASes to inject virtually any routing information into the control plane. The Internet’s decentralized administrative structure means that ASes lack visibility into the relationships and policies of other networks, and have little means of vetting the information they receive. Routes are global, connecting hosts around the world, but AS operators can only see routes exchanged between their own network and directly connected neighbor networks. This mismatch between global route scope and local operator visibility gives rise to adverse routing events like route leaks, which occur when an AS mistakenly advertises a route that should have been kept within its own network. In this work, we explore our thesis: that malicious and unintentional route leaks threaten Internet availability, but pragmatic solutions can mitigate their impact. Leaks effectively reroute traffic meant for the leak destination along the leak path. This diversion of flows onto unexpected paths can cause broad disruption for hosts attempting to reach the leak destination, as well as obstruct normal traffic on the leak path. These events are usually due to misconfiguration rather than malicious activity, but we show in our initial work that routing-capable adversaries can weaponize route leaks and fraudulent path advertisements to enhance data plane attacks on Internet infrastructure and services. Existing solutions like Internet Routing Registry (IRR) filtering have not solved the route leak problem, as globally disruptive route leaks still periodically interrupt the normal functioning of the Internet. We examine one relatively new solution - Peerlock, or defensive AS PATH filtering - in which ASes exchange topological information to secure their networks. Our measurements reveal that Peerlock is already deployed in defense of the largest ASes, but has found little purchase elsewhere. We conclude by introducing a novel leak defense system, Corelock, designed to provide Peerlock-like protection without the scalability concerns that have limited Peerlock’s scope. Corelock builds meaningful route leak filters from globally distributed route collectors and can be deployed without cooperation from other networks.
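    To make the defensive AS PATH filtering idea concrete, here is a minimal sketch of a Peerlock-style import check: reject any route whose AS_PATH contains a protected peer's ASN unless it was learned directly from that peer or from an explicitly permitted neighbor. The ASNs and the exception list are placeholders, not the dissertation's actual rule set.

```python
# Peerlock-style filter sketch with placeholder ASNs (documentation range).
PROTECTED = {64496, 64497}       # peers that must never arrive via third parties
ALLOWED_VIA = {64496: {64511}}   # optional per-peer exceptions (assumed feature)

def peerlock_accept(as_path, learned_from):
    """Return True if the route passes the Peerlock-style import check."""
    for asn in as_path:
        if (asn in PROTECTED and learned_from != asn
                and learned_from not in ALLOWED_VIA.get(asn, set())):
            return False  # leak suspected: protected ASN seen via a third party
    return True

# A route with AS_PATH [64510, 64496, 64499] learned from neighbor 64510 is
# rejected: protected peer 64496 should not be reachable through 64510.
print(peerlock_accept([64510, 64496, 64499], learned_from=64510))  # False
print(peerlock_accept([64496, 64499], learned_from=64496))         # True
```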

    Modelling and Design of Resilient Networks under Challenges

    Get PDF
    Communication networks, and the Internet in particular, face a variety of challenges that can disrupt our daily lives, in the worst cases resulting in loss of human life and significant financial cost. We define challenges as external events that trigger faults that eventually result in service failures. Understanding these challenges is essential for improving current networks and for designing Future Internet architectures. This dissertation presents a taxonomy of challenges that can help evaluate design choices for the current and Future Internet. Graph models to analyse critical infrastructures are examined, and a multilevel graph model is developed to study interdependencies between different networks. Furthermore, graph-theoretic heuristic optimisation algorithms are developed. These heuristics add links to increase the resilience of networks in the least costly manner, and they are computationally less expensive than an exhaustive search. The performance of networks under random failures, targeted attacks, and correlated area-based challenges is evaluated using the challenge simulation module that we developed. The GpENI Future Internet testbed is used to conduct experiments evaluating the performance of the developed heuristic algorithms.
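    The abstract does not detail the heuristics themselves; the following is a minimal sketch of one plausible greedy link-addition variant, assuming edge connectivity as the resilience proxy and straight-line length as the link cost model (both assumptions), built on the networkx library.

```python
import itertools
import networkx as nx

def link_cost(g, u, v):
    """Assumed cost model: Euclidean distance between node coordinates."""
    (x1, y1), (x2, y2) = g.nodes[u]["pos"], g.nodes[v]["pos"]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def greedy_augment(g, budget):
    """Greedily add the affordable link with the best resilience gain per cost."""
    while True:
        base = nx.edge_connectivity(g)
        best, best_score = None, 0.0
        for u, v in itertools.combinations(g.nodes, 2):
            if g.has_edge(u, v) or link_cost(g, u, v) > budget:
                continue
            g.add_edge(u, v)                      # tentatively add the link
            gain = nx.edge_connectivity(g) - base
            g.remove_edge(u, v)
            score = gain / link_cost(g, u, v)
            if score > best_score:
                best, best_score = (u, v), score
        if best is None:                          # no affordable improvement left
            return g
        g.add_edge(*best)
        budget -= link_cost(g, *best)

# Example: a 5-node ring (edge connectivity 2) with invented coordinates;
# chords are added while the budget allows.
ring = nx.cycle_graph(5)
for n in ring.nodes:
    ring.nodes[n]["pos"] = (n % 3, n // 3)
greedy_augment(ring, budget=5.0)
print(nx.edge_connectivity(ring))
```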

    Traffic Re-engineering: Extending Resource Pooling Through the Application of Re-feedback

    Get PDF
    Parallelism pervades the Internet, yet efficiently pooling this increasing path diversity has remained elusive. With no holistic solution for resource pooling, each layer of the Internet architecture attempts to balance traffic according to its own needs, potentially at the expense of others. From the edges, traffic is implicitly pooled over multiple paths by retrieving content from different sources. Within the network, traffic is explicitly balanced across multiple links through the use of traffic engineering. This work explores how the current architecture can be realigned to facilitate resource pooling at both the network and transport layers, where tension between stakeholders is strongest. The central theme of this thesis is that traffic engineering can be performed more efficiently, flexibly, and robustly through the use of re-feedback. A cross-layer architecture is proposed for sharing the responsibility for resource pooling between hosts and the network. Building on this framework, two novel forms of traffic management are evaluated. Efficient pooling of traffic across paths is achieved through the development of an in-network congestion balancer, which can function in the absence of multipath transport. Network and transport mechanisms are then designed and implemented to facilitate path fail-over, greatly improving resilience without requiring receiver-side cooperation. These contributions are framed by a longitudinal measurement study which provides evidence for many of the design choices taken. A methodology for scalably recovering flow metrics from passive traces is developed and systematically applied to over five years of interdomain traffic data. The resulting findings challenge traditional assumptions about the dominant role of congestion control in resource sharing, with over half of all traffic being constrained by limits other than network capacity. All of the above represent concerted attempts to rethink and reassert traffic engineering in an Internet where competing solutions for resource pooling proliferate. By delegating responsibilities currently overloading the routing architecture towards hosts, and by re-engineering traffic management around the core strengths of the network, the proposed architectural changes allow the tussle surrounding resource pooling to play out without compromising the scalability and evolvability of the Internet.
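    As a toy illustration of in-network congestion balancing in the re-feedback spirit, where each path's expected downstream congestion is visible in-band, a balancer might iteratively shift traffic weights away from paths signalling above-average congestion. The update rule, step size, and marking fractions below are assumptions, not the thesis's mechanism.

```python
def rebalance(weights, congestion, step=5.0):
    """Shift path weights away from paths with above-average congestion."""
    avg = sum(w * c for w, c in zip(weights, congestion))  # traffic-weighted mean
    new = [max(w * (1.0 - step * (c - avg)), 1e-6)
           for w, c in zip(weights, congestion)]
    total = sum(new)
    return [w / total for w in new]  # renormalize to a traffic split

# Two paths: the first signals 3% downstream marking, the second 1%.
weights = [0.5, 0.5]
for _ in range(50):
    weights = rebalance(weights, congestion=[0.03, 0.01])
print([round(w, 3) for w in weights])  # traffic drifts to the less-congested path
```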

    Future Internet Routing Design for Massive Failures and Attacks

    Get PDF
    Given the high complexity and increasing traffic load of the Internet, geo-correlated challenges caused by large-scale disasters or malicious attacks pose a significant threat to dependable network communication. To understand their characteristics, we propose a critical-region identification mechanism and incorporate its result into a new graph resilience metric, compensated Total Geographical Graph Diversity. Our metric is capable of characterizing and differentiating resiliency levels across different physical topologies. We further analyze the mechanisms attackers could exploit to maximize damage and demonstrate the effectiveness of a network restoration plan. Based on geodiversity in topologies, we present the path geodiverse problem and two heuristics that solve it more efficiently than the optimal algorithm. We propose the flow geodiverse problem and two optimization formulations to study the tradeoff among cost, end-to-end delay, and path skew with multipath forwarding. We further integrate the solutions to the above models into our cross-layer resilient protocol stack, ResTP–GeoDivRP. Our protocol stack is prototyped and implemented in the network simulator ns-3 and emulated in our KanREN testbed. By providing multiple GeoPaths, our protocol stack achieves better path restoration performance than Multipath TCP.
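    As a small illustration of the geodiversity notion underlying the path geodiverse problem, the sketch below computes the minimum geographic distance between the interiors of two paths; two paths are d-geodiverse when this value is at least d, so a single regional disaster cannot cut both. The coordinates are invented and link geometry is ignored for brevity; the compensated Total Geographical Graph Diversity metric itself is more elaborate.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def geodiversity_km(path_a, path_b, coords):
    """Minimum node-to-node distance between the interiors of two paths.

    Shared endpoints (first and last nodes) are excluded, since any two
    s-t paths necessarily meet there.
    """
    inner_a, inner_b = path_a[1:-1], path_b[1:-1]
    return min(haversine_km(coords[u], coords[v])
               for u in inner_a for v in inner_b)

coords = {"KC": (39.10, -94.58), "STL": (38.63, -90.20),
          "DEN": (39.74, -104.99), "DAL": (32.78, -96.80)}
# Two s-t paths with disjoint interiors; roughly 880 km of separation.
print(geodiversity_km(["s", "STL", "t"], ["s", "DAL", "t"], coords))
```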

    Naming and discovery in networks: architecture and economics

    Get PDF
    In less than three decades, the Internet was transformed from a research network available to the academic community into an international communication infrastructure. Despite its tremendous success, there is a growing consensus in the research community that the Internet has architectural limitations that need to be addressed in an effort to design a future Internet. Among the main technical limitations are the lack of mobility support and the lack of security and trust. The Internet, and particularly TCP/IP, identifies endpoints using a location/routing identifier, the IP address. Coupling the endpoint identifier to the location identifier hinders mobility and poorly identifies the actual endpoint. The lack of security, on the other hand, has been attributed to limitations in both the network and the endpoint. Authentication, for example, is one of the main concerns in the architecture and is hard to implement partly due to the lack of identity support. The general problem that this dissertation is concerned with is that of designing a future Internet. Towards this end, we focus on two specific sub-problems. The first problem is the lack of a framework for thinking about architectures and their design implications. It was obvious after surveying the literature that the majority of architectural work remains idiosyncratic and that descriptions of network architectures are mostly idiomatic. This has led to the overloading of architectural terms and to the emergence of a large body of network architecture proposals with no clear understanding of their cross-similarities, compatibility points, unique properties, or architectural performance and soundness. The second problem concerns the limitations of traditional naming and discovery schemes in terms of service differentiation and economic incentives. One of the recurring themes in the community is the need to separate an entity's identifier from its locator to enhance mobility and security. Separation of identifier and locator is a widely accepted design principle for a future Internet. Separation, however, requires a process to translate from the identifier to the locator when discovering a network path to some identified entity. We refer to this process as identifier-based discovery, or simply discovery, and we recognize two limitations inherent in the design of traditional discovery schemes. The first limitation is the homogeneity of the service, where all entities are assumed to have the same discovery performance requirements. The second limitation is the inherent incentive mismatch as it relates to sharing the cost of discovery. This dissertation addresses both sub-problems: the architectural framework as well as the naming and discovery limitations.
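    To make the identifier/locator split concrete, here is a minimal sketch of identifier-based discovery: a stable, self-certifying identifier is re-bound to new locators as the endpoint moves, and discovery resolves the identifier to its current locator. The registry API and the service-tier field (a nod to the service-differentiation concern above) are illustrative inventions, not the dissertation's design.

```python
import hashlib

class DiscoveryService:
    """Toy identifier -> locator registry under identifier/locator separation."""

    def __init__(self):
        self.registry = {}  # identifier -> (locator, service_tier)

    @staticmethod
    def identifier_for(pubkey: bytes) -> str:
        """Self-certifying flat identifier: hash of the endpoint's public key."""
        return hashlib.sha256(pubkey).hexdigest()[:16]

    def register(self, ident, locator, tier="best-effort"):
        """An endpoint (re-)binds its stable identifier to a new locator."""
        self.registry[ident] = (locator, tier)

    def discover(self, ident):
        """Resolve identifier -> current locator; the tier could set priority."""
        return self.registry.get(ident)

svc = DiscoveryService()
ident = DiscoveryService.identifier_for(b"endpoint-public-key")
svc.register(ident, "192.0.2.10")    # endpoint at its home network
svc.register(ident, "198.51.100.7")  # same identifier after moving
print(svc.discover(ident))           # ('198.51.100.7', 'best-effort')
```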

    Nature-inspired survivability: Prey-inspired survivability countermeasures for cloud computing security challenges

    Get PDF
    As cloud computing environments become more complex, adversaries have become highly sophisticated and unpredictable. Moreover, they can easily increase attack power and persist longer before detection. Uncertain malicious actions and latent, unobserved, or unobservable risks (UUURs) characterise this new threat domain. This thesis proposes prey-inspired survivability to address unpredictable security challenges borne out of UUURs. While survivability is a well-addressed phenomenon in non-extinct prey animals, applying prey survivability directly to cloud computing is challenging due to contradicting end goals. Managing evolving survivability goals and requirements under contradicting environmental conditions adds to the challenge. To address these challenges, this thesis proposes a holistic taxonomy which integrates multiple, disparate perspectives on cloud security challenges. In addition, it applies TRIZ (Teoriya Resheniya Izobretatelskikh Zadach, the theory of inventive problem solving) to derive prey-inspired solutions through resolving contradictions. First, it develops a three-step process to facilitate the interdomain transfer of concepts from nature to the cloud. Moreover, TRIZ’s generic approach suggests specific solutions for cloud computing survivability. The thesis then presents the conceptual prey-inspired cloud computing survivability framework (Pi-CCSF), built upon the TRIZ-derived solutions. The framework run-time is pushed to user space to support evolving survivability design goals. Furthermore, a target-based decision-making technique (TBDM) is proposed to manage survivability decisions. To evaluate the prey-inspired survivability concept, a Pi-CCSF simulator is developed and implemented. Evaluation results show that escalating survivability actions improves the vitality of vulnerable and compromised virtual machines (VMs) by 5% and dramatically improves their overall survivability. Hypothesis testing conclusively supports the hypothesis that the escalation mechanisms can be applied to enhance the survivability of cloud computing systems. Numeric analysis of TBDM shows that by considering survivability preferences and attitudes (which directly impact survivability actions), the TBDM method brings unpredictable survivability information closer to decision processes. This enables the efficient execution of variable escalating survivability actions, which allows Pi-CCSF’s decision system (DS) to focus upon decisions that achieve survivability outcomes under the unpredictability imposed by UUURs.
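    Purely as an illustration of the idea of escalating survivability actions, here is a minimal sketch in which a VM's declining vitality triggers progressively stronger responses. The action names, thresholds, and vitality scale are invented for illustration, not taken from Pi-CCSF.

```python
# (vitality floor, action) from mildest to strongest; all values assumed.
ESCALATION = [
    (0.8, "monitor"),
    (0.6, "restrict-network"),
    (0.4, "snapshot-and-migrate"),
    (0.0, "isolate-and-restore"),
]

def escalate(vitality: float) -> str:
    """Pick the first action whose vitality floor the VM still clears."""
    for floor, action in ESCALATION:
        if vitality >= floor:
            return action
    return ESCALATION[-1][1]  # fallback for out-of-range inputs

for v in (0.9, 0.5, 0.1):
    print(v, "->", escalate(v))  # stronger action as vitality drops
```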

    Compilation of thesis abstracts, June 2007

    Get PDF
    NPS Class of June 2007. This quarter’s Compilation of Abstracts summarizes cutting-edge, security-related research conducted by NPS students and presented as theses, dissertations, and capstone reports. Each expands knowledge in its field.
    http://archive.org/details/compilationofsis109452750