36 research outputs found

    Travelling Without Moving: Discovering Neighborhood Adjacencies

    Full text link
    Peer reviewed. Since the early 2000s, the research community has explored many approaches to discover and study the Internet topology, designing both data collection mechanisms and models. In this paper, we introduce SAGE (Subnet AggrEgation), a new topology discovery tool that infers the hop-level graph of a target network from a single vantage point. SAGE relies on subnet-level data to build a directed acyclic graph of a network, modeling how its (meshes of) routers, a.k.a. neighborhoods, are linked together. Using two ground-truth networks and measurements in the wild, we show SAGE accurately discovers links and is consistent with itself upon a change of vantage point. By mapping subnets to the discovered links, the directed acyclic graphs discovered by SAGE can be re-interpreted as bipartite graphs. Using data collected in the wild from both the PlanetLab testbed and the EdgeNet cluster, we demonstrate that such a model is a credible tool for studying computer networks.
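    The bipartite re-interpretation described in the abstract can be made concrete with a small sketch. The snippet below is illustrative only: the neighborhood identifiers, prefixes, and function names are invented for the example and are not SAGE's actual data structures or API. It shows how a DAG whose links carry subnets can be read as a bipartite graph between neighborhoods and subnets.

```python
# Illustrative sketch (not SAGE's actual API): re-interpreting a DAG of
# neighborhoods, whose links are mapped to subnets, as a bipartite graph.
from collections import defaultdict

# Hypothetical input: directed links between neighborhoods, each carrying
# the subnet inferred for that hop (IDs and prefixes are made up).
dag_links = [
    ("N1", "N2", "10.0.1.0/24"),
    ("N1", "N3", "10.0.2.0/24"),
    ("N2", "N4", "10.0.1.0/24"),  # the same subnet can support several links
]

def to_bipartite(links):
    """Build the bipartite view: neighborhoods on one side, subnets on the
    other. A neighborhood is adjacent to every subnet on one of its links."""
    adjacency = defaultdict(set)
    for src, dst, subnet in links:
        adjacency[src].add(subnet)
        adjacency[dst].add(subnet)
    return adjacency

if __name__ == "__main__":
    for neighborhood, subnets in sorted(to_bipartite(dag_links).items()):
        print(neighborhood, "--", sorted(subnets))
```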

    Verifying responsiveness for open systems by means of conformance checking

    Get PDF

    Robustness of Defenses against Deception Attacks

    Get PDF

    Rethinking Routing and Peering in the era of Vertical Integration of Network Functions

    Get PDF
    Content providers typically control digital content consumption services and earn most of their revenue by implementing an all-you-can-eat model via subscriptions or hyper-targeted advertisements. The recent trend, revamping the existing Internet architecture and design, is vertical integration, in which a content provider and an access ISP act as a single "sugarcane" unibody. As this vertical integration trend emerges in the ISP market, it is questionable whether the existing routing architecture will suffice in terms of sustainable economics, peering, and scalability. Current routing is expected to need careful modifications and smart innovations to ensure effective and reliable end-to-end packet delivery. This involves developing new features to handle traffic with reduced latency, to tackle routing scalability issues more securely, and to offer new services at lower cost. Given that the prices of DRAM and TCAM in legacy routers are not decreasing at the desired pace, cloud computing can be a great solution for managing the increasing computation and memory complexity of routing functions in a centralized manner with optimized expenses. Focusing on the attributes of existing routing cost models and exploring a hybrid approach to SDN, we also compare recent trends in cloud pricing (for both storage and service) to evaluate whether integrating cloud services with legacy routing would be economically beneficial. In terms of peering, using the US as a case study, we show the overlaps between access ISPs and content providers to explore the viability of future peering between the emerging content-dominated sugarcane ISPs and the health of Internet economics. To this end, we introduce meta-peering, a term that encompasses automation efforts related to peering – from identifying a list of ISPs likely to peer, to injecting control-plane rules, to continuously monitoring and flagging any violation – one of the many outgrowths of the vertical integration process, which could be offered to ISPs as a standalone service.
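    As a rough illustration of the kind of cost comparison discussed above, the sketch below contrasts the amortized cost of provisioning router memory for a forwarding table with cloud storage plus per-lookup service charges. Every figure and parameter name is a placeholder assumption for the example, not a number or model from the paper.

```python
# Hypothetical back-of-the-envelope comparison of keeping full routing state
# in legacy router memory vs. offloading it to a cloud service. All figures
# below are placeholder parameters, not measurements from the paper.

def legacy_annual_cost(fib_entries, tcam_cost_per_entry, amortization_years):
    """Amortized yearly cost of provisioning TCAM for the FIB."""
    return fib_entries * tcam_cost_per_entry / amortization_years

def cloud_annual_cost(fib_bytes, storage_price_gb_month,
                      lookups_per_month, price_per_million_lookups):
    """Yearly cost of cloud storage plus per-lookup service charges."""
    storage = (fib_bytes / 1e9) * storage_price_gb_month * 12
    service = (lookups_per_month / 1e6) * price_per_million_lookups * 12
    return storage + service

if __name__ == "__main__":
    # Placeholder inputs: ~900k FIB entries, 64 bytes per entry.
    legacy = legacy_annual_cost(fib_entries=900_000,
                                tcam_cost_per_entry=0.02,
                                amortization_years=5)
    cloud = cloud_annual_cost(fib_bytes=900_000 * 64,
                              storage_price_gb_month=0.023,
                              lookups_per_month=5e9,
                              price_per_million_lookups=0.40)
    print(f"legacy: ${legacy:,.0f}/yr, cloud: ${cloud:,.0f}/yr")
```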

    Annual Report 2014

    No full text

    Vitruv: Specifying Temporal Aspects of Multimedia Presentations - A Transformational Approach based on Intervals

    Get PDF
    The development of large multimedia applications reveals problems similar to those of developing large software systems. This is not surprising, as multimedia applications are a special kind of software system. Our experience within the Altenberg Cathedral Project showed, however, that particular problems arise while developing multimedia applications that do not appear during traditional software development. This is the starting point of the research reported in this thesis. In this introduction, we start with a report on the Altenberg Cathedral Project (sec. 1.1), resulting in a problem statement and a list of requirements for possible solutions. After that, we propose our solution, named Vitruv (sec. 1.2 on page 11), and explain how it works in general (sec. 1.3 on page 12). This is followed by a discussion of key aspects of Vitruv and its relations to other approaches (sec. 1.4 on page 14). The introduction closes with a brief outline of the thesis.
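    The abstract does not detail Vitruv's interval formalism, so the sketch below is only a generic illustration of specifying temporal aspects of a presentation with intervals, using two Allen-style relations; all class and function names are invented for the example.

```python
# Generic illustration (not Vitruv itself): interval-based temporal
# relations of the kind used to specify multimedia presentations.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: float  # presentation time in seconds
    end: float

def meets(a: Interval, b: Interval) -> bool:
    """a ends exactly when b starts (e.g., a video followed by narration)."""
    return a.end == b.start

def during(a: Interval, b: Interval) -> bool:
    """a lies strictly inside b (e.g., a caption shown during a video)."""
    return b.start < a.start and a.end < b.end

if __name__ == "__main__":
    video = Interval(0.0, 30.0)
    caption = Interval(5.0, 12.0)
    narration = Interval(30.0, 45.0)
    print(during(caption, video))   # True
    print(meets(video, narration))  # True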

    Computer Aided Verification

    Get PDF
    The open access two-volume set LNCS 12224 and 12225 constitutes the refereed proceedings of the 32nd International Conference on Computer Aided Verification, CAV 2020, held in Los Angeles, CA, USA, in July 2020.* The 43 full papers presented together with 18 tool papers and 4 case studies were carefully reviewed and selected from 240 submissions. The papers were organized in the following topical sections: Part I: AI verification; blockchain and security; concurrency; hardware verification and decision procedures; and hybrid and dynamic systems. Part II: model checking; software verification; stochastic systems; and synthesis. *The conference was held virtually due to the COVID-19 pandemic.

    Foundations for practical network verification

    Get PDF
    Computer networks are large and complex, and the often manual process of configuring such systems is error-prone, leading to network outages and breaches. This has ignited research into network verification tools that, given a set of operator intents, automatically check whether the configured network satisfies them. In this dissertation, we argue that existing works in this area have important limitations that prevent their widespread adoption in the real world. We set out to address these limitations by revisiting the main aspects of network verification: the verification framework, intent specification, and network modeling.

    First, we develop #PEC, a symbolic packet header analysis framework that resolves the tension between expressiveness and efficiency in previous works. We provide an extensible library of efficient match-types that allows encoding and analyzing more types of forwarding rules (e.g., Linux iptables) than most previous works. Like the state of the art, #PEC partitions the space of packet headers into a set of equivalence classes (PECs) before the analysis. However, it uses a lattice-based approach to do so, refraining from using computationally expensive negation and subtraction operations. Our experiments with a broad range of real-world datasets show that #PEC is 10× faster than similarly expressive state-of-the-art tools. We also demonstrate how empty PECs in previous works lead to unsound/incomplete analysis, and we develop a counting-based method to eliminate empty PECs from #PEC that outperforms baseline approaches by 10-100×.

    Next, we note that network verification requires formal specifications of the network operator's intents as a starting point, and these are almost never available, or even known, in complete form. We mitigate this problem by providing a framework that uses existing low-level network behavior to infer high-level intents. We design Anime, a system that, given observed packet forwarding behavior, mines a compact set of possible intents that best describe the observations. Anime accomplishes this by applying optimized clustering algorithms to a set of observed network paths, encoded using path features with hierarchical values that provide a way to control the precision-recall tradeoff. The resulting inferred intents can be used as input to verification/synthesis tools for continued maintenance. They can also be viewed as a summary of network behavior and as a way to find anomalous behavior. Our experiments, including data from an operational network, demonstrate that Anime produces higher-quality (F-score) intents than past work, can generate compact summaries with minimal loss of precision, is resilient to imperfect input and policy changes, scales to large networks, and finds actionable anomalies in an operational network.

    Finally, we turn our attention to modeling network devices. We envision basing data plane analysis on P4 as the modeling language. Unlike most tools, we believe P4 analysis must be based on a precise model of the language rather than its informal specification. To this end, we develop a formal operational semantics of the P4 language, in the process identifying numerous issues with the design of the language. We then provide a suite of formal analysis tools derived directly from our semantics, including an interpreter, a symbolic model checker, a deductive program verifier, and a program equivalence checker. Through a set of case studies, we demonstrate the use of our semantics beyond just a reference model for the language. This includes applications for the detection of unportable code, state-space exploration, search for bugs, full functional verification, and compiler translation validation.
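    As a toy illustration of the equivalence-class idea behind #PEC (not the tool's actual implementation), the sketch below brute-forces PECs over a tiny 4-bit header space and shows a lattice meet (intersection) of ternary patterns, the kind of cheap operation a lattice-based approach can rely on instead of negation and subtraction. The rule patterns and header width are invented for the example.

```python
# Toy illustration of packet equivalence classes (PECs), not the #PEC tool:
# headers are W-bit integers, rules are ternary matches like "10*1".
# Two headers are equivalent when they match exactly the same set of rules.
from collections import defaultdict

W = 4  # tiny header width so the whole header space can be enumerated

def matches(pattern: str, header: int) -> bool:
    """Ternary match: '0'/'1' must agree with the header bit, '*' matches any."""
    bits = format(header, f"0{W}b")
    return all(p in ("*", b) for p, b in zip(pattern, bits))

def meet(p: str, q: str):
    """Lattice meet (intersection) of two ternary patterns, None if empty."""
    out = []
    for a, b in zip(p, q):
        if a == "*":
            out.append(b)
        elif b == "*" or a == b:
            out.append(a)
        else:
            return None  # contradictory bits: empty intersection
    return "".join(out)

def pecs(rules):
    """Brute-force PECs: group headers by the exact set of rules they match.
    Every class returned is nonempty by construction (a signature only exists
    because some header produced it), which is the property the counting-based
    method in the dissertation establishes without enumeration."""
    classes = defaultdict(list)
    for h in range(2 ** W):
        signature = frozenset(i for i, r in enumerate(rules) if matches(r, h))
        classes[signature].append(h)
    return classes

if __name__ == "__main__":
    rules = ["1***", "10*1", "**11"]
    print(meet("10*1", "**11"))  # '1011'
    for sig, headers in sorted(pecs(rules).items(), key=lambda kv: sorted(kv[0])):
        print(sorted(sig), headers)
```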