
    Decentralised Internet infrastructure: Securing inter-domain routing (DEMO)

    The Border Gateway Protocol (BGP) is the inter-domain routing protocol that glues the Internet together. BGP does not incorporate security; instead, it relies on careful configuration and manual filtering to offer some protection. As a consequence, the current inter-domain routing infrastructure is vulnerable to prefix and path hijacks, as well as to misconfigurations that result in route leaks. There are many instances of these vulnerabilities being exploited by malicious actors on the Internet, resulting in disruption of services. To address this issue, the IETF has designed RPKI, a centralised trust architecture that relies on a Public Key Infrastructure. RPKI adoption has been slow, and its centralised nature is problematic: network administrators are required to trust CAs and do not have ultimate control over their own critical Internet resources (e.g., IP blocks, AS numbers). In this context, we have built the Decentralised Internet Infrastructure (DII), a distributed ledger that securely stores inter-domain routing information. The main advantages of DII are that (i) it offers flexible trust models, where the Internet community can define the rules of a consensus algorithm that properly reflects the power balance of its members, and (ii) it offers protection against vulnerabilities (path hijacks and route leaks) that goes well beyond what RPKI offers. We have deployed the prototype in the wild on a worldwide testbed comprising 7 ASes, and we will use this testbed to demonstrate, in a realistic scenario, how allocation and delegation of Internet resources work in DII, and how this protects ASes against artificially produced path and prefix hijacks as well as a route leak. This work was partially supported by the Spanish MINECO under contract TEC2017-90034-C2-1-R (ALLIANCE) and the Catalan Institution for Research and Advanced Studies (ICREA).
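    To make the delegation idea concrete, the following is a minimal sketch, assuming nothing about the real DII data model: the record fields and the hash-chained append-only list are hypothetical stand-ins for the actual ledger and consensus layer.

        import hashlib, json, time

        def record_hash(record):
            # Deterministic hash over the canonical JSON encoding of a record.
            return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

        def delegate(ledger, holder_as, receiver_as, prefix):
            # Append a delegation of `prefix` from one AS to another, linked to
            # the previous record so that tampering with history is evident.
            record = {
                "type": "delegation",
                "prefix": prefix,
                "from_as": holder_as,
                "to_as": receiver_as,
                "time": time.time(),
                "prev": record_hash(ledger[-1]) if ledger else None,
            }
            ledger.append(record)
            return record

        ledger = []
        delegate(ledger, None, 64500, "203.0.113.0/24")     # initial allocation
        delegate(ledger, 64500, 64501, "203.0.113.128/25")  # sub-delegation

    A border router could then accept an announcement only if a matching chain of delegations exists in the ledger; the rules deciding who may append such records are exactly the flexible trust models the paper argues for.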

    An analytical model for Loc/ID mappings caches

    Concerns regarding the scalability of inter-domain routing have encouraged researchers to start elaborating a more robust Internet architecture. While consensus on the exact form of the solution is yet to be found, the need for a semantic decoupling of a node's location and identity is generally accepted as a promising way forward. However, this typically requires the use of caches that store temporary bindings between the two namespaces, to avoid hampering router packet forwarding speeds. In this article, we propose a methodology for the analytical study of cache performance that relies on working-set theory. We first identify the conditions that network traffic must comply with for the theory to be applicable, and then develop a model that predicts average cache miss rates relying on easily measurable traffic parameters. We validate the result by emulation, using real packet traces collected at the egress points of a campus network and an academic network. To prove its versatility, we extend the model to consider cache-polluting user traffic and observe that simple, low-intensity attacks drastically reduce performance, so manufacturers should either overprovision router memory or implement more complex cache eviction policies.
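    A minimal emulation of the baseline such a model is validated against: replaying a reference trace through an LRU map-cache and measuring the miss rate directly. The Zipf-like toy trace is illustrative only; the paper uses real campus and academic traces.

        import random
        from collections import OrderedDict

        def lru_miss_rate(trace, cache_size):
            # Replay the trace through an LRU cache of `cache_size` bindings
            # and return the observed miss rate, i.e. the quantity the
            # working-set model predicts analytically.
            cache, misses = OrderedDict(), 0
            for key in trace:
                if key in cache:
                    cache.move_to_end(key)         # refresh recency
                else:
                    misses += 1
                    if len(cache) >= cache_size:
                        cache.popitem(last=False)  # evict least recent
                    cache[key] = None
            return misses / len(trace)

        random.seed(0)
        prefixes = list(range(500))
        weights = [1 / (r + 1) for r in range(500)]   # Zipf-like popularity
        trace = random.choices(prefixes, weights, k=20_000)
        print(lru_miss_rate(trace, cache_size=50))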

    Bayesian inference of spatial and temporal relations in AI patents for EU countries

    In this paper, we propose two models of Artificial Intelligence (AI) patents in European Union (EU) countries, addressing their spatial and temporal behaviour. In particular, the models can quantitatively describe the interaction between countries and explain the rapidly growing trends in AI patents. For the spatial analysis, Poisson regression is used to explain collaboration between pairs of countries, measured by the number of common patents. Through Bayesian inference, we estimated the strengths of interactions between countries in the EU and the rest of the world. In particular, a significant lack of cooperation has been identified for some pairs of countries. For the temporal analysis, an inhomogeneous Poisson process combined with logistic growth accurately models the trend in patenting activity. Bayesian analysis in the time domain revealed an upcoming slowdown in patenting intensity. The research was supported in part by the PL-Grid Infrastructure, the POWER 2014–2020 program, and the Polish Ministry of Science and Higher Education with the subvention funds of the Faculty of Computer Science, Electronics and Telecommunications of AGH University.
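    As a sketch of the temporal model: if yearly patent counts are independent Poisson draws whose means are the yearly increments of a logistic curve, the log-likelihood can be evaluated as below and handed to any Bayesian sampler or optimizer. The counts here are toy numbers, not the paper's data.

        import math

        def logistic_cumulative(t, K, r, t0):
            # Expected cumulative number of patents by time t (logistic growth:
            # carrying capacity K, growth rate r, inflection year t0).
            return K / (1.0 + math.exp(-r * (t - t0)))

        def poisson_loglik(years, counts, K, r, t0):
            # Inhomogeneous Poisson process: each year's count is Poisson with
            # mean equal to the logistic curve's increment over that year.
            ll = 0.0
            for y, n in zip(years, counts):
                lam = (logistic_cumulative(y + 1, K, r, t0)
                       - logistic_cumulative(y, K, r, t0))
                ll += n * math.log(lam) - lam - math.lgamma(n + 1)
            return ll

        years = list(range(2010, 2020))
        counts = [5, 8, 14, 22, 35, 52, 70, 85, 95, 100]   # toy data
        print(poisson_loglik(years, counts, K=600, r=0.6, t0=2016))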

    Decentralized trust in the inter-domain routing infrastructure

    Inter-domain routing security is of critical importance to the Internet, since it prevents unwanted traffic redirections. The current system is based on a Public Key Infrastructure (PKI), a centralized repository of digital certificates. However, the inherent centralization of such a design creates tensions between its participants and hinders its deployment. In addition, some technical drawbacks of PKIs delay widespread adoption. In this paper we present IPchain, a blockchain that stores the allocations and delegations of IP addresses. IPchain leverages blockchains' properties to decentralize trust among its participants, with the final goal of providing flexible trust models that adapt better to the ever-changing geopolitical landscape. Moreover, we argue that Proof of Stake is a suitable consensus algorithm for IPchain due to the unique incentive structure of this use case, and that blockchains offer relevant technical advantages, such as simplified management, when compared to existing systems. To show its feasibility and suitability, we have implemented IPchain and evaluated its performance and scalability, storing around 350k IP prefixes in a 2.5 GB chain.
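    A minimal sketch of the Proof of Stake intuition, with illustrative names only (this is not IPchain's actual consensus code): if stake is the amount of address space an AS holds, block proposers can be drawn with probability proportional to their holdings, aligning consensus power with the resource the chain protects.

        import random

        def pick_proposer(holdings, seed):
            # Stake-weighted choice: an AS holding more address space is
            # proportionally more likely to seal the next block.
            rng = random.Random(seed)   # seed stands in for shared randomness
            ases = list(holdings)
            weights = [holdings[a] for a in ases]   # addresses held = stake
            return rng.choices(ases, weights=weights, k=1)[0]

        holdings = {"AS64500": 2**16, "AS64501": 2**12, "AS64502": 2**8}
        print(pick_proposer(holdings, seed=42))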

    Global state, local decisions: Decentralized NFV for ISPs via enhanced SDN

    The network functions virtualization paradigm is rapidly gaining interest among Internet service providers. However, the transition to this paradigm on ISP networks comes with a unique set of challenges: legacy equipment already in place, heterogeneous traffic from multiple clients, and very large scalability requirements. In this article we thoroughly analyze these challenges and discuss NFV design guidelines that address them efficiently. In particular, we show that decentralizing NFV control while maintaining global state improves scalability, offers better per-flow decisions, and simplifies the implementation of virtual network functions. Building on these principles, we propose a partially decentralized NFV architecture enabled by an enhanced software-defined networking infrastructure. We also perform a qualitative analysis of the architecture to identify its advantages and challenges. Finally, based on this analysis, we determine the bottleneck component, which we implement and benchmark to assess the feasibility of the architecture.
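    The "global state, local decisions" principle can be sketched as follows; the names and the load metric are hypothetical, not the paper's interface. Each node keeps a replicated view of global per-instance load and makes per-flow decisions locally, without a round-trip to a central controller.

        # Replicated view of global per-VNF-instance load (illustrative values).
        GLOBAL_LOAD = {"vnf-a": 0.35, "vnf-b": 0.80, "vnf-c": 0.10}

        def assign_flow(flow_id, load_view):
            # Local per-flow decision: pick the least-loaded instance from the
            # locally cached global view; the optimistic update is reconciled
            # later through the state-distribution mechanism.
            instance = min(load_view, key=load_view.get)
            load_view[instance] += 0.01
            return instance

        print(assign_flow("10.0.0.1:443 -> 192.0.2.8:80", dict(GLOBAL_LOAD)))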

    Error analysis of programmable metasurfaces for beam steering

    Recent years have seen the emergence of programmable metasurfaces, where the user can modify the electromagnetic (EM) response of the device via software. Adding reconfigurability to the already powerful EM capabilities of metasurfaces opens the door to novel cyber-physical systems with exciting applications in domains such as holography, cloaking, or wireless communications. This paradigm shift, however, comes with a non-trivial increase in the complexity of the metasurfaces, which will pose new reliability challenges stemming from the need to integrate the tuning, control, and communication resources that implement the programmability. While metasurfaces will become prone to failures, little is known about their tolerance to errors. To bridge this gap, this paper examines the reliability problem in programmable metamaterials by proposing an error model and a general methodology for error analysis. To derive the error model, the causes and potential impact of faults are identified and discussed qualitatively. The methodology is presented and exemplified for beam steering, which constitutes a relevant use case for programmable metasurfaces. Results show that performance degradation depends on the type of error and its spatial distribution and that, in beam steering, error rates over 20% can still be considered acceptable. This work has been supported by the European Commission under grant H2020-FETOPEN-736876 (VISORSURF) and by ICREA under the ICREA Academia programme. The person and base station icons in Figure 1 were created by Jens Tärning and Clea Doltz from the Noun Project.
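    A minimal sketch of the kind of error analysis the methodology enables, under simplifying assumptions of my own (a uniform linear array as a 1-D stand-in for the metasurface, half-wavelength spacing, faulty unit cells stuck at a random binary phase state): compare the array factor at the steering angle with and without errors.

        import cmath, math, random

        def array_factor(phases, theta, d=0.5):
            # Far-field array factor of a uniform linear array with element
            # spacing d (in wavelengths) and the given per-element phases.
            k = 2 * math.pi
            return abs(sum(cmath.exp(1j * (p + k * d * n * math.sin(theta)))
                           for n, p in enumerate(phases)))

        N, target = 64, math.radians(30)
        # Ideal profile: compensate the path difference towards `target`.
        ideal = [-2 * math.pi * 0.5 * n * math.sin(target) for n in range(N)]

        random.seed(1)
        error_rate = 0.2   # 20% of unit cells stuck at a random phase state
        faulty = [random.choice([0.0, math.pi]) if random.random() < error_rate
                  else p for p in ideal]

        print("ideal:", array_factor(ideal, target))    # = N, all in phase
        print("20% errors:", array_factor(faulty, target))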

    Validation of a LISP simulator

    We have developed CoreSim, an Internet-scale LISP deployment simulator. It replays a packet trace and simulates the behavior of a LISP Ingress Tunnel Router (ITR) and the associated Map-Resolver on top of a topology based on measurements performed by the iPlane infrastructure. It reports mapping lookup latency, the load imposed on each node of the mapping system, and cache performance statistics. The simulator implements LISP+ALT and LISP-DHT. In this technical report we validate our LISP-DHT implementation, present an estimator for the latencies not reported by iPlane, and discuss the architecture of CoreSim.
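    The core of such a simulation can be sketched in a few lines (a toy stand-in for CoreSim, with illustrative names and made-up RTTs): the first packet towards an unknown EID-prefix triggers a Map-Request and pays the resolver round-trip; later packets hit the map-cache.

        import random

        def replay(trace, rtt_ms):
            # Minimal ITR model: count Map-Requests and average lookup latency.
            cache, lookups, latency = set(), 0, 0.0
            for dst_prefix in trace:
                if dst_prefix not in cache:
                    lookups += 1
                    latency += rtt_ms(dst_prefix)   # resolver round-trip
                    cache.add(dst_prefix)
                # cached destinations are forwarded with no extra latency
            return lookups, latency / max(lookups, 1)

        random.seed(7)
        trace = ["198.51.%d.0/24" % random.randint(0, 9) for _ in range(1000)]
        print(replay(trace, lambda p: random.uniform(20, 180)))   # toy RTTs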

    A passive available bandwidth estimation methodology

    The Available Bandwidth (AB) of an end-to-end path is its remaining capacity, and it is an important metric for several applications, such as overlay routing and P2P networking. That is why many AB estimation tools have been published recently. Most of these tools use the Probe Rate Model (PRM), which requires sending packet trains at a rate matching the AB. Its main issue is that it congests the path under measurement. We present a different approach: a novel passive methodology to estimate the AB that does not introduce probe traffic. Our methodology, intended to be applied between two separate nodes, estimates the path's AB by analyzing specific parameters of the traffic they exchange. The main challenge is that we cannot rely on any given rate of this traffic; therefore we rely on a different model, the Utilization Model. In this paper we present our passive methodology and a tool (PKBest) based on it. We evaluate its applicability and accuracy using public NLANR data traces. Our results, covering more than 300 Gb of traffic, show that our tool is more accurate than pathChirp, a state-of-the-art active PRM-based tool. To the best of the authors' knowledge, this is the first passive AB estimation methodology.
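    The Utilization Model boils down to a simple identity, sketched below with toy numbers (the real methodology infers utilization from properties of the existing traffic rather than from a raw byte count on the bottleneck link):

        def available_bandwidth(capacity_bps, bytes_seen, interval_s):
            # Utilization-model view: AB = capacity * (1 - utilization).
            utilization = (bytes_seen * 8) / interval_s / capacity_bps
            return capacity_bps * (1.0 - utilization)

        # Toy numbers: 100 Mb/s link carrying 4.5 MB in a 1-second window.
        ab = available_bandwidth(100e6, 4_500_000, 1.0)
        print(ab / 1e6, "Mb/s available")   # -> 64.0 Mb/s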

    Pulse interspersing in static multipath chip environments for Impulse Radio communications

    Communications are becoming the bottleneck in the performance of Chip Multiprocessors (CMPs). To address this issue, the use of wireless communications within a chip has been proposed, since it offers low latency among nodes and high reconfigurability. The chip scenario has the particularity that it is static, so the multipath can be known a priori. Within this context, we propose a simple yet very efficient modulation technique, based on Impulse Radio On-Off Keying (IR-OOK), which significantly improves performance in Wireless Network-on-Chip (WNoC) as well as in off-chip scenarios. The technique intersperses information pulses among the reflected pulses in order to reduce the time between pulses, thus increasing the data rate. We prove that the final data rate can be considerably increased without increasing the hardware complexity of the transceiver.
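    The gain can be illustrated with a toy calculation (the echo profile below is made up; in a real package it would come from channel measurements): since the multipath is static, every time slot not occupied by a known echo can carry a new pulse instead of being dead time.

        def usable_slots(echo_times_ns, window_ns, slot_ns=1.0):
            # The echo arrival times are fixed and known a priori, so any slot
            # no echo falls into can carry a new information pulse.
            occupied = {round(t / slot_ns) for t in echo_times_ns}
            total = int(window_ns / slot_ns)
            return [s for s in range(total) if s not in occupied]

        echoes = [3.2, 5.7, 8.4, 11.9]     # toy static multipath profile (ns)
        free = usable_slots(echoes, window_ns=15.0)
        print("%d of 15 slots usable vs. 1 pulse per 15 ns delay spread"
              % len(free))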

    WiSync: an architecture for fast synchronization through on-chip wireless communication

    In shared-memory multiprocessing, fine-grain synchronization is challenging because it requires frequent communication. As technology scaling delivers larger manycore chips, such a pattern is expected to remain costly to support. In this paper, we propose to address this challenge by using on-chip wireless communication. Each core has a transceiver and an antenna to communicate with all the other cores. This environment supports very low latency global communication. Our architecture, called WiSync, uses a per-core Broadcast Memory (BM). When a core writes to its BM, all the other 100+ BMs get updated in less than 10 processor cycles. We also use a second wireless channel with cheaper transfers to execute barriers efficiently. WiSync supports multiprogramming, virtual memory, and context switching. Our evaluation with simulations of 128-threaded kernels and 64-threaded applications shows that WiSync speeds up synchronization substantially. Compared to advanced conventional synchronization, WiSync attains an average speedup of nearly one order of magnitude for the kernels, and of 1.12x for PARSEC and SPLASH-2.
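    A software analogue of the barrier idea, as a minimal sketch (a lock and an event stand in for the broadcast memory; on the real hardware each arrival is a single BM write visible to all cores within ~10 cycles):

        import threading

        class BroadcastBarrier:
            # Each arrival "writes to its broadcast memory"; every thread
            # waits on its locally visible copy until the last one arrives.
            def __init__(self, n):
                self.n, self.count = n, 0
                self.lock, self.event = threading.Lock(), threading.Event()

            def wait(self):
                with self.lock:                 # the broadcast write
                    self.count += 1
                    if self.count == self.n:    # last arrival releases all
                        self.event.set()
                self.event.wait()               # spin on the local copy

        barrier = BroadcastBarrier(4)
        threads = [threading.Thread(target=barrier.wait) for _ in range(4)]
        for t in threads: t.start()
        for t in threads: t.join()
        print("all 4 threads passed the barrier")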