13 research outputs found

    Verifiable Network-Performance Measurements

    In the current Internet, there is no clean way for affected parties to react to poor forwarding performance: when a domain violates its Service Level Agreement (SLA) with a contractual partner, the partner must resort to ad-hoc probing-based monitoring to determine the existence and extent of the violation. Instead, we propose a new, systematic approach to the problem of forwarding-performance verification. Our mechanism relies on voluntary reporting, allowing each domain to disclose its loss and delay performance to its neighbors; it does not disclose any information regarding the participating domains' topology or routing policies beyond what is already publicly available. Most importantly, it enables verifiable performance measurements, i.e., domains cannot abuse it to significantly exaggerate their performance. Finally, our mechanism is tunable, allowing each participating domain to determine independently (i.e., without any inter-domain coordination) how many resources to devote to it, exposing a controllable trade-off between performance-verification quality and resource consumption. Our mechanism comes at the cost of deploying modest functionality at the participating domains' border routers; we show that it requires processing and memory resources well within the capabilities of modern routers.
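    In the simplest reading of the disclosure described above, each border router would keep per-neighbor loss and delay counters and periodically emit a small report. The sketch below illustrates only that idea; the counters, fields and report format are assumptions for illustration, not the paper's actual protocol.

        from dataclasses import dataclass

        @dataclass
        class NeighborCounters:
            """Hypothetical per-neighbor counters a border router might keep."""
            pkts_received: int = 0   # packets accepted from this neighbor
            pkts_delivered: int = 0  # packets handed on toward the next domain
            sum_delay_us: int = 0    # accumulated internal delay of delivered packets

            def observe(self, delivered: bool, delay_us: int = 0) -> None:
                self.pkts_received += 1
                if delivered:
                    self.pkts_delivered += 1
                    self.sum_delay_us += delay_us

            def report(self) -> dict:
                """Per-interval loss/delay disclosure sent to the neighbor."""
                loss = 1.0 - self.pkts_delivered / self.pkts_received if self.pkts_received else 0.0
                delay = self.sum_delay_us / self.pkts_delivered if self.pkts_delivered else 0.0
                return {"loss_rate": loss, "avg_delay_us": delay}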

    FAIR: Forwarding Accountability for Internet Reputability

    This paper presents FAIR, a forwarding accountability mechanism that incentivizes ISPs to apply stricter security policies to their customers. The Autonomous System (AS) of the receiver specifies a traffic profile that the sender AS must adhere to. Transit ASes on the path mark packets; in case of traffic-profile violations, the marked packets serve as proof of misbehavior. FAIR introduces low bandwidth overhead and requires no per-packet and no per-flow state for forwarding. We describe integration with IP and demonstrate a software switch running on commodity hardware that can switch packets at a line rate of 120 Gbps and forward 140M minimum-sized packets per second, limited by the hardware I/O subsystem. Moreover, this paper proposes a "suspicious bit" for packet headers: an application that builds on top of FAIR's proofs of misbehavior and flags packets to warn other entities in the network.
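    One way to picture the profile check and marking described above is a token-bucket profile enforced at a transit AS, with non-conforming packets marked so they can later back a proof of misbehavior. The sketch below is a simplified stand-in: the class, rate/burst parameters and the fair_mark field are invented for illustration and are not FAIR's wire format.

        import time

        class TrafficProfile:
            """Hypothetical token-bucket profile advertised by the receiver AS."""
            def __init__(self, rate_bps: float, burst_bytes: float):
                self.rate = rate_bps / 8.0   # refill rate in bytes per second
                self.burst = burst_bytes     # bucket capacity in bytes
                self.tokens = burst_bytes
                self.last = time.monotonic()

            def conforms(self, pkt_len: int) -> bool:
                now = time.monotonic()
                self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if pkt_len <= self.tokens:
                    self.tokens -= pkt_len
                    return True
                return False

        def mark_if_violating(packet: dict, profile: TrafficProfile) -> dict:
            """A transit AS marks packets that exceed the sender's profile; the
            marked packets are what would later support a proof of misbehavior."""
            if not profile.conforms(packet["length"]):
                packet["fair_mark"] = True
            return packet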

    Coordinated detection of forwarding faults in wireless community networks

    Wireless Community Networks (WCN) are crowdsourced networks where equipment is contributed and managed by members of a community. WCN have three intrinsic characteristics that make forwarding faults more likely: inexpensive equipment, non-expert administration and openness. These characteristics hinder the robustness of network connectivity. We present KDet, a decentralized protocol for the detection of forwarding faults that establishes overlapping logical boundaries and monitors the behavior of the routers within them. KDet is designed to be collusion resistant, ensuring that compromised routers cannot cover for each other to avoid detection. Another important characteristic of KDet is that it does not rely on path information: monitoring nodes do not have to know the complete path a packet follows, just the previous and next hop. As a result, KDet can be deployed as an independent daemon without imposing any changes on the network, improving network robustness. Results from theoretical analysis and simulation show the correctness of the algorithm and its accuracy in detecting forwarding faults, and a comparison of costs and advantages with previous work confirms its practical feasibility in WCN.
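    At its core, the evaluation step can be thought of as a flow-conservation check over each logical boundary: if noticeably more traffic enters a boundary than leaves it, some router inside is dropping packets, and overlapping boundaries narrow the blame down to individual routers. The snippet below is only a toy illustration of that check under assumed report fields; KDet's actual summaries, dissemination and collusion analysis are more involved.

        def boundary_is_suspicious(reports, loss_threshold=0.05):
            """Toy flow-conservation check for one logical boundary.
            `reports` are hypothetical summaries from the monitoring nodes at the
            boundary's edge: packets they sent into it and packets they got back out."""
            sent_in = sum(r["pkts_into_boundary"] for r in reports)
            recv_out = sum(r["pkts_out_of_boundary"] for r in reports)
            if sent_in == 0:
                return False
            return (sent_in - recv_out) / sent_in > loss_threshold

        # With overlapping boundaries, a faulty router shows up as the node common
        # to all the boundaries flagged as suspicious, which is what makes it hard
        # for colluding routers to cover for one another.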

    Coordinated sampling sans Origin-Destination identifiers: Algorithms and analysis

    Flow monitoring is used for a wide range of network management applications. Many such applications require that the monitoring infrastructure provide high flow coverage and support fine-grained network-wide objectives. Coordinated Sampling (cSamp) is a recent proposal that improves the monitoring capabilities of ISPs to address these demands. In this paper, we address a key deployment impediment for cSamp-like solutions: the need for routers to determine the Origin-Destination (OD) pair of each packet. In practice, this information is not available without expensive changes. We present a new framework called cSamp-T, in which each router uses only local information instead of the OD-pair identifiers. Leveraging results from the theory of maximizing submodular set functions, cSamp-T provides near-ideal performance in maximizing the total flow coverage in the network. Further, with a small amount of targeted upgrades to a few routers, cSamp-T nearly optimally maximizes the minimum fractional coverage across all OD-pairs. We demonstrate these results on a range of real topologies.
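    Total flow coverage is a coverage function and therefore submodular, which is why a simple greedy assignment comes with the classic (1 - 1/e) approximation guarantee. The sketch below shows that generic greedy step on invented "sampling atoms" (router, hash-range pairs); it illustrates the underlying theory, not cSamp-T's actual assignment algorithm.

        def greedy_flow_coverage(atoms, budget):
            """Greedy maximization of a coverage (submodular) objective.
            `atoms` maps a hypothetical (router, hash_range) sampling assignment to
            the set of flow IDs it would capture; `budget` is how many atoms we can
            afford in total."""
            covered, chosen = set(), []
            for _ in range(budget):
                best = max(atoms, key=lambda a: len(atoms[a] - covered), default=None)
                if best is None or not (atoms[best] - covered):
                    break  # no candidate adds marginal coverage
                chosen.append(best)
                covered |= atoms[best]
                del atoms[best]
            return chosen, covered

        # Example with two routers that only know which flows cross them locally.
        atoms = {
            ("r1", "0.0-0.5"): {"f1", "f2"},
            ("r1", "0.5-1.0"): {"f3"},
            ("r2", "0.0-0.5"): {"f2", "f3"},
        }
        picked, flows = greedy_flow_coverage(dict(atoms), budget=2)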

    MinimaLT: Minimal-latency Networking Through Better Security

    Minimal Latency Tunneling (MinimaLT) is a new network protocol that provides ubiquitous encryption for maximal confidentiality, including protecting packet headers. MinimaLT provides server and user authentication, extensive Denial-of-Service (DoS) protections, privacy-preserving IP mobility, and fast key erasure. We describe the protocol, demonstrate its performance relative to TLS and unencrypted TCP/IP, and analyze its protections, including its resilience against DoS attacks. By exploiting the properties of its cryptographic protections, MinimaLT is able to eliminate three-way handshakes and thus create connections faster than unencrypted TCP/IP.
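    The reason a handshake round trip can disappear is that the client can already hold a server ephemeral public key before it connects (MinimaLT distributes such keys through a separate lookup step), so its very first packet can carry its own ephemeral key together with encrypted data. The sketch below shows that general pattern using the Python cryptography package; the packet layout and key-distribution step are assumptions for illustration, not MinimaLT's actual wire protocol.

        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric.x25519 import (
            X25519PrivateKey, X25519PublicKey)
        from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
        from cryptography.hazmat.primitives.kdf.hkdf import HKDF
        from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

        def derive_key(shared_secret: bytes) -> bytes:
            return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                        info=b"first-packet-demo").derive(shared_secret)

        # Server publishes an ephemeral public key ahead of time (out of band here).
        server_eph = X25519PrivateKey.generate()
        server_eph_pub = server_eph.public_key()

        # Client: the very first packet already carries application data, so there is
        # no handshake round trip before sending.
        client_eph = X25519PrivateKey.generate()
        key = derive_key(client_eph.exchange(server_eph_pub))
        nonce = os.urandom(12)
        first_packet = {
            "client_pub": client_eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw),
            "nonce": nonce,
            "ciphertext": ChaCha20Poly1305(key).encrypt(nonce, b"GET /index", b""),
        }

        # Server: recovers the same key from the client's public value in that packet.
        peer = X25519PublicKey.from_public_bytes(first_packet["client_pub"])
        data = ChaCha20Poly1305(derive_key(server_eph.exchange(peer))).decrypt(
            first_packet["nonce"], first_packet["ciphertext"], b"")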

    Forwarding fault detection in wireless community networks

    Wireless community networks (WCN) are especially vulnerable to forwarding failures because of their intrinsic characteristics: they use inexpensive hardware that can be easily accessed, they are managed in a decentralized way, sometimes by non-expert administrators, and they are open to everyone, which makes them prone to hardware failures, misconfigurations and malicious attacks. To increase routing robustness in WCN, we propose a mechanism to detect faulty routers, so that the problem can then be tackled. Forwarding fault detection can be described as a four-step process: first, the observed traffic is monitored and summarized; then, the traffic summaries are shared among peers; next, each router's behavior is evaluated by analyzing all the relevant traffic summaries; finally, once the faulty nodes have been detected, a response mechanism is triggered to solve the issue. The contributions of this thesis focus on the first three steps of this process, providing solutions adapted to Wireless Community Networks that can be deployed without modifying their current network stack. First, we study and characterize the error distribution of sketches, a traffic-summary function that is resilient to packet dropping, modification and creation and provides better estimations than sampling. We define a random process that describes the estimation for each sketch type, which allows us to provide tighter bounds on sketch accuracy and to choose the size of the sketch more precisely for a given set of requirements on estimation accuracy. Second, we propose KDet, a traffic-summary dissemination and detection protocol that, unlike previous solutions, is resilient to collusion and false accusation without needing to know a packet's path. Finally, we consider the case of nodes with unsynchronized clocks and propose a traffic validation mechanism based on sketches that can discern between faulty and non-faulty nodes even when the traffic summaries are misaligned, i.e., when they refer to slightly different intervals of time.
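    A traffic summary in this setting is a fixed-size data structure that two observers of (ideally) the same traffic can later compare, so that packets dropped, modified or injected in between show up as a discrepancy. The toy count-min-style sketch below illustrates that use; the parameters, keyed hashing and estimator are assumptions for illustration, and the thesis analyzes specific sketch types and their error distributions in far more detail.

        import hashlib

        class CountMinSketch:
            """Toy count-min-style traffic summary (illustrative sizes and hashing)."""
            def __init__(self, depth=4, width=256, key=b"shared-secret"):
                self.depth, self.width, self.key = depth, width, key
                self.cells = [[0] * width for _ in range(depth)]

            def _index(self, row: int, packet_id: bytes) -> int:
                digest = hashlib.sha256(self.key + bytes([row]) + packet_id).digest()
                return int.from_bytes(digest[:4], "big") % self.width

            def add(self, packet_id: bytes) -> None:
                for row in range(self.depth):
                    self.cells[row][self._index(row, packet_id)] += 1

            def total(self) -> int:
                return sum(self.cells[0])  # every row sums to the packet count

        def estimated_dropped(upstream: CountMinSketch, downstream: CountMinSketch) -> int:
            """Packets summarized upstream but missing downstream (naive estimator)."""
            return max(0, upstream.total() - downstream.total())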

    Retroactive Packet Sampling for Traffic Receipts

    Is it possible to design a packet-sampling algorithm that prevents the network node that performs the sampling from treating the sampled packets preferentially? We study this problem in the context of designing a "network transparency" system. In this system, networks emit receipts for a small sample of the packets they observe, and a monitor collects these receipts to estimate each network's loss and delay performance. Sampling is a good building block for this system, because it enables a solution that is flexible and combines low resource cost with quantifiable accuracy. The challenge is cheating resistance: when a network's performance is assessed based on the conditions experienced by a small traffic sample, the network has a strong incentive to treat the sampled packets better than the rest. We contribute a sampling algorithm that is provably robust to such prioritization attacks, enables network performance estimation with quantifiable accuracy, and requires minimal resources. We confirm our analysis using real traffic traces.
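    A common way to make sampling decisions unpredictable at forwarding time is to base them on a keyed hash whose key is revealed only after the packets have been forwarded, i.e., retroactively: the node cannot tell which packets will end up in the sample, so it cannot prioritize them. The snippet below illustrates that idea only; the predicate, key handling and sampling rate are assumptions, not the paper's exact algorithm or guarantees.

        import hashlib
        import hmac

        def is_sampled(packet_bytes: bytes, interval_key: bytes, rate: float = 0.01) -> bool:
            """Keyed hash-based sampling predicate. If interval_key is disclosed only
            after the measurement interval ends, the forwarding node cannot know at
            forwarding time which packets this predicate will select."""
            digest = hmac.new(interval_key, packet_bytes, hashlib.sha256).digest()
            return int.from_bytes(digest[:8], "big") < int(rate * 2**64)

        # The monitor later reveals interval_key and collects receipts for exactly
        # the packets that satisfy this predicate.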

    Re-feedback: freedom with accountability for causing congestion in a connectionless internetwork

    This dissertation concerns adding resource accountability to a simplex internetwork such as the Internet, with only necessary but sufficient constraints on freedom: freedom for applications to evolve new, innovative behaviours while still responding responsibly to congestion, and freedom for network providers to structure their pricing in any way, including flat pricing. The big idea on which the research is built is a novel feedback arrangement termed ‘re-feedback’. A general form is defined, as well as a specific proposal (re-ECN) to alter the Internet protocol so that self-contained datagrams carry a metric of expected downstream congestion. Congestion is chosen because of its central economic role as the marginal cost of network usage. The aim is to ensure that Internet resource allocation can be controlled either by local policies or by market selection (or indeed by a local lack of any control). The current Internet architecture is designed to reveal path congestion only to end-points, not to networks. The collective actions of self-interested consumers and providers should drive Internet resource allocation towards maximisation of total social welfare, but without visibility of a cost metric, network operators are violating the architecture to improve their customers' experience. The resulting fight against the architecture is destroying the Internet's simplicity and its ability to evolve. Although accountability with freedom is the goal, the focus is the congestion metric, and whether an incentive system is possible that assures its integrity as it is passed between parties around the system, despite proposed attacks motivated by self-interest and malice. This dissertation defines the protocol and canonical examples of accountability mechanisms. Designs are all derived from carefully motivated principles. The resulting system is evaluated by analysis and simulation against the constraints and principles originally set. The mechanisms are proven to be agnostic to specific transport behaviours, but they could not be made flow-ID-oblivious.
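    The accounting behind re-feedback reduces to a simple subtraction: the sender re-inserts the whole-path congestion level it learned from feedback, and any node on the path can subtract the congestion marks already accumulated upstream to estimate what remains downstream, which is what policing or charging can then act on. The sketch below is just that arithmetic; the field names and fraction-based encoding are illustrative, not the re-ECN wire format.

        def expected_downstream_congestion(declared_at_sender: float,
                                           marks_seen_so_far: float) -> float:
            """Re-feedback's core accounting step: declared whole-path congestion
            minus congestion already observed upstream of this node."""
            return declared_at_sender - marks_seen_so_far

        # Example: the sender declares 2% whole-path congestion; a border router has
        # already seen 0.5% of packets congestion-marked upstream, so it attributes
        # roughly 1.5% to the rest of the path.
        remaining = expected_downstream_congestion(0.02, 0.005)  # 0.015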