A Program Logic for Verifying Secure Routing Protocols
The Internet, as it stands today, is highly vulnerable to attacks. However,
little has been done to understand and verify the formal security guarantees of
proposed secure inter-domain routing protocols, such as Secure BGP (S-BGP). In
this paper, we develop a sound program logic for SANDLog, a declarative
specification language for secure routing protocols, and use it to verify
properties of these protocols. We prove invariant properties of SANDLog programs that run in
an adversarial environment. As a step towards automated verification, we
implement a verification condition generator (VCGen) to automatically extract
proof obligations. VCGen is integrated into a compiler for SANDLog that can
generate executable protocol implementations; and thus, both verification and
empirical evaluation of secure routing protocols can be carried out in this
unified framework. To validate our framework, we encoded several proposed
secure routing mechanisms in SANDLog, verified variants of path authenticity
properties by manually discharging the generated verification conditions in
Coq, and generated executable code from the SANDLog specifications and ran the
code in simulation.
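The path-authenticity properties mentioned above can be illustrated with a toy model: in S-BGP-style protocols, each AS on a path attests to having forwarded the route to a specific next AS, so a receiver can check the whole chain. The sketch below uses HMACs with made-up per-AS keys purely for illustration; the real S-BGP design uses certificates and signed attestations, and the actual invariants are the ones proved in Coq in the paper.

```python
import hashlib
import hmac

# Toy model of S-BGP-style path attestation: each AS "signs" the
# (prefix, path-so-far, next AS) with its own key. Keys and message
# layout are hypothetical, not the real S-BGP certificate format.
KEYS = {65001: b"k1", 65002: b"k2", 65003: b"k3"}  # per-AS secrets (illustrative)

def attest(asn, prefix, path, next_as):
    msg = f"{prefix}|{','.join(map(str, path))}|{next_as}".encode()
    return hmac.new(KEYS[asn], msg, hashlib.sha256).hexdigest()

def path_authentic(prefix, path, attestations):
    """Check that every AS on the path attested to forwarding to its successor."""
    for i, asn in enumerate(path[:-1]):
        expected = attest(asn, prefix, path[: i + 1], path[i + 1])
        if not hmac.compare_digest(expected, attestations[i]):
            return False
    return True

path = [65001, 65002, 65003]
sigs = [attest(65001, "10.0.0.0/8", [65001], 65002),
        attest(65002, "10.0.0.0/8", [65001, 65002], 65003)]
print(path_authentic("10.0.0.0/8", path, sigs))                   # True
print(path_authentic("10.0.0.0/8", [65001, 65999, 65003], sigs))  # False: tampered path
```

An adversary who splices an AS into the path cannot produce the matching attestation without the corresponding key, which is the intuition behind the path-authenticity invariant.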
Towards a Rigorous Methodology for Measuring Adoption of RPKI Route Validation and Filtering
A proposal to improve routing security---Route Origin Authorization
(ROA)---has been standardized. A ROA specifies which network is allowed to
announce a set of Internet destinations. While some networks now specify ROAs,
little is known about whether other networks check routes they receive against
these ROAs, a process known as Route Origin Validation (ROV). Which networks
blindly accept invalid routes? Which reject them outright? Which de-preference
them if alternatives exist?
Recent analysis attempts to use uncontrolled experiments to characterize ROV
adoption by comparing valid routes and invalid routes. However, we argue that
gaining a solid understanding of ROV adoption is impossible using currently
available data sets and techniques. Our measurements suggest that, although
some ISPs are not observed using invalid routes in uncontrolled experiments,
they are actually using different routes for (non-security) traffic engineering
purposes, without performing ROV. We conclude with a description of a
controlled, verifiable methodology for measuring ROV and present three ASes
that do implement ROV, confirmed by operators.
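The ROV check the abstract describes can be stated concretely. Following the RFC 6811 validation states, a route is "valid" if some ROA covers its prefix with a matching origin AS and maximum length, "invalid" if ROAs cover the prefix but none match, and "unknown" if no ROA covers it. The ROA entries below are illustrative, not real registry contents:

```python
import ipaddress

# (prefix, max_length, authorized origin AS) -- hypothetical ROA data
ROAS = [
    ("192.0.2.0/24", 24, 64496),
    ("198.51.100.0/22", 24, 64497),
]

def rov_state(route_prefix, origin_as):
    """Classify a route per RFC 6811: valid / invalid / unknown."""
    net = ipaddress.ip_network(route_prefix)
    covered = False
    for roa_prefix, max_len, asn in ROAS:
        if net.subnet_of(ipaddress.ip_network(roa_prefix)):
            covered = True  # at least one ROA covers this prefix
            if net.prefixlen <= max_len and origin_as == asn:
                return "valid"
    return "invalid" if covered else "unknown"

print(rov_state("192.0.2.0/24", 64496))    # valid
print(rov_state("192.0.2.0/24", 64500))    # invalid: wrong origin AS
print(rov_state("203.0.113.0/24", 64496))  # unknown: no covering ROA
```

The measurement question the paper raises is precisely whether networks act on the "invalid" outcome (reject or de-preference) or ignore it.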
CAIR: Using Formal Languages to Study Routing, Leaking, and Interception in BGP
The Internet routing protocol BGP expresses topological reachability and
policy-based decisions simultaneously in path vectors. A complete view on the
Internet backbone routing is given by the collection of all valid routes, which
is infeasible to obtain due to information hiding of BGP, the lack of
omnipresent collection points, and data complexity. Commonly, graph-based data
models are used to represent the Internet topology from a given set of BGP
routing tables but fall short of explaining policy contexts. As a consequence,
routing anomalies such as route leaks and interception attacks cannot be
explained with graphs.
In this paper, we use formal languages to represent the global routing system
in a rigorous model. Our CAIR framework translates BGP announcements into a
finite route language that allows for the incremental construction of minimal
route automata. CAIR preserves route diversity, is highly efficient, and
well-suited to monitor BGP path changes in real-time. We formally derive
implementable search patterns for route leaks and interception attacks. In
contrast to the state-of-the-art, we can detect these incidents. In practical
experiments, we analyze public BGP data over the last seven years.
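CAIR's core idea, treating each AS path as a word in a finite route language, can be sketched with a simple (non-minimized) trie over AS paths, plus a leak pattern. The relationship data and the valley-free leak rule below are illustrative stand-ins; CAIR derives its search patterns formally and builds minimal automata incrementally.

```python
# Hypothetical AS relationships: (from, to) -> role of `to` as seen from `from`
REL = {(64496, 64497): "provider", (64497, 64498): "customer",
       (64498, 64499): "provider"}

class RouteTrie:
    """Store a finite route language; a toy stand-in for CAIR's route automata."""
    def __init__(self):
        self.root = {}

    def insert(self, path):            # incremental construction, one route at a time
        node = self.root
        for asn in path:
            node = node.setdefault(asn, {})
        node[None] = True              # end-of-path marker

    def contains(self, path):
        node = self.root
        for asn in path:
            if asn not in node:
                return False
            node = node[asn]
        return None in node

def looks_like_leak(path):
    """Valley-free violation: the route re-ascends to a provider
    after having already been propagated down to a customer."""
    went_down = False
    for a, b in zip(path, path[1:]):
        rel = REL.get((a, b))
        if rel == "customer":
            went_down = True
        elif rel == "provider" and went_down:
            return True
    return False

trie = RouteTrie()
trie.insert([64496, 64497, 64498, 64499])
print(trie.contains([64496, 64497, 64498, 64499]))   # True
print(looks_like_leak([64496, 64497, 64498, 64499])) # True: down, then up again
```

Expressing routes as words is what lets such anomaly patterns be posed as language queries rather than graph reachability questions.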
Diagnose network failures via data-plane analysis
Diagnosing problems in networks is a time-consuming and error-prone process. Previous tools to assist operators primarily focus on analyzing control
plane configuration. Configuration analysis is limited in that it cannot find
bugs in router software, and is harder to generalize across protocols since it
must model complex configuration languages and dynamic protocol behavior.
This paper studies an alternate approach: diagnosing problems through
static analysis of the data plane. This approach can catch bugs that are
invisible at the level of configuration files, and simplifies unified analysis of a
network across many protocols and implementations. We present Anteater, a
tool for checking invariants in the data plane. Anteater translates high-level
network invariants into boolean satisfiability problems, checks them against
network state using a SAT solver, and reports counterexamples if violations
have been found. Applied to a large campus network, Anteater revealed 23
bugs, including forwarding loops and stale ACL rules, with only five false
positives. Nine of these faults are being fixed by campus network operators.
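Anteater encodes invariants such as loop freedom symbolically into SAT; the same invariant can be illustrated concretely on a toy data plane by following longest-prefix-match forwarding entries and watching for a revisited router. The FIBs below are made up, and this direct walk checks one packet at a time, whereas the SAT encoding covers all packet headers at once.

```python
import ipaddress

FIB = {  # router -> list of (prefix, next hop); next hop None = deliver locally
    "r1": [("10.0.0.0/8", "r2")],
    "r2": [("10.0.0.0/8", "r3"), ("10.1.0.0/16", "r1")],  # r1<->r2 loop for 10.1/16
    "r3": [("10.0.0.0/8", None)],
}

def lookup(router, dst):
    """Longest-prefix-match lookup in one router's FIB."""
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(p).prefixlen, nh)
               for p, nh in FIB[router]
               if addr in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0])[1] if matches else None

def forwarding_loop(start, dst):
    """True if the packet revisits a router, i.e. loop freedom is violated."""
    seen, router = set(), start
    while router is not None:
        if router in seen:
            return True
        seen.add(router)
        router = lookup(router, dst)
    return False

print(forwarding_loop("r1", "10.1.0.5"))  # True: the /16 entry bounces back to r1
print(forwarding_loop("r1", "10.2.0.5"))  # False: delivered at r3
```

Because the check runs on forwarding state rather than configuration, it would catch this loop even if it were caused by a router software bug invisible in the configuration files.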
Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media including text, image, 3D graphics, audio
and video are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, Fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications and of technological innovations concerning
media formats, wireless networks, terminal types and capabilities. And there is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file and over 47 million of them do so
regularly, searching in more than 160 Exabytes of content. In the near future these numbers are expected
to rise exponentially. Internet content is expected to increase by at least a factor of 6, rising
to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in the near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer
in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and be able to support enhanced perceived quality-of-experience (PQoE) and
innovative applications "on the move", like virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming and edutainment. In this context, the interaction with content
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and the dynamic adaptation to the characteristics of diverse mobile terminals are expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Program 6 (FP6)
and Framework Program 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed to this white paper, aiming to describe the status, the state of the art, the challenges and the way
ahead in the area of content-aware media delivery platforms.
A Survey on the Contributions of Software-Defined Networking to Traffic Engineering
Since the appearance of OpenFlow back in 2008, software-defined networking (SDN) has gained momentum. Although there are some discrepancies among the standards-developing organizations working with SDN about what SDN is and how it is defined, they all outline traffic engineering (TE) as a key application. One of the most common objectives of TE is congestion minimization, where techniques such as traffic splitting among multiple paths or advanced reservation systems are used. In this scenario, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols have been categorized using the SDN architecture proposed by the Open Networking Foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to state how the type of interface in which they operate influences TE. In addition, the impact of the SDN protocols on TE has been evaluated by comparing them with the path computation element (PCE)-based architecture. The PCE-based architecture was selected to measure the impact of SDN on TE because it is the most novel TE architecture to date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously will result in more powerful and enhanced TE solutions, since they benefit TE in complementary ways.
Funded by the European Commission through the Horizon 2020 Research and Innovation Programme (GN4) under Grant 691567, and by the Spanish Ministry of Economy and Competitiveness under the Secure Deployment of Services Over SDN and NFV-based Networks Project (S&NSEC) under Grant TEC2013-47960-C4-3-
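The traffic splitting that the survey highlights as a common TE technique can be sketched with a deliberately simple policy: split a demand across candidate paths in proportion to each path's residual capacity. Real SDN controllers typically solve this as an optimization problem; the proportional rule and the capacity figures here are illustrative only.

```python
def split_ratios(residual_capacities):
    """Split traffic across paths in proportion to residual capacity
    (a naive congestion-avoidance heuristic, not an optimal TE solution)."""
    total = sum(residual_capacities)
    if total == 0:
        raise ValueError("no residual capacity on any path")
    return [c / total for c in residual_capacities]

# Three candidate paths with 40, 40 and 20 Mb/s of headroom:
print(split_ratios([40, 40, 20]))  # [0.4, 0.4, 0.2]
```

A centralized controller with the global view that SDN provides can recompute such ratios as link utilization changes, which is exactly the kind of TE loop the surveyed interfaces enable.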