229 research outputs found
Performance Evaluation of Distributed Security Protocols Using Discrete Event Simulation
The Border Gateway Protocol (BGP) that manages inter-domain routing on the Internet lacks security. Protective measures using public key cryptography introduce complexities and costs. To support authentication and other security functionality in large networks, we need public key infrastructures (PKIs). Protocols that distribute and validate certificates introduce additional complexities and costs. The certification path building algorithm that helps users establish trust in certificates in a distributed network environment is particularly complicated. Neither routing security nor PKI comes for free. Prior to this work, research on the performance of these large-scale distributed security systems was minimal. In this thesis, we evaluate the performance of BGP security protocols and PKI systems. We answer the questions of how performance affects protocol behavior and how we can improve the efficiency of these distributed protocols to bring them one step closer to reality. The complexity of the Internet makes an analytical approach difficult, and the scale of the Internet makes empirical approaches equally unworkable. Consequently, we take the approach of simulation. We have built simulation frameworks to model a number of BGP security protocols and PKI systems. We have identified performance problems in Secure BGP (S-BGP), a primary BGP security protocol, and proposed and evaluated the Signature Amortization (S-A) and Aggregated Path Authentication (APA) schemes, which significantly improve the efficiency of S-BGP without compromising security. We have also built a simulation framework for general PKI systems, evaluated certification path building algorithms, a critical part of establishing trust in an Internet-scale PKI, and used this framework to improve algorithm performance.
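The amortization idea behind S-A can be illustrated with a small sketch: instead of signing every route attestation individually, a speaker builds a Merkle hash tree over a batch of pending messages and signs only the root, while each message ships with the sibling hashes needed to recompute that root. This is an illustrative reconstruction of the general technique, not the thesis's exact S-A construction; all function names are ours, and the actual public-key signature over the root is omitted.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree over the messages; returns levels, leaves first."""
    level = [h(m) for m in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels                         # levels[-1][0] is the signed root

def auth_path(levels, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        path.append((sibling < index, level[sibling]))
        index //= 2
    return path

def verify(message, path, root):
    """Recompute the root from one message; one root signature covers all."""
    node = h(message)
    for sibling_is_left, sibling in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

One signature verification on the root then authenticates every message in the batch, which is the source of the amortized cost savings.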
Secure Inter-domain Routing and Forwarding via Verifiable Forwarding Commitments
The Internet inter-domain routing system is vulnerable. On the control plane,
the de facto Border Gateway Protocol (BGP) does not have built-in mechanisms to
authenticate routing announcements, so an adversary can announce virtually
arbitrary paths to hijack network traffic; on the data plane, it is difficult
to ensure that the actual forwarding path complies with the control plane
decisions. The community has devoted significant research effort to securing the
routing system. Yet, existing secure BGP protocols (e.g., BGPsec) are not
incrementally deployable, and existing path authorization protocols are not
compatible with the current Internet routing infrastructure. In this paper, we
propose FC-BGP, the first secure Internet inter-domain routing system that can
simultaneously authenticate BGP announcements and validate data plane
forwarding in an efficient and incrementally-deployable manner. FC-BGP is built
upon a novel primitive, named Forwarding Commitment, to certify an AS's routing
intent on its directly connected hops. We analyze the security benefits of
FC-BGP in the Internet at different deployment rates. Further, we implement a
prototype of FC-BGP and extensively evaluate it over a large-scale overlay
network with 100 virtual machines deployed globally. The results demonstrate
that FC-BGP saves roughly 55% of the overhead required to validate BGP
announcements compared with BGPsec, while introducing only a small overhead
for building a globally consistent view of the desirable forwarding paths.
Comment: 16 pages, 17 figures
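As a rough illustration of the Forwarding Commitment primitive described above (an AS certifying its routing intent on its directly connected hops), the sketch below binds an AS number, a prefix, and a (previous hop, next hop) pair into a verifiable commitment, and checks that every AS on a path has committed to its hops. An HMAC with a per-AS secret key stands in for FC-BGP's actual public-key signatures, and the field layout is our assumption, not the paper's wire format.

```python
import hashlib, hmac

def make_fc(as_key: bytes, asn: int, prefix: str, prev_hop: int, next_hop: int):
    """Commitment by `asn` to forward `prefix` traffic from prev_hop to next_hop."""
    msg = f"{asn}|{prefix}|{prev_hop}|{next_hop}".encode()
    tag = hmac.new(as_key, msg, hashlib.sha256).hexdigest()
    return {"asn": asn, "prefix": prefix, "prev": prev_hop,
            "next": next_hop, "tag": tag}

def verify_fc(as_key: bytes, fc: dict) -> bool:
    msg = f"{fc['asn']}|{fc['prefix']}|{fc['prev']}|{fc['next']}".encode()
    return hmac.compare_digest(
        fc["tag"], hmac.new(as_key, msg, hashlib.sha256).hexdigest())

def verify_path(keys: dict, prefix: str, path: list, fcs: list) -> bool:
    """Check that each AS on the path committed to its (prev, next) hop pair."""
    hops = [0] + path + [0]            # 0 marks the origin/terminal ends
    for i, fc in enumerate(fcs):
        asn = hops[i + 1]
        if fc["asn"] != asn or fc["prefix"] != prefix:
            return False
        if fc["prev"] != hops[i] or fc["next"] != hops[i + 2]:
            return False
        if not verify_fc(keys[asn], fc):
            return False
    return True
```

Because each commitment only covers an AS's directly connected hops, a new commitment is needed only when a local adjacency changes, which is consistent with the incremental-deployment goal stated in the abstract.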
Gateway Architectures for Interaction between the Current Internet and Future Internet Architectures
In this project, we design, analyze, and implement a gateway for the SCION secure Internet architecture. This enables communication between legacy IP hosts and SCION hosts, and enables legacy IP traffic to be encapsulated and transported over the SCION network. We also analyze the security implications and benefits of interfacing legacy traffic with SCION, and how the SCION Gateway can provide DDoS defense properties for the legacy hosts it serves, without requiring any infrastructure changes.
Internet of Things From Hype to Reality
The Internet of Things (IoT) has gained significant mindshare and attention in academia and industry, especially over the past few years. The reasons behind this interest are the potential capabilities that IoT promises to offer. On a personal level, it paints a picture of a future world where all the things in our ambient environment are connected to the Internet and seamlessly communicate with each other to operate intelligently. The ultimate goal is to enable the objects around us to efficiently sense our surroundings, inexpensively communicate, and ultimately create a better environment for us: one where everyday objects act based on what we need and like, without explicit instructions.
State-of-the-Art Multihoming Protocols and Support for Android
The most important goal for future wireless connectivity will be to fully exploit the potential offered by all the network interfaces of mobile devices. For this reason, multihoming will in all likelihood become a mandatory requirement for applications that aim to deliver the best possible user experience. Concisely, multihoming can be defined as the complex process by which an end-host or end-site has multiple points of attachment to the network. In practice, however, multihoming has proved difficult to implement, and even harder to optimize.
Indeed, to date multihoming is far from being considered a standard feature in network deployments, despite years of research and development in the field, since protocol support for it is almost always entirely inadequate.
Naturally, for Android as well, being the most widely used mobile platform in the world, supporting multihoming is of fundamental importance in order to broaden the range of features offered to its users. In light of this, this thesis presents the state of the art of multihoming support in Android, comparing several network protocols and testing the solution that appears to be by far the most promising: LISP.
Having examined the state of the art of multihoming-capable protocols and the software architecture of LISPmob for Android, the main operational objective of this research is twofold: a) to test seamless roaming across the various network interfaces of an Android device, which is indeed one of the goals of multihoming, through LISPmob; and b) to carry out a large number of tests in order to obtain, from experimental data, several important parameters concerning LISP performance, so as to understand how realistic it is for the end user to adopt it as an effective multihoming solution.
Development of a secure monitoring framework for optical disaggregated data centres
Data center (DC) infrastructures are a key piece of today's telecom and cloud service delivery, enabling the access and storage of enormous quantities of information as well as the execution of complex applications and services. This trend is accentuated by the advent of 5G and beyond architectures, since a significant portion of network and service functions are being deployed as specialized virtual elements inside dedicated DC infrastructures. As such, the development of new architectures to better exploit DC resources becomes of paramount importance. The mismatch between the variability of resources required by running applications and the fixed amount of resources in server units severely limits resource utilization in today's Data Centers (DCs). The Disaggregated DC (DDC) paradigm was recently introduced to address these limitations. The main idea behind DDCs is to divide the various computational resources into independent hardware modules/blades, which are mounted in racks, bringing greater modularity and allowing operators to optimize their deployments for improved efficiency and performance, thus offering high resource allocation flexibility. Moreover, to efficiently exploit the hardware blades and establish connections across them according to upper-layer requirements, a flexible control and management framework is required. In this regard, following current industrial trends, the Software Defined Networking (SDN) paradigm is one of the leading technologies for the control of DC infrastructures, allowing for the establishment of high-speed, low-latency optical connections between hardware components in DDCs in response to the demands of higher-level services and applications.
With these concepts in mind, the primary objective of this thesis is to design and implement the control of a DDC infrastructure layer that is founded on SDN principles and makes use of optical technologies for the intra-DC network fabric, highlighting the importance of quality control and monitoring. Thanks to several SDN agents, it becomes possible to gather statistics and metrics from the multiple infrastructure elements (computational blades and network equipment), allowing DC operators to monitor and make informed decisions on how to utilize the infrastructure resources to the greatest extent feasible. Indeed, quality assurance operations are critically important in modern DC infrastructures; it thus becomes essential to guarantee a secure communication channel for gathering infrastructure metrics/statistics and enforcing (re-)configurations, closing the full loop, by encrypting the communication channel and providing authentication for both the server and the client.
Naming and discovery in networks : architecture and economics
In less than three decades, the Internet was transformed from a research network available to the academic community into an international communication infrastructure. Despite its tremendous success, there is a growing consensus in the research community that the Internet has architectural limitations that need to be addressed in an effort to design a future Internet. Among the main technical limitations are the lack of mobility support, and the lack of security and trust. The Internet, and particularly TCP/IP, identifies endpoints using a location/routing identifier, the IP address. Coupling the endpoint identifier to the location identifier hinders mobility and poorly identifies the actual endpoint. On the other hand, the lack of security has been attributed to limitations in both the network and the endpoint. Authentication, for example, is one of the main concerns in the architecture and is hard to implement, partly due to the lack of identity support. The general problem that this dissertation is concerned with is that of designing a future Internet. Toward this end, we focus on two specific sub-problems. The first problem is the lack of a framework for thinking about architectures and their design implications. It was obvious after surveying the literature that the majority of the architectural work remains idiosyncratic and descriptions of network architectures are mostly idiomatic. This has led to the overloading of architectural terms, and to the emergence of a large body of network architecture proposals with no clear understanding of their cross-similarities, compatibility points, unique properties, and architectural performance and soundness. The second problem concerns the limitations of traditional naming and discovery schemes in terms of service differentiation and economic incentives. One of the recurring themes in the community is the need to separate an entity's identifier from its locator to enhance mobility and security.
Separation of identifier and locator is a widely accepted design principle for a future Internet. Separation, however, requires a process to translate from the identifier to the locator when discovering a network path to some identified entity. We refer to this process as identifier-based discovery, or simply discovery, and we recognize two limitations that are inherent in the design of traditional discovery schemes. The first limitation is the homogeneity of the service, where all entities are assumed to have the same discovery performance requirements. The second limitation is the inherent incentive mismatch as it relates to sharing the cost of discovery. This dissertation addresses both sub-problems, the architectural framework as well as the naming and discovery limitations.
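The identifier-based discovery process described above can be sketched as a small resolver: entities register a flat identifier together with one or more locators, and lookups translate the identifier back to a current locator. The `tier` field is a hypothetical knob, added only to illustrate the service-differentiation gap the dissertation points at; all names are ours, not the dissertation's API.

```python
class DiscoveryService:
    """Toy identifier-to-locator resolver (illustrative, not a real scheme)."""

    def __init__(self):
        self._table = {}

    def register(self, identifier: str, locator: str, tier: str = "best-effort"):
        # First registration fixes the (hypothetical) service tier;
        # later registrations record locator changes, e.g. after mobility.
        entry = self._table.setdefault(identifier, {"tier": tier, "locators": []})
        entry["locators"].append(locator)

    def resolve(self, identifier: str):
        entry = self._table.get(identifier)
        if entry is None:
            raise KeyError(f"unknown identifier: {identifier}")
        # Mobility benefit: the identifier stays fixed while locators change.
        return entry["locators"][-1], entry["tier"]
```

In a traditional, homogeneous scheme every lookup would be served identically; attaching a per-identifier tier is one way to express the differentiated service requirements the dissertation argues for.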
Global Data Plane: A Widely Distributed Storage and Communication Infrastructure
With the advancement of technology, richer computation devices are making their way into everyday life. However, such smarter devices merely act as sources and sinks of information; the storage of information is highly centralized in data centers in today's world. Even though such data centers allow the cost per bit of information to be amortized, their density and distribution are not necessarily representative of human population density. This disparity between where information is produced and consumed and where it is stored only slightly affects the applications of today, but it will be the limiting factor for the applications of tomorrow. The computation resources at the edge are more powerful than ever, and present an opportunity to address this disparity. We envision that a seamless combination of these edge resources with data-center resources is the way forward. However, the resulting issues of trust and data security are not easy to solve in a world full of complexity. Toward this vision of a federated infrastructure composed of resources at the edge as well as those in data centers, we describe the architecture and design of a widely distributed system for data storage and communication that attempts to alleviate some of these data security challenges; we call this system the Global Data Plane (GDP). The key abstraction in the GDP is a secure, cohesive container of information called a DataCapsule, which provides a layer of uniformity on top of a heterogeneous infrastructure. A DataCapsule represents a secure history of transactions in a persistent form that can be used for building other applications on top. Existing applications can be refactored to use DataCapsules as the ground truth of persistent state; such a refactoring enables cleaner application design that allows for better security analysis of information flows.
Beyond cleaner design, the GDP also enables locality of access for performance and data privacy, an ever-growing concern in the information age. DataCapsules are enabled by an underlying routing fabric, called the GDP network, which provides secure routing for datagrams in a flat namespace. The GDP network is a core component of the GDP that enables the various GDP components to interact with each other. In addition to DataCapsules, this underlying network is available to applications for native communication as well. Flat-namespace networks are known to provide a number of desirable properties, such as location independence and built-in multicast. However, existing architectures for such networks suffer from routing security issues, typically because malicious entities can claim to possess arbitrary names and thus receive traffic intended for arbitrary destinations. The GDP network takes a different approach by defining ownership of a name and the associated mechanisms for participants to delegate routing for such names to others. By directly integrating with the GDP network, applications can enjoy the benefits of flat-namespace networks without compromising routing security. The Global Data Plane and DataCapsules together represent our vision for secure ubiquitous storage. As opposed to the current approach of perimeter security for infrastructure, i.e., drawing a perimeter around parts of the infrastructure and trusting everything inside it, our vision is to use cryptographic tools to enable intrinsic security for the information itself, regardless of the context in which that information lives. In this dissertation, we show how to make this vision a reality, and how to adapt real-world applications to reap the benefits of secure ubiquitous storage.
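The "secure history of transactions" role of a DataCapsule can be sketched as a hash-chained append-only log: each appended record carries the hash of its predecessor, so any in-place edit breaks the chain and is detected on verification. This is our minimal reading of the abstract, not the GDP's actual record format, and a full DataCapsule would additionally sign records; signatures are omitted here.

```python
import hashlib, json

def record_hash(record: dict) -> str:
    """Deterministic hash of a record's canonical JSON encoding."""
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()

class DataCapsule:
    """Toy append-only, tamper-evident log (illustrative sketch)."""

    GENESIS = "0" * 64

    def __init__(self, name: str):
        self.name = name
        self.records = []

    def append(self, payload: str) -> dict:
        prev = record_hash(self.records[-1]) if self.records else self.GENESIS
        rec = {"seq": len(self.records), "prev": prev, "payload": payload}
        self.records.append(rec)
        return rec

    def verify(self) -> bool:
        """Recompute the chain; any in-place edit breaks a prev link."""
        prev = self.GENESIS
        for rec in self.records:
            if rec["prev"] != prev:
                return False
            prev = record_hash(rec)
        return True
```

Using such a chain as the ground truth of persistent state is what lets the integrity of an application's history be checked independently of where the records are stored, which matches the abstract's goal of security intrinsic to the information itself.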
UCLP in flow state router platforms
Today, optical services are essentially static: users request from their
providers or ISPs the bandwidth they need for a certain period of time. These
services are provisioned manually, which can be a long and costly task. Users,
or the applications themselves, need flexibility to control their services
across different independent domains, since they are in a better position than
the providers to choose and manage optical paths suited to their needs. This
idea has given rise to a new paradigm in the networking world called
"user-controlled networks".
With the sponsorship of Canarie, research is under way to realize
user-controlled networks. This research has produced a system called
"User-Controlled Lightpath Provisioning" (UCLP), which allows users to
establish end-to-end optical channels across different Autonomous Systems. The
software is being developed at the CRC (Communications Research Centre,
Canada) in collaboration with the i2Cat Foundation.
UCLP is a distributed management system that builds on the idea of the OON
(Object-Oriented Network) and can be described as a partitioning and
configuration tool that represents each resource of a physical network
(fibres, cards) as a service or object. This service/object can be placed
under the control of several network users so that they can create their own
IP network topologies.
Until now, UCLP has worked with layer 1 and layer 2 equipment. The main
objective of this project is to find a solution for integrating layer 3
equipment into the system. Specifically, the goal is to introduce a router
based on a new technology called "flow-state". This technology, which
recognizes flows by means of certain hash functions, performs extensive
processing on the first packet of a flow, associates the flow with a state,
and applies the result of that processing to subsequent packets of the flow,
which, instead of being routed, are simply forwarded without consulting the
routing tables.
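The flow-state forwarding behaviour described above can be sketched in a few lines: the first packet of a flow goes through the full (expensive) routing lookup, the resulting output interface is cached under a hash of the flow's 5-tuple, and subsequent packets of the same flow are forwarded from the cache without touching the routing table. The class, interface names, and the stand-in lookup are illustrative assumptions, not the actual router platform.

```python
class FlowStateRouter:
    """Toy model of flow-state forwarding (illustrative sketch)."""

    def __init__(self):
        self.flow_cache = {}      # flow-key hash -> cached forwarding action
        self.slow_path_hits = 0   # how often the full lookup ran

    def full_lookup(self, dst: str) -> str:
        """Stand-in for the full routing-table lookup (hypothetical policy:
        10.0.0.0/8 goes out if0, everything else out if1)."""
        first_octet = int(dst.split(".")[0])
        return "if0" if first_octet == 10 else "if1"

    def forward(self, src, dst, sport, dport, proto="tcp") -> str:
        key = hash((src, dst, sport, dport, proto))   # flow 5-tuple hash
        if key not in self.flow_cache:                # first packet: slow path
            self.slow_path_hits += 1
            self.flow_cache[key] = self.full_lookup(dst)
        return self.flow_cache[key]                   # later packets: fast path
```

The counter makes the benefit visible: for an n-packet flow the routing table is consulted once, not n times, which is exactly the behaviour the project wants to expose to UCLP at layer 3.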
Department of Computer Science Activity 1998-2004
This report summarizes much of the research and teaching activity of the Department of Computer Science at Dartmouth College between late 1998 and late 2004. The material for this report was collected as part of the final report for NSF Institutional Infrastructure award EIA-9802068, which funded equipment and technical staff during that six-year period. This equipment and staff supported essentially all of the department's research activity during that period.