
    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable compiles and evaluates the available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain virtual network environment. Its scope is mainly the virtualization of resources within the network and at processing nodes. Virtualizing the FEDERICA infrastructure allows its available resources to be provisioned to users by means of FEDERICA slices. A slice appears to the user as a real physical network under his or her control, but it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree possible, all the properties of a physical network (isolation, reproducibility, manageability, and so on). Because no standard definitions of network virtualization or its associated architectures currently exist, this deliverable proposes a Virtual Network layer architecture and evaluates a set of management and control planes that can be used to partition and virtualize the FEDERICA network resources. The evaluation was performed against an initial set of FEDERICA requirements; possible extensions of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a basic set of definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will serve as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the set of user requirements: it is crucial that the resulting architecture fits the demands users may have. Since this deliverable was produced in parallel with the user-contact process carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices but also a slice of the processing resources. These processing resources are provided as virtual machine instances that users can configure as software routers or end nodes, onto which they can load the protocols or applications they have developed and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
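    To make the slice concept above concrete, here is a minimal Python sketch of a slice as a set of virtual nodes and links mapped onto physical resources; the class names, fields, and host names are hypothetical illustrations, not the taxonomy defined in the deliverable.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical data model: names and fields are illustrative only,
# not the actual FEDERICA taxonomy established in the deliverable.

@dataclass
class VirtualNode:
    name: str                 # name visible inside the slice
    physical_host: str        # physical node hosting the virtual instance
    kind: str = "vm"          # e.g. a VM acting as a software router or end node

@dataclass
class VirtualLink:
    endpoints: tuple          # pair of virtual node names
    bandwidth_mbps: int       # share of the physical link reserved for the slice

@dataclass
class Slice:
    """A user-facing virtual network mapped onto a partition of physical resources."""
    owner: str
    nodes: Dict[str, VirtualNode] = field(default_factory=dict)
    links: List[VirtualLink] = field(default_factory=list)

    def add_node(self, node: VirtualNode) -> None:
        self.nodes[node.name] = node

    def add_link(self, link: VirtualLink) -> None:
        # The slice only sees virtual endpoints; isolation from other
        # slices is enforced by the underlying partitioning.
        for end in link.endpoints:
            if end not in self.nodes:
                raise ValueError(f"unknown virtual node: {end}")
        self.links.append(link)

# Example: a two-node slice that behaves, from the user's viewpoint, like a real network.
s = Slice(owner="researcher-a")
s.add_node(VirtualNode("r1", physical_host="pop-site-1"))
s.add_node(VirtualNode("r2", physical_host="pop-site-2"))
s.add_link(VirtualLink(endpoints=("r1", "r2"), bandwidth_mbps=100))
```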

    A Broadband Access Market Framework: Towards Consumer Service Level Agreements

    Ubiquitous broadband access is considered by many to be necessary for the Internet to realize its full potential, but there is no generally accepted definition of what constitutes broadband access. Furthermore, there is only limited understanding of how the quality of end-to-end broadband Internet services might be assured in today's nascent multi-service, multi-provider environment. The absence of generally accepted and standardized service definitions and mechanisms for assuring service quality is a significant barrier to competitive broadband access markets. In the business data services market and in the core of the Internet, this problem has been addressed, in part, by increased reliance on Service Level Agreements (SLAs). SLAs provide a mechanism for service providers and customers to flexibly specify the quality of service (QoS) that will be delivered. When used in conjunction with new standards-based technical solutions for implementing QoS, these SLAs are helping to facilitate the development of robust wholesale markets for backbone transport and content delivery services for commercial customers. The emergence of bandwidth traders, brokers, and exchanges provides an institutional and market-based framework to support effective competition.
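    To illustrate what an SLA that flexibly specifies QoS might contain, the following minimal sketch expresses an SLA as a typed set of QoS targets with a simple violation check; all parameter names and threshold values are hypothetical examples, not figures from the article.

```python
from dataclasses import dataclass

# Hypothetical SLA specification: parameter names and values are
# illustrative only, not taken from the article.

@dataclass(frozen=True)
class QoSTargets:
    min_downstream_mbps: float
    min_upstream_mbps: float
    max_one_way_latency_ms: float
    max_packet_loss_pct: float
    min_monthly_availability_pct: float

@dataclass(frozen=True)
class ServiceLevelAgreement:
    provider: str
    customer: str
    targets: QoSTargets
    credit_pct_per_violation: float   # remedy if a target is missed

    def is_violated(self, measured: dict) -> bool:
        """Compare a month's measured metrics against the contracted targets."""
        t = self.targets
        return (
            measured["downstream_mbps"] < t.min_downstream_mbps
            or measured["upstream_mbps"] < t.min_upstream_mbps
            or measured["latency_ms"] > t.max_one_way_latency_ms
            or measured["loss_pct"] > t.max_packet_loss_pct
            or measured["availability_pct"] < t.min_monthly_availability_pct
        )

# Example: a consumer broadband SLA and one month of measurements.
sla = ServiceLevelAgreement(
    provider="AccessCo",
    customer="household-42",
    targets=QoSTargets(25.0, 5.0, 40.0, 0.5, 99.5),
    credit_pct_per_violation=10.0,
)
print(sla.is_violated({"downstream_mbps": 23.1, "upstream_mbps": 5.4,
                       "latency_ms": 31.0, "loss_pct": 0.2,
                       "availability_pct": 99.7}))  # True: downstream below target
```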

    Show Me the Money: Contracts and Agents in the Service Level Agreement Markets

    Delivering real-time services (Internet telephony, video conferencing, and streaming media, as well as business-critical data applications) across the Internet requires end-to-end quality of service (QoS) guarantees, which in turn require a hierarchy of contracts. These standardized contracts may be referred to as Service Level Agreements (SLAs). SLAs provide a mechanism for service providers and customers to flexibly specify the service to be delivered. The emergence of bandwidth and service agents, traders, brokers, exchanges, and contracts can provide an institutional and business framework to support effective competition. This article identifies issues that must be addressed by SLAs for consumer applications. We introduce a simple taxonomy for classifying SLAs based on the identity of the contracting parties, and we conclude by discussing implications for public policy, Internet architecture, and competition.
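    A rough sketch of that classification idea follows: label each SLA by who contracts with whom, and build an end-to-end guarantee from a chain of such contracts. The party types and rules below are illustrative placeholders, not the article's actual taxonomy.

```python
from enum import Enum, auto

# Hypothetical party types and classifier: the article's real taxonomy may use
# different categories; this only illustrates the classification idea.

class Party(Enum):
    CONSUMER = auto()
    ACCESS_PROVIDER = auto()
    BACKBONE_PROVIDER = auto()
    CONTENT_PROVIDER = auto()

def classify_sla(buyer: Party, seller: Party) -> str:
    """Label an SLA by the identity of its contracting parties."""
    if buyer is Party.CONSUMER:
        return "retail SLA (consumer to provider)"
    if Party.BACKBONE_PROVIDER in (buyer, seller):
        return "wholesale SLA (provider to provider)"
    return "business SLA"

# An end-to-end guarantee is built from a chain (hierarchy) of such contracts.
chain = [
    (Party.CONSUMER, Party.ACCESS_PROVIDER),
    (Party.ACCESS_PROVIDER, Party.BACKBONE_PROVIDER),
    (Party.BACKBONE_PROVIDER, Party.CONTENT_PROVIDER),
]
for buyer, seller in chain:
    print(f"{buyer.name} -> {seller.name}: {classify_sla(buyer, seller)}")
```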

    Performance evaluation of HIP-based network security solutions

    Abstract. The Host Identity Protocol (HIP) is a networking technology that systematically separates the identifier and locator roles of IP addresses and introduces a Host Identity (HI) namespace based on a public-key security infrastructure. This modification offers a series of benefits such as mobility, multi-homing, end-to-end security, signaling, control/data-plane separation, and firewall security. Although HIP has not yet been widely applied in mainstream communication networks, industry experts foresee its potential as an integral part of next-generation networks. HIP can be used in various HIP-aware applications as well as in traditional IP-address-based applications and networking technologies, taking middleboxes into account. One such application is Virtual Private LAN Service (VPLS), a widely used method of providing an Ethernet-based Virtual Private Network that supports the connection of geographically separated sites into a single bridged domain over an IP/MPLS network. The popularity of VPLS among commercial and defense organizations underscores the need for robust security features to protect both data and control information. After the different approaches to HIP are investigated, a real-world testbed is implemented. Two experimental scenarios are evaluated: one uses two open-source Linux-based HIP implementations (HIPL and OpenHIP), and the other uses two sets of enterprise equipment from two different companies (Tempered Networks and Byres Security). To account for a heterogeneous mix of network types, the open-source HIP implementations are evaluated in different network environments, namely Local Area Network (LAN), Wireless LAN (WLAN), and Wide Area Network (WAN). Each scenario is tested and evaluated for performance in terms of throughput, latency, and jitter. The measurement results confirm the assumption that no single solution is optimal in all considered aspects and scenarios. For instance, in the open-source implementations, the performance penalty of security on TCP throughput in the WLAN scenario is smaller for HIPL than for OpenHIP, while in the WAN scenario the reverse is the case. A similar outcome is observed for UDP throughput. On latency, however, HIPL showed lower values in all three network test scenarios. For the enterprise equipment experiment, the penalty of security on TCP throughput is about 19% compared with the non-secure scenario, while latency increases by about 87%. This work therefore provides useful information for researchers and decision makers on securing their VPNs, based on the application scenario and the performance penalties that come with each approach.
    Abstract (translated from the Finnish). The Host Identity Protocol (HIP) is a networking technology that uses a separate layer between the transport protocol and the Internet Protocol (IP) in the TCP/IP protocol stack. HIP systematically separates the network and host parts of an IP address and uses a Host Identity (HI) component based on a public-key security infrastructure. Its benefits include, for example, mobility, multi-homing, end-to-end security, separation of control information and data, rendezvous, address change, and firewall security. In industry, HIP is seen as part of next-generation networks, although it has not yet spread into wide commercial use. HIP can be used not only in various HIP-aware applications but also in traditional IP-address-based applications and networking technologies. One such application is Virtual Private LAN Service (VPLS), a widely used method for creating an Ethernet-based virtual private network over an IP/MPLS network that connects separate sites into a single bridged domain. The prevalence of VPLS in both commercial and defense organizations underscores the need for robust security features to protect data and control information. This work first examines the different approaches to HIP. After the theoretical review, practical tests are carried out on a purpose-built testbed. The scenarios considered compare two Linux-based open-source HIP implementations (HIPL and OpenHIP) and compare equipment from two different vendors (Tempered Networks and Byres Security). The HIP implementations are evaluated in different network environments, namely LAN, WLAN, and WAN. All tested cases are evaluated in terms of throughput, its variation (jitter), and latency. The measurement results show that the same solution is not optimal in all cases considered. For example, when a WLAN is used, the throughput loss caused by security is smaller for HIPL than for OpenHIP, whereas for a WAN the situation is reversed. Similar behavior is observed for UDP throughput. However, HIPL gives the lowest latency in all test scenarios. When comparing the vendors' equipment, TCP throughput is found to degrade by about 19 percent and latency to worsen by about 87 percent compared with the case where no security solution is used. The important information produced by this work can thus help practitioners find the optimal network security solution for VPN-based applications.
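    The penalty figures quoted above follow from comparing secured runs against unsecured baselines. The short sketch below shows that arithmetic; the baseline and secured values are invented purely to illustrate the calculation and are not the thesis's measurements.

```python
# Hypothetical numbers chosen only to illustrate how the penalty
# percentages reported above are computed; they are not the thesis's data.

def throughput_penalty_pct(baseline_mbps: float, secured_mbps: float) -> float:
    """Relative throughput lost when the security layer is enabled."""
    return (baseline_mbps - secured_mbps) / baseline_mbps * 100.0

def latency_increase_pct(baseline_ms: float, secured_ms: float) -> float:
    """Relative latency added when the security layer is enabled."""
    return (secured_ms - baseline_ms) / baseline_ms * 100.0

# Example: if an unsecured run measured 94 Mbit/s and 1.5 ms, and the secured
# run measured 76 Mbit/s and 2.8 ms, the penalties would be roughly:
print(f"TCP throughput penalty: {throughput_penalty_pct(94.0, 76.0):.0f}%")  # ~19%
print(f"latency increase:       {latency_increase_pct(1.5, 2.8):.0f}%")      # ~87%
```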

    Enterprise network convergence: path to cost optimization

    During the past two decades, telecommunications has evolved a great deal. In the eighties, people used television, radio, and the telephone as their communication systems. The introduction of the Internet and the WWW then transformed the telecommunications industry immensely. This Internet revolution brought about a huge change in the way businesses communicated and operated. Enterprise networks saw an increasing demand for bandwidth as they started to embrace newer technologies, and their requirements grew as the applications and services used on them expanded. This demand for fast, high-performance communication systems has now led to the emergence of converged network solutions. Enterprises across the globe are investigating new ways to carry voice, video, and data over a single network for various reasons: to optimize network costs, to restructure their communication systems, to extend next-generation networking capabilities, or to bridge the gap between their corporate network and current technological progress. To date, organizations have maintained multiple network services to support a range of communication needs. Investing in multiple communication infrastructures limits a network's ability to provide efficient bandwidth optimization throughout the system. Thus, as the requirement for corporate networks to handle dynamic traffic grows day by day, the need for a more effective and efficient network arises. A converged network is the solution for enterprises aspiring to employ advanced applications and innovative services. This thesis emphasizes the importance of converging the network infrastructure and shows that it leads to cost savings. It discusses the characteristics, architecture, and relevant protocols of voice, data, and video traffic over both traditional infrastructure and a converged architecture. While IP-based networks provide excellent quality for non-real-time data networking, an IP network by itself cannot provide reliable, high-quality, and secure service for real-time traffic. For IP networks to carry real-time data reliably and on time, additional mechanisms to reduce delay, jitter, and packet loss are required. Therefore, this thesis also discusses the key mechanisms for running real-time traffic such as voice and video over an IP network. Lastly, it provides an example set of enterprise network specifications (voice, video, and data) and presents an in-depth cost analysis of a typical network versus a converged network to show that converged infrastructures provide significant savings.
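    The cost argument reduces to comparing the yearly cost of running separate voice, video, and data infrastructures against one converged infrastructure. The sketch below shows the shape of that comparison; every cost figure is invented for the example and is not taken from the thesis.

```python
# Hypothetical cost model illustrating the kind of comparison the thesis performs;
# every figure below is invented for the example, not taken from the thesis.

separate_networks = {
    "voice (PBX trunks, handsets, maintenance)": 180_000,
    "video (dedicated conferencing circuits)":    60_000,
    "data (WAN links, routers, support)":        240_000,
}

converged_network = {
    "upgraded WAN links with QoS":               210_000,
    "IP telephony licences and handsets":         90_000,
    "single operations and support team":         80_000,
}

annual_separate = sum(separate_networks.values())
annual_converged = sum(converged_network.values())
savings = annual_separate - annual_converged

print(f"separate infrastructures: ${annual_separate:,}/year")
print(f"converged infrastructure: ${annual_converged:,}/year")
print(f"annual saving:            ${savings:,} "
      f"({savings / annual_separate:.0%})")
```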

    Simultaneous Implementation of SSL and IPsec Protocols for Remote VPN Connection

    A Virtual Private Network is a widespread technology for connecting remote users and locations to the main core network. It has a number of benefits, such as cost efficiency and security. SSL and IPsec are the most popular VPN protocols, employed by a large number of organizations, and each has its benefits and disadvantages. Implementing SSL and IPsec simultaneously delivers an efficient and flexible solution for companies with heterogeneous remote-connection needs. On the other hand, employing two different VPN technologies raises questions about compatibility, performance, and drawbacks, especially if both are handled by one network device. The study examines the behavior of the two VPN protocols implemented in a single edge network device, the Cisco ASA 5510 security appliance. It follows the configuration process and examines the effect of the VPN protocols on the ASA's performance, including its routing functions, firewall access lists, and network address translation capabilities. The paper also presents the cost impact and the maintenance requirements of running SSL and IPsec on one edge network security device.
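    As a rough illustration of the heterogeneous remote-connection needs that motivate running both protocols on one appliance, the sketch below applies common rules of thumb for choosing between SSL and IPsec per connection type; the criteria are illustrative and are not recommendations drawn from this study.

```python
from dataclasses import dataclass

# Illustrative rules of thumb only; the criteria below reflect common practice,
# not conclusions of this paper.

@dataclass
class RemoteConnection:
    name: str
    site_to_site: bool        # permanent link between two networks
    managed_client: bool      # company-managed device with a VPN client installed
    browser_only: bool        # user can only be offered a web portal

def suggest_vpn(conn: RemoteConnection) -> str:
    if conn.site_to_site:
        return "IPsec"          # network-to-network tunnels are the classic IPsec case
    if conn.browser_only or not conn.managed_client:
        return "SSL"            # clientless or unmanaged endpoints favour SSL/TLS portals
    return "IPsec or SSL"       # managed remote users can use either full-tunnel client

for c in (RemoteConnection("branch office", True, True, False),
          RemoteConnection("contractor laptop", False, False, True),
          RemoteConnection("employee home PC", False, True, False)):
    print(f"{c.name}: {suggest_vpn(c)}")
```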

    Analyzing challenging aspects of IPv6 over IPv4

    The exponential expansion of the Internet has exhausted the IPv4 addresses provided by IANA. The new IP version, IPv6, was introduced by the IETF with new features such as a simplified packet header, a much larger address space, a different address format, improved encryption, more powerful routing, and stronger QoS. ISPs are slowly seeking to migrate from their current IPv4 networks to new-generation IPv6 networks. The move from deployed IPv4 networks to IPv6 is very sluggish, since billions of devices across the globe use IPv4 addresses. The configuration and behavior of the IPv4 and IPv6 protocols are distinct, and direct communication between IPv4 and IPv6 hosts is not feasible. Because of these incompatibility problems, both protocols will have to coexist throughout a transition lasting several years. Compatibility, interoperability, and stability are key concerns between the IPv4 and IPv6 protocols. Converting a network to IPv6 raises several issues for ISPs; the key challenges they face are packet traversal, routing scalability, performance reliability, and protection. In this study, we analyze in detail all of the aforementioned issues that arise when switching to an IPv6 network.
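    The incompatibility between the two address families can be illustrated with Python's standard ipaddress module; the addresses below are documentation-range examples, not addresses from the study.

```python
import ipaddress

# Contrast the two address spaces (the sizes are protocol facts; the addresses
# are documentation-range examples, not values from the study).
print(f"IPv4 addresses: 2^32  = {2**32:,}")
print(f"IPv6 addresses: 2^128 = {2**128:,}")

v4 = ipaddress.ip_address("192.0.2.10")        # TEST-NET-1 documentation address
v6 = ipaddress.ip_address("2001:db8::10")      # IPv6 documentation prefix

# The two protocols yield distinct objects with different headers and semantics;
# an IPv4-only host cannot reach a native IPv6 address directly.
print(type(v4).__name__, type(v6).__name__)    # IPv4Address IPv6Address

# Transition mechanisms therefore embed or map one family into the other,
# e.g. the IPv4-mapped IPv6 form used by dual-stack sockets:
mapped = ipaddress.IPv6Address("::ffff:192.0.2.10")
print(mapped.ipv4_mapped)                      # 192.0.2.10
```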

    Resource management for virtualized networks

    Network virtualization has emerged as a promising approach for efficiently enhancing resource management technologies. In this work, the goal is to study how to automate bandwidth resource management while deploying a virtual partitioning scheme for the network's bandwidth resources. Many works have addressed resource management in virtual networks; however, each has limitations, such as resource overwhelming, poor bandwidth utilization, low profits, exaggeration, and collusion. Indeed, the lack of adequate bandwidth allocation schemes encourages resource overwhelming, where one customer may overwhelm the resources that are supposed to serve others. Static resource partitioning can resist overwhelming, but at the same time it may result in poor bandwidth utilization, which means lower profit rates for Internet Service Providers (ISPs). Deploying autonomic management, on the other hand, can enhance resource utilization and maximize customer satisfaction. It also gives customers a privilege that must be controlled, since customers, always eager to maximize their payoffs, can use it to cheat; cheating actions such as exaggeration and collusion can therefore be expected. This work addresses these limitations. The first part deals with overcoming the low profits, poor utilization, and high blocking ratios of the traditional First Ask First Allocate (FAFA) algorithm. The proposed solution is based on an Autonomic Resource Management Mechanism (ARMM) that deploys a smarter allocation algorithm based on an auction mechanism. At this level, to reduce the tendency to exaggerate, the Vickrey-Clarke-Groves (VCG) mechanism is proposed to provide a threat model that penalizes exaggerating customers based on the inconvenience they cause to others in the system. To resist collusion, a state-dependent shadow price is calculated, based on Markov decision theory, to represent a selling-price threshold for bandwidth units in a given state. The second part of the work solves an expanded version of the bandwidth allocation problem through a different methodology: the allocation problem is expanded into a bandwidth partitioning problem. This expansion allows the link's bandwidth resources to be divided according to the offered Quality of Service (QoS) classes, which provides better bandwidth utilization. To find the optimal management metrics, the problem is solved through Linear Programming (LP). A dynamic bandwidth partitioning scheme is also proposed to overcome the problems of static partitioning schemes, such as the poor bandwidth utilization that results from under-utilized partitions. This dynamic partitioning model is deployed periodically; compared with the threat model, periodic partitioning provides a new way to reduce the incentive to exaggerate and eliminates the need for further computational overhead. The third part of this work proposes a decentralized management scheme to solve the aforementioned problems in the context of networks managed by Virtual Network Operators (VNOs). Such decentralization allows a higher level of autonomic management, in which management responsibilities are distributed over the network nodes, each responsible for managing its outgoing links. Compared with centralized schemes, this distribution provides higher reliability and easier bandwidth dimensioning. Moreover, it creates a two-sided competition framework that allows a double-auction environment among the network players, both customers and node controllers. This competitive environment provides a further way to reduce exaggeration, besides the periodic and threat models mentioned before; more importantly, it can deliver better utilization, lower blocking, and consequently higher profits. Finally, numerical experiments and empirical results are presented to support the proposed solutions and to compare them with other works from the literature.
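    As a rough illustration of the LP-based partitioning idea in the second part, the sketch below splits one link's capacity among QoS classes by maximizing a revenue-weighted allocation subject to the link capacity and per-class demand limits, using scipy.optimize.linprog; the class names, weights, demands, and capacity are hypothetical, and the formulation is much simpler than the one in the thesis.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical, simplified formulation: partition one link's bandwidth among
# QoS classes so that a revenue-weighted allocation is maximized.
# Weights, demands, floors, and capacity below are invented for the example.

capacity_mbps = 1000.0
classes = ["premium", "assured", "best-effort"]
weight  = np.array([3.0, 2.0, 1.0])        # revenue per Mbit/s of each class
demand  = np.array([400.0, 500.0, 800.0])  # forecast demand per class (upper bounds)
minimum = np.array([100.0, 100.0, 0.0])    # guaranteed floor per class (lower bounds)

# linprog minimizes, so negate the weights to maximize the weighted allocation.
c = -weight
A_ub = np.ones((1, len(classes)))          # sum of partitions <= link capacity
b_ub = np.array([capacity_mbps])
bounds = list(zip(minimum, demand))        # minimum <= x_i <= demand

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

for name, mbps in zip(classes, res.x):
    print(f"{name:12s}: {mbps:6.1f} Mbit/s")
print(f"unallocated : {capacity_mbps - res.x.sum():6.1f} Mbit/s")
```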