498 research outputs found

    The Use of Firewalls in an Academic Environment

    No full text

    Network timing and the 2015 leap second

    Full text link
    © Springer International Publishing Switzerland 2016. Using a testbed with reference timestamping, we collected timing data from public Stratum-1 NTP servers during the leap second event at the end of June 2015. We found a wide variety of anomalous server-side behaviors, both at the NTP protocol level and in the server clocks themselves, which can last days or even weeks after the event. Out of 176 servers, only 61% had no erroneous behavior related to the leap second event that we could detect.
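
    Below is a minimal sketch, not the authors' testbed code, of the kind of protocol-level check the study describes: it sends a raw SNTP query to a server and reads back the leap-indicator bits, the stratum, and a coarse clock-offset estimate. The server name is a placeholder, not one of the 176 servers measured.

        # Minimal SNTP query: inspect the leap-indicator (LI) bits, stratum, and a
        # rough clock offset -- the fields examined around a leap second event.
        import socket
        import struct
        import time

        NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)

        def query_ntp(server, port=123, timeout=2.0):
            # 48-byte client packet: LI=0, VN=3, Mode=3 (client) -> first byte 0x1B
            packet = b'\x1b' + 47 * b'\0'
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(timeout)
                t1 = time.time()                      # client send time
                s.sendto(packet, (server, port))
                data, _ = s.recvfrom(512)
                t4 = time.time()                      # client receive time
            leap = data[0] >> 6                       # 0 = no warning, 1 = insert, 2 = delete, 3 = unsynchronized
            stratum = data[1]
            # Receive timestamp (bytes 32-39) and transmit timestamp (bytes 40-47); seconds part only
            t2 = struct.unpack('!I', data[32:36])[0] - NTP_EPOCH_OFFSET
            t3 = struct.unpack('!I', data[40:44])[0] - NTP_EPOCH_OFFSET
            offset = ((t2 - t1) + (t3 - t4)) / 2.0    # standard NTP offset estimate (coarse: whole seconds)
            return leap, stratum, offset

        if __name__ == '__main__':
            leap, stratum, offset = query_ntp('pool.ntp.org')   # placeholder host
            print(f'leap={leap} stratum={stratum} offset={offset:+.1f}s')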

    An Economic Analysis of Domain Name Policy

    Get PDF
    One of the most important features of the architecture of the Internet is the Domain Name System (DNS), which is administered by the Internet Corporation for Assigned Names and Numbers (ICANN). Logically, the DNS is organized into Top Level Domains (such as .com), Second Level Domains (such as amazon.com), and third, fourth, and higher level domains (such as www.amazon.com). The physical infrastructure of the DNS consists of name servers, including the Root Server System, which provides the information that directs name queries for each Top Level Domain to the appropriate server. ICANN is responsible for the allocation of the root and the creation or reallocation of Top Level Domains. The Root Server System and associated name space are scarce resources in the economic sense. The root servers have a finite capacity, and expansion of the system is costly. The name space is scarce because each string (or set of characters) can only be allocated to one Registry (or operator of a Top Level Domain). In addition, name service is not a public good in the economic sense, because it is possible to exclude strings from the DNS and because the allocation of a string to one firm results in the inability of other firms to use that name string. From the economic perspective, therefore, the question arises: what is the most efficient method for allocating the root resource? There are only five basic options available for allocation of the root: (1) a static root, equivalent to a decision to waste the currently unallocated capacity; (2) public interest hearings (or beauty contests); (3) lotteries; (4) a queuing mechanism; or (5) an auction. The fundamental economic question about the Domain Name System is which of these provides the most efficient mechanism for allocating the root resource. This resource allocation problem is analogous to problems raised in the telecommunications sector, where the Federal Communications Commission has a long history of attempting to allocate broadcast spectrum and the telephone number space. This experience reveals that case-by-case allocation on the basis of ad hoc judgments about the public interest is doomed to failure, and that auctions (as opposed to lotteries or queues) provide the best mechanism for ensuring that such public-trust resources find their highest and best use. Based on the telecommunications experience, the best method for ICANN to allocate new Top Level Domains would be to conduct an auction. Many auction designs are possible. One proposal is to auction a fixed number of new Top Level Domain slots each year. This proposal would both expand the root resource at a reasonable pace and ensure that the slots went to their highest and best use. Public interest Top Level Domains could be allocated by another mechanism such as a lottery, and their costs to ICANN could be subsidized by the proceeds of the auction.
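
    As a loose illustration of the proposal to auction a fixed number of new Top Level Domain slots each year (the abstract notes that many auction designs are possible), the sketch below awards the slots to the highest sealed bids at a uniform clearing price set by the highest losing bid. The applicants, bid values and slot count are invented.

        # One illustrative auction design among many: top-k sealed bids win, all
        # winners pay a uniform price equal to the first rejected bid.

        def allocate_tld_slots(bids, slots_per_year):
            """bids: dict mapping applicant string -> sealed bid; returns (winners, clearing_price)."""
            ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
            winners = [applicant for applicant, _ in ranked[:slots_per_year]]
            # Uniform clearing price: the highest losing bid, or 0 if every applicant wins a slot.
            clearing_price = ranked[slots_per_year][1] if len(ranked) > slots_per_year else 0
            return winners, clearing_price

        if __name__ == '__main__':
            example_bids = {'.shop': 9.0, '.blog': 7.5, '.web': 6.0, '.music': 4.0}  # hypothetical applicants
            print(allocate_tld_slots(example_bids, slots_per_year=2))
            # -> (['.shop', '.blog'], 6.0)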

    Application behavior and performance on asymmetrical links

    Get PDF
    Final degree project carried out in collaboration with Lund University, Faculty of Engineering. Nowadays most access networks have asymmetrical bit rates; these networks (especially mobile networks) have high delays, packet losses and, most often, non-negligible jitter. In this degree project, we have studied the behavior of several different types of access networks under asymmetrical conditions. We have worked with the Kaunet deterministic network emulator to obtain practical results for these access networks. With the network emulator we have created different types of asymmetrical traffic conditions. The applications we have studied are Skype (voice/data in real time), web browsing (HTTP), TVUPlayer (P2P-TV), BitTorrent (P2P data sharing) and data transmission over FTP. We analyzed how the protocols and the applications behave in theory and then tested them in practice with Kaunet. We studied how asymmetries affect the delay, packet loss and jitter introduced in the network, and for each application we gave boundaries on these parameters for good performance. Furthermore, we analyzed bottlenecks and tried to minimize them. Finally, we compared the data from the different types of generated traffic and drew some conclusions about the behavior of these applications in an asymmetrical environment.
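
    The thesis used the Kaunet deterministic emulator; as a rough analogue only, the sketch below sets up an asymmetric link with Linux tc/netem by giving the downlink and uplink directions different rate, delay, jitter and loss. Interface names and parameter values are placeholders, not the settings used in the project.

        # Rough netem-based analogue of an asymmetric access link (requires root).
        import subprocess

        def shape(dev, delay, jitter, loss, rate):
            # Replace any existing root qdisc on `dev` with a netem qdisc that adds
            # delay +/- jitter, random loss, and a rate limit in that direction.
            subprocess.run(
                ['tc', 'qdisc', 'replace', 'dev', dev, 'root', 'netem',
                 'delay', delay, jitter, 'loss', loss, 'rate', rate],
                check=True)

        if __name__ == '__main__':
            # Emulator box forwarding between two interfaces: fast downlink, slow uplink.
            shape('eth0', delay='40ms', jitter='5ms',  loss='0.1%', rate='8mbit')    # downlink direction
            shape('eth1', delay='60ms', jitter='15ms', loss='0.5%', rate='512kbit')  # uplink direction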

    Protocol Layering and Internet Policy

    Get PDF
    An architectural principle known as protocol layering is widely recognized as one of the foundations of the Internet’s success. In addition, some scholars and industry participants have urged using the layers model as a central organizing principle for regulatory policy. Despite its importance as a concept, a comprehensive analysis of protocol layering and its implications for Internet policy has yet to appear in the literature. This Article attempts to correct this omission. It begins with a detailed description of the way the five-layer model developed, introducing protocol layering’s central features, such as the division of functions across layers, information hiding, peer communication, and encapsulation. It then discusses the model’s implications for whether particular functions are performed at the edge or in the core of the network, contrasts the model with the way that layering has been depicted in the legal commentary, and analyzes attempts to use layering as a basis for competition policy. Next the Article identifies certain emerging features of the Internet that are placing pressure on the layered model, including WiFi routers, network-based security, modern routing protocols, and wireless broadband. These developments illustrate how every architecture inevitably limits functionality as well as the architecture’s ability to evolve over time in response to changes in the technological and economic environment. Together these considerations support adopting a more dynamic perspective on layering and caution against using layers as a basis for a regulatory mandate for fear of cementing the existing technology into place in a way that prevents the network from innovating and evolving in response to shifts in the underlying technology and consumer demand
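
    As a toy illustration of the encapsulation, information hiding, and peer communication described above, the sketch below has each layer of a five-layer stack prepend its own header on the way down and strip only that header on the way back up. The headers are invented strings, not real protocol formats.

        # Each layer wraps the data handed down from above and, on the receiving
        # side, its peer reads back only its own header -- layers never look inside
        # one another's headers.

        LAYERS = ['application', 'transport', 'network', 'link', 'physical']

        def send(payload: str) -> str:
            frame = payload
            for layer in LAYERS:                      # walk down the stack
                frame = f'[{layer}-hdr]' + frame      # each layer adds its own header
            return frame                              # what actually goes on the wire

        def receive(frame: str) -> str:
            for layer in reversed(LAYERS):            # walk back up the stack
                header = f'[{layer}-hdr]'
                assert frame.startswith(header), f'{layer} peer got an unexpected frame'
                frame = frame[len(header):]           # strip only this layer's header
            return frame

        if __name__ == '__main__':
            wire = send('GET /index.html')
            print(wire)                               # outermost header = lowest layer
            print(receive(wire))                      # -> 'GET /index.html'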
