    Protocol Layering and Internet Policy

    An architectural principle known as protocol layering is widely recognized as one of the foundations of the Internet’s success. In addition, some scholars and industry participants have urged using the layers model as a central organizing principle for regulatory policy. Despite its importance as a concept, a comprehensive analysis of protocol layering and its implications for Internet policy has yet to appear in the literature. This Article attempts to correct this omission. It begins with a detailed description of how the five-layer model developed, introducing protocol layering’s central features, such as the division of functions across layers, information hiding, peer communication, and encapsulation. It then discusses the model’s implications for whether particular functions are performed at the edge or in the core of the network, contrasts the model with the way layering has been depicted in the legal commentary, and analyzes attempts to use layering as a basis for competition policy. Next, the Article identifies certain emerging features of the Internet that are placing pressure on the layered model, including WiFi routers, network-based security, modern routing protocols, and wireless broadband. These developments illustrate how every architecture inevitably limits both functionality and the architecture’s ability to evolve over time in response to changes in the technological and economic environment. Together these considerations support adopting a more dynamic perspective on layering, and they caution against using layers as the basis for a regulatory mandate, for fear of cementing the existing technology into place in a way that prevents the network from innovating and evolving in response to shifts in the underlying technology and consumer demand.
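    The layering features the abstract names (division of functions across layers, information hiding, peer communication, encapsulation) can be sketched in a few lines of code. The layer names and header strings below are illustrative assumptions, not drawn from the Article:

```python
# Minimal sketch of protocol encapsulation: each layer wraps the payload
# handed down from the layer above with its own header, and on receipt each
# layer strips only its own header (information hiding) before passing the
# rest up to its peer's counterpart layer.

def encapsulate(payload: bytes) -> bytes:
    transport = b"TCP|" + payload   # transport-layer header (illustrative)
    network = b"IP|" + transport    # network-layer header
    link = b"ETH|" + network        # link-layer header
    return link

def decapsulate(frame: bytes) -> bytes:
    # Each layer inspects and removes only the header its peer added.
    for header in (b"ETH|", b"IP|", b"TCP|"):
        assert frame.startswith(header), "unexpected header"
        frame = frame[len(header):]
    return frame

message = b"hello"
assert decapsulate(encapsulate(message)) == message
```

    The point of the sketch is that no layer reads or modifies another layer's header, which is what lets any one layer be replaced without disturbing the others.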

    Signals in the Black Stack / GEOMEtyr Design Manual

    The following white paper provides a critical accompaniment to my capstone project: the GEOMEtyr Design Manual. GEOMEtyr is a virtual reality to be made accessible as a mobile and web platform for the visualization of certain systemic elements of a utopic world that parallels our own planet’s geographies, polities, and climates. As such, the GEOMEtyr virtualization is designed to derive utopian space from the informational structures of our own world. The operations by which this may be accomplished are broadly described within the accompanying GEOMEtyr manual. The white paper, Signals in the Black Stack, elaborates vital world-building characteristics of informational systems while examining historical instances where the reconfiguration of human informational relationships has signaled to participants a profound, vaguely utopic change to their scope of possible actions as agents within a public sphere—before this sphere becomes subsequently disempowered by selective integration with only those power structures admitted by the private interests of a ruling class. The argument proceeds by scrutinizing each facet of this story in turn: first, the concept of information as an entity with structure; next, the development of the public sphere as a realm with distinguished character; then, the advent of our Internet as an operation with globalizing consequence. Each of these is developed distinctly and in parts, piece by piece, until their parallels are unified for final analysis.

    Performance and policy dimensions in internet routing

    The Internet Routing Project, referred to in this report as the 'Highball Project', has been investigating architectures suitable for networks spanning large geographic areas and capable of very high data rates. The Highball network architecture is based on a high speed crossbar switch and an adaptive, distributed, TDMA scheduling algorithm. The scheduling algorithm controls the instantaneous configuration and dwell time of the switches, one of which is attached to each node. In order to send a single burst or a multi-burst packet, a reservation request is sent to all nodes. The scheduling algorithm then configures the switches immediately prior to the arrival of each burst, so it can be relayed immediately without requiring local storage. Reservations and housekeeping information are sent using a special broadcast-spanning-tree schedule. Progress to date in the Highball Project includes the design and testing of a suite of scheduling algorithms, construction of software reservation/scheduling simulators, and construction of a strawman hardware and software implementation. A prototype switch controller and timestamp generator have been completed and are in test. Detailed documentation on the algorithms, protocols, and experiments conducted is given in the various reports and papers published. Abstracts of this literature are included in the bibliography at the end of this report, which serves as an extended executive summary.
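    The reservation idea described above can be sketched as a toy scheduler: nodes request bursts, and a shared schedule assigns each burst a start time so that no two bursts contend for the crossbar at once. The function name and the greedy first-come assignment are illustrative assumptions; the actual Highball algorithms are distributed across nodes and account for propagation delay:

```python
# Toy sketch of TDMA burst reservation: every node sees the same ordered
# list of requests and derives the same non-overlapping schedule, so each
# switch can be configured just before a burst arrives, with no buffering.

def schedule_bursts(requests):
    """requests: list of (node, duration) pairs.
    Returns a list of (node, start, end) tuples with disjoint intervals."""
    schedule = []
    t = 0
    for node, duration in requests:
        schedule.append((node, t, t + duration))
        t += duration  # the next burst begins once the switch path is free
    return schedule

plan = schedule_bursts([("A", 3), ("B", 2), ("C", 4)])
# Consecutive bursts never overlap in time.
for (_, s1, e1), (_, s2, e2) in zip(plan, plan[1:]):
    assert e1 <= s2
```

    Because every node computes the schedule from the same broadcast reservation stream, the switches can be reconfigured deterministically without per-burst coordination, which is the property that removes the need for local storage at intermediate nodes.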

    Communication satellite technology: State of the art and development opportunities

    Opportunities in communication satellite technology are identified and defined. Factors that tend to limit the ready availability of satellite communication to an increasingly wide group of users are evaluated. Current primary limitations on this wide utilization are the availability of frequencies and/or synchronous equatorial satellite positions and the cost of individual user Earth terminals. The former could be ameliorated through the reuse of frequencies, the use of higher frequency bands, and the reduction of antenna side lobes. The latter limitation requires innovative hardware design, careful system design, and large-scale production.

    Satellite Networks: Architectures, Applications, and Technologies

    Since global satellite networks are moving to the forefront in enhancing the national and global information infrastructures due to communication satellites' unique networking characteristics, a workshop was organized to assess the progress made to date and chart the future. This workshop provided the forum to assess the current state of the art, identify key issues, and highlight the emerging trends in next-generation architectures, data protocol development, communication interoperability, and applications. Presentations covering an overview, the state of the art in research, development, deployment, and applications, and future trends in satellite networks are assembled.

    Data distribution satellite

    A description is given of a data distribution satellite (DDS) system. The DDS would operate in conjunction with the tracking and data relay satellite system to give ground-based users real time, two-way access to instruments in space and space-gathered data. The scope of work includes the following: (1) user requirements are derived; (2) communication scenarios are synthesized; (3) system design constraints and projected technology availability are identified; (4) the DDS communications payload configuration is derived, and the satellite is designed; (5) requirements for earth terminals and network control are given; (6) system costs are estimated, both life cycle costs and user fees; and (7) technology developments are recommended, and a technology development plan is given. The most important results obtained are as follows: (1) a satellite designed for launch in 2007 is feasible and has 10 Gb/s capacity, 5.5 kW power, and 2000 kg mass; (2) DDS features include on-board baseband switching, use of Ku- and Ka-bands, and multiple optical intersatellite links; and (3) system user costs are competitive with projected terrestrial communication costs.

    “Industrial Legislatures”: Consensus Standardization in the Second and Third Industrial Revolutions

    Consensus standardization is a social process in which technical experts from public, private, and non-profit sectors negotiate the direction and shape of technological change. Scholars in a variety of disciplines have recognized the importance of consensus standards as alternatives to standards that arise through market mechanisms or standards mandated by regulators. Rather than treating the consensus method as some sort of timeless organizational form or ever-present alternative to markets or laws, I argue that consensus standardization is itself a product of history. In the first two chapters, I explain the origins and growth of consensus standards bodies between 1880 and 1930 as a reaction to and critique of the existing political economy of engineering. By considering the standardization process—instead of the internal dynamics of a particular firm or technology—as the primary category of analysis, I am able to emphasize the cooperative relations that sustained the American style of competitive managerial capitalism during the Second Industrial Revolution. In the remaining four chapters, I examine the processes of network architecture and standardization in the creation of four communications networks during the twentieth century: AT&T’s monopoly telephone network, the Internet, digital cellular telephone networks, and the World Wide Web. Each of these four networks embodied critiques—always implicit and frequently explicit—of preceding and competing networks. These critiques, visible both in the technological design of networks and in the institutional design of standard-setting bodies, reflected the political convictions of successive generations of engineers and network architects. The networks described in this dissertation were thus turning points in the century-long development of an organizational form. Seen as part of a common history, they tell the story of how consensus-based institutions became the dominant mode for setting standards in the Third Industrial Revolution, and created the foundational standards of the information infrastructures upon which a newly globalized economy and society—the Network Society—could grow.