    Exposed Buffer Architecture

    The Internet stack is not a complete description of the resources and services needed to implement distributed applications, as it only accounts for communication services and the protocols that are defined to deliver them. This paper presents an account of the current distributed application architecture using a formal model of strictly layered systems, meaning that services in any layer can only depend on services in the layer immediately below it. By mapping a more complete Internet-based application stack that includes necessary storage and processing resources onto this formal model, we are able to apply the Hourglass Theorem in order to compare alternative approaches in terms of their "deployment scalability." In particular, we contrast the current distributed application stack with Exposed Buffer Architecture, which has a converged spanning layer that allows for less-than-complete communication connectivity (exposing lower-layer topology), but which also offers weak storage and processing services. This comparison shows that Exposed Buffer Architecture can have deployment scalability greater than that of the current distributed application stack while also providing minimally requisite storage and processing services.
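
    As a rough, assumption-laden sketch of the kind of comparison the abstract describes (not code or data from the paper), the toy model below checks two hypothetical spanning-layer service sets against the services a few illustrative applications require; every service and application name here is invented for illustration.

        # Toy comparison of two spanning layers by the applications they can support directly.
        # Service names and application requirements are illustrative assumptions only.

        CURRENT_STACK = {"datagram_delivery"}  # communication-only spanning layer
        EXPOSED_BUFFER = {"local_delivery", "buffer_storage", "buffer_processing"}  # converged layer

        APPLICATIONS = {
            "point_to_point_transfer": {"datagram_delivery"},
            "content_distribution": {"local_delivery", "buffer_storage"},
            "in_network_processing": {"local_delivery", "buffer_processing"},
        }

        def supported(spanning_layer, apps):
            """Return the applications whose required services the layer provides."""
            return {name for name, needs in apps.items() if needs <= spanning_layer}

        print("current stack supports: ", supported(CURRENT_STACK, APPLICATIONS))
        print("exposed buffer supports:", supported(EXPOSED_BUFFER, APPLICATIONS))

    In this toy setup the converged layer directly supports the storage- and processing-dependent applications while the communication-only layer does not, mirroring the trade-off the abstract draws.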

    On The Hourglass Model, The End-to-End Principle and Deployment Scalability

    The hourglass model is widely used as a means of describing the design of the Internet, and can be found in the introduction of many modern textbooks. It arguably also applies to the design of other successful spanning layers, notably the Unix operating system kernel interface, meaning the primitive system calls and the interactions between user processes and the kernel. The impressive success of the Internet has led to a wider interest in using the hourglass model in other layered systems, with the goal of achieving similar results. However, application of the hourglass model has often led to controversy, perhaps in part because the language in which it has been expressed has been informal, and arguments for its validity have not been precise. Making a start on formalizing such an argument is the goal of this paper.
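
    One way to make the paper's "deployment scalability" intuition concrete is to treat a spanning layer as a set of required guarantees and count how many candidate underlying environments can implement it: the weaker the specification, the more environments qualify. The sketch below is a toy model under that assumption, not the paper's formalism; all guarantee and environment names are invented.

        # Toy model: a weaker spanning-layer specification admits more implementations.
        ENVIRONMENTS = {
            "ethernet_lan": {"framing", "best_effort", "broadcast"},
            "satellite_link": {"framing", "best_effort"},
            "optical_circuit": {"framing", "best_effort", "in_order", "fixed_rate"},
        }

        def implementations(required, environments):
            """Environments whose native guarantees cover every requirement of the spec."""
            return {name for name, offers in environments.items() if required <= offers}

        weak_spec = {"framing", "best_effort"}                # minimal, IP-like requirements
        strong_spec = {"framing", "best_effort", "in_order"}  # adds an ordering guarantee

        print(len(implementations(weak_spec, ENVIRONMENTS)), "environments can implement the weak spec")
        print(len(implementations(strong_spec, ENVIRONMENTS)), "environments can implement the strong spec")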

    How We Ruined The Internet

    At the end of the 19th century the logician C.S. Peirce coined the term "fallibilism" for "... the doctrine that our knowledge is never absolute but always swims, as it were, in a continuum of uncertainty and of indeterminacy". In terms of scientific practice, this means we are obliged to reexamine the assumptions, the evidence, and the arguments for conclusions that subsequent experience has cast into doubt. In this paper we examine an assumption that underpinned the development of the Internet architecture, namely that a loosely synchronous point-to-point datagram delivery service could adequately meet the needs of all network applications, including those which deliver content and services to a mass audience at global scale. We examine how the inability of the Networking community to provide a public and affordable mechanism to support such asynchronous point-to-multipoint applications led to the development of private overlay infrastructure, namely CDNs and Cloud networks, whose architecture stands at odds with the Open Data Networking goals of the early Internet advocates. We argue that the contradiction between those initial goals and the monopolistic commercial imperatives of hypergiant overlay infrastructure operators is an important reason for the apparent contradiction posed by the negative impact of their most profitable applications (e.g., social media) and strategies (e.g., targeted advertisement). We propose that, following the prescription of Peirce, we can only resolve this contradiction by reconsidering some of our deeply held assumptions.

    Final report on work for Center for Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas — Tools for Improved Data Logistics

    This project focused on the use of Logistical Networking technology to address the challenges involved in rapid sharing of data from the Center's gyrokinetic particle simulations, which can be on the order of terabytes per time step, among researchers at a number of geographically distributed locations. There is a great need to manage data on this scale in a flexible manner, with simulation code, file system, database, and visualization functions all requiring access. The project used distributed data management infrastructure based on Logistical Networking technology to address these issues in a way that maximized interoperability and achieved the levels of performance required by the Center's application community. The work focused on the development and deployment of software tools and infrastructure for the storage and distribution of terascale datasets generated by simulations running at the National Center for Computational Science at Oak Ridge National Laboratory.

    Relationships with God among Young Adults: Validating a Measurement Model with Four Dimensions

    Experiencing a relationship with God is widely acknowledged as an important aspect of personal religiosity for both affiliated and unaffiliated young adults, but surprisingly few attempts have been made to develop measures appropriate to its latent, multidimensional quality. This paper presents a new model for measuring relationships with God based on religious role theory, attachment to God theory, and insights from interview-based studies, which allows for a wider array of dimensions than have been considered in prior work: anger, anxiety, intimacy, and consistency. To test our model's internal validity, we use confirmatory factor analysis with nationally representative data. To test its external validity, we (1) use difference-in-means tests across gender, race/ethnicity, geographical region, and religious affiliation; and (2) analyze correlations between our four new dimensions and four other commonly used measures of religiosity, thereby demonstrating both our model's validity and its value for future studies of personal religiosity.
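
    For readers who want to see what the measurement side of such a model looks like in practice, the sketch below fits a four-factor confirmatory factor analysis on simulated survey items using the semopy package; the item names, loadings, sample size, and simulated data are assumptions made for illustration and are not the paper's instrument or data.

        # Illustrative four-factor CFA on simulated items (not the paper's data).
        import numpy as np
        import pandas as pd
        from semopy import Model

        rng = np.random.default_rng(0)
        n = 500
        dims = ["anger", "anxiety", "intimacy", "consistency"]

        # Simulate three indicator items per latent dimension.
        factors = {d: rng.normal(size=n) for d in dims}
        data = pd.DataFrame({
            f"{d}_{i}": 0.8 * factors[d] + rng.normal(scale=0.6, size=n)
            for d in dims for i in range(1, 4)
        })

        # lavaan-style description: each latent factor loads on its three items.
        desc = "\n".join(f"{d} =~ {d}_1 + {d}_2 + {d}_3" for d in dims)

        model = Model(desc)
        model.fit(data)
        print(model.inspect())  # loadings, variances, and factor covariances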

    Beyond labeled lines: A population coding account of the thermal grill illusion

    Heat and pain illusions (synthetic heat and the thermal grill illusion) can be generated by simultaneous cold and warm stimulation on the skin at temperatures that would normally be perceived as innocuous in isolation. Historically, two key questions have dominated the literature: which specific pathway conveys the illusory perceptions of heat and pain, and where, specifically, does the illusory pain originate in the central nervous system? Two major theories - the addition and disinhibition theories - have suggested distinct pathways, as well as specific spinal or supraspinal mechanisms. However, both theories fail to fully explain experimental findings on illusory heat and pain phenomena. We suggest that the disagreement between previous theories and experimental evidence can be solved by abandoning the assumption of one-to-one relations between pathways and perceived qualities. We argue that a population coding framework, based on distributed activity across non-nociceptive and nociceptive pathways, offers a more powerful explanation of illusory heat and pain. This framework offers new hypotheses regarding the neural mechanisms underlying temperature and pain perception.
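
    The contrast between a labeled-line readout and a population readout can be illustrated with a deliberately simple numerical toy; the channel names, activity values, and decoding rule below are assumptions chosen for illustration and are not the model proposed in the paper.

        # Toy contrast: labeled-line vs. population readout for a thermal grill stimulus.
        import numpy as np

        # Channel activity (arbitrary units): [warm channel, cold channel, nociceptive channel]
        stimuli = {
            "warm_40C": np.array([0.8, 0.0, 0.1]),
            "cool_20C": np.array([0.0, 0.7, 0.1]),
            "grill_20_40C": np.array([0.8, 0.7, 0.2]),  # interleaved warm and cool bars
        }

        def labeled_line(activity):
            """One pathway -> one percept: report only the single most active channel."""
            return ["warmth", "coolness", "pain"][int(np.argmax(activity))]

        def population_code(activity):
            """Percept read out from the joint pattern of activity across channels."""
            warm, cold, noci = activity
            heat_pain_drive = warm * cold + noci  # co-activation of warm and cold channels
            if heat_pain_drive > 0.3:
                return "burning heat / pain"
            return "warmth" if warm > cold else "coolness"

        for name, activity in stimuli.items():
            print(name, "| labeled-line:", labeled_line(activity),
                  "| population:", population_code(activity))

    In this toy, the single-channel readout never signals pain for the grill stimulus because no individual pathway is strongly active, whereas the population readout does, which is the qualitative point the abstract makes about distributed activity.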

    The Logistical Backbone: Scalable Infrastructure for Global Data Grids

    Logistical Networking can be defined as the global optimisation and scheduling of data storage, data movement, and computation. It is a technology for shared network storage that allows easy scaling in terms of the size of the user community, the aggregate quantity of storage that can be allocated, and the distribution breadth of service nodes across network borders. After describing the basic concepts of Logistical Networking, we will introduce the Internet Backplane Protocol, a middleware for managing and using remote storage through the allocation of primitive “byte arrays”, whose semantics lie in between those of a buffer block and a common file. As this characteristic can be too limiting for a large number of applications, we developed the exNode, which can be defined, in two words, as an inode for network-distributed files. We will then introduce the Logistical Backbone, or L-Bone, a distributed set of facilities that aims to provide high-performance, location- and application-independent access to storage for network and Grid applications of all kinds.
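
    As a conceptual sketch only (this is not the actual IBP client library or exNode file format), the code below models the pieces the abstract names: depots that lease primitive byte-array allocations addressed by read/write capabilities, and an exNode that aggregates those allocations, inode-style, into one logical network file. All class names, host names, and capability strings are invented for illustration.

        # Conceptual model of IBP-style byte-array allocations and an exNode aggregation.
        from dataclasses import dataclass, field

        @dataclass
        class Allocation:
            """A primitive byte array leased on a storage depot, addressed by capabilities."""
            depot: str
            size: int
            read_cap: str
            write_cap: str
            data: bytearray = field(default_factory=bytearray)

        class Depot:
            """A storage service that hands out byte-array allocations."""
            def __init__(self, host):
                self.host = host
                self._next = 0

            def allocate(self, size):
                self._next += 1
                return Allocation(self.host, size,
                                  read_cap=f"ibp://{self.host}/rd/{self._next}",
                                  write_cap=f"ibp://{self.host}/wr/{self._next}")

        class ExNode:
            """Inode-like aggregation of allocations (extents) into one network file."""
            def __init__(self):
                self.extents = []  # list of (offset, Allocation)

            def append(self, depot, chunk):
                alloc = depot.allocate(len(chunk))
                alloc.data[:] = chunk
                offset = sum(len(a.data) for _, a in self.extents)
                self.extents.append((offset, alloc))

            def read_all(self):
                return b"".join(bytes(a.data) for _, a in sorted(self.extents, key=lambda e: e[0]))

        depots = [Depot("depot-a.example.org"), Depot("depot-b.example.org")]
        exnode = ExNode()
        for i, chunk in enumerate([b"first block ", b"second block"]):
            exnode.append(depots[i % len(depots)], chunk)
        print(exnode.read_all())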