Exposed Buffer Architecture
The Internet stack is not a complete description of the resources and services needed to implement distributed applications, as it only accounts for communication services and the protocols that are defined to deliver them. This paper presents an account of the current distributed application architecture using a formal model of strictly layered systems, meaning that services in any layer can only depend on services in the layer immediately below it. By mapping a more complete Internet-based application stack that includes necessary storage and processing resources to this formal model, we are able to apply the Hourglass Theorem in order to compare alternative approaches in terms of their "deployment scalability." In particular, we contrast the current distributed application stack with Exposed Buffer Architecture, which has a converged spanning layer that allows for less-than-complete communication connectivity (exposing lower layer topology), but which also offers weak storage and processing services. This comparison shows that Exposed Buffer Architecture can have deployment scalability greater than the current distributed application stack while also providing minimally requisite storage and processing services.
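As a rough, hedged illustration of what "deployment scalability" means in this comparison, the short Python sketch below counts how many combinations of candidate lower-layer substrates can support a given spanning-layer requirement; a weaker requirement admits more supports. The substrate and service names are invented for this sketch and are not taken from the paper.

```python
# Toy illustration only: "deployment scalability" rendered as the number of
# lower-layer configurations able to support a spanning-layer requirement.
# Substrate and service names are hypothetical, not from the paper.
from itertools import combinations

SUBSTRATES = {
    "ip_router":      {"datagram_delivery"},
    "storage_node":   {"buffer_allocation"},
    "edge_server":    {"datagram_delivery", "buffer_allocation", "operation_exec"},
    "sensor_gateway": {"buffer_allocation", "operation_exec"},
}

def possible_supports(required_services):
    """Return substrate combinations whose pooled services cover the requirement."""
    names = list(SUBSTRATES)
    supports = []
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            offered = set().union(*(SUBSTRATES[n] for n in combo))
            if required_services <= offered:
                supports.append(combo)
    return supports

# A spanning layer demanding full datagram connectivity vs. one demanding only
# a weak local buffer primitive (the exposed-buffer style of requirement).
internet_like = {"datagram_delivery"}
eba_like = {"buffer_allocation"}

print(len(possible_supports(internet_like)), "supports for the datagram requirement")
print(len(possible_supports(eba_like)), "supports for the exposed-buffer requirement")
```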
On The Hourglass Model, The End-to-End Principle and Deployment Scalability
The hourglass model is widely used as a means of describing the design of the Internet, and can be found in the introduction of many modern textbooks. It arguably also applies to the design of other successful spanning layers, notably the Unix operating system kernel interface, meaning the primitive system calls and the interactions between user processes and the kernel. The impressive success of the Internet has led to a wider interest in using the hourglass model in other layered systems, with the goal of achieving similar results. However, application of the hourglass model has often led to controversy, perhaps in part because the language in which it has been expressed has been informal, and arguments for its validity have not been precise. Making a start on formalizing such an argument is the goal of this paper.
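One way to sketch the kind of formal statement such an argument aims at, in illustrative notation rather than the paper's own formalism, is in terms of possible supports:

```latex
% Sketch only: S is a spanning-layer specification viewed as a set of required
% properties, and PS(S) is its set of possible supports, i.e., the lower-layer
% configurations that satisfy S.
\[
  S_{1} \subseteq S_{2} \;\Longrightarrow\; PS(S_{2}) \subseteq PS(S_{1})
\]
% A logically weaker specification therefore has at least as many possible
% supports, which is the sense in which a "thinner waist" gains deployment
% scalability.
```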
How We Ruined The Internet
At the end of the 19th century the logician C.S. Peirce coined the term "fallibilism" for "... the doctrine that our knowledge is never absolute but always swims, as it were, in a continuum of uncertainty and of indeterminacy". In terms of scientific practice, this means we are obliged to reexamine the assumptions, the evidence, and the arguments for conclusions that subsequent experience has cast into doubt. In this paper we examine an assumption that underpinned the development of the Internet architecture, namely that a loosely synchronous point-to-point datagram delivery service could adequately meet the needs of all network applications, including those which deliver content and services to a mass audience at global scale. We examine how the inability of the Networking community to provide a public and affordable mechanism to support such asynchronous point-to-multipoint applications led to the development of private overlay infrastructure, namely CDNs and Cloud networks, whose architecture stands at odds with the Open Data Networking goals of the early Internet advocates. We argue that the contradiction between those initial goals and the monopolistic commercial imperatives of hypergiant overlay infrastructure operators is an important reason for the apparent contradiction posed by the negative impact of their most profitable applications (e.g., social media) and strategies (e.g., targeted advertisement). We propose that, following the prescription of Peirce, we can only resolve this contradiction by reconsidering some of our deeply held assumptions.
Final report on work for Center for Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas — Tools for Improved Data Logistics
This project focused on the use of Logistical Networking technology to address the challenges involved in rapid sharing of data from the Center's gyrokinetic particle simulations, which can be on the order of terabytes per time step, among researchers at a number of geographically distributed locations. There is a great need to manage data on this scale in a flexible manner, with simulation code, file system, database and visualization functions requiring access. The project used distributed data management infrastructure based on Logistical Networking technology to address these issues in a way that maximized interoperability and achieved the levels of performance required by the Center's application community. The work focused on the development and deployment of software tools and infrastructure for the storage and distribution of terascale datasets generated by simulations running at the National Center for Computational Science at Oak Ridge National Laboratory.
Final Project Report: DOE Award FG02‐04ER25606 Overlay Transit Networking for Scalable, High Performance Data Communication across Heterogeneous Infrastructure
As the flood of data associated with leading edge computational science continues to escalate, the challenge of supporting the distributed collaborations that are now characteristic of it becomes increasingly daunting. The chief obstacles to progress on this front lie less in the synchronous elements of collaboration, which have been reasonably well addressed by new global high performance networks, than in the asynchronous elements, where appropriate shared storage infrastructure seems to be lacking. The recent report from the Department of Energy on the emerging 'data management challenge' captures the multidimensional nature of this problem succinctly: "Data inevitably needs to be buffered, for periods ranging from seconds to weeks, in order to be controlled as it moves through the distributed and collaborative research process. To meet the diverse and changing set of application needs that different research communities have, large amounts of non-archival storage are required for transitory buffering, and it needs to be widely dispersed, easily available, and configured to maximize flexibility of use. In today's grid fabric, however, massive storage is mostly concentrated in data centers, available only to those with user accounts and membership in the appropriate virtual organizations, allocated as if its usage were non-transitory, and encapsulated behind legacy interfaces that inhibit the flexibility of use and scheduling. This situation severely restricts the ability of application communities to access and schedule usable storage where and when they need to in order to make their workflow more productive." (p. 69f) One possible strategy to deal with this problem lies in creating a storage infrastructure that can be universally shared because it provides only the most generic of asynchronous services. Different user communities then define higher level services as necessary to meet their needs. One model of such a service is a Storage Network, analogous to those used within computation centers, but designed to operate on a global scale. Building on a basic storage service that is as primitive as possible, such a Global Storage Network would define a framework within which higher level services can be created. If this framework enabled a variety of more specialized middleware and supported a wide array of applications, then interoperability and collaboration could occur based on that common framework. The research in Logistical Networking (LN) carried out under the DOE's SciDAC program tested the value of this approach within the context of several SciDAC application communities. Below we briefly describe the basic design of the LN storage network and some of the results that the Logistical Networking community has achieved.
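To make the idea of a "most generic of asynchronous services" concrete, the Python sketch below models a lease-based, explicitly transitory buffer service of the kind the report argues for. It is a hypothetical illustration written for this summary, not the project's software; the class and method names are invented.

```python
# Minimal sketch of a generic, transitory buffer primitive: storage is allocated
# with a duration (lease) rather than held indefinitely behind archival-style,
# account-based interfaces. All names are illustrative.
import time
import uuid

class TransitoryBufferService:
    """Toy in-memory stand-in for a shared, lease-based storage node."""

    def __init__(self):
        self._allocations = {}  # allocation id -> (expiry time, bytearray)

    def allocate(self, size: int, duration_s: float) -> str:
        """Reserve `size` bytes for `duration_s` seconds; return a capability id."""
        alloc_id = uuid.uuid4().hex
        self._allocations[alloc_id] = (time.time() + duration_s, bytearray(size))
        return alloc_id

    def store(self, alloc_id: str, offset: int, data: bytes) -> None:
        _, buf = self._lookup(alloc_id)
        buf[offset:offset + len(data)] = data

    def load(self, alloc_id: str, offset: int, length: int) -> bytes:
        _, buf = self._lookup(alloc_id)
        return bytes(buf[offset:offset + length])

    def _lookup(self, alloc_id):
        expiry, buf = self._allocations[alloc_id]
        if time.time() > expiry:
            del self._allocations[alloc_id]
            raise KeyError("allocation expired")  # transitory by construction
        return expiry, buf

# Higher-level services (replication, staging between sites, workflow-specific
# middleware) would be layered by user communities on top of this primitive.
svc = TransitoryBufferService()
cap = svc.allocate(size=1 << 20, duration_s=3600)   # 1 MiB for one hour
svc.store(cap, 0, b"simulation output fragment")
print(svc.load(cap, 0, 26))
```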
Relationships with God among Young Adults: Validating a Measurement Model with Four Dimensions
Experiencing a relationship with God is widely acknowledged as an important aspect of personal religiosity for both affiliated and unaffiliated young adults, but surprisingly few attempts have been made to develop measures appropriate to its latent, multidimensional quality. This paper presents a new model for measuring relationships with God based on religious role theory, attachment to God theory, and insights from interview-based studies, which allows for a wider array of dimensions than have been considered in prior work: anger, anxiety, intimacy, and consistency. To test our model's internal validity, we use confirmatory factor analysis with nationally representative data. To test its external validity, we (1) use difference-in-means tests across gender, race/ethnicity, geographical region, and religious affiliation; and (2) analyze correlations between our four new dimensions and four other commonly used measures of religiosity, thereby demonstrating both the model's validity and its value for future studies of personal religiosity.
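As a small, hedged illustration of the external-validity step described above, the Python sketch below runs a difference-in-means (Welch) t-test on a toy data frame; the column names and values are invented placeholders, not the study's variables or data.

```python
# Illustrative difference-in-means check on one dimension's scale score across a
# grouping variable. "intimacy_score" and "gender" are hypothetical column names.
import pandas as pd
from scipy import stats

def difference_in_means(df: pd.DataFrame, score: str, group: str, a, b):
    """Welch t-test comparing mean `score` between group levels `a` and `b`."""
    x = df.loc[df[group] == a, score].dropna()
    y = df.loc[df[group] == b, score].dropna()
    return stats.ttest_ind(x, y, equal_var=False)

# Toy data standing in for the nationally representative sample.
toy = pd.DataFrame({
    "gender": ["f", "m", "f", "m", "f", "m"],
    "intimacy_score": [4.1, 3.2, 3.8, 3.0, 4.4, 3.5],
})
res = difference_in_means(toy, "intimacy_score", "gender", "f", "m")
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
```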
Beyond labeled lines: A population coding account of the thermal grill illusion
Heat and pain illusions (synthetic heat and the thermal grill illusion) can be generated by simultaneous cold and warm stimulation on the skin at temperatures that would normally be perceived as innocuous in isolation. Historically, two key questions have dominated the literature: which specific pathway conveys the illusory perceptions of heat and pain, and where, specifically, does the illusory pain originate in the central nervous system? Two major theories - the addition and disinhibition theories - have suggested distinct pathways, as well as specific spinal or supraspinal mechanisms. However, both theories fail to fully explain experimental findings on illusory heat and pain phenomena. We suggest that the disagreement between previous theories and experimental evidence can be resolved by abandoning the assumption of one-to-one relations between pathways and perceived qualities. We argue that a population coding framework, based on distributed activity across non-nociceptive and nociceptive pathways, offers a more powerful explanation of illusory heat and pain. This framework offers new hypotheses regarding the neural mechanisms underlying temperature and pain perception.
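To make the population coding idea concrete, here is a toy Python sketch of our own (not the authors' model): perceived heat and pain are read out from the joint pattern of cold-sensitive, warm-sensitive, and nociceptive channel activity, so the grill pattern yields heat and pain that neither innocuous stimulus produces alone. The channel activities and readout weights are invented purely for illustration.

```python
# Toy population-coding readout: perceived quality depends on the pattern of
# activity across channels, not on any single labeled line. Values are made up.
import numpy as np

# Each stimulus maps to [cold-sensitive, warm-sensitive, nociceptive] activity
# in arbitrary units.
stimuli = {
    "cool 20C":        np.array([0.8, 0.0, 0.1]),
    "warm 40C":        np.array([0.0, 0.7, 0.1]),
    "grill 20C + 40C": np.array([0.8, 0.7, 0.1]),
}

def readout(activity):
    """Heat and pain computed from the joint channel pattern."""
    cold, warm, noci = activity
    heat = warm + 0.6 * cold * warm            # co-activation inflates perceived heat
    pain = noci + max(0.0, cold * warm - 0.3)  # pain emerges only from the joint pattern
    return heat, pain

for name, act in stimuli.items():
    heat, pain = readout(act)
    print(f"{name:16s} heat={heat:.2f} pain={pain:.2f}")
```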
The Logistical Backbone: Scalable Infrastructure for Global Data Grids
Logistical Networking can be defined as the global optimisation and scheduling of data storage, data movement, and computation. It is a technology for shared network storage that allows easy scaling in terms of the size of the user community, the aggregate quantity of storage that can be allocated, and the distribution breadth of service nodes across network borders. After describing the base concepts of Logistical Networking, we will introduce the Internet Backplane Protocol, a middleware for managing and using remote storage through the allocation of primitive “byte arrays”, whose semantics lie between those of buffer blocks and common files. As this characteristic can be too limiting for a large number of applications, we developed the exNode, which can be described, briefly, as an inode for network-distributed files. We will then introduce the Logistical Backbone, or L-Bone, a distributed set of facilities that aims to provide high-performance, location- and application-independent access to storage for network and Grid applications of all kinds.
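To make the two constructs named above concrete, here is an illustrative Python sketch written for this summary: a primitive "byte array" allocation with store/load operations, and an exNode-like structure that maps extents of a logical file onto several such allocations. The class names and method signatures are invented for this sketch and are not the actual IBP or exNode APIs.

```python
# Hypothetical sketch: a byte-array allocation primitive plus an exNode-style
# aggregation of allocations into one logical file. Names are illustrative.
from dataclasses import dataclass, field
from typing import List

class ByteArrayAllocation:
    """A primitive network storage allocation: append (store) and read (load)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = bytearray()

    def store(self, chunk: bytes) -> None:
        if len(self.data) + len(chunk) > self.capacity:
            raise ValueError("allocation full")
        self.data.extend(chunk)

    def load(self, offset: int, length: int) -> bytes:
        return bytes(self.data[offset:offset + length])

@dataclass
class Extent:
    logical_offset: int              # position of this piece in the logical file
    length: int
    allocation: ByteArrayAllocation
    alloc_offset: int = 0

@dataclass
class ExNode:
    """Like an inode, but pointing at network allocations instead of disk blocks."""
    extents: List[Extent] = field(default_factory=list)

    def read(self, offset: int, length: int) -> bytes:
        out = bytearray()
        for ext in sorted(self.extents, key=lambda e: e.logical_offset):
            lo = max(offset, ext.logical_offset)
            hi = min(offset + length, ext.logical_offset + ext.length)
            if lo < hi:
                start = ext.alloc_offset + (lo - ext.logical_offset)
                out.extend(ext.allocation.load(start, hi - lo))
        return bytes(out)

# A 12-byte logical file spread over two allocations on (conceptually) two depots.
a, b = ByteArrayAllocation(64), ByteArrayAllocation(64)
a.store(b"hello, ")
b.store(b"world")
ex = ExNode([Extent(0, 7, a), Extent(7, 5, b)])
print(ex.read(0, 12))   # b'hello, world'
```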