Low tech connections into the ARPA Internet: the RawPacket split-gateway
This report describes a "low technology" method for connecting into the ARPA Internet. The use of a RawPacket interface in a system which supports IP makes possible the construction of a split-gateway between two hosts. The RawPacket interface permits a user-level process to introduce arbitrary packets into the IP layer, resulting in a virtual network interface. Since the split-gateway is implemented using a RawPacket interface, two networks may be connected together using a convenient medium which does not require explicit kernel support. Hence, split-gateways are well-suited for use as stub-gateways, connecting a local network to a long-haul network such as the ARPA backbone. In particular, the split-gateway discussed in this report achieves a reasonable level of connectivity for a comparatively small expenditure. This report details how the RawPacket software and split-gateways are implemented. In addition, various daemon configurations are presented, modifications to the operating environment are discussed, and some performance measurements are given.
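As a rough modern analogue of the virtual network interface this abstract describes, the sketch below uses the Linux TUN device rather than the original RawPacket interface, which the report does not specify in detail. A user-level process receives whole IP datagrams from the kernel's IP layer and could relay them to its split-gateway peer over any convenient medium. The interface name raw0 and the relay_to_peer helper are illustrative assumptions, not part of the report.

```c
/* Sketch of a user-level "virtual network interface", in the spirit of
 * the RawPacket split-gateway, using the modern Linux TUN device. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/if_tun.h>

static int open_tun(const char *name) {
    struct ifreq ifr;
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0)
        return -1;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TUN | IFF_NO_PI;   /* raw IP packets, no extra header */
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

int main(void) {
    unsigned char pkt[2048];
    int tun = open_tun("raw0");            /* hypothetical interface name */
    if (tun < 0) {
        perror("tun (requires root)");
        return 1;
    }
    /* Each read() yields one IP datagram handed down by the kernel's IP
     * layer. A split-gateway would relay it to its peer over whatever
     * medium is convenient (serial line, dial-up, a TCP tunnel, ...) and
     * write() packets arriving from the peer back into the IP layer. */
    for (;;) {
        ssize_t n = read(tun, pkt, sizeof(pkt));
        if (n <= 0)
            break;
        printf("got %zd-byte IP packet from the stack\n", n);
        /* relay_to_peer(pkt, n);  -- hypothetical transport over the medium */
    }
    close(tun);
    return 0;
}
```

Before any packets flow, the interface would still need to be configured and routed (e.g. with ip addr/ip link), which mirrors the report's point that the gateway function itself needs no explicit kernel support beyond the packet-injection interface.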
TCPSnitch: Dissecting the Usage of the Socket API
Networked applications interact with the TCP/IP stack through the socket API. Over the years, various extensions have been added to this popular API. In this paper, we propose and implement the TCPSnitch software that tracks the interactions between Linux and Android applications and the TCP/IP stack. We collect a dataset containing the interactions produced by more than 120 different applications. Our analysis reveals that applications use a variety of API calls. On Android, many applications use various socket options even if the Java API does not expose them directly. TCPSnitch and the associated dataset are publicly available (see https://www.tcpsnitch.or).
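The abstract does not spell out the tracking mechanism; a common way to observe an application's socket API usage on Linux is an LD_PRELOAD shim that interposes on the libc calls. The sketch below is a minimal example of that general technique, not TCPSnitch's actual implementation; the log format and the choice to intercept only socket() and setsockopt() are illustrative assumptions.

```c
/* Minimal LD_PRELOAD interposition sketch: log socket API calls, then
 * forward each one to the real libc implementation. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <sys/socket.h>

int socket(int domain, int type, int protocol) {
    static int (*real_socket)(int, int, int);
    if (!real_socket)
        real_socket = dlsym(RTLD_NEXT, "socket");  /* next "socket" in link order */
    int fd = real_socket(domain, type, protocol);
    fprintf(stderr, "socket(%d, %d, %d) = %d\n", domain, type, protocol, fd);
    return fd;
}

int setsockopt(int fd, int level, int optname,
               const void *optval, socklen_t optlen) {
    static int (*real_setsockopt)(int, int, int, const void *, socklen_t);
    if (!real_setsockopt)
        real_setsockopt = dlsym(RTLD_NEXT, "setsockopt");
    /* socket options are exactly the kind of usage the paper measures */
    fprintf(stderr, "setsockopt(fd=%d, level=%d, optname=%d)\n",
            fd, level, optname);
    return real_setsockopt(fd, level, optname, optval, optlen);
}
```

Built with something like gcc -shared -fPIC snitch.c -o snitch.so -ldl and run as LD_PRELOAD=./snitch.so some_app, every socket creation and option change made through libc would appear on stderr; a full tracer would interpose on the rest of the API the same way.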
Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992
Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. An SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer-class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.
Off-line computing for experimental high-energy physics
The needs of experimental high-energy physics for large-scale computing and data handling are explained in terms of the complexity of individual collisions and the need for high statistics to study quantum mechanical processes. The prevalence of university-dominated collaborations adds a requirement for high-performance wide-area networks. The data handling and computational needs of the different types of large experiment, now running or under construction, are evaluated. Software for experimental high-energy physics is reviewed briefly, with particular attention to the success of packages written within the discipline. It is argued that workstations and graphics are important in ensuring that analysis codes are correct, and the worldwide networks which support the involvement of remote physicists are described. Computing and data handling are reviewed, showing how workstations and RISC processors are rising in importance but have not supplanted traditional mainframe processing. Examples of computing systems constructed within high-energy physics are examined and evaluated.
Internet... the final frontier: an ethnographic account: exploring the cultural space of the Net from the inside
The research project The Internet as a space for interaction, which completed its mission in Autumn 1998, studied the constitutive features of network culture and network organisation. Special emphasis was given to the dynamic interplay of technical and social conventions regarding both the Net's organisation and its change. The ethnographic perspective chosen studied the Internet from the inside. Research concentrated upon three fields of study: the hegemonic operating technology of net nodes (UNIX), the network's basic transmission technology (the Internet Protocol, IP), and a popular communication service (Usenet). The project's final report includes the results of the three branches explored. Drawing upon the developments in these three fields, it is shown that changes that come about on the Net are neither anarchic nor arbitrary. Instead, the decentrally organised Internet is based upon technically and organisationally distributed forms of coordination, within which individual preferences collectively attain the power of developing into definitive standards.
Monkeys, typewriters and networks: the internet in the light of the theory of accidental excellence
Viewed in the light of the theory of accidental excellence, there is much to suggest that the success of the Internet and its various protocols derives from a communications technology accident, or better, a series of accidents. In the early 1990s, many experts still saw the Internet as an academic toy that would soon vanish into thin air again. The Internet probably gained its reputation as an academic toy largely because it violated the basic principles of traditional communications networks. The quarrel about paradigms that erupted in the 1970s between the telephony world and the newly emerging Internet community was not, however, only about transmission technology doctrines. It was also about the question, still unresolved today, of who actually governs the flow of information: the operators or the users of the network? The paper first describes various network architectures in relation to the communication cultures expressed in their make-up. It then examines the creative environment found at the nodes of the network, whose coincidental importance for the Internet boom must not be forgotten. Finally, the example of Usenet is used to look at the kind of regulatory practices that have emerged in the communications services provided within the framework of a decentralised network architecture.