The battle between standards: TCP/IP vs OSI victory through path dependency or by quality?
Between the end of the 1970s and 1994, a fierce competition existed between two candidate standards, TCP/IP and OSI, to solve the problem of interoperability between computer networks. Around 1994 it became evident that TCP/IP, and not OSI, had become the dominant standard. We specifically address the question of whether the current dominance of the TCP/IP standard is the result of third-degree path dependency or of choices based on assessments that it was technically and economically superior to the OSI standard and protocols.
Research into alternative network approaches for space operations
The main goal is to resolve the interoperability problem of applications employing the DOD TCP/IP (Department of Defense Transmission Control Protocol/Internet Protocol) family of protocols on a CCITT/ISO-based network. The objective is to allow them to communicate over the CCITT/ISO-protocol GPLAN (General Purpose Local Area Network) without modification to the users' application programs. Two primary assumptions were associated with the solution that was actually realized. The first is that the solution had to allow for a future move to the exclusive use of the CCITT/ISO standards. The second is that the solution had to be software-transparent to the currently installed TCP/IP and CCITT/ISO user application programs.
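The software-transparency requirement described above can be illustrated with a minimal relay sketch: applications keep speaking ordinary TCP while a gateway splices their byte stream onto the other network's transport. This is only an illustration of the transparency idea, not the actual GPLAN solution (which mapped onto CCITT/ISO protocols); the function names and the plain-TCP backend stand in for the OSI side and are assumptions.

```python
import socket
import threading

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Copy one direction of a byte stream until EOF, then propagate the close."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass  # peer already closed

def serve_one(listener: socket.socket, backend_addr) -> None:
    """Accept one client on the 'TCP/IP side' and splice it to the
    'other-network side' backend; the application sees an ordinary TCP peer
    and needs no modification."""
    client, _ = listener.accept()
    with client, socket.create_connection(backend_addr) as backend:
        t = threading.Thread(target=relay, args=(client, backend))
        t.start()
        relay(backend, client)
        t.join()
```

Because the splice happens below the application, either side of the gateway can later be replaced (e.g. by a pure CCITT/ISO stack) without touching installed programs, which is the first assumption the abstract states.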
Computing infrastructure issues in distributed communications systems : a survey of operating system transport system architectures
The performance of distributed applications (such as file transfer, remote login, tele-conferencing, full-motion video, and scientific visualization) is influenced by several factors that interact in complex ways. In particular, application performance is significantly affected both by communication infrastructure factors and by computing infrastructure factors. Communication infrastructure factors include channel speed, bit-error rate, and congestion at intermediate switching nodes. Computing infrastructure factors include (among other things) both protocol processing activities (such as connection management, flow control, error detection, and retransmission) and general operating system factors (such as memory latency, CPU speed, interrupt and context-switching overhead, process architecture, and message buffering). Due to a several-orders-of-magnitude increase in network channel speed and an increase in application diversity, performance bottlenecks are shifting from the network factors to the transport system factors. This paper defines an abstraction called an "Operating System Transport System Architecture" (OSTSA) that is used to classify the major components and services in the computing infrastructure. End-to-end network protocols such as TCP, TP4, VMTP, XTP, and Delta-t typically run on general-purpose computers, where they utilize various operating system resources such as processors, virtual memory, and network controllers. The OSTSA provides services that integrate these resources to support distributed applications running on local and wide area networks. A taxonomy is presented to evaluate OSTSAs in terms of their support for protocol processing activities. We use this taxonomy to compare and contrast five general-purpose commercial and experimental operating systems: System V UNIX, BSD UNIX, the x-kernel, Choices, and Xinu.
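The "protocol processing activities" the abstract lists (flow control, error detection, retransmission) can be made concrete with a toy stop-and-wait ARQ over a lossy link. This is a generic textbook sketch, not the OSTSA taxonomy itself; the channel model and loss probabilities are assumptions for illustration.

```python
import random
import zlib

class LossyChannel:
    """Unreliable one-way link: drops a frame with probability p."""
    def __init__(self, p: float, seed: int = 0):
        self.p = p
        self.rng = random.Random(seed)
        self.queue = []

    def send(self, frame) -> None:
        if self.rng.random() >= self.p:      # frame survives the link
            self.queue.append(frame)

    def recv(self):
        return self.queue.pop(0) if self.queue else None

def transfer(data: bytes, fwd: LossyChannel, ack_ch: LossyChannel,
             mtu: int = 4) -> bytes:
    """Stop-and-wait ARQ with alternating-bit sequence numbers and a CRC32
    error check; sender and receiver are simulated in one loop for clarity."""
    received = bytearray()
    expected = 0                              # receiver: next in-order frame
    for off in range(0, len(data), mtu):
        seq = (off // mtu) % 2
        payload = data[off:off + mtu]
        while True:                           # retransmit until ACKed
            fwd.send((seq, payload, zlib.crc32(payload)))
            frame = fwd.recv()                # --- receiver side ---
            if frame is not None:
                s, p, crc = frame
                if crc == zlib.crc32(p):      # error detection
                    if s == expected % 2:     # new in-order frame: accept it
                        received.extend(p)
                        expected += 1
                    ack_ch.send(("ACK", s))   # ack duplicates as well
            ack = ack_ch.recv()               # --- sender side ---
            if ack == ("ACK", seq):
                break
    return bytes(received)
```

Every retransmission and checksum here is CPU work charged to the host, which is exactly why, once links get fast, such transport-system processing (rather than the wire) becomes the bottleneck the paper discusses.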
Deep Space Network information system architecture study
The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The scope of the study extends from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: a unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level-0 data; delivery of level-0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitoring and control.
OMS FDIR: Initial prototyping
The Space Station Freedom Program (SSFP) Operations Management System (OMS) will automate major management functions which coordinate the operations of onboard systems, elements, and payloads. The objectives of OMS are to improve safety, reliability, and productivity while reducing maintenance and operations costs. This will be accomplished by using advanced automation techniques to automate much of the activity currently performed by the flight crew and ground personnel. OMS requirements have been organized into five task groups: (1) Planning, Execution, and Replanning; (2) Data Gathering, Preprocessing, and Storage; (3) Testing and Training; (4) Resource Management; and (5) Caution and Warning and Fault Management for onboard subsystems. The scope of this prototyping effort falls within the Fault Management requirements group. The prototyping will be performed in two phases. Phase 1 is the development of an onboard communications-network fault detection, isolation, and reconfiguration (FDIR) system. Phase 2 will incorporate global FDIR for onboard systems. Research into the applicability of expert systems, object-oriented programming, fuzzy sets, neural networks, and other advanced techniques will be conducted. The goals and technical approach for this new SSFP research project are discussed here.
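The detect-isolate-reconfigure cycle that Phase 1 targets can be sketched as a heartbeat monitor over a redundant bus: a node that misses enough heartbeats is declared faulty, isolated, and its traffic switched to a backup route. This is a minimal illustrative sketch under assumed semantics (cycle-counted heartbeats, one backup route per node), not the OMS prototype's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class NetworkFDIR:
    """Toy FDIR loop for a redundant onboard bus: detect a silent node by
    missed heartbeats, isolate it, and reconfigure to the backup route."""
    heartbeat_timeout: int = 3                   # silent cycles => fault
    routes: dict = field(default_factory=dict)   # node -> "primary" | "backup"
    last_seen: dict = field(default_factory=dict)
    isolated: set = field(default_factory=set)

    def register(self, node: str) -> None:
        self.routes[node] = "primary"
        self.last_seen[node] = 0

    def heartbeat(self, node: str, cycle: int) -> None:
        self.last_seen[node] = cycle

    def step(self, cycle: int):
        """Run one monitor cycle; return (node, action) events raised."""
        events = []
        for node in self.routes:
            silent = cycle - self.last_seen[node]
            if node not in self.isolated and silent >= self.heartbeat_timeout:
                self.isolated.add(node)          # detection + isolation
                self.routes[node] = "backup"     # reconfiguration
                events.append((node, "switched-to-backup"))
        return events
```

A global (Phase 2) FDIR would layer cross-subsystem reasoning on top of many such local monitors, which is where the expert-system and neural-network techniques mentioned above would come in.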
Communication and control in small batch part manufacturing
This paper reports on the development of a real-time control network as an integrated part of a shop floor control system for small batch part manufacturing. The shop floor control system is called the production control system (PCS). The PCS aims at improved control of small batch part manufacturing systems, enabling both a more flexible use of resources and a decrease in the economic batch size. For this, the PCS integrates various control functions such as scheduling, dispatching, workstation control, and monitoring, whilst being connected on-line to the production equipment on the shop floor. The PCS can be applied irrespective of the level of automation on the shop floor. The control network is an essential part of the PCS, as it provides a real-time connection between the different modules (computers) of the PCS, which are geographically distributed over the shop floor. An overview of the requirements of such a control network is given. The description of the design includes the services developed, the protocols used, and the physical layout of the network. A prototype of the PCS, including the control network, has been installed and tested in a pilot plant. The control network has proven that it can supply a manufacturing environment, consisting of equipment from different vendors with different levels of automation, with a reliable, low-cost, real-time communication facility.
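The kind of messaging service such a control network provides can be sketched as prioritized inboxes between PCS modules, so that an urgent workstation-control command overtakes bulk monitoring traffic. The class and message names are illustrative assumptions, not the services the paper actually specifies.

```python
import heapq
import itertools

class ControlNetwork:
    """Toy in-process stand-in for the PCS control network: modules register
    named inboxes and exchange prioritized messages (lower number = more
    urgent), with FIFO order preserved within each priority level."""
    def __init__(self):
        self.inboxes = {}
        self._order = itertools.count()   # tie-breaker for stable FIFO

    def register(self, module: str) -> None:
        self.inboxes[module] = []

    def send(self, dst: str, priority: int, payload) -> None:
        heapq.heappush(self.inboxes[dst], (priority, next(self._order), payload))

    def deliver(self, module: str):
        """Pop the most urgent pending message for a module, or None."""
        box = self.inboxes[module]
        return heapq.heappop(box)[2] if box else None
```

Priority-ordered delivery is one simple way to meet the real-time requirement over a shared medium linking equipment from different vendors; a real shop-floor network would add bounded latency and delivery guarantees at the protocol level.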