
    Deep Space Network information system architecture study

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.
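    To make the "level 0 data" feature concrete, here is a minimal sketch of what a level-0 delivery record might look like: time-tagged, quality-flagged, otherwise unprocessed telemetry frames produced by a telemetry processor dedicated to one receiver. The field names and values are illustrative assumptions, not a DSN specification.

```python
# Illustrative only: an assumed record layout for level-0 telemetry delivery.
from dataclasses import dataclass

@dataclass(frozen=True)
class Level0Frame:
    spacecraft_id: str         # end user / mission the frame belongs to
    receiver_id: str           # receiver whose dedicated telemetry processor produced it
    earth_receive_time: float  # station time tag, seconds
    frame_quality: str         # e.g. "good", "erred", "filler" (assumed values)
    payload: bytes             # raw transfer-frame contents, no higher-level processing
```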

    Parallel implementation of the TRANSIMS micro-simulation

    This paper describes the parallel implementation of the TRANSIMS traffic micro-simulation. The parallelization method is domain decomposition, which means that each CPU of the parallel computer is responsible for a different geographical area of the simulated region. We describe how information between domains is exchanged, and how the transportation network graph is partitioned. An adaptive scheme is used to optimize load balancing. We then demonstrate how computing speeds of our parallel micro-simulations can be systematically predicted once the scenario and the computer architecture are known. This makes it possible, for example, to decide if a certain study is feasible with a certain computing budget, and how to invest that budget. The main ingredients of the prediction are knowledge about the parallel implementation of the micro-simulation, knowledge about the characteristics of the partitioning of the transportation network graph, and knowledge about the interaction of these quantities with the computer system. In particular, we investigate the differences between switched and non-switched topologies, and the effects of 10 Mbit, 100 Mbit, and Gbit Ethernet. Keywords: Traffic simulation, parallel computing, transportation planning, TRANSIMS.
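    The prediction idea lends itself to a simple cost model: per-timestep time is the one-CPU computation time divided by the number of CPUs, plus a communication term that depends on the number of neighboring domains, the per-message latency, and the boundary traffic over the available bandwidth. The sketch below illustrates that kind of model; all parameter values are placeholders, not the paper's calibrated numbers.

```python
# Hedged sketch of a scaling prediction: computation shrinks with p,
# communication does not. Parameter values below are illustrative only.

def predicted_time_per_step(p, t_comp_serial, n_neighbors, t_latency,
                            boundary_bytes, bandwidth_bytes_per_s):
    """Estimated wall-clock seconds per simulation step on p CPUs."""
    computation = t_comp_serial / p                      # ideal domain decomposition
    communication = (n_neighbors * t_latency
                     + boundary_bytes / bandwidth_bytes_per_s)
    return computation + communication

if __name__ == "__main__":
    # Compare 100 Mbit against Gbit Ethernet (assumed latencies and traffic).
    for name, bw in [("100 Mbit", 100e6 / 8), ("Gbit", 1e9 / 8)]:
        for p in (1, 4, 16, 64):
            t = predicted_time_per_step(p, t_comp_serial=10.0, n_neighbors=4,
                                        t_latency=0.5e-3, boundary_bytes=2e5,
                                        bandwidth_bytes_per_s=bw)
            print(f"{name:>8} p={p:3d}  t/step ~ {t:.3f} s")
```

    A model of this shape makes the feasibility question quantitative: once the communication term dominates, adding CPUs no longer reduces the time per step.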

    Multilevel Parallel Communications

    The research reported in this thesis investigates the use of parallelism at multiple levels to realize high-speed networks that offer advantages in throughput, cost, reliability, and flexibility over alternative approaches. This research specifically considers use of parallelism at two levels: the upper level and the lower level. At the upper level, N protocol processors perform functions included in the transport and network layers. At the lower level, M channels provide data and physical layer functions. The resulting system provides very high bandwidth to an application. A key concept of this research is the use of replicated channels to provide a single, high bandwidth channel to a single application. The parallelism provided by the network is transparent to communicating applications, thus differentiating this strategy from schemes that provide a collection of disjoint channels between applications on different nodes. Another innovative aspect of this research is that parallelism is exploited at multiple layers of the network to provide high throughput not only at the physical layer, but also at upper protocol layers. Schedulers are used to distribute data from a single stream to multiple channels and to merge data from multiple channels to reconstruct a single coherent stream. High throughput is possible by providing the combined bandwidth of multiple channels to a single source and destination through use of parallelism at multiple protocol layers. This strategy is cost effective since systems can be built using standard technologies that benefit from the economies of a broad applications base. The exotic and revolutionary components needed in non-parallel approaches to build high speed networks are not required. The replicated channels can be used to achieve high reliability as well. Multilevel parallelism is flexible since the degree of parallelism provided at any level can be matched to protocol processing demands and application requirements
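    The scheduler/merger idea can be illustrated in a few lines: segments of one stream are tagged with sequence numbers, distributed round-robin over M channels, and re-ordered at the receiver to reconstruct a single coherent stream. This is a sketch of the general striping technique, not the thesis's actual scheduler design.

```python
# Hedged sketch: stripe one stream across M channels, then merge it back.

def stripe(data: bytes, m_channels: int, seg_size: int):
    """Split a stream into per-channel lists of (sequence number, segment)."""
    channels = [[] for _ in range(m_channels)]
    for seq, off in enumerate(range(0, len(data), seg_size)):
        channels[seq % m_channels].append((seq, data[off:off + seg_size]))
    return channels

def merge(channels):
    """Reassemble the original stream by sequence number."""
    segments = [item for channel in channels for item in channel]
    segments.sort(key=lambda s: s[0])
    return b"".join(segment for _, segment in segments)

if __name__ == "__main__":
    payload = b"0123456789" * 1000
    assert merge(stripe(payload, m_channels=4, seg_size=64)) == payload
```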

    Extremely high data-rate, reliable network systems research

    Significant progress was made over the year in the four focus areas of this research group: gigabit protocols, extensions of metropolitan protocols, parallel protocols, and distributed simulations. Two activities, a network management tool and the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol, have developed to the point that a patent is being applied for in the next year; a tool set for distributed simulation using the language SIMSCRIPT also has commercial potential and is to be further refined. The year's results for each of these areas are summarized and next year's activities are described.

    Future benefits and applications of intelligent on-board processing to VSAT services

    The trends and roles of VSAT services in the year 2010 time frame are examined based on an overall network and service model for that period. An estimate of the VSAT traffic is then made and the service and general network requirements are identified. In order to accommodate these traffic needs, four satellite VSAT architectures based on the use of fixed or scanning multibeam antennas in conjunction with IF switching or onboard regeneration and baseband processing are suggested. The performance of each of these architectures is assessed and the key enabling technologies are identified.

    A distributed network architecture for video-on-demand

    The objective of this thesis is to design a distributed network architecture that provides video-on-demand services to public subscribers. This architecture is proposed as an alternative to a centralized video service system. The latter system is currently being developed by Oracle Corporation and NCube Corporation. A simulator is developed to compare the performance of both the distributed and centralized video server architectures. Moreover, an estimate of the cost of both systems is derived using current price data. It is shown that the distributed video server architecture offers a better cost/performance trade-off than the centralized system. In addition, the distributed system can be scaled up in an incremental fashion to increase the system capacity and throughput. Finally, the distributed system is a more robust system: in the presence of component failure, it can be configured to isolate or bypass failed components. Thus, it allows for graceful performance degradation, which is difficult to achieve in a centralized system.
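    The cost/performance and robustness arguments can be made concrete with back-of-the-envelope arithmetic: compare dollars per supported stream for one large server versus many small nodes, and note what a single failure costs each design. The numbers below are hypothetical placeholders, not the price data used in the thesis.

```python
# Hypothetical figures only -- the point is the comparison, not the values.
central_cost, central_streams = 2_000_000, 5_000      # one large video server
n_nodes, node_cost, node_streams = 50, 30_000, 120    # many small distributed nodes

dist_cost = n_nodes * node_cost
dist_streams = n_nodes * node_streams

print("centralized $/stream :", central_cost / central_streams)
print("distributed $/stream :", dist_cost / dist_streams)

# Graceful degradation: one failed node removes only its own share of streams,
# whereas a centralized server failure removes all of them.
print("distributed streams after one node failure:", (n_nodes - 1) * node_streams)
```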

    Journal of Telecommunications in Higher Education

    This Issue: Integrating Networks; ATM: It's All That Matters; ATM Delivers Voice, Data, Video; Cabling the Integrated Network; Interview: Robert Collet, Data Services & Network Systems; BYU: Striving for Excellence in Telecom Service.

    The X-Files: Investigating Alien Performance in a Thin-client World

    Many scientific applications use the X11 window environment, an open-source windowing GUI standard employing a client/server architecture. X11 promotes distributed computing, thin-client functionality, cheap desktop displays, compatibility with heterogeneous servers, remote services and administration, and greater maturity than newer web technologies. This paper details the author's investigations into close encounters with alien performance in X11-based seismic applications running on a 200-node cluster, backed by 2 TB of mass storage. End-users cited two significant UFOs (Unidentified Faulty Operations): i) long application launch times and ii) poor interactive response times. The paper is divided into three major sections describing Close Encounters of the 1st Kind: sightings of UFO experiences, the 2nd Kind: recording evidence of a UFO, and the 3rd Kind: contact and analysis. UFOs do exist and this investigation presents a real case study for evaluating workload analysis and other diagnostic tools. Comment: 13 pages; Invited Lecture at the High Performance Computing Conference, University of Tromso, Norway, June 27-30, 199
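    Recording evidence of the first UFO (long launch times) needs little more than repeated timing of application start-up under controlled conditions. The sketch below shows one generic way to collect such samples; "xdpyinfo" is merely a stand-in for a cheap X11 client and is not taken from the paper.

```python
# Hedged sketch: repeatedly time a command's launch-to-exit latency.
import statistics
import subprocess
import time

def time_launches(cmd, runs=10):
    """Return wall-clock durations (seconds) for `runs` launches of `cmd`."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        samples.append(time.perf_counter() - start)
    return samples

if __name__ == "__main__":
    samples = time_launches(["xdpyinfo"])   # substitute the real seismic application
    print(f"min {min(samples):.3f}s  median {statistics.median(samples):.3f}s  "
          f"max {max(samples):.3f}s")
```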

    IP Network Management Platforms Before the Web

    In this paper, we analyze the characteristics and shortcomings of IP network management platforms before the arrival of Web technologies. In the first part, we give a brief history of IP network management, and summarize the limitations of traditional (i.e., pre-Web and SNMP-based) management platforms. We recall the initial objectives of open network management. We then explain how the early vision of generic management was changed by the industry's natural inclination for market segmentation, and how the market of IP networks evolved from generic to vendor-specific equipment, management GUIs and MIBs. In the second part, we propose a simple model of traditional IP network management platforms, against which new Web-based management solutions can be compared. We introduce the three core functions of such platforms (network monitoring, data collection, and event handling), distinguish regular management from ad hoc management, and explain how SNMP's polling model maps onto these functions.
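    As a rough illustration of how the polling model maps onto those three core functions, the sketch below runs a periodic GET loop: each poll feeds data collection and, when a sample is missing or crosses a threshold, triggers event handling. snmp_get() is a placeholder to be wired to a real SNMP library or CLI; the agents, OIDs, and threshold are assumptions, not the paper's model.

```python
# Hedged sketch: SNMP-style polling mapped onto monitoring, collection, events.
import time

def snmp_get(agent, oid):
    """Placeholder for an SNMP GET request; wire up a real SNMP stack here."""
    raise NotImplementedError

def store_sample(agent, oid, value):
    print(f"{int(time.time())} {agent} {oid} = {value}")       # data collection

def raise_event(agent, oid, detail):
    print(f"EVENT: {agent} {oid}: {detail}")                    # event handling

def poll_loop(agents, oids, interval_s=60, threshold=0.9):
    while True:
        for agent in agents:                                    # network monitoring
            for oid in oids:
                try:
                    value = snmp_get(agent, oid)
                except Exception as exc:                        # agent unreachable, timeout, ...
                    raise_event(agent, oid, f"poll failed: {exc}")
                    continue
                store_sample(agent, oid, value)
                if isinstance(value, (int, float)) and value > threshold:
                    raise_event(agent, oid, f"threshold exceeded: {value}")
        time.sleep(interval_s)
```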

    Off-line computing for experimental high-energy physics

    Get PDF
    The needs of experimental high-energy physics for large-scale computing and data handling are explained in terms of the complexity of individual collisions and the need for high statistics to study quantum mechanical processes. The prevalence of university-dominated collaborations adds a requirement for high-performance wide-area networks. The data handling and computational needs of the different types of large experiment, now running or under construction, are evaluated. Software for experimental high-energy physics is reviewed briefly with particular attention to the success of packages written within the discipline. It is argued that workstations and graphics are important in ensuring that analysis codes are correct, and the worldwide networks which support the involvement of remote physicists are described. Computing and data handling are reviewed showing how workstations and RISC processors are rising in importance but have not supplanted traditional mainframe processing. Examples of computing systems constructed within high-energy physics are examined and evaluated