6 research outputs found

    Technology Directions for the 21st Century

    New technologies will unleash the huge capacity of fiber-optic cable to meet growing demand for bandwidth. Companies will continue to replace private networks with public network bandwidth-on-demand. Although asynchronous transfer mode (ATM) is the transmission technology favored by many, its penetration will be slower than anticipated. Hybrid networks (e.g., a mix of ATM, frame relay, and fast Ethernet) may predominate as both interim and long-term solutions, based on factors such as availability, interoperability, and cost. Telecommunications equipment and service prices will decrease further due to increased supply and greater competition. Explosive Internet growth will continue, requiring additional backbone transmission capacity and enhanced protocols, but it is not clear who will fund the upgrade. Within ten years, space-based constellations of satellites in low Earth orbit (LEO) will serve mobile users employing small, low-power terminals. 'Little LEOs' will provide packet transmission services and geo-position determination. 'Big LEOs' will function as global cellular telephone networks, with some planning to offer video and interactive multimedia services. Geosynchronous satellites are also proposed for mobile voice-grade links and high-bandwidth services. NASA may benefit from the resulting cost reductions in components, space hardware, launch services, and telecommunications services.

    High performance computing and communications: FY 1995 implementation plan

    Supporting distributed computation over wide area gigabit networks

    The advent of high-bandwidth fibre-optic links that may be used over very large distances has led to much research and development in the field of wide area gigabit networking. One problem that needs to be addressed is how loosely coupled distributed systems may be built over these links, allowing many computers worldwide to take part in complex calculations in order to solve "Grand Challenge" problems. The research conducted as part of this PhD has looked at the practicality of implementing a communication mechanism proposed by Craig Partridge called Late-binding Remote Procedure Calls (LbRPC). LbRPC is intended to export both code and data over the network to remote machines for evaluation, as opposed to traditional RPC mechanisms that only send parameters to pre-existing remote procedures. The ability to send code as well as data means that LbRPC requests can overcome one of the biggest problems in Wide Area Distributed Computer Systems (WADCS): the fixed latency due to the speed of light. As machines get faster, the fixed multi-millisecond round-trip delay equates to an ever-increasing number of CPU cycles. For a WADCS to be efficient, programs should minimise the number of network transits they incur, and allowing the application programmer to export arbitrary code to the remote machine is one way to achieve this. This research has looked at the feasibility of supporting secure exportation of arbitrary code and data in heterogeneous, loosely coupled, distributed computing environments. It has also investigated techniques for making placement decisions for the code when there are a large number of widely dispersed remote servers that could be used. The latter has resulted in the development of a novel prototype LbRPC implementation using multicast IP for implicit placement, together with a sequenced, multi-packet saturation multicast transport protocol. These prototypes show that it is possible to export code and data to multiple remote hosts, removing the need to make complex and error-prone explicit process placement decisions.
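
    To make the contrast with conventional RPC concrete, the following is a minimal sketch of the late-binding idea, assuming a Python-style environment. It is purely illustrative: the request format, the function names (late_binding_rpc_server, entry), and the in-process "remote" side are all invented for this example, and Partridge's actual LbRPC design additionally covers heterogeneity, secure code exchange, and the multicast transport, none of which appear here.

    ```python
    # Sketch: traditional RPC (parameters only) vs. a late-binding call that
    # ships code together with its data so a multi-step computation costs a
    # single round trip. Illustrative only; not the LbRPC wire protocol.

    import json
    import textwrap


    # --- "Remote" side ------------------------------------------------------

    def traditional_rpc_server(procedure_name, params):
        """Classic RPC: only parameters travel; procedures must already exist."""
        procedures = {"sum_squares": lambda xs: sum(x * x for x in xs)}
        return procedures[procedure_name](params)


    def late_binding_rpc_server(request_json):
        """LbRPC-style evaluation: the request carries both code and data."""
        request = json.loads(request_json)
        namespace = {}
        exec(request["code"], namespace)             # bind the shipped code late
        return namespace["entry"](request["data"])   # run it on the shipped data


    # --- "Client" side --------------------------------------------------------

    if __name__ == "__main__":
        data = [1, 2, 3, 4]

        # Traditional RPC: each call to a pre-existing procedure is one transit.
        print(traditional_rpc_server("sum_squares", data))   # 30

        # Late-binding call: arbitrary code is exported with the data, so the
        # whole multi-step computation completes in one round trip.
        code = textwrap.dedent("""
            def entry(xs):
                squares = [x * x for x in xs]        # step 1
                return sum(squares) / len(squares)   # step 2, no extra transit
        """)
        request = json.dumps({"code": code, "data": data})
        print(late_binding_rpc_server(request))              # 7.5
    ```

    The point of the sketch is the second call: because the client exports the whole computation, the number of network transits no longer grows with the number of steps, which is the latency argument made in the abstract above.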

    High performance computing and communications: FY 1996 implementation plan

    High performance computing and communications: FY 1997 implementation plan

    European Information Technology Observatory 1995
