Hastily Formed Networks (HFN) As an Enabler for the Emergency Response Community
The effects of natural or manmade disasters on communications infrastructures are often so severe that, immediately after the disaster, emergency responders are unable to use them. In addition, some areas have no useful infrastructure at all. To bridge this communications gap, a reliable technology that does not depend on existing infrastructure is needed. This thesis first identifies the problem of communications gaps during natural or manmade disasters and reviews the impact and potential benefit of implementing a solution based on the Hastily Formed Networks (HFN) model. The research explores different technological solutions to this problem by evaluating documentation for commercial off-the-shelf (COTS) technologies. Additionally, the thesis reviews the results of field experimentation conducted to evaluate the performance of these technologies in the field. The ultimate goal is to introduce the HFN concept as an enabler for the Emergency Response Community (ERC). Throughout this research, the focus revolves around testing COTS technologies. The research provides emergency responders with the background knowledge to decide how best to bridge the communications gap in austere environments, and thereby enable a better response.
http://archive.org/details/hastilyformednet109456762
Lieutenant Commander, United States Navy
Investigation Of OSI Protocols For Distributed Interactive Simulation: Final Report, A Transition Plan
Report assesses the impact of using Open System Interconnection (OSI) protocols in the distributed interactive simulation (DIS) environment.
Communication Architecture For Distributed Interactive Simulation (CADIS): Military Standard (draft)
Report establishes the requirements for the communication architecture to be used in a distributed interactive simulation, including the standards and recommended practices for implementing the communication architecture and the rationales behind them.
Satellite Networks: Architectures, Applications, and Technologies
Because communication satellites' unique networking characteristics are moving global satellite networks to the forefront of enhancing national and global information infrastructures, a workshop was organized to assess the progress made to date and to chart the future. The workshop provided a forum to assess the current state of the art, identify key issues, and highlight emerging trends in next-generation architectures, data protocol development, communication interoperability, and applications. Presentations covering overviews, the state of the art in research, development, deployment, and applications, and future trends in satellite networks are assembled.
Supporting distributed computation over wide area gigabit networks
The advent of high bandwidth fibre optic links that may be used over very large distances
has led to much research and development in the field of wide area gigabit networking. One
problem that needs to be addressed is how loosely coupled distributed systems may be built over
these links, allowing many computers worldwide to take part in complex calculations in order
to solve "Grand Challenge" problems. The research conducted as part of this PhD has looked
at the practicality of implementing a communication mechanism proposed by Craig Partridge
called Late-binding Remote Procedure Calls (LbRPC).
LbRPC is intended to export both code and data over the network to remote machines for
evaluation, as opposed to traditional RPC mechanisms that only send parameters to pre-existing
remote procedures. The ability to send code as well as data means that LbRPC requests can
overcome one of the biggest problems in Wide Area Distributed Computer Systems (WADCS):
the fixed latency due to the speed of light. As machines get faster, the fixed multi-millisecond
round trip delay equates to ever increasing numbers of CPU cycles. For a WADCS to be
efficient, programs should minimise the number of network transits they incur. By allowing the
application programmer to export arbitrary code to the remote machine, this may be achieved.
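To make the scale of that fixed latency concrete, a rough back-of-the-envelope calculation (the figures below are illustrative assumptions, not numbers from the thesis):

```python
# Illustrative figures only: a transatlantic round trip of ~60 ms
# on a CPU that retires roughly 1e9 instructions per second.
rtt_seconds = 0.060               # fixed round-trip delay, bounded by the speed of light
instructions_per_second = 1e9     # assumed CPU throughput

# Instructions the machine could have executed while stalled on one network transit.
wasted_instructions = int(rtt_seconds * instructions_per_second)
print(wasted_instructions)  # 60000000
```

As CPUs get faster the delay stays fixed, so the per-transit cost in cycles only grows, which is why minimising transits matters more than raw link bandwidth here.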
This research has looked at the feasibility of supporting secure exportation of arbitrary
code and data in heterogeneous, loosely coupled, distributed computing environments. It has
investigated techniques for making placement decisions for the code in cases where there are a
large number of widely dispersed remote servers that could be used. The latter has resulted in
the development of a novel prototype LbRPC using multicast IP for implicit placement and a
sequenced, multi-packet saturation multicast transport protocol. These prototypes show that
it is possible to export code and data to multiple remote hosts, thereby removing the need to
perform complex and error-prone explicit process placement decisions.
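The core idea of shipping code along with data, rather than only parameters to pre-existing remote procedures, can be sketched in a few lines. This is a minimal illustrative sketch, not the thesis's actual LbRPC protocol or wire format; the function names (`lbrpc_request`, `lbrpc_evaluate`) and the use of `marshal` for code serialisation are assumptions made for the example:

```python
import builtins
import marshal
import types

def lbrpc_request(func, *args):
    # Package the procedure's compiled code together with its arguments.
    # A traditional RPC would ship only the arguments.
    return marshal.dumps(func.__code__), args

def lbrpc_evaluate(payload):
    # What a remote server would do: reconstruct the function from the
    # shipped code object and evaluate it locally.
    code_bytes, args = payload
    func = types.FunctionType(marshal.loads(code_bytes),
                              {"__builtins__": builtins})
    return func(*args)

# One round trip now carries an arbitrary computation.
def dot(xs, ys):
    return sum(x * y for x, y in zip(xs, ys))

result = lbrpc_evaluate(lbrpc_request(dot, [1, 2, 3], [4, 5, 6]))
print(result)  # 32
```

A real implementation would also have to address the security and heterogeneity issues the abstract mentions, since evaluating arbitrary shipped code on a remote host is unsafe without sandboxing.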
Issues in Automated Distribution of Processes Over the Networks
The main goal of this paper is to survey the issues an application developer would have to resolve in producing a system able to spread its computational load across several computers connected by a network. Before this can be done, a brief introduction to distributed and parallel computing is necessary.
Computing infrastructure issues in distributed communications systems : a survey of operating system transport system architectures
The performance of distributed applications (such as file transfer, remote login, tele-conferencing, full-motion video, and scientific visualization) is influenced by several factors that interact in complex ways. In particular, application performance is significantly affected by both communication infrastructure factors and computing infrastructure factors. Communication infrastructure factors include channel speed, bit-error rate, and congestion at intermediate switching nodes. Computing infrastructure factors include (among other things) both protocol processing activities (such as connection management, flow control, error detection, and retransmission) and general operating system factors (such as memory latency, CPU speed, interrupt and context switching overhead, process architecture, and message buffering). Due to an increase of several orders of magnitude in network channel speed and an increase in application diversity, performance bottlenecks are shifting from the network factors to the transport system factors.

This paper defines an abstraction called an "Operating System Transport System Architecture" (OSTSA) that is used to classify the major components and services in the computing infrastructure. End-to-end network protocols such as TCP, TP4, VMTP, XTP, and Delta-t typically run on general-purpose computers, where they utilize various operating system resources such as processors, virtual memory, and network controllers. The OSTSA provides services that integrate these resources to support distributed applications running on local and wide area networks.

A taxonomy is presented to evaluate OSTSAs in terms of their support for protocol processing activities. We use this taxonomy to compare and contrast five general-purpose commercial and experimental operating systems: System V UNIX, BSD UNIX, the x-kernel, Choices, and Xinu.