Deep Space Network information system architecture study
The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is the expected dramatic improvement in information-system technologies, such as computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: a unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.
Towards a unified experimentation framework for protocol engineering
The design and development of complex systems requires an adequate methodology and efficient instrumental support in order to detect and correct anomalies in the functional and non-functional properties of the solution as early as possible. In this article, a Unified Experimentation Framework (UEF) providing experimentation facilities at both the design and development stages is introduced. This UEF provides a means to run experiments both in simulation mode, using UML2 models of the designed protocol, and in emulation mode, using the real protocol implementation. A practical use case of the experimentation framework is illustrated in the context of a satellite environment.
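As a rough illustration of the dual-mode idea, the sketch below drives the same workload against either a simulated model or a real implementation behind one common interface. All class and method names here are assumptions made for illustration, not the UEF's actual API.

```python
# Hypothetical sketch of the UEF's dual-mode idea: one experiment driver that
# can target either a simulated protocol model or a real implementation.
from abc import ABC, abstractmethod


class ProtocolUnderTest(ABC):
    """Common facade over a UML2 simulation model or a real implementation."""

    @abstractmethod
    def send(self, payload: bytes) -> None: ...

    @abstractmethod
    def metrics(self) -> dict: ...


class SimulatedProtocol(ProtocolUnderTest):
    def __init__(self):
        self.sent = 0

    def send(self, payload: bytes) -> None:
        self.sent += 1  # a real model would step the UML2 state machine here

    def metrics(self) -> dict:
        return {"mode": "simulation", "packets": self.sent}


class EmulatedProtocol(ProtocolUnderTest):
    def __init__(self, socket_like):
        self.socket = socket_like  # real implementation over an emulated link
        self.sent = 0

    def send(self, payload: bytes) -> None:
        self.socket.send(payload)
        self.sent += 1

    def metrics(self) -> dict:
        return {"mode": "emulation", "packets": self.sent}


def run_experiment(target: ProtocolUnderTest, workload: list[bytes]) -> dict:
    """Run the same workload regardless of which mode backs the target."""
    for payload in workload:
        target.send(payload)
    return target.metrics()


print(run_experiment(SimulatedProtocol(), [b"ping"] * 3))
# {'mode': 'simulation', 'packets': 3}
```

The point of the shared interface is that the experiment script is written once and reused unchanged when moving from design-stage simulation to development-stage emulation.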
Re-feedback: freedom with accountability for causing congestion in a connectionless internetwork
This dissertation concerns adding resource accountability to a simplex internetwork such as the Internet, with only the constraints on freedom that are necessary and sufficient. That is, both freedom for applications to evolve new, innovative behaviours while still responding responsibly to congestion, and freedom for network providers to structure their pricing in any way, including flat pricing.

The big idea on which the research is built is a novel feedback arrangement termed 're-feedback'. A general form is defined, as well as a specific proposal (re-ECN) to alter the Internet protocol so that self-contained datagrams carry a metric of expected downstream congestion. Congestion is chosen because of its central economic role as the marginal cost of network usage. The aim is to ensure that Internet resource allocation can be controlled either by local policies or by market selection (or indeed by a local lack of any control).

The current Internet architecture is designed to reveal path congestion only to end-points, not to networks. The collective actions of self-interested consumers and providers should drive Internet resource allocations towards maximisation of total social welfare. But without visibility of a cost metric, network operators are violating the architecture to improve their customers' experience. The resulting fight against the architecture is destroying the Internet's simplicity and ability to evolve.

Although accountability with freedom is the goal, the focus is the congestion metric, and whether an incentive system is possible that assures its integrity as it is passed between parties around the system, despite proposed attacks motivated by self-interest and malice.

This dissertation defines the protocol and canonical examples of accountability mechanisms. The designs are all derived from carefully motivated principles. The resulting system is evaluated by analysis and simulation against the constraints and principles originally set. The mechanisms are proven to be agnostic to specific transport behaviours, but they could not be made flow-ID-oblivious.
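To make the downstream-congestion metric concrete, the following toy calculation shows how a declaration carried in each datagram could be consumed hop by hop. The field name and per-hop marking values are illustrative assumptions, not the re-ECN wire format.

```python
# A minimal numerical sketch of the re-feedback idea as described above: each
# datagram carries a field declaring expected downstream congestion, and every
# hop subtracts its own marking level, so the residual value at any point in
# the path estimates the congestion still to come.

HOP_MARKING = [0.01, 0.03, 0.02]  # assumed per-hop congestion marking levels


def declared_field_along_path(initial_declaration: float) -> list[float]:
    """Trace the downstream-congestion declaration as each hop consumes it."""
    field = initial_declaration  # sender re-inserts last round's feedback
    trace = [field]
    for marking in HOP_MARKING:
        field -= marking  # each hop deducts its local congestion contribution
        trace.append(field)
    return trace


# An honest sender declares the whole-path congestion, so the field decays to
# roughly zero at the receiver; a persistently negative residual would expose
# an understating sender, which is what the accountability mechanisms police.
trace = declared_field_along_path(sum(HOP_MARKING))
assert abs(trace[-1]) < 1e-9   # approximately [0.06, 0.05, 0.02, 0.0]
```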
Reflections on Active Networking
Interactions among telecommunications networks, computers, and other peripheral devices have been of interest since the earliest distributed computing systems. A key architectural question is the location (and nature) of programmability. One perspective, that examined in this paper, is that network elements should be as programmable as possible, in order to build the most flexible distributed computing systems.
This paper presents my personal view of the history of programmable networking over the last two decades and, in the spirit of vox audita perit, littera scripta manet ("the heard voice perishes, the written letter remains"), includes an account of how what is now called Active Networking came into being. It demonstrates the deep roots Active Networking has in the programming languages, networking, and operating systems communities, and shows how interdisciplinary approaches can have impacts greater than the sums of their parts. Lessons are drawn both from the broader research agenda and from the specific goals pursued in the SwitchWare project. I close by speculating on possible futures for Active Networking.
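As a concrete, if toy, rendering of the capsule model that Active Networking popularised, the sketch below lets each packet carry the code that nodes evaluate on receipt. It is my own illustration of the concept, not SwitchWare code.

```python
# A toy rendering of the "capsule" model: instead of a fixed forwarding
# function, each node evaluates a small program carried in the packet itself.

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbours = {}
        self.delivered = []

    def receive(self, capsule):
        # The active-network twist: the packet's own code decides what happens.
        capsule["program"](self, capsule)


def unicast(node, capsule):
    """Capsule behaviour: follow a source route embedded in the packet."""
    route = capsule["route"]
    if route:
        node.neighbours[route.pop(0)].receive(capsule)
    else:
        node.delivered.append(capsule["payload"])


# Wire up a three-node line A - B - C and send an actively routed capsule.
a, b, c = Node("A"), Node("B"), Node("C")
a.neighbours["B"] = b
b.neighbours["C"] = c
a.receive({"program": unicast, "route": ["B", "C"], "payload": "hello"})
assert c.delivered == ["hello"]
```

Deploying new behaviour (multicast, caching, diagnostics) then means shipping a new capsule program rather than upgrading every switch, which is the flexibility argument the paper examines.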
NetOdyssey: a framework for real-time windowed analysis of network traffic
Traffic monitoring and analysis is of critical importance for managing and designing modern computer networks, and nowadays constitutes a very active research field. In most of their studies, researchers use techniques and tools that follow a statistical approach to obtain deeper knowledge of traffic behaviour. Network administrators also find great value in statistical analysis tools. Many of those tools return similar metrics calculated for common properties of network packets. This dissertation presents NetOdyssey, a framework for the statistical analysis of network traffic. One of the crucial points differentiating NetOdyssey from other analysis frameworks is its windowed analysis philosophy, which allows researchers seeking deeper knowledge about networks to look at traffic as if through a window. This approach is crucial to avoid the biasing effects of statistically analysing the traffic as a whole. Small fluctuations and irregularities in the network can now be analysed, because one is always looking through a window of fixed size, either in the number of observations or in their temporal duration. NetOdyssey can capture live traffic from a network card or from a pre-collected trace, thus allowing for real-time analysis or for delayed and repeated analysis. NetOdyssey has a modular architecture, making it possible for researchers with limited programming experience to create analysis modules which can be tweaked and easily shared among users of the framework. These modules are designed so that their implementation is optimised according to the windowed analysis philosophy behind NetOdyssey. This optimisation makes the analysis process independent of the size of the analysis window, because it only considers the observations entering and leaving the window. Besides presenting the framework, its architecture, and its validation, this dissertation also presents four analysis modules: average and standard deviation, entropy, auto-correlation, and Hurst parameter estimators. Each of these modules is presented and validated throughout the dissertation.
Fundação para a Ciência e a Tecnologia (FCT)
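The incremental-update idea described above can be made concrete with a short sketch: per-packet cost depends only on the observation entering and the one leaving the window, not on the window size. This is my own illustration, assuming a simple push interface; the real NetOdyssey module API may differ.

```python
# A window-size-independent average/standard-deviation module: running sums
# are adjusted only for the observation entering and the one leaving the
# window, giving O(1) work per packet regardless of window size.
from collections import deque
from math import sqrt


class RollingMeanStd:
    """Fixed-size window keeping running sums for O(1) mean/std updates."""

    def __init__(self, size: int):
        self.size = size
        self.window = deque()
        self.total = 0.0
        self.total_sq = 0.0

    def push(self, x: float) -> None:
        self.window.append(x)
        self.total += x
        self.total_sq += x * x
        if len(self.window) > self.size:      # evict the oldest observation
            old = self.window.popleft()
            self.total -= old
            self.total_sq -= old * old

    def mean(self) -> float:
        return self.total / len(self.window)

    def std(self) -> float:
        m = self.mean()
        return sqrt(max(self.total_sq / len(self.window) - m * m, 0.0))


stats = RollingMeanStd(size=3)
for packet_size in [1500, 40, 1500, 576]:
    stats.push(packet_size)
print(stats.mean(), stats.std())   # statistics over the last 3 packets only
```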
Location and routing optimization protocols supporting internet host mobility
PhD Thesis. With the popularity of portable computers and the proliferation of wireless networking interfaces, there is currently a great deal of interest in providing IP networking support for host mobility, using the Internet as a foundation for wireless networking. Most proposed solutions depend on a default route through the mobile host's home address, which makes for unnecessarily long routes. The major problem this gives rise to is that of finding an efficient location and routing method that allows datagrams to be delivered efficiently to moving destinations while limiting costly Internet-wide location updates as much as possible.

Two concepts, "local region" and "patron service", are introduced based on the locality features of host movement and packet traffic patterns. For each mobile host, the local region is a set of designated subnetworks within which the mobile host often moves, and the patrons are the hosts from which the majority of the traffic for the mobile host originates. By making use of the hierarchical addressing and routing structure of the Internet, the two concepts are used to confine the effects of a host's movement, so location updates are sent only to a designated host-moving area and to those hosts which are most likely to call again, thus providing nearly optimal routing for most communication.

The proposed scheme was implemented as an IP extension using a network simulator and evaluated from a system-performance point of view. The results show a significant reduction in accumulated communication time, along with improved datagram tunnelling, compared with the extra location overhead incurred. In addition, a comparison with another scheme shows that ours is more effective in both location updating and routing efficiency. The scheme offers improved network and host scalability by isolating local movement from the rest of the world, and provides a convenient point at which to perform administrative functions.
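A schematic sketch may help fix the update rule the abstract implies: moves within the local region update only the region's registry and the patrons, while leaving the region triggers a wider update. All names and the registry structure below are hypothetical illustrations, not the thesis's actual protocol messages.

```python
# Illustrative update logic for the "local region" / "patron service" idea:
# confine location updates to the region and frequent correspondents, and
# fall back to a wider update only when the host leaves its local region.

class MobileHost:
    def __init__(self, home_agent, local_region, patrons):
        self.home_agent = home_agent            # always knows the current region
        self.local_region = set(local_region)   # subnets the host often visits
        self.patrons = patrons                  # hosts sending most of the traffic

    def move_to(self, subnet, care_of_address):
        updates = []
        if subnet in self.local_region:
            # Intra-region move: confine the update to the region's registry.
            updates.append(("region-registry", care_of_address))
        else:
            # Leaving the region: a wider (home-agent) update is unavoidable.
            updates.append((self.home_agent, care_of_address))
        # Patrons are likely to call again, so they always learn the new address.
        updates += [(p, care_of_address) for p in self.patrons]
        return updates


host = MobileHost("home.example.net", ["subnet-a", "subnet-b"], ["patron-1"])
print(host.move_to("subnet-a", "10.0.0.7"))   # region-local update only
print(host.move_to("subnet-z", "10.9.9.9"))   # wider update on region exit
```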
Computing infrastructure issues in distributed communications systems: a survey of operating system transport system architectures
The performance of distributed applications (such as file transfer, remote login, teleconferencing, full-motion video, and scientific visualization) is influenced by several factors that interact in complex ways. In particular, application performance is significantly affected by both communication infrastructure factors and computing infrastructure factors. Communication infrastructure factors include channel speed, bit-error rate, and congestion at intermediate switching nodes. Computing infrastructure factors include (among other things) both protocol processing activities (such as connection management, flow control, error detection, and retransmission) and general operating system factors (such as memory latency, CPU speed, interrupt and context-switching overhead, process architecture, and message buffering). Due to a several-orders-of-magnitude increase in network channel speed and an increase in application diversity, performance bottlenecks are shifting from the network factors to the transport system factors.

This paper defines an abstraction called an "Operating System Transport System Architecture" (OSTSA) that is used to classify the major components and services in the computing infrastructure. End-to-end network protocols such as TCP, TP4, VMTP, XTP, and Delta-t typically run on general-purpose computers, where they utilize various operating system resources such as processors, virtual memory, and network controllers. The OSTSA provides services that integrate these resources to support distributed applications running on local and wide area networks.

A taxonomy is presented to evaluate OSTSAs in terms of their support for protocol processing activities. We use this taxonomy to compare and contrast five general-purpose commercial and experimental operating systems: System V UNIX, BSD UNIX, the x-kernel, Choices, and Xinu.