Deep Space Network information system architecture study
The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope extends from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-system technologies, such as computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: a unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.
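To make the "dedicated telemetry processors for each receiver" feature concrete, the following is a minimal sketch of a per-receiver level 0 pipeline; the frame fields, class names, and ordering policy are illustrative assumptions, not the study's actual design.

```python
from dataclasses import dataclass, field
import time

@dataclass
class TelemetryFrame:
    """Raw frame captured at a DSN front-end receiver (hypothetical format)."""
    receiver_id: str
    sequence: int
    payload: bytes
    capture_time: float = field(default_factory=time.time)

class Level0Processor:
    """One dedicated processor per receiver: collects frames and produces
    level 0 (raw, time-tagged, sequence-ordered) records with no science
    processing applied."""
    def __init__(self, receiver_id: str):
        self.receiver_id = receiver_id
        self.records: list[TelemetryFrame] = []

    def ingest(self, frame: TelemetryFrame) -> None:
        if frame.receiver_id != self.receiver_id:
            raise ValueError("frame routed to wrong processor")
        self.records.append(frame)

    def level0(self) -> list[TelemetryFrame]:
        # Level 0 data: raw frames restored to sequence order, nothing more.
        return sorted(self.records, key=lambda f: f.sequence)
```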
Datacenter Traffic Control: Understanding Techniques and Trade-offs
Datacenters provide cost-effective and flexible access to the scalable compute and storage resources that today's cloud computing needs. A typical datacenter is made up of thousands of servers connected by a large network and usually managed by a single operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, datacenter networks must be used effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance.
In this tutorial paper, we review the general architecture of datacenter networks, the various topologies proposed for them, their traffic properties, and the general traffic control challenges and objectives in datacenters. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating traffic control mechanisms. We discuss various characteristics of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems.
Comment: Accepted for publication in IEEE Communications Surveys & Tutorials.
The Database Query Support Processor (QSP)
The number and diversity of databases available to users continues to increase dramatically. Currently, the trend is toward decentralized, client-server architectures that (on the surface) are less expensive to acquire, operate, and maintain than information architectures based on centralized, monolithic mainframes. The database query support processor (QSP) effort evaluates the performance of a network-level, heterogeneous database access capability. Air Force Materiel Command's Rome Laboratory has developed an approach, based on ANSI standard X3.138-1988, 'The Information Resource Dictionary System (IRDS)', that provides seamless access to heterogeneous databases through extensions to data dictionary technology. To successfully query a decentralized information system, users must know what data are available from which source, or have the knowledge and system privileges necessary to find out. Privacy and security considerations prohibit free and open access to every information system in every network. Even in completely open systems, the time required to locate relevant data (in systems of any appreciable size) would be better spent analyzing the data, assuming the original question was not forgotten. Extensions to data dictionary technology have the potential to more fully automate the search for and retrieval of relevant data in a decentralized environment. Substantial amounts of time and money could be saved by not having to teach users what data reside in which systems and how to access each of those systems. Information describing data and how to get it could be removed from the application and placed in a dedicated repository where it belongs. The result is simplified applications that are less brittle and less expensive to build and maintain. Software technology providing the required functionality is off the shelf. The key difficulty is in defining the metadata required to support the process. The database query support processor effort will provide quantitative data on the amount of effort required to implement an extended data dictionary at the network level, add new systems, and adapt to changing user needs, and will provide sound estimates of operations and maintenance costs and savings.
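To illustrate the data dictionary idea, here is a minimal sketch of a repository that tells applications where a logical data element lives, so source knowledge moves out of the application; all system names, hosts, and fields are hypothetical, not the IRDS schema.

```python
# Hypothetical extended data dictionary: maps logical data elements to the
# systems that hold them, so applications need not hard-code sources.
DATA_DICTIONARY = {
    "aircraft_part_number": {"system": "LOGISTICS_DB", "table": "parts",
                             "access": "sql", "host": "logdb.example.mil"},
    "maintenance_record":   {"system": "DEPOT_DB", "table": "maint_hist",
                             "access": "sql", "host": "depot.example.mil"},
}

def locate(element: str) -> dict:
    """Return source metadata for a logical data element, or raise."""
    try:
        return DATA_DICTIONARY[element]
    except KeyError:
        raise LookupError(f"no registered source for {element!r}")

# The application asks the repository where the data lives, rather than
# embedding that knowledge itself.
src = locate("maintenance_record")
print(f"query {src['table']} on {src['host']} via {src['access']}")
```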
IT and Multi-layer Online Resource Allocation and Offline Planning in Metropolitan Networks
Metropolitan networks are undergoing a major technological breakthrough leveraging the capabilities of software-defined networking (SDN) and network function virtualization (NFV). NFV permits the deployment of virtualized network functions (VNFs) on commodity hardware appliances, which can be combined with the flexibility and programmability that SDN brings to the network infrastructure. SDN/NFV-enabled networks require decision-making on two time scales: short-term online resource allocation and mid-to-long-term offline planning. In this paper, we first tackle the dimensioning of SDN/NFV-enabled metropolitan networks, paying special attention to the role that latency plays in capacity planning. We focus on a specific use case: the metropolitan network that covers the Murcia-Alicante Spanish regions. Then, we propose a latency-aware multilayer service-chain allocation (LA-ML-SCA) algorithm to explore a range of maximum latency requirements and their impact on the resources needed to dimension the metropolitan network. We observe that design costs increase for low latency requirements, as more data center facilities need to be spread out to get closer to the network edge, reducing the economies of scale of the IT infrastructure. Subsequently, we review our recent joint computation of multi-site VNF placement and multilayer resource allocation in the deployment of a network service in a metro network. Specifically, a set of subroutines contained in LA-ML-SCA are experimentally validated in a network optimization-as-a-service architecture that assists an Open Source MANO instance, virtual infrastructure managers and WAN controllers in a metro network test-bed.
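As a rough illustration of latency-aware placement (not the authors' LA-ML-SCA algorithm), the sketch below greedily hosts each VNF in the cheapest data center whose latency fits a per-chain budget; all names and numbers are invented.

```python
# Candidate data centers with round-trip latency (ms) from an edge node and
# a per-VNF hosting cost; values are purely illustrative.
DCS = [
    {"name": "edge_dc",  "latency_ms": 1.0,  "cost": 5.0},
    {"name": "metro_dc", "latency_ms": 4.0,  "cost": 2.0},
    {"name": "core_dc",  "latency_ms": 12.0, "cost": 1.0},
]

def place_chain(num_vnfs: int, latency_budget_ms: float) -> list[dict]:
    """Greedily host each VNF in the cheapest DC whose latency fits the
    remaining per-chain latency budget."""
    placement, remaining = [], latency_budget_ms
    for _ in range(num_vnfs):
        feasible = [dc for dc in DCS if dc["latency_ms"] <= remaining]
        if not feasible:
            raise RuntimeError("latency budget exhausted; deploy more edge DCs")
        choice = min(feasible, key=lambda dc: dc["cost"])
        placement.append(choice)
        remaining -= choice["latency_ms"]
    return placement

# Tight budgets force VNFs toward (more expensive) edge facilities, mirroring
# the cost increase the paper observes for low-latency services.
print([dc["name"] for dc in place_chain(num_vnfs=2, latency_budget_ms=6.0)])
```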
Integration of unidirectional technologies into wireless back-haul architecture
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Back-haul infrastructures of today's wireless operators must support the triple-play services demanded by the market or regulatory bodies. To cope with increasing capacity demand, the EU FP7 project CARMEN has developed a cost-effective heterogeneous multi-radio wireless back-haul architecture, which may also leverage the native multicast capabilities of broadcast technologies such as DVB-T to off-load high-bandwidth broadcast content delivery. However, the integration of such unidirectional technologies into a packet-switched architecture requires careful consideration. The contribution of this thesis is the investigation, design and evaluation of protocols and mechanisms facilitating the integration of such unidirectional technologies into the wireless back-haul architecture so that they can be configured and utilized by the spectrum and capacity optimization modules. This integration mainly concerns the control plane and, in particular, the aspects related to resource and capability descriptions; neighborhood, link and Multiprotocol Label Switching (MPLS) Label-Switched Path (LSP) monitoring; unicast and multicast LSP signalling; and topology forming and maintenance. During the course of this study we have analyzed the problem space, proposed solutions to the resulting research questions and evaluated our approach. Our results show that the now Unidirectional Technology (UDT)-aware architecture can readily employ UDTs to distribute, for example, broadcast content.
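To illustrate the core control-plane issue, the toy sketch below models a unidirectional DVB-T link as a directed edge, so path computation naturally finds an asymmetric return route; the topology and node names are hypothetical, not from the thesis.

```python
from collections import deque

# Directed adjacency: the DVB-T edge hub->edge1 has no reverse entry, so
# feedback from edge1 must travel over bidirectional radio links via edge2.
TOPOLOGY = {
    "hub":   ["edge1", "edge2"],   # hub->edge1 is the unidirectional DVB-T link
    "edge1": ["edge2"],
    "edge2": ["hub", "edge1"],
}

def find_path(src: str, dst: str) -> list[str] | None:
    """Breadth-first search over directed links, as an LSP path computation
    must do once unidirectional links are described correctly."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in TOPOLOGY.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("hub", "edge1"))   # ['hub', 'edge1']: uses the DVB-T link
print(find_path("edge1", "hub"))   # ['edge1', 'edge2', 'hub']: asymmetric return
```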
Understanding Internet topology: principles, models, and validation
Building on a recent effort that combines a first-principles approach to modeling router-level connectivity with a more pragmatic use of statistics and graph theory, we show in this paper that an improved understanding of the Internet's physical infrastructure is possible by viewing its physical connectivity as an annotated graph that delivers raw connectivity and bandwidth to the upper layers in the TCP/IP protocol stack, subject to practical constraints (e.g., router technology) and economic considerations (e.g., link costs). More importantly, by relying on data from Abilene, a Tier-1 ISP, and the Rocketfuel project, we provide empirical evidence in support of the proposed approach and its consistency with networking reality. To illustrate its utility, we: 1) show that our approach provides insight into the origin of the high variability in measured or inferred router-level maps; 2) demonstrate that it easily accommodates additional network design objectives (e.g., robustness to router failure); and 3) discuss how it complements ongoing community efforts to reverse-engineer the Internet.
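A minimal sketch of the annotated-graph view, assuming a simple router-technology constraint (aggregate interface bandwidth bounded by switching capacity); all names and figures are invented for illustration, not from the paper.

```python
# Edges annotated with bandwidth (Gb/s); routers annotated with a
# switching-capacity budget that their total interface bandwidth must respect.
EDGES = [
    ("core1", "core2", 40.0),
    ("core1", "agg1", 10.0),
    ("agg1", "access1", 1.0),
    ("agg1", "access2", 1.0),
]
CAPACITY = {"core1": 60.0, "core2": 60.0, "agg1": 12.0,
            "access1": 1.0, "access2": 1.0}

def feasible(edges, capacity) -> bool:
    """Router-technology constraint: the sum of bandwidths on a router's
    interfaces must not exceed its switching capacity."""
    load: dict[str, float] = {}
    for a, b, bw in edges:
        load[a] = load.get(a, 0.0) + bw
        load[b] = load.get(b, 0.0) + bw
    return all(load[r] <= capacity[r] for r in load)

print(feasible(EDGES, CAPACITY))  # True: this annotated graph is realizable
```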
An integrated transport solution to big data movement in high-performance networks
Extreme-scale e-Science applications in various domains, such as earth science and high energy physics, at multiple national institutions within the U.S. are generating colossal amounts of data, now frequently termed "big data". These data must be stored, managed, and moved to different geographical locations for distributed data processing and analysis. Such big data transfers require stable and high-speed network connections, which are not readily available in traditional shared IP networks such as the Internet. High-performance networking technologies and services featuring high bandwidth and advance reservation are being rapidly developed and deployed across the nation and around the globe to support such scientific applications. However, these networking technologies and services have not been fully utilized, mainly because: i) the use of these technologies and services often requires considerable domain knowledge, and many application users are not even aware of their existence; and ii) the end-to-end data transfer performance largely depends on the transport protocol being used on the end hosts. High-speed network paths with reserved bandwidth in high-performance networks have shifted the data transfer bottleneck from the network segments of traditional IP networks to the end hosts, which most existing transport protocols are not well suited to handle.
In this dissertation, an integrated transport solution is proposed in support of data- and network-intensive applications in various science domains. This solution integrates three major components, i.e., i) transport-support workflow optimization, ii) transport profile generation, and iii) transport protocol design, into a unified framework. Firstly, a class of transport-support workflow optimization problems is formulated, where an appropriate set of resources and services is selected to compose the best transport-support workflow to meet a user's data transfer request in terms of various performance requirements. Secondly, a transport profiler named Transport Profile Generator (TPG), and its extended and accelerated version named FastProf, are designed and implemented to characterize and enhance the end-to-end data transfer performance of a selected transport method over an established network path. Finally, several approaches based on rate and error threshold control are proposed to design a suite of data transfer protocols specifically tailored for big data transfer over dedicated connections. The proposed integrated transport solution is implemented and evaluated in: i) a local testbed with a single 10 Gb/s back-to-back connection and dual 10 Gb/s NIC-to-NIC connections; and ii) several wide-area networks with 10 Gb/s long-haul connections at collaborative sites including Oak Ridge National Laboratory, Argonne National Laboratory, and the University of Chicago.
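As a sketch of rate control with an error threshold, in the spirit of (but not identical to) the dissertation's protocols, the function below ramps the sending rate while measured loss stays under a threshold and backs off multiplicatively otherwise; all constants are illustrative assumptions.

```python
def next_rate(rate_gbps: float, loss_fraction: float,
              loss_threshold: float = 0.001,
              step_gbps: float = 0.5, backoff: float = 0.875) -> float:
    """Probe upward while measured loss stays under the threshold; back off
    multiplicatively once the receiver reports excess loss."""
    if loss_fraction <= loss_threshold:
        return rate_gbps + step_gbps       # link is clean: claim more bandwidth
    return max(rate_gbps * backoff, 0.1)   # loss spike: retreat, keep a floor

# On a dedicated 10 Gb/s path the sender ramps up until host-side limits
# (not the network) start inducing loss, then hovers near that point.
rate = 1.0
for loss in [0.0, 0.0, 0.0005, 0.02, 0.0004]:
    rate = next_rate(rate, loss)
print(f"{rate:.2f} Gb/s")
```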
Beyond 5G Domainless Network Operation enabled by Multiband: Toward Optical Continuum Architectures
Both public and private innovation projects are targeting the design, prototyping and demonstration of a novel end-to-end integrated packet-optical transport architecture based on Multi-Band (MB) optical transmission and switching networks. Essentially, MB is expected to be the next technological evolution to deal with the traffic demand and service requirements of 5G mobile networks, and beyond, in the most cost-effective manner. Thanks to MB transmission, classical telco architectures segmented into hierarchical levels and domains can move toward an optical network continuum, where edge access nodes are all-optically interconnected with top-hierarchy nodes interfacing Content Delivery Networks (CDN) and Internet Exchange Points (IXP). This article overviews the technological challenges and innovation requirements to enable such an architectural shift of telco networks, from the perspective of both the data plane and the control and management planes.
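A back-of-the-envelope sketch of why MB transmission multiplies fiber capacity: capacity scales with usable spectrum times spectral efficiency. The band widths and per-band efficiencies below are rough, assumed figures, not values from the article.

```python
BANDS_THZ = {"C": 4.8, "L": 4.8, "S": 5.0}      # approximate usable spectrum
SPECTRAL_EFF = {"C": 6.0, "L": 5.5, "S": 4.5}   # assumed bits/s/Hz per band

def capacity_tbps(bands: list[str]) -> float:
    """Aggregate fiber capacity (Tb/s) over the selected bands."""
    return sum(BANDS_THZ[b] * SPECTRAL_EFF[b] for b in bands)

print(f"C only: {capacity_tbps(['C']):.0f} Tb/s")
print(f"C+L+S:  {capacity_tbps(['C', 'L', 'S']):.0f} Tb/s")
```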