1,219 research outputs found

    THE IDENTIFICATION OF MAJOR FACTORS IN THE DEPLOYMENT OF A SCIENCE DMZ AT SMALL INSTITUTIONS

    The Science DMZ is a network research tool offering superior transmission of large science data sets between two locations. Through a network design that places the Science DMZ at the edge of the campus network, the Science DMZ defines a network path that avoids packet-inspecting devices (firewalls, packet shapers) and produces near line-rate transmission results for large data sets between institutions. Small institutions of higher education (public and private small colleges) seeking to participate in data exchange with other institutions are inhibited in the construction of Science DMZs by the high costs of deployment. While the National Science Foundation made 18 awards in the Campus Cyberinfrastructure program to investigate the designs, methods, costs, and results of deploying Science DMZs at small institutions, there is no cohort view of the success factors and options that must be considered in developing the most impactful solution for any given small-institution environment. This research examined the decisions and results of the 18 NSF Science DMZ projects, recording a series of major factors in the small-institution deployments and establishing the Science DMZ Capital Framework (SCF), a model to be considered prior to starting a small-institution Science DMZ project.
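To make the motivation concrete, the following is a minimal back-of-the-envelope sketch of why near line-rate paths matter for large data sets. The data-set size and the two throughput figures are hypothetical examples for illustration, not measurements from the study.

```python
# Illustrative only: rough transfer-time arithmetic for a large data set,
# comparing a firewall-limited campus path with a near line-rate Science DMZ path.
# The rates and data size below are hypothetical examples, not figures from the study.

def transfer_hours(data_terabytes: float, effective_gbps: float) -> float:
    """Hours needed to move `data_terabytes` at a sustained `effective_gbps`."""
    bits = data_terabytes * 8e12            # 1 TB = 8e12 bits (decimal units)
    seconds = bits / (effective_gbps * 1e9)
    return seconds / 3600

if __name__ == "__main__":
    dataset_tb = 50                          # hypothetical science data set
    for label, gbps in [("through campus firewall (assumed ~1 Gb/s)", 1.0),
                        ("Science DMZ near line rate (assumed ~9.5 Gb/s on 10 GbE)", 9.5)]:
        print(f"{label}: {transfer_hours(dataset_tb, gbps):.1f} hours")
```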

    A multidisciplinary approach to the development of low-cost high-performance lightwave networks

    Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed, low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scalability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.

    A Load-Based Multihop Routing Using Whitebox Routers in Backbone Network

    Wireless mesh networks built from white box routers are multi-hop networks: clients and routers form the network, with the routers serving as its backbone and each equipped with multiple radio interfaces. For wireless mesh networks (WMNs) to provide backbone support, it is critical to distribute the load uniformly and to prevent interference. We introduce Load-based Multihop-White Box Routing (L-MWBR), a routing protocol for multi-radio backbone networks. The protocol accounts for variations in transmit rates, packet loss ratios, intra- and inter-flow interference, and traffic volumes. In multi-radio networks, the L-MWBR metric is useful for locating paths that are superior in terms of load balancing and for lowering the interference that occurs between multiple flows and within individual flows. Simulation results show that L-MWBR performs substantially better than the various existing routing metrics for multi-radio backbone networks.
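The abstract does not give the L-MWBR formula, so the sketch below is only a hypothetical illustration of what a load- and interference-aware path metric of the kind described (transmit rate, loss ratio, load, channel diversity) might look like; the field names, weights, and penalty rule are assumptions, not the actual L-MWBR definition.

```python
# Hypothetical load- and interference-aware path metric, in the spirit of the
# abstract. NOT the actual L-MWBR formula, which the abstract does not specify.

from dataclasses import dataclass

@dataclass
class Link:
    rate_mbps: float      # physical transmit rate
    loss_ratio: float     # packet loss ratio in [0, 1)
    load: float           # normalized offered load in [0, 1]
    channel: int          # radio channel used by this hop

def link_cost(link: Link, packet_bits: int = 12000) -> float:
    """Expected-transmission-time-style cost, inflated by the link's load."""
    ett = packet_bits / (link.rate_mbps * 1e6 * (1.0 - link.loss_ratio))
    return ett * (1.0 + link.load)

def path_cost(links: list[Link]) -> float:
    """Sum hop costs and penalize consecutive hops on the same channel
    (a crude stand-in for intra-flow interference)."""
    cost = sum(link_cost(l) for l in links)
    same_channel_pairs = sum(1 for a, b in zip(links, links[1:]) if a.channel == b.channel)
    return cost * (1.0 + 0.5 * same_channel_pairs)

# Example: a channel-diverse two-hop path vs. the same path on one channel
p1 = [Link(54, 0.05, 0.2, channel=1), Link(54, 0.05, 0.3, channel=6)]
p2 = [Link(54, 0.05, 0.2, channel=1), Link(54, 0.05, 0.3, channel=1)]
print(path_cost(p1) < path_cost(p2))   # True: channel-diverse path preferred
```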

    Power Saving Strategies and Technologies in Network Equipment Opportunities and Challenges, Risk and Rewards

    Drawing from today's best-in-class solutions, we identify power-saving strategies that have succeeded in the past and look forward to new ideas and paradigms. We strongly believe that designing energy-efficient network equipment can be compared to building sports cars: task-oriented, focused, and fast. However, unlike track-bound sports cars, ultra-fast and purpose-built silicon yields better energy efficiency when compared to more generic family-sedan designs that mitigate go-to-market risks by being the masters of many tasks. Thus, we demonstrate that the best opportunities for power savings come via protocol simplification, best-of-breed technology, and silicon and software optimization, to achieve the least amount of processing necessary to move packets. We also look to the future of networking from a new angle, where energy efficiency and environmental concerns are viewed as fundamental design criteria and forces that need to be harnessed to continually create more powerful networking equipment. Comment: IEEE SAINT 2008 proceedings, July 28th - Aug 1st 2008, PCFNS workshop

    Container network functions: bringing NFV to the network edge

    In order to cope with the increasing network utilization driven by new mobile clients, and to satisfy demand for new network services and performance guarantees, telecommunication service providers are exploiting virtualization over their networks by implementing network services in virtual machines, decoupled from legacy hardware-accelerated appliances. This effort, known as NFV, reduces OPEX and provides new business opportunities. At the same time, next-generation mobile, enterprise, and IoT networks are introducing the concept of computing capabilities being pushed to the network edge, in close proximity to the users. However, the heavy footprint of today's NFV platforms prevents them from operating at the network edge. In this article, we identify the opportunities of virtualization at the network edge and present Glasgow Network Functions (GNF), a container-based NFV platform that runs and orchestrates lightweight container VNFs, saving core network utilization and providing lower latency. Finally, we demonstrate three useful examples of the platform: IoT DDoS remediation, on-demand troubleshooting for telco networks, and supporting roaming of network functions.
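As a rough illustration of the container-VNF idea (not the GNF platform's actual API, which the abstract does not describe), the sketch below launches a lightweight container as a network function using the standard Docker CLI. The image name, container name, and the "firewall" role are hypothetical placeholders.

```python
# Minimal illustration of container-based VNFs at the edge: launch a
# lightweight container to act as a network function via the standard
# Docker CLI. NOT the GNF platform's API; image and names are placeholders.

import subprocess

def start_container_vnf(name: str, image: str) -> str:
    """Start a detached container sharing the host network stack and
    return its container ID."""
    out = subprocess.run(
        ["docker", "run", "-d", "--rm", "--name", name,
         "--network", "host", "--cap-add", "NET_ADMIN", image],
        check=True, capture_output=True, text=True)
    return out.stdout.strip()

def stop_container_vnf(name: str) -> None:
    subprocess.run(["docker", "stop", name], check=True)

if __name__ == "__main__":
    # Hypothetical lightweight firewall VNF image
    cid = start_container_vnf("edge-firewall-vnf", "example/firewall-vnf:latest")
    print(f"started VNF container {cid[:12]}")
    stop_container_vnf("edge-firewall-vnf")
```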

    Distributed Software Router Management

    With the stunning success of the Internet, information and communication technologies have diffused widely, attracting ever more users to the Internet and in turn accelerating traffic growth. This growth rate does not seem likely to slow down in the near future. Networking devices support this traffic growth by offering ever-increasing transmission and switching speeds, mostly due to the technological advancement of microelectronics granted by Moore's Law. However, the comparable growth rates of Internet traffic and electronic devices suggest that system capacity will become a crucial factor in the years ahead. Besides this growth-rate challenge, networking devices have always been characterized by the development of proprietary architectures, which results in incompatible equipment and architectures, especially in terms of configuration and management procedures. The major drawback of this industrial practice is that the devices lack flexibility and programmability, which is one of the sources of ossification in today's Internet. Scaling or modifying networking devices, particularly routers, for a desired function therefore requires flexible and programmable devices. Software routers (SRs) based on personal computers (PCs) are among the devices that satisfy the flexibility and programmability criteria. Furthermore, the availability of a large number of open-source networking applications, for both the data and control planes, and the low cost of PCs driven by the economies of scale of the PC market make software routers an appealing alternative to expensive proprietary networking devices. That is, while software routers have the advantage of being flexible, programmable, and low cost, proprietary networking equipment is usually expensive and difficult to extend, program, or otherwise experiment with, because it relies on specialized and closed hardware and software.

Despite their advantages, however, software routers are not without limitations. The objections to software routers include limited performance, scalability problems, and a lack of advanced functionality. These limitations arise from the fact that a single server, limited by its PCI bus width and CPU, is given the responsibility of processing a large amount of packets. Offloading some packet-processing tasks performed by the CPU to other processors, such as GPUs in the same PC or external CPUs, is a viable approach to overcoming some of these limitations. In line with this, a distributed Multi-Stage Software Router (MSSR) architecture has been proposed to overcome both the performance and the scalability issues of single-PC software routers. The architecture has three stages: i) front-end layer-2 load balancers (LBs), open-software or open-hardware based, that act as interfaces to the external networks and distribute IP packets to ii) back-end personal computers (BEPCs), also named back-end routers in this thesis, that provide IP routing functionality, and iii) an interconnection network, based on Ethernet switches, that connects the two stages. Performance scaling of the architecture is achieved by increasing the redundancy of the routing-functionality stage, where multiple servers are given the coordinated task of routing packets. The scalability problem related to the number of interfaces per PC is also tackled in the MSSR by bundling the interfaces of two or more PCs through a switch at the front-end stage.
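The sketch below illustrates, in simplified form, the front-end stage spreading traffic over the back-end routing stage. A hash of the flow identifier keeps packets of one flow on the same back-end router; the field names and the hashing policy are hypothetical (the thesis's load balancers operate at layer 2), so this is only an illustration of the dispatch idea.

```python
# Illustrative sketch of a front-end stage distributing packets across the
# back-end routing stage of an MSSR-like architecture. Hypothetical flow
# fields and hashing policy; not the thesis's layer-2 load-balancing scheme.

import zlib
from typing import NamedTuple

class Packet(NamedTuple):
    src_ip: str
    dst_ip: str
    proto: int

BACK_END_ROUTERS = ["be-router-1", "be-router-2", "be-router-3"]

def pick_back_end(pkt: Packet) -> str:
    """Map a packet to a back-end router by hashing its flow identifier."""
    flow_id = f"{pkt.src_ip}|{pkt.dst_ip}|{pkt.proto}".encode()
    return BACK_END_ROUTERS[zlib.crc32(flow_id) % len(BACK_END_ROUTERS)]

if __name__ == "__main__":
    pkts = [Packet("10.0.0.1", "192.0.2.7", 6),
            Packet("10.0.0.2", "198.51.100.9", 17),
            Packet("10.0.0.1", "192.0.2.7", 6)]   # same flow -> same back end
    for p in pkts:
        print(p.src_ip, "->", pick_back_end(p))
```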
The overall architecture is controlled and managed by a control entity named the Virtual Control Processor (virtualCP), which runs on a selected back-end router and communicates through the DIST protocol. This entity is also responsible for hiding the internal details of the multistage software router architecture, so that the whole architecture appears to external network devices as a single device. However, building a flexible and scalable high-performance MSSR architecture requires a large number of independently, but coordinately, running internal components. As the number of internal devices increases, so does the control and management complexity of the architecture. In addition, the redundant components used to scale performance mean power is wasted at low loads. These challenges have to be addressed to make the multistage software router a functional and competitive network device. Consequently, the contribution of this thesis is an MSSR centralized management system that deals with these challenges. The management system has two broadly classified sub-systems: i) power management, a module responsible for addressing the energy inefficiency of the multistage software router architecture, and ii) unified information management, a module responsible for creating a unified management information base so that, from a management-information perspective, the distributed multistage router architecture appears as a single device to the external network.

The power management module tries to minimize the energy consumption of the architecture by resizing it to match the traffic demand. During low-load periods only a few components, especially in the routing-functionality stage, are required to readily provide service, so it is wise to devise a mechanism that puts idle components into a low-power mode to save energy. In this thesis an optimal algorithm and two heuristic algorithms, on-line and off-line, are proposed to adapt the architecture to an input load demand. We demonstrate that the optimal algorithm, besides having a scalability issue, is an off-line approach that introduces service disruption and delay during the architecture reconfiguration period. To resolve these issues, heuristic solutions are proposed and their performance is measured against the optimal solution. Results show that the heuristics closely approximate the optimal solution and that their use saves up to 57.44% of the total architecture energy consumption during low-load periods. The on-line algorithm is superior among the heuristic solutions, as it is less disruptive and incurs minimal service delay. Furthermore, the thesis shows that the proposed algorithms become even more efficient if the architecture is designed with energy as one of the design parameters. To this end, three different approaches to designing an MSSR architecture are proposed and their energy-saving efficiency is evaluated, both with respect to the optimal solution and against other, similar cluster design approaches.
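As a minimal sketch of the load-adaptive resizing idea described above (an illustrative heuristic with hypothetical capacities and loads, not the thesis's optimal or on-line/off-line algorithms), one can decide how many back-end routers to keep powered on for a given offered load:

```python
# Minimal sketch of load-adaptive resizing: keep only as many back-end routers
# active as the offered load requires, and put the rest into a low-power state.
# Illustrative heuristic with hypothetical numbers, not the thesis's algorithms.

import math

def active_backends(offered_load_gbps: float,
                    per_router_capacity_gbps: float,
                    total_routers: int,
                    headroom: float = 0.2) -> int:
    """Number of back-end routers to keep powered on, with some headroom
    so a sudden load increase can be absorbed while others wake up."""
    needed = offered_load_gbps * (1.0 + headroom) / per_router_capacity_gbps
    return max(1, min(total_routers, math.ceil(needed)))

if __name__ == "__main__":
    # Hypothetical cluster: 8 back-end routers, ~2 Gb/s of forwarding each
    for load in (0.5, 3.0, 9.0, 14.0):
        n = active_backends(load, per_router_capacity_gbps=2.0, total_routers=8)
        print(f"load {load:4.1f} Gb/s -> keep {n} back-end router(s) active")
```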
Unlike a single device, the multistage software router is composed of independently running components. This means that the MSSR management information is distributed across the architecture, since individual components register their own management information; the MSSR internal devices nevertheless work cooperatively to appear as a single network device to the external network. The MSSR architecture, as a single device, therefore requires its own management information base, built from the management information bases dispersed among its internal components. This thesis proposes a mechanism to collect and organize this distributed management information and create a single management information base representing the whole architecture. Accordingly, the existing SNMP management communication model has been modified to fit the distributed multi-stage router architecture, and a possible management architecture is proposed. In compiling the management information, different schemes have been adopted to deal with the different SNMP management information variables. Scalability analysis shows that the proposed management system scales well and does not pose a threat to the overall scalability of the architecture.
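The following toy sketch illustrates only the general idea of a unified management information base: per-device counters gathered from the internal components are combined into one view representing the whole MSSR as a single device. The variable names and the additive aggregation rule are assumptions for illustration, not the thesis's actual SNMP schemes.

```python
# Toy illustration of unified management information: combine per-device
# counters from internal components into one view for the whole MSSR.
# Hypothetical variables and rules; not the thesis's actual SNMP scheme.

from collections import defaultdict

# Counters as they might be polled (e.g., via SNMP) from each internal device
per_device_counters = {
    "load-balancer-1": {"ifInOctets": 9_100_000, "ifOutOctets": 9_050_000},
    "back-end-1":      {"ifInOctets": 4_600_000, "ifOutOctets": 4_580_000},
    "back-end-2":      {"ifInOctets": 4_500_000, "ifOutOctets": 4_470_000},
}

def build_unified_view(devices: dict[str, dict[str, int]]) -> dict[str, int]:
    """Sum additive counters so the MSSR reports totals as one router.
    Non-additive variables (e.g., interface lists exposing only the
    external-facing ports) would need per-variable rules, omitted here."""
    unified: dict[str, int] = defaultdict(int)
    for counters in devices.values():
        for name, value in counters.items():
            unified[name] += value
    return dict(unified)

if __name__ == "__main__":
    print(build_unified_view(per_device_counters))
```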

    Doctor of Philosophy

    As the base of the software stack, system-level software is expected to provide efficient and scalable storage, communication, security, and resource management functionalities. However, there are many computationally expensive functionalities at the system level, such as encryption, packet inspection, and error correction. All of these require substantial computing power. What's more, today's application workloads have entered gigabyte and terabyte scales, which demand even more computing power. To meet the rapidly increasing computing power demand at the system level, this dissertation proposes using parallel graphics processing units (GPUs) in system software. GPUs excel at parallel computing, and also have a much faster development trend in parallel performance than central processing units (CPUs). However, system-level software was originally designed to be latency-oriented, whereas GPUs are designed for long-running computation and large-scale data processing, which are throughput-oriented. Such a mismatch makes it difficult to fit system-level software to GPUs. This dissertation presents generic principles of system-level GPU computing developed during the process of creating our two general frameworks for integrating GPU computing in storage and network packet processing. The principles are generic design techniques and abstractions for dealing with common system-level GPU computing challenges. These principles have been evaluated in concrete cases, including storage and network packet processing applications that have been augmented with GPU computing. The significant performance improvement found in the evaluation shows the effectiveness and efficiency of the proposed techniques and abstractions. This dissertation also presents a literature survey of the relatively young system-level GPU computing area, to introduce the state of the art in both applications and techniques, and also their future potential.
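The abstract highlights the mismatch between latency-oriented system software and throughput-oriented GPUs. One common way to bridge this gap (stated here as an assumption, not necessarily one of the dissertation's techniques) is to batch many small requests before launching a single data-parallel operation. The sketch below uses NumPy as a stand-in for the GPU kernel so it stays self-contained and runnable.

```python
# Batching small requests to suit a throughput-oriented processor.
# NumPy stands in for the GPU kernel; the checksum task and batch size
# are hypothetical, chosen only to illustrate the batching pattern.

import numpy as np

BATCH_SIZE = 64
_pending: list[np.ndarray] = []

def checksum_kernel(batch: np.ndarray) -> np.ndarray:
    """Data-parallel work over the whole batch at once (GPU stand-in)."""
    return batch.sum(axis=1) & 0xFFFF

def submit(packet: np.ndarray) -> None:
    """Queue one packet; flush to the 'GPU' only when a full batch is ready."""
    _pending.append(packet)
    if len(_pending) >= BATCH_SIZE:
        flush()

def flush() -> None:
    global _pending
    if _pending:
        results = checksum_kernel(np.stack(_pending))
        print(f"processed batch of {len(results)} packets")
        _pending = []

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for _ in range(200):                       # simulate packet arrivals
        submit(rng.integers(0, 256, size=1500, dtype=np.int64))
    flush()                                    # drain the partial last batch
```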