
    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. Its scope is mainly focused on the virtualisation of resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his or her domain; in reality, however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree possible, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...).

    Currently, there are no standard definitions available for network virtualisation or its associated architectures. This deliverable therefore proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. The evaluation has been performed against an initial set of FEDERICA requirements; possible extensions of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project.

    Another important aspect when defining a new architecture is the set of user requirements: it is crucial that the resulting architecture fits the demands users may have. Since this deliverable was produced in parallel with the user contact process carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can configure to behave as software routers or end nodes, onto which they can load the protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
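
    The slice-to-substrate idea described above can be illustrated with a minimal sketch. All class and attribute names below are hypothetical and are not taken from the FEDERICA data model or tools; the only point shown is that a slice is a partition of physical capacity and that isolation means a slice cannot consume capacity the substrate no longer has.

    # Illustrative sketch only: hypothetical names, not the FEDERICA project's API.
    from dataclasses import dataclass, field

    @dataclass
    class PhysicalLink:
        name: str
        capacity_mbps: float          # total capacity of the substrate link
        reserved_mbps: float = 0.0    # bandwidth already handed out to slices

    @dataclass
    class Slice:
        """A user-visible virtual network mapped onto physical resources."""
        owner: str
        # virtual link name -> (physical link, bandwidth carved out for this slice)
        virtual_links: dict = field(default_factory=dict)

        def add_virtual_link(self, vname: str, plink: PhysicalLink, mbps: float) -> bool:
            # Isolation principle: a slice may only take capacity that is
            # still unreserved on the underlying physical link.
            if plink.reserved_mbps + mbps > plink.capacity_mbps:
                return False
            plink.reserved_mbps += mbps
            self.virtual_links[vname] = (plink, mbps)
            return True

    core = PhysicalLink("core-1", capacity_mbps=1000.0)
    s = Slice(owner="researcher-A")
    print(s.add_virtual_link("vlink-0", core, 400.0))   # True
    print(s.add_virtual_link("vlink-1", core, 700.0))   # False: would oversubscribe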

    Throughput Optimal On-Line Algorithms for Advanced Resource Reservation in Ultra High-Speed Networks

    Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for (1+epsilon)-bandwidth-augmented networks. The BatchLim algorithm returns the completion time of a connection immediately when a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close-to-optimal performance. The performance of the competitive algorithms is also compared to a greedy benchmark, both through analysis and simulation.
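
    The kind of admission check underlying advance reservation can be sketched in a few lines. The toy below reserves bandwidth on a single path over discrete time slots and returns the earliest feasible start; it is not the BatchAll or BatchLim algorithm (which schedule over multiple paths using multi-commodity flow arguments), only an illustration of the advance-reservation primitive, with made-up function names.

    # Toy advance reservation on one path over discrete time slots (illustrative only).
    def earliest_slot(free_bw, demand, duration):
        """First start slot where `demand` bandwidth is free for `duration`
        consecutive slots, or None if no such window exists."""
        for start in range(len(free_bw) - duration + 1):
            if all(free_bw[t] >= demand for t in range(start, start + duration)):
                return start
        return None

    def reserve(free_bw, demand, duration):
        start = earliest_slot(free_bw, demand, duration)
        if start is None:
            return None
        for t in range(start, start + duration):
            free_bw[t] -= demand
        return start

    capacity = [10.0] * 10                              # 10 slots, 10 Gb/s free in each
    print(reserve(capacity, demand=6.0, duration=3))    # -> 0
    print(reserve(capacity, demand=6.0, duration=3))    # -> 3 (first window after the earlier booking)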

    On-board B-ISDN fast packet switching architectures. Phase 2: Development. Proof-of-concept architecture definition report

    For the next-generation packet-switched communications satellite system with onboard processing and spot-beam operation, a reliable onboard fast packet switch is essential to route packets from different uplink beams to different downlink beams. The rapid emergence of point-to-multipoint services such as video distribution, together with the large demand for video conferencing, distributed data processing, and network management, makes the multicast function essential to a fast packet switch (FPS). The satellite's inherent broadcast capability gives the satellite network an advantage over the terrestrial network in providing multicast services. This report evaluates alternative multicast FPS architectures for onboard baseband switching applications and selects a candidate for subsequent breadboard development. Architecture evaluation and selection are based on the study performed in Phase 1, 'Onboard B-ISDN Fast Packet Switching Architectures', and on other switch architectures that have become commercially available as large-scale integration (LSI) devices.
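
    The multicast function itself amounts to label-driven replication inside the switch: a connection label is looked up and the packet is copied to every downlink-beam output the label maps to. The sketch below shows only that idea; the table layout and names are invented for illustration and are not drawn from the report's proof-of-concept design.

    # Illustrative multicast replication in a fast packet switch (hypothetical names).
    multicast_table = {
        # connection label -> set of downlink-beam output ports
        0x2A: {1, 4, 7},     # e.g. a video distribution connection
        0x3B: {2},           # ordinary point-to-point connection
    }

    def route_packet(label, payload, output_queues):
        """Replicate the packet onto every downlink-beam queue the label maps to."""
        for port in multicast_table.get(label, set()):
            output_queues.setdefault(port, []).append(payload)

    queues = {}
    route_packet(0x2A, b"frame-1", queues)
    print(sorted(queues))    # [1, 4, 7]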

    A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing

    The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost without the need to own any infrastructure. Virtualization technologies enable users to acquire and configure resources and to be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially varied specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and lead to performance degradation and service level agreement (SLA) violations. To achieve efficient scheduling, these challenges should be addressed by load balancing strategies, and the underlying placement problem has been proven to be NP-hard. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to physical machines (PMs) in infrastructure clouds, with a particular focus on load balancing. A detailed classification of load balancing algorithms for VM placement in cloud data centers is developed, and the surveyed algorithms are grouped according to it. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by providing insight into potential future enhancements.
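
    A minimal example of the problem shape surveyed here is a "worst fit" style heuristic: place each incoming VM on the least-loaded server that can still hold it, so utilization stays balanced. The sketch below is a generic illustration under invented field names, not any specific algorithm from the survey.

    # Illustrative load-balancing VM placement heuristic (worst fit by free CPU).
    def place_vm(hosts, vm_cpu, vm_mem):
        """hosts: list of dicts with 'cpu_free' and 'mem_free'.
        Returns the index of the chosen host, or None if no host fits."""
        candidates = [i for i, h in enumerate(hosts)
                      if h["cpu_free"] >= vm_cpu and h["mem_free"] >= vm_mem]
        if not candidates:
            return None
        # Prefer the host with the most free CPU to keep utilization balanced.
        best = max(candidates, key=lambda i: hosts[i]["cpu_free"])
        hosts[best]["cpu_free"] -= vm_cpu
        hosts[best]["mem_free"] -= vm_mem
        return best

    hosts = [{"cpu_free": 8, "mem_free": 32}, {"cpu_free": 16, "mem_free": 64}]
    print(place_vm(hosts, vm_cpu=4, vm_mem=8))   # -> 1 (the less loaded host)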

    Agent-based macroeconomics - a baseline model

    This paper develops a baseline agent-based macroeconomic model and contrasts it with the common dynamic stochastic general equilibrium (DSGE) approach. Although simple, the model can reproduce many of the stylized facts of business cycles. The author argues that agent-based modeling is an adequate response to the recently expressed criticism of macroeconomic methodology. It does not depend on the strict assumption of rationality and allows for aggregate behavior that is more than a simple replication of microeconomic optimization decisions, while still permitting fully consistent microfoundations. Most importantly, it does not depend on equilibrium assumptions or fictitious auctioneers and therefore does not rule out coordination failures, instability, and crises by definition.

    Keywords: agent-based modeling, complex adaptive systems, microfoundations of macroeconomics
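
    To make the agent-based idea concrete: aggregate outcomes emerge from many simple individual decision rules rather than from a representative agent's optimization. The toy loop below is purely illustrative, with made-up parameters, and does not reproduce the paper's baseline model.

    # Toy agent-based sketch: households follow a rule of thumb, a firm adjusts price.
    import random

    random.seed(1)
    households = [{"income": 1.0} for _ in range(100)]
    price = 1.0
    for period in range(5):
        # Rule of thumb: each household spends 80% of its income.
        demand = sum(0.8 * h["income"] / price for h in households)
        # The firm nudges its price toward excess demand (capacity = 75 units).
        price *= 1.0 + 0.1 * (demand - 75.0) / 75.0
        # Incomes follow realized sales revenue, split evenly.
        for h in households:
            h["income"] = price * demand / len(households)
        print(f"period {period}: demand={demand:.1f}, price={price:.2f}")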

    Virtual lines, a deadlock-free and real-time routing mechanism for ATM networks

    In this paper, we present a routing mechanism and buffer allocation mechanism for an ATM switching fabric. Since the fabric will be used to transfer multimedia traffic, it should provide a guaranteed throughput and a bounded latency. We focus on the design of a suitable routing mechanism that is capable of fulfilling these requirements and is free of deadlocks. We describe two basic concepts that can be used to implement deadlock-free routing. Routing of messages is closely related to buffering. We have organized the buffers into parallel FIFOs, each representing a virtual line. In this way, we not only solve the problem of head-of-line blocking but can also give real-time guarantees. We show that for local high-speed networks it is more advantageous to have proper flow control than to have large buffers. Although the virtual line concept can have a low buffer utilization, the transfer efficiency can be higher. The virtual line concept also allows adaptive routing: the total throughput of the network can be improved by using alternative routes. Adaptive routing is attractive in networks where alternative routes are not much longer than the initial route(s). The network of the switching fabric is built up from switching elements interconnected in a Kautz topology.
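
    The buffering idea can be sketched as one FIFO per virtual line: a cell waiting for a busy output port cannot block cells that belong to other virtual lines, which is exactly what removes head-of-line blocking. The class and method names below are invented for illustration and are not the fabric's actual interface.

    # Illustrative per-virtual-line FIFO buffering in a switching element.
    from collections import deque

    class SwitchingElement:
        def __init__(self, virtual_lines):
            # virtual_lines: dict mapping virtual-line id -> output port
            self.output_port = dict(virtual_lines)
            self.fifos = {vl: deque() for vl in virtual_lines}

        def enqueue(self, vl, cell):
            self.fifos[vl].append(cell)

        def schedule(self, free_ports):
            """Forward one cell from each virtual line whose output port is free."""
            forwarded = []
            for vl, fifo in self.fifos.items():
                if fifo and self.output_port[vl] in free_ports:
                    forwarded.append((vl, fifo.popleft()))
            return forwarded

    se = SwitchingElement({"vl-A": 0, "vl-B": 1})
    se.enqueue("vl-A", "cell-1")          # destined for port 0 (currently busy)
    se.enqueue("vl-B", "cell-2")          # destined for port 1 (free)
    print(se.schedule(free_ports={1}))    # [('vl-B', 'cell-2')]: vl-B is not blocked by vl-A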
