3,346 research outputs found

    Diluting the Scalability Boundaries: Exploring the Use of Disaggregated Architectures for High-Level Network Data Analysis

    Traditional data centers are designed with a rigid architecture of fit-for-purpose servers that provision resources beyond the average workload in order to deal with occasional peaks of data. Heterogeneous data centers are pushing towards more cost-efficient architectures with better resource provisioning. In this paper we study the feasibility of using disaggregated architectures for intensive data applications, in contrast to the monolithic approach of server-oriented architectures. In particular, we have tested a proactive network analysis system in which the workload demands are highly variable. In the context of the dReDBox disaggregated architecture, the results show that the overhead caused by using remote memory resources is significant, between 66% and 80%, but we have also observed that memory usage is one order of magnitude higher in the stress case than under average workloads. Therefore, dimensioning memory for the worst case in conventional systems will result in a notable waste of resources. Finally, we found that, for the selected use case, parallelism is limited by memory. Therefore, using a disaggregated architecture will allow for increased parallelism, which, at the same time, will mitigate the overhead caused by remote memory. Comment: 8 pages, 6 figures, 2 tables, 32 references. Pre-print. The paper will be presented at the IEEE International Conference on High Performance Computing and Communications in Bangkok, Thailand, 18-20 December 2017, and published in the conference proceedings.
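
    As a back-of-the-envelope illustration of the provisioning argument, the following sketch compares sizing every server for its own stress case against sizing a shared, disaggregated pool for a few simultaneous peaks. Only the order-of-magnitude stress-to-average memory ratio comes from the abstract; the node count, memory sizes, and number of simultaneous peaks are hypothetical.

    # Sketch: worst-case per-server provisioning vs. a shared memory pool.
    # The 10x stress/average ratio reflects the abstract's "one order of
    # magnitude"; every other number is a made-up illustration.
    AVG_MEM_GB = 32            # hypothetical average working set per node
    STRESS_FACTOR = 10         # stress case ~1 order of magnitude above average
    NODES = 16                 # hypothetical number of analysis nodes
    SIMULTANEOUS_PEAKS = 2     # hypothetical peaks the pool must absorb at once

    # Conventional: every server is dimensioned for its own worst case.
    conventional_total = NODES * AVG_MEM_GB * STRESS_FACTOR

    # Disaggregated: the shared pool only absorbs a few peaks at a time.
    pool_total = ((NODES - SIMULTANEOUS_PEAKS) * AVG_MEM_GB
                  + SIMULTANEOUS_PEAKS * AVG_MEM_GB * STRESS_FACTOR)

    waste = conventional_total - pool_total
    print(f"conventional: {conventional_total} GB, pooled: {pool_total} GB, "
          f"over-provisioned: {waste} GB ({100 * waste / conventional_total:.0f}%)")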

    Exploiting Properties of CMP Cache Traffic in Designing Hybrid Packet/Circuit Switched NoCs

    Chip multiprocessors with few to tens of processing cores are already commercially available. Increased scaling of technology is making it feasible to integrate even more cores on a single chip. Providing the cores with fast access to data is vital to overall system performance. When a core requires access to a piece of data, the core's private cache memory is searched first. If a miss occurs, the data is looked up in the next level(s) of the memory hierarchy, where often one or more levels of cache are shared between two or more cores. Communication between the cores and the slices of the on-chip shared cache is carried through the network-on-chip (NoC). Interestingly, the cache and the NoC mutually affect each other's operation; communication over the NoC affects the access latency of cache data, while the cache organization generates the coherence and data messages, thus affecting the communication patterns and latency over the NoC. This thesis considers hybrid packet/circuit switched NoCs, i.e., packet switched NoCs enhanced with the ability to configure circuits. The communication and performance benefits that come from using circuits are predicated on amortizing the time cost incurred for configuring the circuits. To address this challenge, NoC designs are proposed that take advantage of properties of the cache traffic, namely temporal locality and predictability, to amortize or hide the circuit configuration time cost. First, a coarse-grained circuit configuration policy is proposed that exploits the temporal locality in the cache traffic to periodically configure circuits for the heavily communicating nodes. This allows the design of a locality-aware cache that promotes temporal communication locality through data placement, together with suitable data replacement and migration policies. Next, a fine-grained configuration policy, called Déjà Vu switching, is proposed for leveraging the predictability of data messages by initiating a circuit configuration as soon as a cache hit is detected and before the data becomes available. Its benefit is demonstrated for saving interconnect energy in multi-plane NoCs. Finally, a more proactive configuration policy is proposed for fast caches, where circuit reservations are initiated by request messages, which can greatly improve communication latency and system performance.
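
    The coarse-grained policy lends itself to a short sketch: every epoch, circuits are (re)configured for the node pairs that exchanged the most traffic, exploiting the temporal locality noted above. The epoch length, circuit budget, and bookkeeping below are illustrative assumptions, not the thesis design.

    # Sketch of a coarse-grained circuit configuration policy: every epoch,
    # keep circuits only for the top-k most heavily communicating node pairs.
    from collections import Counter

    class CoarseGrainedCircuitPolicy:
        def __init__(self, num_circuits=4, epoch_flits=10_000):
            self.num_circuits = num_circuits   # circuits the NoC can hold at once
            self.epoch_flits = epoch_flits     # reconfiguration period, in flits seen
            self.traffic = Counter()           # (src, dst) -> flits this epoch
            self.seen = 0
            self.circuits = set()              # currently configured (src, dst) pairs

        def observe(self, src, dst, flits=1):
            """Account traffic and reconfigure when the epoch ends."""
            self.traffic[(src, dst)] += flits
            self.seen += flits
            if self.seen >= self.epoch_flits:
                self._reconfigure()

        def _reconfigure(self):
            # Exploit temporal locality: the recent heavy hitters are likely
            # to keep communicating heavily in the next epoch.
            heaviest = self.traffic.most_common(self.num_circuits)
            self.circuits = {pair for pair, _ in heaviest}
            self.traffic.clear()
            self.seen = 0

        def has_circuit(self, src, dst):
            return (src, dst) in self.circuits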

    Improving Routing Efficiency, Fairness, Differentiated Services and Throughput in Optical Networks

    Wavelength division multiplexed (WDM) optical networks are rapidly becoming the technology of choice in next-generation Internet architectures. This dissertation addresses the important issues of improving four aspects of optical networks, namely routing efficiency, fairness, differentiated quality of service (QoS), and throughput. A new approach for implementing efficient routing and wavelength assignment in WDM networks is proposed and evaluated. In this approach, the state of a multiple-fiber link is represented by a compact bitmap computed as the logical union of the bitmaps of the free wavelengths in the fibers of this link. A modified Dijkstra's shortest path algorithm and a wavelength assignment algorithm are developed using fast logical operations on the bitmap representation. In optical burst switched (OBS) networks, the burst dropping probability increases as the number of hops in the lightpath of the burst increases. Two schemes are proposed and evaluated to alleviate this unfairness. The two schemes have simple logic, and alleviate the beat-down unfairness problem without negatively impacting the overall throughput of the system. Two similar schemes to provide differentiated services in OBS networks are introduced. A new scheme to improve the fairness of OBS networks based on burst preemption is presented. The scheme uses carefully designed constraints to avoid excessive wasted channel reservations, reduce cascaded useless preemptions, and maintain healthy throughput levels. A new scheme to improve the throughput of OBS networks based on burst preemption is also presented. An analytical model is developed to compute the throughput of the network for the special case in which the network has a ring topology and the preemption weight is based solely on burst size. The analytical model is quite accurate and gives results close to those obtained by simulation. Finally, a preemption-based scheme for the concurrent improvement of throughput and burst fairness in OBS networks is proposed and evaluated. The scheme uses a preemption weight consisting of two terms: the first term is a function of the size of the burst, and the second term is the product of the hop count and the length of the lightpath of the burst.
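
    The bitmap representation lends itself to a compact illustration: a multi-fiber link's state is the bitwise OR of its fibers' free-wavelength bitmaps, the wavelengths usable on a path are the bitwise AND of its link bitmaps, and assignment picks a set bit. The first-fit rule and data layout below are illustrative assumptions rather than the dissertation's exact algorithm.

    # Sketch of the bitmap link-state idea: bit w set => wavelength w is free.
    NUM_WAVELENGTHS = 16

    def link_bitmap(fiber_bitmaps):
        """State of a multi-fiber link: union (OR) of its fibers' free wavelengths."""
        state = 0
        for bm in fiber_bitmaps:
            state |= bm
        return state

    def path_bitmap(link_bitmaps):
        """Wavelengths free on every link of the path: intersection (AND)."""
        state = (1 << NUM_WAVELENGTHS) - 1
        for bm in link_bitmaps:
            state &= bm
        return state

    def first_fit_wavelength(path_bm):
        """Lowest-index free wavelength on the path, or None if it is blocked."""
        for w in range(NUM_WAVELENGTHS):
            if path_bm & (1 << w):
                return w
        return None

    # Example: a two-link path where link A has two fibers and link B has one.
    link_a = link_bitmap([0b0000_0000_1010_0001, 0b0000_0001_0000_0100])
    link_b = link_bitmap([0b0000_0001_1000_0101])
    print(first_fit_wavelength(path_bitmap([link_a, link_b])))  # -> 0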

    A Survey on the Contributions of Software-Defined Networking to Traffic Engineering

    Since the appearance of OpenFlow back in 2008, software-defined networking (SDN) has gained momentum. Although there are some discrepancies between the standards developing organizations working with SDN about what SDN is and how it is defined, they all outline traffic engineering (TE) as a key application. One of the most common objectives of TE is congestion minimization, where techniques such as traffic splitting among multiple paths or advanced reservation systems are used. In such a scenario, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols have been categorized using the SDN architecture proposed by the Open Networking Foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to state how the interface type in which they operate influences TE. In addition, the impact of the SDN protocols on TE has been evaluated by comparing them with the path computation element (PCE)-based architecture. The PCE-based architecture has been selected to measure the impact of SDN on TE because it is the most recent TE architecture to date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously will result in more powerful and enhanced TE solutions, since they benefit TE in complementary ways. Funding: European Commission, Horizon 2020 Research and Innovation Programme (GN4), under Grant 691567; Spanish Ministry of Economy and Competitiveness, Secure Deployment of Services Over SDN and NFV-based Networks project (S&NSEC), under Grant TEC2013-47960-C4-3-
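
    As a toy illustration of the traffic-splitting technique mentioned in the abstract, the following sketch divides a demand over its candidate paths in proportion to each path's residual bottleneck capacity, so the most congested path receives the least new traffic. The splitting rule, link names, and numbers are assumptions for illustration; the survey compares protocols rather than prescribing a particular heuristic.

    # Toy congestion-aware traffic splitting: share a demand over candidate
    # paths in proportion to each path's spare bottleneck capacity.
    def bottleneck_residual(path, capacity, load):
        """Smallest spare capacity over the links of a path."""
        return min(capacity[link] - load[link] for link in path)

    def split_demand(demand, paths, capacity, load):
        """Return {path_index: share of demand}, proportional to spare capacity."""
        residuals = [max(bottleneck_residual(p, capacity, load), 0.0) for p in paths]
        total = sum(residuals)
        if total == 0:
            # No spare capacity anywhere: fall back to an even split.
            return {i: demand / len(paths) for i in range(len(paths))}
        return {i: demand * r / total for i, r in enumerate(residuals)}

    # Example with two disjoint paths (hypothetical links and loads).
    capacity = {"a-b": 10.0, "b-d": 10.0, "a-c": 10.0, "c-d": 10.0}
    load     = {"a-b":  8.0, "b-d":  2.0, "a-c":  4.0, "c-d":  1.0}
    paths = [["a-b", "b-d"], ["a-c", "c-d"]]
    print(split_demand(3.0, paths, capacity, load))  # most traffic on a-c-d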

    Software Defined Networking: Applicability and Service Possibilities


    Dynamic Virtual Network Reconfiguration Over SDN Orchestrated Multitechnology Optical Transport Domains

    Network virtualization is an emerging technique that enables multiple tenants to share an underlying physical infrastructure, isolating the traffic running over different virtual infrastructures/tenants. This technique aims to improve network utilization while reducing the complexity of network management for operators. Applied to this context, the software-defined networking (SDN) paradigm can ease network configuration by enabling network programmability and automation, which reduces the amount of operations required from both service and infrastructure providers. SDN techniques are decreasing vendor lock-in issues due to specific configuration methods or protocols. Application-based Network Operations (ABNO) is a toolbox of key network functional components with the goal of offering application-driven network management. Service provisioning using ABNO may involve direct configuration of data plane elements or delegate it to several control plane modules. We validate the applicability of ABNO to multi-tenant virtual networks in multi-technology optical domains based on two scenarios, in which multiple control plane instances are orchestrated by the architecture. Congestion detection and failure recovery are chosen to demonstrate fast recalculation and reconfiguration, while hiding the configuration changes in the physical layer from the upper layer. Funding: supported by the Spanish Ministry of Economy and Competitiveness through the project FARO (TEC2012-38119).
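
    Both validation scenarios follow the same control loop: a monitoring event triggers path recomputation, and the affected virtual links are reconfigured through the orchestrated control plane instances without exposing the physical-layer changes to the tenant. The sketch below is a schematic of that loop; all component names and interfaces are hypothetical placeholders, not the ABNO toolbox used in the paper.

    # Schematic of the reconfiguration loop shared by congestion detection and
    # failure recovery. Everything here is an illustrative placeholder.
    from dataclasses import dataclass
    from typing import Callable, List, Set

    @dataclass
    class VirtualLink:
        name: str
        physical_path: List[str]       # physical resources currently in use

    @dataclass
    class Event:
        kind: str                      # "congestion" or "failure"
        resources: Set[str]            # physical resources to steer away from

    def recompute(link: VirtualLink, avoid: Set[str]) -> List[str]:
        """Placeholder for a PCE-like computation over the multi-technology topology."""
        return [f"<recomputed path for {link.name} avoiding {sorted(avoid)}>"]

    def handle_event(event: Event,
                     virtual_links: List[VirtualLink],
                     domain_controllers: List[Callable[[VirtualLink], None]]):
        """Recompute and reconfigure only the virtual links the event touches."""
        if event.kind not in ("congestion", "failure"):
            return
        for link in virtual_links:
            if event.resources & set(link.physical_path):
                link.physical_path = recompute(link, event.resources)
                for configure in domain_controllers:   # one per control plane instance
                    configure(link)                    # tenant sees only its virtual view

    # Example: a failure on physical resource "oxc-3" triggers reconfiguration.
    vlinks = [VirtualLink("vl1", ["roadm-1", "oxc-3", "roadm-2"])]
    handle_event(Event("failure", {"oxc-3"}), vlinks,
                 [lambda l: print(f"reconfigure {l.name} -> {l.physical_path}")])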