
    Assessing load-sharing within optimistic simulation platforms

    The advent of multi-core machines has led to the need for revising the architecture of modern simulation platforms. One recent proposal we made explored the viability of load-sharing for optimistic simulators run on top of these types of machines. In this article, we provide an extensive experimental study assessing the effects on run-time dynamics of a load-sharing architecture implemented within the ROOT-Sim package, an open-source simulation platform adhering to the optimistic synchronization paradigm. This experimental study is aimed at evaluating possible sources of overhead when supporting load-sharing. It is based on differentiated workloads that allow us to generate different execution profiles in terms of, e.g., the granularity and locality of the simulation events. © 2012 IEEE

    Scalable RDMA performance in PGAS languages

    Partitioned global address space (PGAS) languages provide a unique programming model that can span shared-memory multiprocessor (SMP) architectures, distributed-memory machines, and clusters of SMPs. Users can program large-scale machines with easy-to-use shared-memory paradigms. In order to exploit large-scale machines efficiently, PGAS language implementations and their runtime systems must be designed for scalability and performance. The IBM XLUPC compiler and runtime system provide a scalable design through the use of the shared variable directory (SVD). The SVD stores the meta-information needed to access shared data. In the worst case it is dereferenced on every shared-memory access, thus exposing a potential performance problem. In this paper we present a cache of remote addresses as an optimization that reduces the SVD access overhead and allows the exploitation of native (remote) direct memory accesses. It results in a significant performance improvement while maintaining the runtime's portability and scalability.
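    The following is a minimal sketch of the idea behind such a remote-address cache (hypothetical names and layout, not the actual XLUPC runtime API): map a shared-variable handle and owner thread to the remote base address, so repeated accesses skip the SVD dereference and can be issued directly as RDMA operations.

```c
/* Sketch of a remote-address cache (hypothetical, not the XLUPC runtime API).
 * A hit avoids the SVD dereference; a miss falls back to the slow lookup. */
#include <stdint.h>
#include <stddef.h>

#define CACHE_SLOTS 1024

typedef struct {
    uint32_t svd_handle;   /* index of the shared variable in the SVD    */
    uint32_t owner_thread; /* thread owning the remote partition         */
    uint64_t remote_addr;  /* cached base address usable for direct RDMA */
    int      valid;
} addr_cache_entry;

static addr_cache_entry cache[CACHE_SLOTS];

/* Stand-in for the expensive SVD dereference (assumed to involve extra
 * indirection or communication in the real runtime). */
static uint64_t svd_resolve(uint32_t handle, uint32_t owner)
{
    return ((uint64_t)owner << 48) | handle;   /* dummy address */
}

static size_t slot_of(uint32_t handle, uint32_t owner)
{
    return (((uint64_t)handle * 2654435761u) ^ owner) % CACHE_SLOTS;
}

/* Resolve a remote address, consulting the cache first. */
uint64_t resolve_remote_addr(uint32_t handle, uint32_t owner)
{
    addr_cache_entry *e = &cache[slot_of(handle, owner)];

    if (e->valid && e->svd_handle == handle && e->owner_thread == owner)
        return e->remote_addr;                  /* hit: no SVD access */

    uint64_t addr = svd_resolve(handle, owner); /* miss: slow path    */
    e->svd_handle   = handle;
    e->owner_thread = owner;
    e->remote_addr  = addr;
    e->valid        = 1;
    return addr;
}
```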

    Leveraging Programmable Data Plane For Compressing Forwarding Tables

    The Forwarding Information Base (FIB) resides in the data plane of a routing device and is used to forward packets to a next hop based on their destination IP addresses. The constant growth of the FIB forces network operators to spend more resources on memory that supports line-rate Longest Prefix Match (LPM) lookup, namely expensive and energy-hungry Ternary Content-Addressable Memory (TCAM) chips. In this work, we review two different approaches used to mitigate the FIB overflow problem. First, we investigate FIB aggregation, i.e., merging adjacent or overlapping routes with the same next hop while preserving the forwarding behavior of the FIB. We propose a near-optimal algorithm, FIB Aggregation with Quick Selections (FAQS), that minimizes FIB churn and speeds up BGP update processing by more than a factor of two, while preserving a high compression ratio (at most 73%). FAQS handles BGP updates incrementally, without re-aggregating the entire FIB table. Second, we investigate FIB (or route) caching, in which the TCAM holds only the portion of the FIB that carries most of the traffic. We leverage the emerging concept of the programmable data plane to propose a Programmable FIB Caching Architecture (PFCA) that allows cache-victim selection at line rate and significantly reduces FIB churn compared to FIB aggregation. PFCA achieves a 99.8% cache-hit ratio with only 3.3% of the FIB placed in the cache. Finally, we extend PFCA's design with a novel approach that integrates incremental FIB aggregation and FIB caching. Such integration must overcome the cache-hiding challenge, where a less specific prefix in the cache hides a more specific prefix in the secondary FIB table, leading to incorrect LPM matching at the cache. In Combined FIB Caching and Aggregation (CFCA), the cache-hit ratio reaches 99.94% with only 2.5% of the FIB entries, while the total number of route changes in TCAM is reduced by more than 40% compared to low-churn FIB aggregation techniques.
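    As an illustration of what aggregation preserves, the toy sketch below (not the FAQS algorithm itself; prefixes and next hops are made up) shows the two basic moves: merging sibling prefixes that share a next hop into their parent prefix, and dropping a more-specific prefix whose next hop matches that of its immediately covering prefix.

```c
/* Toy illustration of FIB aggregation moves (not the FAQS algorithm).
 * Rule 1: sibling prefixes with the same next hop merge into their parent.
 * Rule 2: a more-specific prefix whose next hop equals that of its covering
 *         prefix is redundant; forwarding behavior is unchanged. */
#include <stdio.h>
#include <stdint.h>

typedef struct { uint32_t prefix; int len; int nexthop; } route;

/* Do a and b differ only in their last significant bit and share a next hop? */
static int siblings(route a, route b)
{
    if (a.len != b.len || a.len == 0 || a.nexthop != b.nexthop) return 0;
    return (a.prefix ^ b.prefix) == (1u << (32 - a.len));
}

/* Does the less-specific route cover the more-specific one? */
static int covers(route less, route more)
{
    if (less.len >= more.len) return 0;
    uint32_t mask = less.len ? ~0u << (32 - less.len) : 0;
    return (less.prefix & mask) == (more.prefix & mask);
}

int main(void)
{
    /* 10.0.0.0/25 and 10.0.0.128/25, both via next hop 1 -> 10.0.0.0/24. */
    route a = { 0x0A000000u, 25, 1 }, b = { 0x0A000080u, 25, 1 };
    if (siblings(a, b))
        printf("merge into 10.0.0.0/%d via next hop %d\n", a.len - 1, a.nexthop);

    /* 10.0.0.0/16 via next hop 2 covers 10.0.1.0/24 via the same next hop. */
    route parent = { 0x0A000000u, 16, 2 }, child = { 0x0A000100u, 24, 2 };
    if (covers(parent, child) && parent.nexthop == child.nexthop)
        printf("drop redundant 10.0.1.0/24\n");

    return 0;
}
```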

    Analysis of Multi-Threading and Cache Memory Latency Masking on Processor Performance Using Thread Synchronization Technique

    Multithreading is a technique in which a single processor executes multiple threads concurrently. It enables the processor to divide tasks into separate threads and run them simultaneously, thereby increasing the utilization of available system resources and improving performance. When multiple threads share an object and one or more of them modify it, unpredictable outcomes may occur. Threads that exhibit poor locality of memory reference, such as database applications, often experience delays while waiting for a response from the memory hierarchy; this observation suggests how pipeline contention can be better managed. To assess the impact of memory latency on processor performance, a dual-core multithreaded machine with four thread contexts per core is used. The benchmarks are chosen so that the workload includes programs with both favorable and unfavorable cache locality. To avoid wasting wake-up signals, this work proposes storing all wake-up calls: a wake-up call issued to the consumer or the producer is recorded in a variable until it can be consumed. A semaphore is such a variable, held in operating-system (kernel) storage, that each process can check; its read and update operations execute atomically. It cannot be implemented purely in user mode, since a race condition may arise when two or more processes attempt to update the variable at the same time. This study includes code that measures the time taken to execute functions both sequentially and with threads, and plots the results. It should be noted that sending multiple requests to a website simultaneously can trigger a rate-limiting flag and ultimately block access to the data, which requires some additional processing of the collected statistics. The execution time is reduced to one third when using threads compared to executing the functions sequentially, illustrating the power of multithreading.
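    A minimal sketch of the "stored wake-up call" idea (illustrative only, not code from the study): a counting semaphore remembers posts issued while the consumer is busy, so no wake-up signal is wasted.

```c
/* Producer/consumer with a counting semaphore: posts made while the consumer
 * is busy accumulate in the semaphore's count instead of being lost. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define ITEMS 5

static sem_t items_ready;        /* counts buffered wake-up calls */
static int buffer[ITEMS];
static int in_idx = 0;

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        buffer[in_idx++] = i;    /* produce an item                */
        sem_post(&items_ready);  /* wake-up call is remembered     */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    sleep(1);                    /* consumer starts late: posts accumulate */
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&items_ready);  /* consume one stored wake-up     */
        printf("consumed %d\n", buffer[i]);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&items_ready, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&items_ready);
    return 0;
}
```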