
    A Detailed Analysis of Contemporary ARM and x86 Architectures

    RISC vs. CISC wars raged in the 1980s when chip area and processor design complexity were the primary constraints and desktops and servers exclusively dominated the computing landscape. Today, energy and power are the primary design constraints and the computing landscape is significantly different: growth in tablets and smartphones running ARM (a RISC ISA) is surpassing that of desktops and laptops running x86 (a CISC ISA). Further, the traditionally low-power ARM ISA is entering the high-performance server market, while the traditionally high-performance x86 ISA is entering the mobile low-power device market. Thus, the question of whether ISA plays an intrinsic role in performance or energy efficiency is becoming important, and we seek to answer this question through a detailed measurement-based study on real hardware running real applications. We analyze measurements on the ARM Cortex-A8 and Cortex-A9 and Intel Atom and Sandybridge i7 microprocessors over workloads spanning mobile, desktop, and server computing. Our methodical investigation demonstrates the role of ISA in modern microprocessors' performance and energy efficiency. We find that ARM and x86 processors are simply engineering design points optimized for different levels of performance, and there is nothing fundamentally more energy efficient in one ISA class or the other. The ISA being RISC or CISC seems irrelevant.
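
    As a reading aid for the metric behind this comparison, here is a minimal sketch, with purely hypothetical power and runtime figures, of how energy per workload and performance per watt relate for cores at different design points; the numbers are illustrative, not the paper's measurements.

```python
# Minimal sketch (hypothetical numbers): comparing cores on energy rather than
# power alone, since energy = average power x runtime for a fixed workload.
measurements = {
    # core: (average power in watts, runtime in seconds) for the same workload
    "Cortex-A8":   (0.6, 120.0),
    "Cortex-A9":   (1.2,  60.0),
    "Atom":        (2.5,  40.0),
    "Sandybridge": (35.0,  5.0),
}

for core, (power_w, runtime_s) in measurements.items():
    energy_j = power_w * runtime_s   # joules consumed to finish the workload
    perf = 1.0 / runtime_s           # workloads per second (higher is better)
    print(f"{core:12s} energy={energy_j:8.1f} J  perf/W={perf / power_w:.4f}")
```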

    A Practical Approach to the Design of a Highly Efficient PSFB DC-DC Converter for Server Applications

    The phase-shift full-bridge (PSFB) is a widely known isolated DC-DC converter topology commonly used in medium- to high-power applications, and one of the best candidates for the front-end DC-DC converter in server power supplies. Since server power supplies consume an enormous amount of power, the most critical issue is to achieve high efficiency. Several organizations promoting electrical energy efficiency, such as 80 PLUS, keep introducing higher efficiency certifications with growing requirements that also extend to light loads. The design of a high-efficiency PSFB converter is a complex problem with many degrees of freedom, which requires sufficiently accurate modeling of the losses and efficient design criteria. In this work, a loss model of the converter is proposed, as well as design guidelines for the efficiency optimization of the PSFB converter. The model and the criteria are tested with the redesign of an existing 1400 W reference PSFB converter for server applications, with a wide input voltage range, nominal 400 V input, and 12 V output, achieving 95.85% efficiency at 50% load. A new optimized PSFB prototype was built with the same specifications, achieving a peak efficiency of 96.68% at 50% load. This research was financed by Infineon Technologies AG.
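
    For illustration of the kind of trade-off such a loss model captures, the following minimal sketch computes efficiency as P_out / (P_out + P_loss) across load points under an assumed split into current-dependent and fixed losses; the coefficients are invented placeholders, not the paper's fitted model.

```python
# Minimal sketch, assuming a simple loss model: efficiency at a load point is
# P_out / (P_out + P_loss). Loss terms and coefficients are illustrative only.
P_NOM = 1400.0   # W, rated output of the reference converter
V_OUT = 12.0     # V, output voltage

def losses_w(p_out_w: float) -> float:
    # Illustrative split: conduction losses grow with I^2; fixed losses
    # (magnetics, gate drive, control) are roughly constant.
    i_out = p_out_w / V_OUT
    conduction = 1.2e-3 * i_out ** 2
    fixed = 18.0
    return conduction + fixed

for load_fraction in (0.2, 0.5, 1.0):
    p_out = P_NOM * load_fraction
    eff = p_out / (p_out + losses_w(p_out))
    print(f"load {load_fraction:>4.0%}: efficiency {eff:.2%}")
```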

    Load-Varying LINPACK: A Benchmark for Evaluating Energy Efficiency in High-End Computing

    For decades, performance has driven the high-end computing (HEC) community. However, as highlighted in recent exascale studies that chart a path from petascale to exascale computing, power consumption is fast becoming the major design constraint in HEC. Consequently, the HEC community needs to address this issue in future petascale and exascale computing systems. Current scientific benchmarks, such as LINPACK and SPEChpc, only evaluate HEC systems when running at full throttle, i.e., 100% workload, resulting in a focus on performance and ignoring the issues of power and energy consumption. In contrast, efforts like SPECpower evaluate the energy efficiency of a compute server at varying workloads. This is analogous to evaluating the energy efficiency (i.e., fuel efficiency) of an automobile at varying speeds (e.g., miles per gallon highway versus city). SPECpower, however, only evaluates the energy efficiency of a single compute server rather than a HEC system; furthermore, it is based on SPEC's Java Business Benchmarks (SPECjbb) rather than a scientific benchmark. Given the absence of a load-varying scientific benchmark to evaluate the energy efficiency of HEC systems at different workloads, we propose the load-varying LINPACK (LV-LINPACK) benchmark. In this paper, we identify application parameters that affect performance and provide a methodology to vary the workload of LINPACK, thus enabling a more rigorous study of energy efficiency in supercomputers or, more generally, HEC systems.
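
    As a rough sketch of the load-varying idea, the snippet below sweeps the LINPACK problem size, estimates work with the standard HPL operation count 2n³/3 + 2n², and reports energy efficiency as FLOPS per watt; the runtimes and power draws are hypothetical placeholders for what an actual LV-LINPACK run would measure.

```python
# Minimal sketch of load-varying LINPACK bookkeeping: vary problem size n,
# estimate floating-point work with the HPL count 2n^3/3 + 2n^2, and compute
# energy efficiency from (hypothetical) measured runtime and average power.
def hpl_flops(n: int) -> float:
    return (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2

runs = [
    # (problem size n, runtime in s, average power in W) -- hypothetical values
    (20_000,   310.0,  9_500.0),
    (40_000, 2_350.0, 11_200.0),
    (60_000, 7_600.0, 12_000.0),
]

for n, runtime_s, power_w in runs:
    gflops = hpl_flops(n) / runtime_s / 1e9
    print(f"n={n:>6}: {gflops:8.1f} GFLOPS, "
          f"{gflops / (power_w / 1000):6.2f} GFLOPS/kW")
```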

    A Robust Fault-Tolerant and Scalable Cluster-wide Deduplication for Shared-Nothing Storage Systems

    Deduplication has been widely employed in distributed storage systems to improve space efficiency. Traditional deduplication research ignores the design specifications of shared-nothing distributed storage systems, such as the absence of a central metadata bottleneck, scalability, and storage rebalancing. Further, deduplication introduces transactional changes, which are prone to errors in the event of a system failure, resulting in inconsistencies between data and deduplication metadata. In this paper, we propose a robust, fault-tolerant, and scalable cluster-wide deduplication scheme that can eliminate duplicate copies across the cluster. We design a distributed deduplication metadata shard which guarantees performance scalability while preserving the design constraints of shared-nothing storage systems. The placement of chunks and deduplication metadata is made cluster-wide based on the content fingerprint of the chunks. To ensure transactional consistency and garbage identification, we employ a flag-based asynchronous consistency mechanism. We implement the proposed deduplication on Ceph. The evaluation shows high disk-space savings with minimal performance degradation, as well as high robustness in the event of sudden server failure.
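
    The content-fingerprint-based placement can be illustrated with a minimal sketch (not the paper's Ceph implementation): chunks and their deduplication metadata are mapped to a shard deterministically from a cryptographic fingerprint, so duplicates are detected without any central lookup. The shard count and in-memory index below are assumptions for illustration.

```python
# Minimal sketch: place chunks and dedup metadata by content fingerprint, so any
# node can locate a chunk's metadata shard without a central metadata server.
import hashlib

NUM_METADATA_SHARDS = 8
fingerprint_index = {}   # fingerprint -> (shard, refcount); per-shard in reality

def fingerprint(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()

def shard_for(fp: str) -> int:
    # Deterministic placement: identical content always maps to the same shard.
    return int(fp, 16) % NUM_METADATA_SHARDS

def write_chunk(chunk: bytes) -> tuple[int, bool]:
    fp = fingerprint(chunk)
    shard = shard_for(fp)
    if fp in fingerprint_index:                 # duplicate: bump refcount only
        s, refs = fingerprint_index[fp]
        fingerprint_index[fp] = (s, refs + 1)
        return shard, True
    fingerprint_index[fp] = (shard, 1)          # unique: store chunk on this shard
    return shard, False

print(write_chunk(b"hello world"))   # first write: stored
print(write_chunk(b"hello world"))   # second write: deduplicated
```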

    High Availability Server Using Raspberry Pi 4 Cluster and Docker Swarm

    In the Industrial 4.0 era, almost all activities and transactions are carried out via the internet, which fundamentally relies on web technology. For this reason, a high-performance web server infrastructure capable of serving all the activities and transactions required by users without constraint is absolutely necessary. This research aims to design a high-availability web server infrastructure with low cost and low power consumption using cluster computing on Raspberry Pi single-board computers and Docker container technology. The cluster is built from five Raspberry Pi 4B modules as cluster nodes, and the web server system is built using Docker container virtualization; cluster management uses Docker Swarm. Performance (Quality of Service) testing of the cluster is done by simulating a number of loads (requests) and measuring the system's response in terms of throughput and delay (latency). The test results show that the Raspberry Pi cluster with Docker Swarm can be used to build a high-availability server system able to handle very high request rates, reaching a throughput of 161,812,298 requests/sec with an error rate of 0%.
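
    A minimal sketch of the kind of QoS test described above: a simple concurrent load generator that fires requests at the swarm's published service and reports throughput, average latency, and error rate. The endpoint URL, request count, and concurrency level are placeholders, not the study's actual test setup.

```python
# Minimal sketch of an HTTP load test: send REQUESTS requests concurrently and
# report throughput (req/s), average latency, and error rate.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://cluster-manager/"   # placeholder for the published swarm service
REQUESTS = 1000

def one_request(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(one_request, range(REQUESTS)))
elapsed = time.perf_counter() - t0

errors = sum(1 for ok, _ in results if not ok)
avg_latency = sum(lat for _, lat in results) / len(results)
print(f"throughput: {REQUESTS / elapsed:.1f} req/s, "
      f"avg latency: {avg_latency * 1000:.1f} ms, "
      f"error rate: {errors / REQUESTS:.1%}")
```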

    Optimal design methodology of zero-voltage-switching full-bridge pulse width modulated converter for server power supplies based on self-driven synchronous rectifier performance

    In this paper, a high-efficiency design methodology for a zero-voltage-switching full-bridge (ZVS-FB) pulse width modulation (PWM) converter for server-computer power supplies is discussed based on self-driven synchronous rectifier (SR) performance. The design approach focuses on rectifier conduction loss on the secondary side because of the high output current of this application. Different numbers of parallel-connected SRs are evaluated to reduce this high conduction loss. For this approach, the reliability of the gate control signals produced by the self-driver is analyzed in detail to determine whether the converter achieves high efficiency. A laboratory prototype operating at 80 kHz and rated at 1 kW/12 V is built for the various parallel combinations of SRs to verify the proposed theoretical analysis and evaluations. Measurement results show that the best efficiency of the converter is 95.16%.
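
    A back-of-the-envelope sketch of why paralleling SRs reduces secondary-side conduction loss: N identical devices share the output current, so total conduction loss scales roughly as I_out²·R_ds(on)/N (ignoring duty cycle and current-sharing imbalance). The device values below are illustrative, not the prototype's.

```python
# Minimal sketch: total conduction loss of N parallel SRs sharing I_out is about
# N * (I_out/N)^2 * R_ds(on) = I_out^2 * R_ds(on) / N  (duty cycle and
# current-sharing imbalance ignored). Values are illustrative placeholders.
I_OUT = 1000.0 / 12.0      # A, approx. output current of a 1 kW / 12 V converter
R_DS_ON = 3e-3             # ohm, on-resistance of a single SR (illustrative)

for n_parallel in (1, 2, 3, 4):
    p_cond = I_OUT ** 2 * R_DS_ON / n_parallel
    print(f"{n_parallel} SR(s) in parallel: conduction loss ~ {p_cond:.1f} W")
```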

    Analysis of Server Deployment Using Kubernetes to Avoid a Single Point of Failure

    Distributed computing systems are a requirement for implementing server-based applications such as database servers and web servers in order to achieve high performance. A common problem is server failure, which degrades a server's performance, so a deployment technique is needed that can provide a distributed system with high performance. Container-based virtualization is a natural choice for running distributed systems because of its lightweight architecture, fast performance, and resource efficiency. One container-based option is the distributed-system deployment tool Kubernetes, which allows managing server deployments to provide highly available systems. The system development methodology used is the Network Development Life Cycle (NDLC); of its six stages, only three are used, namely Analysis, Design, and Simulation Prototyping. The test scenarios carried out are an FTP deployment and an Nginx web server, showing that availability can be maintained and the system is able to fail over when a server failure occurs.
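
    A minimal sketch of the failover check described in the test scenario: poll the exposed Nginx service once per second while a node is taken down and report how long it was unreachable. The service URL and test duration are placeholders, not the study's configuration.

```python
# Minimal sketch of an availability/failover probe: one request per second to the
# service exposed by Kubernetes, counting the seconds it was unreachable.
import time
import urllib.request

URL = "http://nginx.example.local/"   # placeholder for the exposed Nginx service
DURATION_S = 120

down_seconds = 0
for _ in range(DURATION_S):
    try:
        urllib.request.urlopen(URL, timeout=1)
    except Exception:
        down_seconds += 1             # request failed: service unavailable this second
    time.sleep(1)

print(f"unavailable for {down_seconds}s of {DURATION_S}s "
      f"({100 * (1 - down_seconds / DURATION_S):.1f}% availability)")
```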