
    Empowering a helper cluster through data-width aware instruction selection policies

    Narrow values, which can be represented with fewer bits than the full machine width, occur very frequently in programs. On the other hand, clustering mechanisms enable cost- and performance-effective scaling of processor back-end features. These attributes can be combined synergistically to design special clusters that operate on narrow values (a.k.a. helper clusters), potentially providing performance benefits. We complement a 32-bit monolithic processor with a low-complexity 8-bit helper cluster. Then, as our main focus, we propose various ideas for selecting suitable instructions to execute in the data-width-based clusters. We add data-width information as another instruction-steering decision metric and introduce new data-width-based selection algorithms that also consider dependency, inter-cluster communication and load imbalance. Using these techniques, the performance of a wide range of workloads is substantially increased; the helper cluster achieves an average speedup of 11% across 412 applications. When focusing on integer applications, the speedup is as high as 22% on average.
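A minimal sketch (not the paper's hardware mechanism) of what makes a value "narrow": a 32-bit two's-complement operand counts as narrow if sign-extending its low bits recovers the full value, so it could be handled by an 8-bit helper cluster. Widths and the helper function name are illustrative assumptions.

```python
def is_narrow(value: int, full_width: int = 32, narrow_width: int = 8) -> bool:
    """Return True if a full_width-bit two's-complement value fits in
    narrow_width bits, i.e. sign extension of the low bits recovers it."""
    mask = (1 << full_width) - 1
    value &= mask
    # Reinterpret the bit pattern as a signed integer.
    if value >= 1 << (full_width - 1):
        signed = value - (1 << full_width)
    else:
        signed = value
    lo = -(1 << (narrow_width - 1))
    hi = (1 << (narrow_width - 1)) - 1
    return lo <= signed <= hi
```

For example, `0xFFFFFFFF` (the 32-bit pattern for -1) is narrow, while 300 is not; a steering policy could use such a predicate as one input alongside dependency and load-balance information.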

    Exploiting Existing Copies in Register File for Soft Error Correction

    Soft errors are an increasingly important problem in contemporary digital systems. As the major data-holding component in contemporary microprocessors, the register file has been an important target for schemes that protect against soft errors. In this paper, we build on previously proposed schemes and start with the observation that many register values already have a replica inside the storage space. We use this readily available redundancy inside the register file in combination with a previously proposed value-replication scheme for soft error detection and correction. We show that, by employing schemes that make use of the copies already available inside the register file, it is possible to detect and correct 39.0 percent of the errors with an additional power consumption of 18.9 percent.
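The core observation can be illustrated with a toy scan over a register-file snapshot: any value held by more than one register already has a "free" replica that could be used for error detection. The function below is a hypothetical sketch, not the paper's hardware.

```python
from collections import defaultdict

def find_replicas(regfile: list[int]) -> dict[int, list[int]]:
    """Map each value stored in more than one register to the list of
    register indices holding it, i.e. the redundancy already present."""
    holders: dict[int, list[int]] = defaultdict(list)
    for idx, val in enumerate(regfile):
        holders[val].append(idx)
    return {v: idxs for v, idxs in holders.items() if len(idxs) > 1}
```

In hardware this replica tracking would be done at write time rather than by scanning, but the sketch shows why a notable fraction of register values come with built-in redundancy.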

    Fuse: A technique to anticipate failures due to degradation in ALUs

    This paper proposes the fuse, a technique to anticipate failures due to degradation in any ALU (arithmetic logic unit), and particularly in an adder. The fuse consists of a replica of the weakest transistor in the adder and the circuitry required to measure its degradation. By mimicking the behavior of the replicated transistor, the fuse anticipates the failure shortly before the first failure in the adder appears; hence, data corruption and program crashes can be avoided. Our results show that the fuse anticipates the failure in more than 99.9% of the cases after 96.6% of the lifetime, even under pessimistic random within-die variations.
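A toy model of the fuse idea (all parameters invented, not the paper's measurements): if the replica is engineered to degrade slightly faster than the protected adder, it crosses the failure threshold first, and that crossing time is the warning point.

```python
def fuse_warning_time(adder_lifetime: float, margin: float = 0.05) -> float:
    """Time at which a replica degrading (1 + margin) times faster than
    the protected adder fails, i.e. when the fuse raises its warning."""
    return adder_lifetime / (1.0 + margin)
```

With a 5% margin the warning fires at about 95% of the adder's lifetime, qualitatively matching the "anticipate shortly before failure" behavior the abstract describes.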

    Refueling: Preventing wire degradation due to electromigration

    Electromigration (EM) is a major source of wire and via failure. Refueling undoes EM for bidirectional wires and power/ground grids, some of a chip's most vulnerable wires. Refueling exploits EM's self-healing effect by balancing the amount of current flowing in each direction of a wire. It can significantly extend a wire's lifetime while reducing the chip area devoted to wires.

    Impact of parameter variations on circuits and microarchitecture

    Parameter variations, which are increasing along with advances in process technologies, affect both timing and power. Variability must be considered at both the circuit and microarchitectural design levels to keep pace with performance scaling and to keep power consumption within reasonable limits. This article presents an overview of the main sources of variability and surveys variation-tolerant circuit and microarchitectural approaches.

    Can we trust undervolting in FPGA-based deep learning designs at harsh conditions?

    As more neural networks on Field Programmable Gate Arrays (FPGAs) are used in a wider range of contexts, the importance of power efficiency increases. However, the focus on power should never compromise application accuracy. One technique to increase power efficiency is reducing the FPGA's supply voltage ("undervolting"), which can cause accuracy problems. Therefore, careful design-time consideration is required to find a correct configuration without hindering the target accuracy. This is especially important for autonomous systems, edge computing, and data centers. This study reveals the impact of undervolting under harsh environmental conditions on the accuracy and power efficiency of convolutional neural network benchmarks. We perform comprehensive testing in a calibrated infrastructure at controlled temperatures (between -40°C and 50°C) and four distinct humidity levels (40%, 50%, 70%, 80%) for off-the-shelf FPGAs. We show that the voltage guard-band shift with temperature is linear and propose new reliable undervolting designs that provide a 65% increase in power efficiency (GOPS/W).
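The reported linear guard-band shift can be captured by a simple linear model. The base value and slope below are invented placeholders, not the paper's measurements; they only illustrate how a design tool might pick a safe undervolting point per operating temperature.

```python
def guard_band(temp_c: float, base_mv: float = 100.0,
               slope_mv_per_c: float = 0.5) -> float:
    """Linear model of the voltage guard-band (mV) vs. temperature (°C).
    base_mv is the assumed guard-band at 0°C; both parameters are
    hypothetical and would come from characterization in practice."""
    return base_mv + slope_mv_per_c * temp_c
```

Under these placeholder numbers, the guard-band spans 80 mV at -40°C up to 125 mV at 50°C, so a temperature-aware design could reclaim the difference as extra undervolting headroom in cold conditions.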

    The response of total testing process in clinical laboratory medicine to COVID-19 pandemic

    Following a pandemic, laboratory medicine is vulnerable to laboratory errors due to stressful conditions and high workloads. We aimed to examine how laboratory errors arising from factors such as flexible working orders, staff displacement, and changes in the number of tests and samples are reflected in the total testing process (TTP) during the pandemic period. Over 12 months (6 months before and 6 months during the pandemic), laboratory errors were assessed via quality indicators (QIs) related to the TTP phases. QIs were grouped as pre-, intra- and postanalytical. The results of the QIs were expressed as defect percentages and sigma values, evaluated against 3 levels of performance quality: the 25th, 50th and 75th percentile values. When the pre-pandemic and pandemic periods were compared, the sigma value for samples not received was significantly lower in the pre-pandemic period than during the pandemic (4.7σ vs. 5.4σ, P = 0.003). The sigma values for samples transported inappropriately and for haemolysed samples were significantly higher in the pre-pandemic period than during the pandemic (5.0σ vs. 4.9σ and 4.3σ vs. 4.1σ; P = 0.046 and P = 0.044, respectively). The sigma value for tests with inappropriate IQC performance was lower during the pandemic than in the pre-pandemic period (3.3σ vs. 3.2σ, P = 0.081). The sigma value for reports delivered outside the specified time was higher during the pandemic than in the pre-pandemic period (3.0σ vs. 3.1σ, P = 0.030). In all TTP phases, some quality indicators improved while others regressed during the pandemic period. The preanalytical phase was observed to be affected more by the pandemic.
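The sigma values quoted above are derived from defect percentages. A common convention, assumed here since the abstract does not spell out its formula, converts the defect fraction through the inverse normal CDF and adds the customary 1.5σ long-term shift:

```python
from statistics import NormalDist

def sigma_level(defect_fraction: float, shift: float = 1.5) -> float:
    """Convert a defect fraction (e.g. 0.00135 = 0.135%) to a process
    sigma level, applying the conventional 1.5-sigma long-term shift."""
    return NormalDist().inv_cdf(1.0 - defect_fraction) + shift
```

For instance, a 0.135% defect rate maps to roughly 4.5σ under this convention, which is the scale on which the QI values in the abstract (3.0σ-5.4σ) are reported.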