
    A note on complexity=anything conjecture in AdS Gauss-Bonnet gravity

    It has been suggested that quantum complexity is dual to the volume of an extremal surface, the action of the Wheeler-DeWitt patch, or the spacetime volume of that patch. Recently, it has been proposed that a generalized volume-complexity observable can be formulated as an equally good candidate for holographic complexity; this proposal is abbreviated as "complexity=anything". It offers greater flexibility in selecting extremal surfaces and in evaluating physical quantities (e.g., volume or action) on those surfaces. In this study, we explore the complexity=anything proposal for Gauss-Bonnet black holes in asymptotically anti-de Sitter space in various dimensions. We demonstrate that the proposal guarantees linear growth of the generalized volume at late times, regardless of the coupling parameters, for four-dimensional Gauss-Bonnet gravity. This universality, however, does not hold in higher dimensions. Moreover, discontinuous deformations of the extremal surfaces can occur when the effective potential has multiple peaks, which is reminiscent of a phase transition. Finally, we provide constraints on the coupling parameter of the five-dimensional models under which the generalized volume remains a viable candidate for holographic complexity.
    Comment: 18 pages, 7 figures
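    For orientation, the generalized volume-complexity observables invoked by the complexity=anything proposal are usually written schematically as follows (this is a standard form from the holographic-complexity literature, not an equation reproduced from the paper above; F_1 and F_2 denote arbitrary scalar functionals of the bulk metric):

        \mathcal{C}_{\rm gen}(t) \;=\; \frac{1}{G_N \ell}\,
        \max_{\partial\Sigma \,=\, \sigma(t)} \int_{\Sigma} d^{d}\sigma \,\sqrt{h}\;
        F_1\!\left(g_{\mu\nu}, X^{\mu}\right),

    where the codimension-one surface \Sigma is anchored on the boundary time slice \sigma(t) and is selected by extremizing a (possibly different) functional built from F_2; the choice F_1 = F_2 = 1 recovers complexity=volume. The late-time linear growth discussed above is then the statement that d\mathcal{C}_{\rm gen}/dt approaches a constant, set by the conserved momentum of the extremal surface as it settles onto a local maximum of the effective potential behind the horizon.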

    Intuitionistic Trapezoidal Fuzzy Multiple Criteria Group Decision Making Method Based on Binary Relation

    The aim of this paper is to develop a methodology for intuitionistic trapezoidal fuzzy multiple criteria group decision making problems based on binary relations. First, a similarity measure between two vectors based on a binary relation is defined, which can be used to aggregate preference information. Some desirable properties of this similarity measure based on a fuzzy binary relation are also studied. A methodology for fuzzy multiple criteria group decision making is then proposed in which the criteria values are expressed as intuitionistic trapezoidal fuzzy numbers (ITFNs). Simple, exact formulas are proposed to determine the aggregation vector and the group set. The alternatives can then be ranked, and the best one selected, according to the weighted expected values of the group set. Finally, we apply the proposed method and the cosine similarity measure method to a numerical example; the results show that our method is effective and practical.
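    As a concrete illustration of the final ranking step, the sketch below ranks alternatives by the weighted expected values of ITFNs. It is a minimal sketch, not the paper's algorithm: the expected-value formula E = (a+b+c+d)/8 * (1 + mu - nu) is one common definition from the ITFN literature, the similarity-based aggregation is omitted, and the ITFN class, function names, and numbers are illustrative assumptions.

        # Minimal sketch: rank alternatives by weighted expected values of
        # intuitionistic trapezoidal fuzzy numbers (ITFNs). The expected-value
        # formula is an assumption (one common definition in the ITFN
        # literature); the paper's similarity/aggregation steps are omitted.
        from dataclasses import dataclass

        @dataclass
        class ITFN:
            a: float   # trapezoid support (left)
            b: float   # trapezoid core (left)
            c: float   # trapezoid core (right)
            d: float   # trapezoid support (right)
            mu: float  # membership degree
            nu: float  # non-membership degree

        def expected_value(x: ITFN) -> float:
            """Crisp expected value of an ITFN (assumed formula)."""
            return (x.a + x.b + x.c + x.d) / 8.0 * (1.0 + x.mu - x.nu)

        def rank_alternatives(scores, weights):
            """scores[i][j] is the ITFN of alternative i under criterion j;
            weights[j] is the weight of criterion j. Returns indices, best first."""
            totals = [sum(w * expected_value(s) for s, w in zip(row, weights))
                      for row in scores]
            return sorted(range(len(totals)), key=totals.__getitem__, reverse=True)

        # Hypothetical example: two alternatives evaluated under two criteria.
        scores = [
            [ITFN(0.3, 0.4, 0.5, 0.6, 0.7, 0.2), ITFN(0.5, 0.6, 0.7, 0.8, 0.6, 0.3)],
            [ITFN(0.4, 0.5, 0.6, 0.7, 0.8, 0.1), ITFN(0.2, 0.3, 0.4, 0.5, 0.5, 0.4)],
        ]
        print(rank_alternatives(scores, weights=[0.6, 0.4]))  # -> [0, 1] for these made-up numbers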

    Exploiting Inherent Program Redundancy for Fault Tolerance

    Technology scaling has led to growing concerns about reliability in microprocessors. Current fault-tolerance studies rely on creating explicitly redundant execution for fault detection or recovery, which usually incurs substantial costs in performance, power, or hardware. In our study, we find that exploiting a program's inherent redundancy yields a better trade-off between reliability, performance, and hardware cost. This work proposes two approaches to enhance program reliability.

    The first approach investigates the additional fault resilience available at the application level. We explore a definition of program correctness that views correctness from the application's standpoint rather than the architecture's: under application-level correctness, multiple numerical outputs can be deemed correct as long as they are acceptable to users, so faults that cause the program to produce such outputs can also be tolerated. We find that programs producing inexact and/or approximate outputs can be very resilient at the application level. We call such programs soft computations and find that they are common in multimedia workloads as well as artificial intelligence (AI) workloads. Programs that compute only exact numerical outputs offer less error resilience at the application level; however, all programs we have studied exhibit some enhanced fault resilience at the application level, including those traditionally considered exact computations, e.g., SPECInt CPU2000. We conduct fault-injection experiments and evaluate the additional fault tolerance at the application level relative to the traditional architectural level. We also exploit the relaxed requirements on numerical integrity under application-level correctness to reduce checkpoint cost: our lightweight recovery mechanism checkpoints a minimal set of program state comprising the program counter, the architectural register file, and the stack, while our soft-checkpointing technique identifies computations that are resilient to errors and excludes their output state from the checkpoint. Both techniques incur much smaller runtime overhead than traditional checkpointing, yet can successfully recover either all or a major part of program crashes in soft computations.

    The second approach studies value predictability as a means of reducing the fault rate. Value prediction is treated as additional execution, and its results are compared with the corresponding computational outputs; any mismatch is treated as a symptom of a potential fault and triggers a restoration process. To reduce the misprediction rate caused by limitations of the predictor itself, we characterize fault vulnerability at the instruction level and apply value prediction only to instructions that are highly susceptible to faults. We also vary the confidence-estimation threshold according to each instruction's vulnerability: instructions with high vulnerability are assigned a low confidence threshold, while instructions with low vulnerability are assigned a high threshold. Our experimental results show that such selective prediction and adaptive confidence thresholds improve the balance between reliability and performance.
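    The second approach lends itself to a compact sketch. The fragment below illustrates the vulnerability-adaptive checking policy described above: value prediction is used as a redundancy check only when the prediction is confident enough, and the confidence bar is lowered for instructions that are more vulnerable to faults. All names, thresholds, and the form of the vulnerability metric are illustrative assumptions, not the thesis's actual implementation.

        # Sketch of selective value prediction with a vulnerability-adaptive
        # confidence threshold; cutoffs and metric ranges are assumptions.

        HIGH_VULN_CUTOFF = 0.7      # assumed cutoff separating "high" vulnerability
        LOW_CONF, HIGH_CONF = 2, 6  # assumed saturating-counter confidence thresholds

        def confidence_threshold(vulnerability: float) -> int:
            """Highly vulnerable instructions get a lower confidence bar, so their
            predictions are compared against computed results more often."""
            return LOW_CONF if vulnerability >= HIGH_VULN_CUTOFF else HIGH_CONF

        def fault_symptom(computed, predicted, confidence: int, vulnerability: float) -> bool:
            """Flag a potential fault when a sufficiently confident prediction
            disagrees with the instruction's computed output."""
            if confidence < confidence_threshold(vulnerability):
                return False               # prediction too unreliable to use as a check
            return predicted != computed   # mismatch -> trigger restoration/recovery

        # Hypothetical use: a highly vulnerable instruction whose confident
        # prediction disagrees with the computed value.
        if fault_symptom(computed=42, predicted=40, confidence=3, vulnerability=0.9):
            print("symptom detected: restore from lightweight checkpoint")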