188,468 research outputs found

    Measuring the Time-Inconsistency of US Monetary Policy

    This paper offers an alternative explanation for the behavior of postwar US inflation by measuring a novel source of monetary policy time-inconsistency due to Cukierman (2002). In the presence of asymmetric preferences, the monetary authorities end up generating a systematic inflation bias through the private sector's expectation of a larger policy response in recessions than in booms. Reduced-form estimates of US monetary policy rules indicate that while the inflation target declines from the pre- to the post-Volcker regime, the average inflation bias, which is about one percent before 1979, tends to disappear over the last two decades. This result can be rationalized in terms of the preference for output stabilization, which is found to be large and asymmetric in the former but not in the latter period.
    Keywords: asymmetric preferences, time-inconsistency, average inflation bias, US inflation. JEL Classification: E52, E58.
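
    The mechanism can be made concrete with a stylized derivation. The sketch below is illustrative only and is not taken from the paper: the linex loss and the normality assumption are assumptions made here to show how asymmetric preferences yield an average, rather than a level, inflation bias.

    ```latex
    % Asymmetric (linex) loss over the output gap y; as gamma -> 0 it
    % reduces to the usual quadratic loss.
    \[
      L(y) = \frac{e^{\gamma y} - \gamma y - 1}{\gamma^{2}},
      \qquad
      \lim_{\gamma \to 0} L(y) = \tfrac{1}{2}\,y^{2}.
    \]
    % Under approximate normality of y,
    % E[e^{gamma y}] = exp(gamma E[y] + gamma^2 Var(y)/2),
    % so the policymaker's first-order condition picks up a term in
    % (gamma/2) Var(y), and expected inflation deviates from the target:
    \[
      \mathbb{E}[\pi] - \pi^{*} \;\propto\; \frac{\gamma}{2}\,\sigma_{y}^{2}.
    \]
    % This is an *average* inflation bias: it grows with the asymmetry
    % parameter gamma and with output volatility, and it vanishes when
    % preferences are symmetric (gamma = 0) -- consistent with the
    % finding that the bias disappears once the output-stabilization
    % preference is no longer asymmetric.
    ```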

    Do high-energy neutrinos travel faster than photons in a discrete space-time?

    The recent OPERA measurement of high-energy neutrino velocity, once independently verified, implies new physics in the neutrino sector. We revisit the theoretical inconsistency of the fundamental high-energy cutoff attributed to quantum gravity with the parity-violating gauge symmetry of the local quantum field theory describing neutrinos. This inconsistency suggests high-dimension operators for neutrino interactions. Based on these studies, we attempt to interpret the OPERA result and high-energy neutrino oscillations, and indicate how to observe the restoration of parity conservation by measuring the asymmetry of high-energy neutrinos colliding with left- and right-handed polarized electrons.
    Comment: revised version to appear in Phys. Lett. B; 13 pages and 2 figures.
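
    The asymmetry mentioned at the end is presumably the standard left-right polarization asymmetry; the notation below is an assumption for illustration, not taken from the paper.

    ```latex
    % Left-right polarization asymmetry of neutrino-electron scattering:
    \[
      A_{LR} = \frac{\sigma_{L} - \sigma_{R}}{\sigma_{L} + \sigma_{R}},
    \]
    % where sigma_L (sigma_R) is the cross section for high-energy
    % neutrinos colliding with left- (right-) handed polarized electrons.
    % In the parity-violating Standard Model A_LR is nonzero; restoration
    % of parity conservation at high energies would drive A_LR toward zero.
    ```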

    Measuring Inconsistency Methods for Evidentiary Value

    Many inconsistency analysis methods may be used to detect altered records or statements. But for admission as evidence, the reliability of the method has to be determined and measured. For example, in China, for evidence to be admitted, it has to have 95% certainty of being correct [1], and that certainty must be shown to the court, while in the US, evidence is admitted if it is more probative than prejudicial (a >50% standard) [2]. In either case, it is necessary to provide a measurement of some sort in order to pass muster under challenges from the other side. And in most cases, no such measurement has been undertaken. The question of how to undertake a scientific measurement to make such a determination, or at least to claim such a metric, is not well defined for digital forensics, but perhaps we can bring some light to the subject in this issue.
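
    One way such a measurement might be undertaken (a sketch under assumptions of our own, not a method proposed in the article): validate the inconsistency-detection method against a corpus of ground-truth cases and report a conservative confidence bound on its accuracy, which can then be compared against the 95% or >50% admissibility thresholds.

    ```python
    import math

    def wilson_lower_bound(successes: int, trials: int, z: float = 1.96) -> float:
        """Lower end of the Wilson score interval for a binomial proportion.

        Gives a conservative estimate of a method's true accuracy from
        validation results (z = 1.96 corresponds to 95% confidence).
        """
        if trials == 0:
            return 0.0
        p = successes / trials
        denom = 1 + z * z / trials
        centre = p + z * z / (2 * trials)
        margin = z * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials))
        return (centre - margin) / denom

    # Hypothetical validation run: detector correct on 970 of 1000 known cases.
    lb = wilson_lower_bound(970, 1000)
    print(f"accuracy lower bound: {lb:.3f}")    # ~0.957
    print("clears 95% threshold:", lb >= 0.95)  # True
    print("clears >50% threshold:", lb > 0.50)  # True
    ```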

    Measuring inconsistency in research ethics committee review

    Background: The review of human participant research by Research Ethics Committees (RECs) or Institutional Review Boards (IRBs) is a complex, multi-faceted process that cannot be reduced to an algorithm. However, this does not give RECs/IRBs permission to be inconsistent in their specific requirements to researchers or in their final opinions. In England the Health Research Authority (HRA) coordinates 67 committees, and has adopted a consistency improvement plan including a process called "Shared Ethical Debate" (ShED), where multiple committees review the same project. Committee reviews are compared for consistency by analysing the resulting minutes.
    Methods: We present a description of the ShED process. We report an analysis of minutes created by research ethics committees participating in two ShED exercises, and compare them to minutes produced in a published "mystery shopper" exercise. We propose a consistency score by defining top themes for each exercise and calculating, for each committee, the ratio between top themes and total themes identified in each ShED exercise.
    Results: Our analysis highlights qualitative differences between the ShED 19, ShED 20 and "mystery shopper" exercises. The quantitative measure of consistency showed only one committee across the three exercises with more than half its total themes as top themes (a ratio of 0.6). The average consistency scores for the three exercises were 0.23 (ShED 19), 0.35 (ShED 20) and 0.32 (mystery shopper). There is a statistically significant difference between the ShED 19 exercise and the ShED 20 and mystery shopper exercises.
    Conclusions: ShED exercises are effective in identifying inconsistency between ethics committees, and we describe a scoring method that could be used to quantify this. However, whilst a level of inconsistency is probably inevitable in research ethics committee reviews, studies must move beyond the ShED methodology to understand why inconsistency occurs and what an acceptable level of inconsistency might be.
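
    The proposed score is straightforward to compute from coded minutes. A minimal sketch, assuming themes are available as sets per committee and that "top themes" are simply the themes raised by the most committees (the paper defines top themes per exercise; that rule is an assumption here):

    ```python
    from collections import Counter

    def consistency_scores(committee_themes: dict[str, set[str]], top_k: int = 5) -> dict[str, float]:
        """Per-committee consistency score for one exercise.

        committee_themes maps a committee name to the set of themes it
        raised. The score is |committee's themes that are top themes|
        divided by |all themes the committee identified|.
        """
        counts = Counter(t for themes in committee_themes.values() for t in themes)
        top = {t for t, _ in counts.most_common(top_k)}
        return {
            name: len(themes & top) / len(themes) if themes else 0.0
            for name, themes in committee_themes.items()
        }

    # Toy exercise with three committees.
    exercise = {
        "REC A": {"consent form", "recruitment", "data storage", "insurance"},
        "REC B": {"consent form", "recruitment", "statistics"},
        "REC C": {"consent form", "insurance"},
    }
    print(consistency_scores(exercise, top_k=3))
    # e.g. {'REC A': 0.75, 'REC B': 0.67, 'REC C': 1.0}
    ```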

    Adversarial Sets for Regularising Neural Link Predictors

    In adversarial training, a set of models learn together by pursuing competing goals, usually defined on single data instances. However, in relational learning and other non-i.i.d. domains, goals can also be defined over sets of instances. For example, a link predictor for the is-a relation needs to be consistent with the transitivity property: if is-a(x_1, x_2) and is-a(x_2, x_3) hold, is-a(x_1, x_3) needs to hold as well. Here we use such assumptions for deriving an inconsistency loss, measuring the degree to which the model violates the assumptions on an adversarially-generated set of examples. The training objective is defined as a minimax problem, where an adversary finds the most offending adversarial examples by maximising the inconsistency loss, and the model is trained by jointly minimising a supervised loss and the inconsistency loss on the adversarial examples. This yields the first method that can use function-free Horn clauses (as in Datalog) to regularise any neural link predictor, with complexity independent of the domain size. We show that for several link prediction models, the optimisation problem faced by the adversary has efficient closed-form solutions. Experiments on link prediction benchmarks indicate that given suitable prior knowledge, our method can significantly improve neural link predictors on all relevant metrics.
    Comment: Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI), 2017
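
    A minimal NumPy sketch of the inconsistency loss and the inner maximisation (the DistMult-style scorer, the single transitivity clause, and the sampling-based adversary are illustrative assumptions of this sketch; the paper derives efficient closed-form solutions for the adversary instead):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 10
    r_isa = rng.normal(size=DIM)  # relation embedding for is-a (toy, untrained)

    def score(s: np.ndarray, o: np.ndarray, r: np.ndarray = r_isa) -> float:
        """DistMult-style truth degree of r(s, o), squashed to (0, 1)."""
        return 1.0 / (1.0 + np.exp(-np.sum(s * r * o)))

    def inconsistency(x1: np.ndarray, x2: np.ndarray, x3: np.ndarray) -> float:
        """Hinge violation of the clause is-a(x1,x3) :- is-a(x1,x2), is-a(x2,x3).

        Positive when the body is believed more strongly than the head.
        """
        body = min(score(x1, x2), score(x2, x3))
        head = score(x1, x3)
        return max(0.0, body - head)

    def adversary(n_samples: int = 1000) -> tuple[float, tuple]:
        """Sampling-based stand-in for the inner maximisation: search for
        the most offending (x1, x2, x3) over random unit-norm embeddings."""
        best, best_xs = 0.0, None
        for _ in range(n_samples):
            xs = [v / np.linalg.norm(v) for v in rng.normal(size=(3, DIM))]
            loss = inconsistency(*xs)
            if loss > best:
                best, best_xs = loss, tuple(xs)
        return best, best_xs

    worst_loss, worst_xs = adversary()
    print(f"max inconsistency found: {worst_loss:.3f}")
    # Training would add this loss (computed on the adversarial set) to the
    # supervised loss and minimise the sum w.r.t. the model parameters.
    ```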

    Measuring and repairing inconsistency in probabilistic knowledge bases

    In this paper we present a family of measures aimed at determining the amount of inconsistency in probabilistic knowledge bases. Our approach to measuring inconsistency is graded in the sense that we consider minimal adjustments in the degrees of certainty (i.e., probabilities in this paper) of the statements necessary to make the knowledge base consistent. The computation of the family of measures we present here, in as much as it yields an adjustment in the probability of each statement that restores consistency, provides the modeler with possible repairs of the knowledge base. The case example that motivates our work, and on which we test our approach, is the knowledge base of CADIAG-2, a well-known medical expert system.
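
    Measures of this family can be cast as optimisation problems. The following is a hedged sketch, not the paper's algorithm: find the smallest uniform adjustment epsilon such that shifting each statement's asserted probability by at most epsilon makes the base satisfiable by some distribution over possible worlds, encoded as a linear program.

    ```python
    import itertools
    import numpy as np
    from scipy.optimize import linprog

    atoms = ["a", "b"]
    worlds = list(itertools.product([False, True], repeat=len(atoms)))

    # Each statement: (predicate over a world, asserted probability).
    # This toy base is inconsistent: P(a) and P(not a) should sum to 1.
    statements = [
        (lambda w: w[0], 0.9),           # P(a) = 0.9
        (lambda w: not w[0], 0.3),       # P(not a) = 0.3
        (lambda w: w[0] and w[1], 0.5),  # P(a and b) = 0.5
    ]

    n_w = len(worlds)
    # Decision vector: [x_w for each world] + [epsilon].
    c = np.zeros(n_w + 1)
    c[-1] = 1.0  # minimise epsilon

    A_ub, b_ub = [], []
    for formula, p in statements:
        row = np.array([1.0 if formula(w) else 0.0 for w in worlds])
        A_ub.append(np.append(row, -1.0)); b_ub.append(p)    #  P(phi) - eps <= p
        A_ub.append(np.append(-row, -1.0)); b_ub.append(-p)  # -P(phi) - eps <= -p
    A_eq = [np.append(np.ones(n_w), 0.0)]  # probabilities sum to 1
    b_eq = [1.0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n_w + 1), method="highs")
    print(f"inconsistency measure (minimal epsilon): {res.x[-1]:.3f}")  # 0.100
    # The optimal x also yields repaired probabilities P*(phi_i) within
    # epsilon of each asserted p_i, i.e. a possible repair of the base.
    ```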
    • …