378,965 research outputs found

    Entanglement, joint measurement, and state reduction

    Entanglement is perhaps the most important new feature of the quantum world. It is expressed in quantum theory by the joint measurement formula (JMF). We prove the formula for self-adjoint observables from a plausible assumption, which for spacelike separated measurements is an expression of relativistic causality. State reduction is simply a way to express the JMF after one measurement has been made and its result is known.
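
    For orientation, the joint measurement formula in its standard textbook form is sketched below (the paper itself is not quoted here, so the projector notation is an assumption); state reduction then restates the same content via the Lüders rule.

```latex
% Standard-form sketch, notation assumed (not quoted from the paper).
% For commuting self-adjoint observables A, B with spectral projections P_a, Q_b
% and system state |psi>, the joint measurement formula reads
\[
  \Pr(A = a,\ B = b) \;=\; \langle \psi \,|\, P_a\, Q_b \,|\, \psi \rangle .
\]
% State reduction (the L\"uders rule) expresses the same content after A has
% been measured with result a:
\[
  |\psi_a\rangle \;=\; \frac{P_a |\psi\rangle}{\lVert P_a |\psi\rangle \rVert},
  \qquad
  \Pr(B = b \mid A = a) \;=\; \langle \psi_a \,|\, Q_b \,|\, \psi_a \rangle .
\]
```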

    Interpreting Permanent Shocks to Output When Aggregate Demand May Not be Neutral in the Long Run

    This paper studies Blanchard and Quah’s (1989) statistical model of permanent and transitory shocks to output using a set of arguably more plausible structural assumptions. Economists typically motivate this statistical model by assuming aggregate demand shocks have no long-run effect on the level of output. Many economic theories are, however, inconsistent with that assumption. We reinterpret this statistical model assuming that a positive shock to aggregate supply lowers the price level and in the long run raises output, while a positive shock to aggregate demand raises the price level. No assumption is made about the long-run output effect of aggregate demand. Based on these assumptions, we show that a puzzling finding from the empirical literature implies that a positive (negative) aggregate demand shock had a long-run positive (negative) effect on the level of output in a number of pre-World War I economies.
    Keywords: vector autoregression; identification restrictions; moving average representations; aggregate demand and supply theory; permanent and transitory shock decomposition
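
    For readers unfamiliar with the mechanics, the statistical model in question is the Blanchard-Quah long-run identification of a bivariate VAR. The sketch below is not the authors' code; the use of statsmodels, the lag length, and the variable ordering are assumptions. It shows how the structural impact matrix is recovered by forcing the long-run impact matrix to be lower triangular, which is exactly the restriction the paper reinterprets.

```python
# Illustrative sketch of the Blanchard-Quah (1989) long-run decomposition.
# Not the authors' code; data, lag length and variable ordering are assumed.
import numpy as np
from statsmodels.tsa.api import VAR

def blanchard_quah(data, lags=4):
    """data: T x 2 array, e.g. [output growth, second variable].
    Returns (B0, LR): the structural impact matrix and the long-run matrix,
    with the long-run matrix lower triangular (the 'demand' shock has no
    long-run effect on the first variable under the classic identification)."""
    res = VAR(data).fit(lags)
    k = data.shape[1]
    # A(1) = I - A_1 - ... - A_p ; C(1) = A(1)^{-1} is the long-run MA matrix.
    A1 = np.eye(k) - sum(res.coefs)            # res.coefs has shape (p, k, k)
    C1 = np.linalg.inv(A1)
    sigma = np.asarray(res.sigma_u)            # reduced-form residual covariance
    # Factor the long-run covariance of cumulated responses (lower triangular).
    long_run_chol = np.linalg.cholesky(C1 @ sigma @ C1.T)
    B0 = A1 @ long_run_chol                    # maps structural shocks to residuals
    return B0, C1 @ B0

# Hypothetical usage (variable names assumed):
# B0, LR = blanchard_quah(np.column_stack([d_log_output, unemployment]))
```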

    The political economy of growth and distribution: A theoretical critique

    This paper reconsiders the political economy approach to growth and distribution according to which (1) rising inequality induces more government redistribution; (2) more government redistribution is financed by higher distortionary taxation; and (3) higher distortionary taxes reduce economic growth. We present a variety of theoretical arguments demonstrating that all three propositions may be overturned by simply changing an assumption in a plausible way or adding a relevant real-world element to the basal models. The political economy models of growth and distribution, as well as the specific inequality-growth transmission channel they propose, must therefore be assessed as overly simplistic and inadequate with respect to the issues studied.
    Keywords: political economy; redistribution; inequality; economic growth

    Achieving mutual understanding in intercultural project partnerships : co-operation, self-orientation, and fragility

    Communication depends on cooperation in at least the following way: in order to be successful, communicative behavior needs to be adjusted to the general world knowledge, abilities, and interests of the hearer, and the hearer's success in figuring out the message and responding to it needs to be informed by assumptions about the communicator's informative intentions, personal goals, and communicative abilities. In other words, interlocutors cooperate by coordinating their actions in order to fulfill their communicative intentions. This minimal assumption about cooperativeness must in one way or another be built into the foundations of any plausible inferential model of human communication. However, the communication process is also influenced to a greater or lesser extent, whether intentionally and consciously or unintentionally and unconsciously, by the participants' orientation toward, or preoccupation with, their own concerns, so their behavior may easily fall short of being as cooperative as is required for achieving successful communication.

    Calibrating Knowledge Graphs

    A knowledge graph model represents a given knowledge graph as a number of vectors. These models are evaluated on several tasks, one of which is link prediction: predicting whether new edges are plausible when the model is provided with a partial edge. Calibration is a post-processing technique that aims to align the predictions of a model with a ground truth. The idea is to make a model more reliable by reducing its confidence for incorrect predictions (overconfidence) and increasing the confidence for correct predictions that are closer to the negative threshold (underconfidence). Calibration of knowledge graph models has previously been studied for the task of triple classification, which is different from link prediction, and under the closed-world assumption, that is, assuming that knowledge missing from the graph at hand is incorrect. However, knowledge graphs operate under the open-world assumption, in which it is unknown whether missing knowledge is correct or incorrect. In this thesis, we propose open-world calibration of knowledge graph models for link prediction. We rely on strategies to synthetically generate negatives that are expected to have different levels of semantic plausibility. Calibration thus consists of aligning the predictions of the model with these different semantic levels: nonsensical negatives should be farther away from a positive than semantically plausible negatives. We analyze several scenarios in which calibration based on the sigmoid function can lead to incorrect results when considering distance-based models. We also propose the Jensen-Shannon distance to measure the divergence of the predictions before and after calibration. Our experiments exploit several pre-trained models of nine algorithms over seven datasets. Our results show that many of these pre-trained models are properly calibrated without intervention under the closed-world assumption, but this is not the case under the open-world assumption. Furthermore, Brier scores (the mean squared error), computed before and after calibration, are generally lower under the closed-world assumption, and the divergence is higher when using open-world calibration. From these results, we gather that open-world calibration is a harder task than closed-world calibration. Finally, analyzing different measurements related to link prediction accuracy, we propose a combined loss function for calibration that maintains the accuracy of the model.
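
    As a rough illustration of the closed-world post-hoc calibration this thesis compares against, the minimal sketch below fits a sigmoid (Platt scaling) on held-out scores and reports the Brier score and the Jensen-Shannon distance before and after. This is not the thesis code: the synthetic scores, the Platt-scaling choice, and all variable names are assumptions.

```python
# Minimal sketch of sigmoid (Platt) calibration of link-prediction scores,
# with Brier score and Jensen-Shannon distance as evaluation measures.
# Illustrative only: synthetic scores stand in for a trained KG embedding model.
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.special import expit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(0)

# Synthetic raw scores for positive (true) and negative (corrupted) triples.
pos_scores = rng.normal(loc=2.0, scale=1.0, size=1000)
neg_scores = rng.normal(loc=-1.0, scale=1.5, size=1000)
scores = np.concatenate([pos_scores, neg_scores])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])

# Uncalibrated probabilities: push raw scores through a fixed sigmoid.
p_uncal = expit(scores)

# Platt scaling: logistic regression on the raw score as a single feature.
platt = LogisticRegression().fit(scores.reshape(-1, 1), labels)
p_cal = platt.predict_proba(scores.reshape(-1, 1))[:, 1]

print("Brier before:", brier_score_loss(labels, p_uncal))
print("Brier after: ", brier_score_loss(labels, p_cal))

# Jensen-Shannon distance between the prediction distributions before/after.
bins = np.linspace(0.0, 1.0, 21)
h_before, _ = np.histogram(p_uncal, bins=bins, density=True)
h_after, _ = np.histogram(p_cal, bins=bins, density=True)
print("JS distance:", jensenshannon(h_before + 1e-12, h_after + 1e-12))
```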

    A four-step strategy for handling missing outcome data in randomised trials affected by a pandemic.

    BACKGROUND: The coronavirus pandemic (Covid-19) presents a variety of challenges for ongoing clinical trials, including an inevitably higher rate of missing outcome data, with new and non-standard reasons for missingness. International drug trial guidelines recommend that trialists review plans for handling missing data in the conduct and statistical analysis, but clear recommendations are lacking.
    METHODS: We present a four-step strategy for handling missing outcome data in the analysis of randomised trials that are ongoing during a pandemic. We consider handling missing data arising due to (i) participant infection, (ii) treatment disruptions and (iii) loss to follow-up. We consider both settings where treatment effects for a 'pandemic-free world' and a 'world including a pandemic' are of interest.
    RESULTS: In any trial, investigators should: (1) clarify the treatment estimand of interest with respect to the occurrence of the pandemic; (2) establish what data are missing for the chosen estimand; (3) perform the primary analysis under the most plausible missing data assumptions; and (4) perform sensitivity analyses under alternative plausible assumptions. To obtain an estimate of the treatment effect in a 'pandemic-free world', participant data that are clinically affected by the pandemic (directly due to infection or indirectly via treatment disruptions) are not relevant and can be set to missing. For the primary analysis, a missing-at-random assumption that conditions on all observed data expected to be associated with both the outcome and missingness may be most plausible. For the treatment effect in the 'world including a pandemic', all participant data are relevant and should be included in the analysis. For the primary analysis, a missing-at-random assumption - potentially incorporating a pandemic time-period indicator and participant infection status - or a missing-not-at-random assumption allowing for a poorer response may be most relevant, depending on the setting. In all scenarios, sensitivity analysis under credible missing-not-at-random assumptions should be used to evaluate the robustness of the results. We highlight controlled multiple imputation as an accessible tool for conducting sensitivity analyses.
    CONCLUSIONS: Missing data problems will be exacerbated for trials active during the Covid-19 pandemic. This four-step strategy will facilitate clear thinking about the appropriate analysis for the relevant questions of interest.
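
    The controlled multiple imputation mentioned above can be illustrated with a delta-adjustment sensitivity analysis. The sketch below is not the paper's code: the linear-regression imputation model, the decision to shift only the active arm, the delta grid, and all variable names are assumptions made for illustration.

```python
# Minimal sketch of delta-adjusted (controlled) multiple imputation for a
# continuous trial outcome, pooled with Rubin's rules.
# Illustrative assumptions: outcome ~ baseline + arm with normal errors;
# delta shifts the active arm's imputed values to probe MNAR departures.
import numpy as np

rng = np.random.default_rng(1)

def delta_adjusted_mi(y, x, arm, delta, m=20):
    """y: outcome with np.nan for missing; x: baseline covariate; arm: 0/1.
    Returns the pooled treatment-effect estimate and its variance."""
    obs = ~np.isnan(y)
    X_obs = np.column_stack([np.ones(obs.sum()), x[obs], arm[obs]])
    beta, *_ = np.linalg.lstsq(X_obs, y[obs], rcond=None)
    resid_sd = np.std(y[obs] - X_obs @ beta, ddof=X_obs.shape[1])

    X_all = np.column_stack([np.ones(len(y)), x, arm])
    estimates, variances = [], []
    for _ in range(m):
        y_imp = y.copy()
        n_mis = (~obs).sum()
        # MAR draw from the fitted model, then shift the active arm by delta (MNAR part).
        shift = delta * (arm[~obs] == 1)
        y_imp[~obs] = X_all[~obs] @ beta + rng.normal(0, resid_sd, n_mis) + shift
        # Analysis model: simple difference in means between arms.
        diff = y_imp[arm == 1].mean() - y_imp[arm == 0].mean()
        var = y_imp[arm == 1].var(ddof=1) / (arm == 1).sum() + \
              y_imp[arm == 0].var(ddof=1) / (arm == 0).sum()
        estimates.append(diff)
        variances.append(var)

    q = np.mean(estimates)                        # pooled estimate
    total_var = np.mean(variances) + (1 + 1 / m) * np.var(estimates, ddof=1)
    return q, total_var

# Toy data: 200 participants, roughly 25% missing outcomes.
n = 200
arm = rng.integers(0, 2, n)
x = rng.normal(size=n)
y = 0.5 * x + 1.0 * arm + rng.normal(size=n)
y[rng.random(n) < 0.25] = np.nan

for delta in (-1.0, 0.0, 1.0):
    est, var = delta_adjusted_mi(y, x, arm, delta)
    print(f"delta={delta:+.1f}: effect={est:.2f}, SE={np.sqrt(var):.2f}")
```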

    Optimal Monitoring for project-based Emissions Trading Systems under incomplete Enforcement

    Project-based Emissions Trading Schemes, like the Clean Development Mechanism, are particularly prone to problems of asymmetric information between the project parties and the regulator. Given the specificities of these schemes, the regulator’s optimal monitoring strategy significantly differs from the one to be applied for cap-and-trade schemes or environmental taxes. In this paper, we extend the general framework on incomplete enforcement of policy instruments to reflect these specificities. The main focus of the analysis is to determine the regulator’s optimal spot-check frequency under the plausible assumption that the submitted projects vary with respect to their verifiability. We find that, given a limited monitoring budget, the optimal monitoring strategy is discontinuous, featuring a jump within the set of projects with lower verifiability. In this region, actual abatement is low and can fall to zero. For these cases, the sign of the slope of the strategy function depends on the actual relationship of the abatement cost and the penalty function. We conclude that, in a real-world context, project admission should ultimately be based on the criterion of verifiability.
    Keywords: environmental regulation; emissions trading systems; audits and compliance
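
    As a purely numerical toy illustration of the enforcement logic summarised above (not the paper's model; the functional forms, parameter values, and names are all assumptions), a project owner complies only if the expected penalty from a spot check exceeds the abatement cost saved, so a budget-constrained regulator must decide which projects to leave effectively undeterred.

```python
# Toy illustration of incomplete enforcement in a project-based ETS.
# All functional forms and numbers are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(2)
n_projects = 10

# Verifiability v in (0, 1]: probability that an audit detects over-reporting.
verifiability = np.sort(rng.uniform(0.1, 1.0, n_projects))
abatement_cost = rng.uniform(5.0, 20.0, n_projects)   # cost saved by not abating
penalty = 40.0                                          # fine if caught
budget = 4.0                                            # total audit probability mass

# A project complies only if audit_prob * verifiability * penalty >= abatement_cost.
# Minimum audit probability needed to induce compliance, capped at 1:
p_needed = np.minimum(abatement_cost / (verifiability * penalty), 1.0)

# Greedy allocation: deter the cheapest projects first until the budget runs out,
# which leaves low-verifiability projects unaudited and their abatement at zero.
order = np.argsort(p_needed)
audit_prob = np.zeros(n_projects)
remaining = budget
for i in order:
    spend = min(p_needed[i], remaining)
    audit_prob[i] = spend
    remaining -= spend

complies = audit_prob * verifiability * penalty >= abatement_cost
for i in range(n_projects):
    print(f"verifiability={verifiability[i]:.2f} audit_p={audit_prob[i]:.2f} "
          f"complies={bool(complies[i])}")
```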