
    The Effect of Biased Communications On Both Trusting and Suspicious Voters

    In recent studies of political decision-making, apparently anomalous behavior has been observed on the part of voters, in which negative information about a candidate strengthens, rather than weakens, a prior positive opinion about the candidate. This behavior appears to run counter to rational models of decision making, and it is sometimes interpreted as evidence of non-rational "motivated reasoning". We consider scenarios in which this effect arises in a model of rational decision making that includes the possibility of deceptive information. In particular, we consider a model with two classes of voters, which we call trusting voters and suspicious voters, and two types of information sources, which we call unbiased sources and biased sources. In our model, new data about a candidate can be efficiently incorporated by a trusting voter, and anomalous updates are impossible; however, anomalous updates can be made by suspicious voters, if the information source mistakenly plans for an audience of trusting voters, and if the partisan goals of the information source are known by the suspicious voter to be "opposite" to his own. Our model is based on a formalism introduced by the artificial intelligence community called "multi-agent influence diagrams", which generalize Bayesian networks to settings involving multiple agents with distinct goals.
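    The reversal described above can be illustrated with a two-line Bayes computation. This is a toy sketch, not the paper's multi-agent influence diagram machinery, and all probabilities below are illustrative assumptions: the key is that for a suspicious voter facing a known-opposite biased source, a negative report can be more likely when the candidate is good.

```python
def posterior_good(prior_good, p_neg_given_good, p_neg_given_bad):
    """Posterior that the candidate is good after a negative report."""
    num = p_neg_given_good * prior_good
    return num / (num + p_neg_given_bad * (1.0 - prior_good))

# Trusting voter, unbiased source: bad candidates draw negative reports
# more often, so a negative report lowers the prior of 0.7.
trusting = posterior_good(0.7, 0.2, 0.8)

# Suspicious voter, opposing biased source: the source attacks strong
# candidates, so a negative report is MORE likely when the candidate is
# good, and the prior of 0.7 rises instead of falling.
suspicious = posterior_good(0.7, 0.9, 0.5)
```

In the first call the belief drops below 0.7; in the second it rises above 0.7, reproducing the "anomalous" update without any departure from Bayesian rationality.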

    Generic Machine Learning Inference on Heterogeneous Treatment Effects in Randomized Experiments

    We propose strategies to estimate and make inference on key features of heterogeneous effects in randomized experiments. These key features include best linear predictors of the effects using machine learning proxies, average effects sorted by impact groups, and average characteristics of the most and least impacted units. The approach is valid in high-dimensional settings, where the effects are proxied by machine learning methods. We post-process these proxies into estimates of the key features. Our approach is generic: it can be used in conjunction with penalized methods, deep and shallow neural networks, canonical and new random forests, boosted trees, and ensemble methods. It does not rely on strong assumptions. In particular, we do not require conditions for consistency of the machine learning methods. Estimation and inference rely on repeated data splitting to avoid overfitting and achieve validity. For inference, we take medians of p-values and medians of confidence intervals resulting from many different data splits, and then adjust their nominal level to guarantee uniform validity. This variational inference method is shown to be uniformly valid and quantifies the uncertainty coming from both parameter estimation and data splitting. We illustrate the use of the approach with two randomized experiments in development economics, on the effects of microcredit and of nudges to stimulate immunization demand.
    Comment: 53 pages, 6 figures, 15 tables
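    The median-of-p-values aggregation over data splits can be sketched as follows. This is a minimal illustration, assuming a user-supplied `pvalue_fn`; the factor-of-two level adjustment used here is one standard way to make a median of dependent p-values valid, not necessarily the paper's exact adjustment.

```python
import random
import statistics

def split_adjusted_pvalue(pvalue_fn, data, n_splits=50, seed=0):
    """Aggregate p-values over repeated sample splits.

    pvalue_fn(auxiliary, main) is any procedure that fits ML proxies on
    one half of the data and returns a p-value computed on the other
    half.  Doubling the median over splits yields a valid p-value
    regardless of the dependence between splits: if the median is below
    a/2, at least half the p-values are, and a Markov-inequality bound
    on that count caps the probability at a under the null.
    """
    rng = random.Random(seed)
    pvals = []
    for _ in range(n_splits):
        d = list(data)
        rng.shuffle(d)
        half = len(d) // 2
        pvals.append(pvalue_fn(d[:half], d[half:]))
    return min(1.0, 2.0 * statistics.median(pvals))
```

The same median-then-adjust idea applies to the endpoints of split-specific confidence intervals.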

    Resolving the Raven Paradox: Simple Random Sampling, Stratified Random Sampling, and Inference to the Best Explanation

    Simple random sampling resolutions of the raven paradox relevantly diverge from scientific practice. We develop a stratified random sampling model, yielding a better fit and apparently rehabilitating simple random sampling as a legitimate idealization. However, neither accommodates a second concern, the objection from potential bias. We develop a third model that crucially invokes causal considerations, yielding a novel resolution that handles both concerns. This approach resembles Inference to the Best Explanation (IBE) and relates the generalization's confirmation to the confirmation of an associated law. We give it an objective Bayesian formalization and discuss the compatibility of Bayesianism and IBE.
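    The simple-random-sampling style of resolution can be made concrete with a toy Bayesian computation. This is illustrative only; the population numbers and the parametrization of the alternative hypothesis are assumptions, not the paper's model.

```python
def posterior_H(prior, r, N, f):
    """P(H | a uniformly sampled object turns out to be a black raven).

    H: 'all ravens are black'.  Under the alternative, only a fraction
    f < 1 of the r ravens (out of N objects) are black.  The likelihood
    ratio is 1/f > 1, so observing a black raven confirms H.
    """
    lik_H = r / N            # P(sample a black raven | H)
    lik_alt = f * r / N      # P(sample a black raven | not-H)
    num = lik_H * prior
    return num / (num + lik_alt * (1.0 - prior))
```

With `f = 1` the observation is uninformative and the posterior equals the prior; with `f < 1` the posterior exceeds the prior, which is the sampling-model sense in which instances confirm the generalization.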

    Statistical Inferences Using Large Estimated Covariances for Panel Data and Factor Models

    While most convergence results in the literature on high-dimensional covariance matrices concern the accuracy of estimating the covariance matrix (and the precision matrix), relatively little is known about the effect of estimating large covariances on statistical inference. We study two important models, factor analysis and the panel data model with interactive effects, and focus on statistical inference and estimation efficiency for structural parameters based on large covariance estimators. For efficient estimation, both models call for weighted principal components (WPC), which rely on a high-dimensional weight matrix. This paper derives an efficient and feasible WPC using the covariance matrix estimator of Fan et al. (2013). However, we demonstrate that existing results on large covariance estimation based on absolute convergence are not suitable for statistical inference on the structural parameters. What is needed is a form of weighted consistency and the associated rate of convergence, which are obtained in this paper. Finally, the proposed method is applied to US divorce rate data. We find that the efficient WPC identifies significant effects of divorce-law reforms on the divorce rate, and it provides more accurate estimates and tighter confidence intervals than existing methods.
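    One stylized reading of weighted principal components for a factor model can be sketched as follows. This is an assumption-laden illustration: factors are estimated from the weighted Gram matrix Y'WY, with the identity weight recovering ordinary PC; the paper's feasible WPC plugs in the specific covariance estimator of Fan et al. (2013), which is not reproduced here.

```python
import numpy as np

def wpc_factors(Y, Sigma_inv, k):
    """Estimate k factors by weighted principal components.

    Y: N x T panel (cross-section units by time periods).
    Sigma_inv: N x N weight matrix, e.g. an estimated inverse error
    covariance; Sigma_inv = I recovers ordinary principal components.
    Returns T x k factor estimates normalized so that F'F / T = I.
    """
    T = Y.shape[1]
    M = Y.T @ Sigma_inv @ Y              # T x T weighted Gram matrix
    vals, vecs = np.linalg.eigh(M)       # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]
    return np.sqrt(T) * top
```

Down-weighting noisy cross-sectional units through `Sigma_inv` is what delivers the efficiency gain over unweighted principal components.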

    Stable transports between stationary random measures

    We give an algorithm to construct a translation-invariant transport kernel between ergodic stationary random measures Φ and Ψ on ℝ^d, given that they have equal intensities. As a result, this yields a construction of a shift-coupling of an ergodic stationary random measure and its Palm version. The algorithm constructs the transport kernel in a deterministic manner given realizations φ and ψ of the measures. The (non-constructive) existence of such a transport kernel was proved in [8]. Our algorithm generalizes the work of [3], in which a construction is provided for the Lebesgue measure and an ergodic simple point process. In the general case, we limit ourselves to what we call constrained densities and transport kernels. We give a definition of stability of constrained densities and introduce our construction algorithm, inspired by the Gale-Shapley stable marriage algorithm. For stable constrained densities, we study existence, uniqueness, monotonicity with respect to the measures, and boundedness.
    Comment: In the second version, the presentation of the main results in Section 4 is changed; the main results and their proofs are not changed significantly. Section 3 and Subsection 4.6 are added. 25 pages and 2 figures
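    The Gale-Shapley deferred-acceptance idea that inspires the construction can be sketched in its classical discrete form. This is the plain stable marriage algorithm, not the paper's continuum transport-kernel construction.

```python
def gale_shapley(proposer_prefs, responder_prefs):
    """Proposer-optimal stable matching by deferred acceptance.

    proposer_prefs / responder_prefs: dicts mapping each agent to a full
    preference list over the other side (equal-sized sides assumed).
    """
    # rank[r][p] = position of proposer p in r's list (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in responder_prefs.items()}
    nxt = {p: 0 for p in proposer_prefs}   # next index to propose to
    engaged = {}                           # responder -> current proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][nxt[p]]
        nxt[p] += 1
        if r not in engaged:
            engaged[r] = p                 # r accepts the first proposal
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])        # r trades up; old partner freed
            engaged[r] = p
        else:
            free.append(p)                 # r rejects p; p proposes again
    return {p: r for r, p in engaged.items()}
```

The matching it returns has no blocking pair, which is the property the paper's stable constrained densities generalize to the measure-valued setting.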