
    Currency Unions and Trade: A PPML Re-Assessment with High-Dimensional Fixed Effects

Recent work on the effects of currency unions (CUs) on trade stresses the importance of using many countries and years in order to obtain reliable estimates. However, for large samples, computational issues associated with the three-way (exporter-time, importer-time, and country-pair) fixed effects currently recommended in the gravity literature have heretofore limited the choice of estimator, leaving an important methodological gap. To address this gap, we introduce an iterative Poisson Pseudo-Maximum Likelihood (PPML) estimation procedure that facilitates the inclusion of these fixed effects for large data sets and also allows for correlated errors across countries and time. When applied to a comprehensive sample with more than 200 countries trading over 65 years, these innovations flip the conclusions of an otherwise rigorously specified linear model. Most importantly, our estimates for both the overall CU effect and the Euro effect specifically are economically small and statistically insignificant. We also document that linear and PPML estimates of the Euro effect increasingly diverge as the sample size grows.
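    As a rough illustration of the three-way fixed-effects PPML specification described in this abstract (not the authors' iterative procedure), the sketch below fits a Poisson gravity model with explicit exporter-time, importer-time, and country-pair dummies via statsmodels. The DataFrame column names (exporter, importer, year, trade, cu) and the pair-level clustering are assumptions; the brute-force dummy approach is only workable on toy samples, which is exactly the computational limitation the paper addresses.

```python
import pandas as pd
import statsmodels.api as sm

def ppml_gravity(df):
    """Brute-force three-way fixed-effects PPML on a small trade panel.

    Columns assumed: exporter, importer, year, trade, cu (currency-union dummy).
    Explicit dummies are feasible only for toy samples; large samples require
    an iterative / high-dimensional fixed-effects approach."""
    df = df.copy()
    df["exp_time"] = df["exporter"].astype(str) + "_" + df["year"].astype(str)
    df["imp_time"] = df["importer"].astype(str) + "_" + df["year"].astype(str)
    df["pair"] = df["exporter"].astype(str) + "_" + df["importer"].astype(str)

    # Exporter-time, importer-time, and country-pair fixed effects as dummies.
    X = pd.get_dummies(df[["exp_time", "imp_time", "pair"]], drop_first=True)
    X = sm.add_constant(X.astype(float))
    X["cu"] = df["cu"].astype(float)

    # Poisson pseudo-maximum likelihood; errors clustered by country pair
    # (a simple stand-in for the richer error structure discussed in the paper).
    groups = pd.factorize(df["pair"])[0]
    model = sm.GLM(df["trade"], X, family=sm.families.Poisson())
    return model.fit(cov_type="cluster", cov_kwds={"groups": groups})

# result = ppml_gravity(df); print(result.params["cu"])
```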

    A geometric approach to visualization of variability in functional data

We propose a new method for the construction and visualization of boxplot-type displays for functional data. We use a recent functional data analysis framework, based on a representation of functions called square-root slope functions, to decompose observed variation in functional data into three main components: amplitude, phase, and vertical translation. We then construct separate displays for each component, using the geometry and metric of each representation space, based on a novel definition of the median, the two quartiles, and extreme observations. The outlyingness of functional data is a very complex concept. Thus, we propose to identify outliers based on any of the three main components after decomposition. We provide a variety of visualization tools for the proposed boxplot-type displays, including surface plots. We evaluate the proposed method using extensive simulations and then focus our attention on three real data applications, including exploratory data analysis of sea surface temperature functions, electrocardiogram functions, and growth curves.
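    A minimal numpy sketch of the square-root slope function (SRSF) transform that underlies this amplitude-phase decomposition, q(t) = sign(f'(t)) sqrt(|f'(t)|). The finite-difference derivative and the toy warped-sine example are illustrative assumptions; the paper's full boxplot construction (component-wise medians, quartiles, and outlier rules) is not reproduced here.

```python
import numpy as np

def srsf(f, t):
    """Square-root slope function q(t) = sign(f'(t)) * sqrt(|f'(t)|),
    estimated here with a simple finite-difference derivative."""
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

# Toy example: f2 is a phase-warped copy of f1, so most of their difference
# is phase variation rather than amplitude variation.
t = np.linspace(0.0, 1.0, 200)
f1 = np.sin(2 * np.pi * t)
f2 = np.sin(2 * np.pi * t**1.3)
q1, q2 = srsf(f1, t), srsf(f2, t)
# Unaligned L2 distance between the SRSFs (a crude discrete approximation).
print(np.sqrt(np.sum((q1 - q2) ** 2) * (t[1] - t[0])))
```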

    Generalized methods and solvers for noise removal from piecewise constant signals. I. Background theory

Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage, and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited to the task. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional that is minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following, and coordinate descent. In the second paper (part II), we introduce novel PWC denoising methods and present comparisons between these methods on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play.
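    As one concrete instance of the solver families listed in this abstract, the hedged sketch below implements iterated running medians with scipy. The kernel width, tolerance, and toy step signal are illustrative choices rather than settings from the paper.

```python
import numpy as np
from scipy.signal import medfilt

def iterated_running_median(y, kernel_size=7, max_iter=100, tol=1e-8):
    """Repeatedly apply a running median until the output stops changing.
    Kernel width, iteration cap, and tolerance are illustrative choices."""
    m = np.asarray(y, dtype=float)
    for _ in range(max_iter):
        m_new = medfilt(m, kernel_size=kernel_size)
        if np.max(np.abs(m_new - m)) < tol:
            break
        m = m_new
    return m

# Toy example: a noisy two-level step signal.
rng = np.random.default_rng(0)
x = np.r_[np.zeros(100), np.ones(100)] + 0.1 * rng.standard_normal(200)
print(iterated_running_median(x)[95:105].round(2))
```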

Finite-Size Scaling in the Energy-Entropy Plane for the 2D ±J Ising Spin Glass

For $L \times L$ square lattices with $L \le 20$, the 2D Ising spin glass with +1 and -1 bonds is found to have a strong correlation between the energy and the entropy of its ground states. A fit to the data gives the result that each additional broken bond in the ground state of a particular sample of random bonds increases the ground state degeneracy by approximately a factor of 10/3. For $x = 0.5$ (where $x$ is the fraction of negative bonds), over this range of $L$, the characteristic entropy defined by the energy-entropy correlation scales with size as $L^{1.78(2)}$. Anomalous scaling is not found for the characteristic energy, which essentially scales as $L^2$. When $x = 0.25$, a crossover to $L^2$ scaling of the entropy is seen near $L = 12$. The results found here suggest a natural mechanism for the unusual behavior of the low temperature specific heat of this model, and illustrate the dangers of extrapolating from small $L$.
    Comment: 9 pages, two-column format; to appear in J. Statistical Physics
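    For readers unfamiliar with the model, the sketch below writes down the ±J Hamiltonian $E = -\sum_{\langle ij \rangle} J_{ij} s_i s_j$ on an $L \times L$ torus with a fraction $x$ of negative bonds. It only evaluates the energy of an arbitrary spin configuration; it does not attempt the ground-state enumeration the paper's results rest on, and the helper names and lattice size are illustrative.

```python
import numpy as np

def random_pm_j_bonds(L, x, rng):
    """Nearest-neighbour bonds J = +1/-1 on an L x L torus, with a fraction x
    of negative bonds (separate horizontal and vertical bond sets)."""
    jh = rng.choice([1, -1], size=(L, L), p=[1 - x, x])
    jv = rng.choice([1, -1], size=(L, L), p=[1 - x, x])
    return jh, jv

def energy(spins, jh, jv):
    """E = -sum over nearest-neighbour pairs of J_ij * s_i * s_j,
    with periodic boundary conditions."""
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    return -np.sum(jh * spins * right) - np.sum(jv * spins * down)

# One random bond sample at x = 0.5 and an arbitrary spin configuration.
rng = np.random.default_rng(0)
L = 8
jh, jv = random_pm_j_bonds(L, 0.5, rng)
spins = rng.choice([1, -1], size=(L, L))
print(energy(spins, jh, jv))
```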

    Comparison of Tukey's T-Method and Scheffé's S-Method for Various Numbers of All Possible Differences of Averages Contrasts Under Violation of Assumptions

Empirical .05 and .01 rates of Type I error were compared for the Tukey and Scheffé multiple comparison techniques. The experimentwise error rate was defined over five sets of the 25 possible differences-of-averages contrasts. The robustness of the Tukey and Scheffé statistics was related not only to the type of assumption violation, but also to the sets containing different numbers of contrasts. The Tukey method could be judged to be as robust a statistic as the Scheffé method.
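    A hedged sketch of how an empirical experimentwise Type I error rate can be estimated by Monte Carlo simulation, here using Tukey's method over all pairwise comparisons via statsmodels. The group count, sample sizes, normal errors, and replication count are assumptions; the paper's differences-of-averages contrast sets, Scheffé comparisons, and assumption violations are not reproduced.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def empirical_fwer(k=5, n=10, alpha=0.05, n_sim=1000, seed=0):
    """Monte Carlo estimate of the experimentwise Type I error rate of
    Tukey's method over all pairwise comparisons when H0 is true
    (all group means equal, normal errors).  All settings are illustrative."""
    rng = np.random.default_rng(seed)
    groups = np.repeat(np.arange(k), n)
    false_rejections = 0
    for _ in range(n_sim):
        y = rng.standard_normal(k * n)       # every group mean is zero
        res = pairwise_tukeyhsd(y, groups, alpha=alpha)
        false_rejections += bool(res.reject.any())
    return false_rejections / n_sim

# print(empirical_fwer())  # should land near alpha under normality
```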

    Towards Machine Wald

The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed by humans because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to think as humans do when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models has yet to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite dimensional, whereas calculus on a computer is necessarily discrete and finite. To this end, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification, and Information-Based Complexity.
    Comment: 37 pages
