
    A statistical equilibrium approach to the distribution of profit rates

    Motivated by classical political economy, we detail a probabilistic, “statistical equilibrium” approach to explaining why, even in equilibrium, the equalization of profit rates leads to a non-degenerate distribution. Based on this approach we investigate the empirical content of the profit rate distribution for previously unexamined annual firm-level data comprising over 24,000 publicly listed North American firms for the period 1962-2014. We find strong evidence for a structural organization and equalization of profit rates on a relatively short time scale, both at the economy-wide level and at the one- and two-digit SIC industry levels, into a Laplace or double exponential distribution. We show that the statistical equilibrium approach is consistent with economic theorizing about profit rates and discuss research questions emerging from this novel look at profit rate distributions. We also highlight the applicability of the underlying principle of maximum entropy for inference in a wide range of economic topics.
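
    For reference, the Laplace (double exponential) form reported here is the maximum entropy distribution on the real line when, for a given location \mu, only the mean absolute deviation is constrained; a standard sketch of this result in LaTeX (our notation, not the paper's):

        \max_{f} \; -\int_{-\infty}^{\infty} f(x)\,\ln f(x)\,dx
        \quad \text{subject to} \quad \int f(x)\,dx = 1, \qquad \int |x - \mu|\, f(x)\,dx = b,

        \Longrightarrow \quad f(x) = \frac{1}{2b}\,\exp\!\left( -\frac{|x - \mu|}{b} \right).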

    A comprehensive empirical power comparison of univariate goodness-of-fit tests for the Laplace distribution

    In this paper, we do a comprehensive survey of all univariate goodness-of-fit tests that we could find in the literature for the Laplace distribution, which amounts to a total of 45 different test statistics. After eliminating duplicates and considering parameters that yield the best power for each test, we obtain a total of 38 different test statistics. An empirical power comparison study of unmatched size is then conducted using Monte Carlo simulations, with 400 alternatives spanning over 20 families of distributions, for various sample sizes and confidence levels. A discussion of the results follows, where the best tests are selected for different classes of alternatives. A similar study was conducted for the normal distribution in Rom\~ao et al. (2010), although on a smaller scale. Our work improves significantly on Puig & Stephens (2000), which was previously the best-known reference of this kind for the Laplace distribution. All test statistics and alternatives considered here are integrated within the PoweR package for the R software.Comment: 37 pages, 1 figure, 20 table
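
    As a concrete illustration of such a power study, here is a minimal base-R sketch. It uses a Lilliefors-type Kolmogorov-Smirnov statistic with maximum likelihood plug-in estimates (sample median and mean absolute deviation), not any particular statistic from the paper; the sample size, replication count, and level are illustrative assumptions. Because the statistic standardizes by the estimated location and scale, its null distribution is parameter-free, so a single simulated critical value suffices.

        # Laplace cumulative distribution function
        plaplace <- function(x, m = 0, b = 1) {
          z <- (x - m) / b
          ifelse(z < 0, 0.5 * exp(z), 1 - 0.5 * exp(-z))
        }
        # Lilliefors-type KS statistic with Laplace ML plug-in estimates
        ks_laplace <- function(x) {
          m <- median(x)               # ML estimate of location
          b <- mean(abs(x - m))        # ML estimate of scale
          u <- sort(plaplace(x, m, b))
          n <- length(x)
          max(pmax(seq_len(n) / n - u, u - (seq_len(n) - 1) / n))
        }
        set.seed(1)
        n <- 50; B <- 10000
        # Null samples: the difference of two Exp(1) draws is Laplace(0, 1)
        null_stats <- replicate(B, ks_laplace(rexp(n) - rexp(n)))
        crit <- quantile(null_stats, 0.95)  # 5%-level critical value
        # Estimated power against a standard normal alternative
        mean(replicate(B, ks_laplace(rnorm(n))) > crit)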

    PoweR: A Reproducible Research Tool to Ease Monte Carlo Power Simulation Studies for Goodness-of-fit Tests in R

    The PoweR package aims to help obtain or verify empirical power studies for goodness-of-fit tests for independent and identically distributed data. The current version of our package is only valid for simple null hypotheses, or for pivotal test statistics whose critical values do not depend on the particular choice of null distribution (or on nuisance parameters) in the composite null case. We also assume that the distribution of the test statistic is continuous. As a reproducible research computational tool, it can be viewed as helping to reproduce (or detect errors in) simulation results already published in the literature. The package also helps in designing new simulation studies. The empirical levels and powers for many statistical test statistics under a wide variety of alternative distributions can be obtained quickly and accurately using a C/C++ and R environment. The parallel package can be used to parallelize computations when a multicore processor is available. The results can be displayed using LaTeX tables or specialized graphs, which can be directly incorporated into a report. This article gives an overview of the main design aims and principles of our package, as well as strategies for adaptation and extension. Hands-on illustrations are presented to help new users get started.
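
    The parallelization mentioned above can be sketched with the parallel package alone. This reuses the hypothetical ks_laplace() statistic from the previous sketch rather than PoweR's own interface; mclapply() forks processes, so it runs on POSIX systems (on Windows one would build a cluster and use parLapply() instead).

        library(parallel)
        n <- 50; B <- 10000
        cores <- max(1, detectCores() - 1)
        RNGkind("L'Ecuyer-CMRG")          # reproducible parallel RNG streams
        set.seed(1)
        null_stats <- unlist(mclapply(seq_len(B), function(i) {
          ks_laplace(rexp(n) - rexp(n))   # one null replication per task
        }, mc.cores = cores))
        crit <- quantile(null_stats, 0.95)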

    Back to the Future: Economic Self-Organisation and Maximum Entropy Prediction

    This paper shows that signal restoration methodology is appropriate for predicting the equilibrium state of certain economic systems. A formal justification is provided by proving the existence of finite improvement paths in object allocation problems under weak assumptions on preferences, linking any initial condition to a Nash equilibrium. Because a finite improvement path is made up of a sequence of systematic best responses, movement backwards from the equilibrium to the initial condition can be treated as the realisation of a noise process. This underpins the use of signal restoration to predict the equilibrium from the initial condition, and an illustration is provided through an application of maximum entropy signal restoration to the Schelling model of segregation.
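
    To make the improvement-path idea concrete, here is a minimal sketch of the classical Schelling dynamic in R: unhappy agents relocate, one improving move at a time, until no agent wants to move (a Nash equilibrium). The grid size, tolerance, and vacancy share are illustrative assumptions, and the paper's maximum entropy signal restoration step is not reproduced here.

        set.seed(42)
        side <- 15; tol <- 0.5            # small grid keeps this unoptimized sketch quick
        grid <- matrix(sample(c(-1, 1, 0), side^2, replace = TRUE,
                              prob = c(0.45, 0.45, 0.10)), side, side)
        # Share of like-type agents among occupied neighbours of cell (i, j),
        # evaluated for an agent of the given type (works for vacant cells too)
        frac_like <- function(g, i, j, type) {
          nb <- as.vector(g[max(1, i - 1):min(side, i + 1),
                            max(1, j - 1):min(side, j + 1)])
          same <- sum(nb == type) - (g[i, j] == type)   # drop the cell itself
          occ  <- sum(nb != 0)    - (g[i, j] != 0)
          if (occ == 0) 1 else same / occ
        }
        for (step in 1:20000) {
          agents <- which(grid != 0, arr.ind = TRUE)
          bad <- which(apply(agents, 1, function(p)
            frac_like(grid, p[1], p[2], grid[p[1], p[2]]) < tol))
          if (length(bad) == 0) break     # every agent content: equilibrium reached
          p  <- agents[bad[sample(length(bad), 1)], ]
          ty <- grid[p[1], p[2]]
          vac <- which(grid == 0, arr.ind = TRUE)
          ok  <- which(apply(vac, 1, function(q) frac_like(grid, q[1], q[2], ty) >= tol))
          if (length(ok) == 0) next       # no improving move for this agent
          q <- vac[ok[sample(length(ok), 1)], ]
          grid[p[1], p[2]] <- 0; grid[q[1], q[2]] <- ty  # one improvement step
        }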

    Sparse Model Selection using Information Complexity

    This dissertation studies the application of information complexity to statistical model selection through three different projects. Specifically, we design statistical models that incorporate sparsity features to make the models more explanatory and computationally efficient. In the first project, we propose a Sparse Bridge Regression model for variable selection when the number of variables is much greater than the number of observations and model misspecification occurs. The model is demonstrated to have excellent explanatory power in high-dimensional data analysis through numerical simulations and real-world data analysis. The second project proposes a novel hybrid modeling method that utilizes a mixture of sparse principal component regression (MIX-SPCR) to segment high-dimensional time series data. Using the MIX-SPCR model, we empirically analyze S&P 500 index data (from 1999 to 2019) and identify two key change points. The third project investigates the use of nonlinear features in the Sparse Kernel Factor Analysis (SKFA) method to derive the information criterion. Using a variety of wide datasets, we demonstrate the benefits of SKFA in the nonlinear representation and classification of data. The results obtained show the flexibility and utility of information complexity in such data modeling problems.
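
    For orientation, the classical bridge estimator (Frank and Friedman, 1993), of which the proposed sparse variant is presumably a refinement (the dissertation's exact objective may differ), penalizes coefficients through an \ell_\gamma-type term:

        \hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \; \| y - X\beta \|_2^2 + \lambda \sum_{j=1}^{p} |\beta_j|^{\gamma}, \qquad \lambda > 0, \; 0 < \gamma \le 1,

    with \gamma = 1 recovering the lasso and smaller \gamma enforcing sparser solutions.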

    The PoweR Package: A Reproducible Research Tool to Facilitate Power Calculations for Certain Hypothesis Tests via Monte Carlo Simulations

    The PoweR package aims to facilitate obtaining or verifying empirical power studies for goodness-of-fit tests. As such, it can be seen as a reproducible research computational tool, because it becomes very easy to reproduce (or detect errors in) simulation results already published in the literature. Using our package, it also becomes easy to design new simulation studies. The empirical levels and powers of many test statistics under a wide variety of alternative distributions are obtained quickly and accurately using a C/C++ and R environment. One can rely on the snow package of R to parallelize computations on a multicore processor. The results can be displayed using LaTeX tables or specialized graphs, which can be incorporated directly into publications. This paper gives an overview of the main design aims and principles, as well as strategies for adaptation and extension. Hands-on illustrations are presented to get new users started easily.

    Connecting Minds: On the Role of Metaknowledge in Knowledge Coordination

    Knowledge coordination, that is, the process of locating, transferring, and integrating the specialized knowledge of multiple individuals, is a critical prerequisite for organizations to make fuller use of one of their most important resources: the knowledge of their employees. Yet knowledge coordination is as challenging as it is important. This dissertation aims to further our understanding of how groups and larger collectives process information and integrate their knowledge, and what factors influence the social interactions at the core of this process. The three empirical studies contained in this dissertation examine the role of individuals’ metaknowledge (the knowledge of who knows what) in knowledge coordination processes. Findings from the first two studies indicate that individuals who have an above-average level of metaknowledge can play a critical role in catalysing information processing and decision making in teams, as well as in helping to integrate knowledge between organizational groups. The third study furthermore elucidates the role of formal rank in shaping the informal organizational networks through which employees seek knowledge as well as metaknowledge. The findings presented in this dissertation contribute to research on group cognition, knowledge integration within and between groups, and intra-organizational networks. Most importantly, together these studies underscore the importance of taking into account differences in individuals’ metaknowledge in creating a better understanding of knowledge coordination in organizations.

    Evaluating the Influence of Musical and Monetary Rewards on Decision Making through Computational Modelling

    A central question in behavioural neuroscience is how different rewards modulate learning. While the role of monetary rewards is well studied in decision-making research, the influence of abstract rewards like music remains poorly understood. This study investigated the dissociable effects of these two reward types on decision making. Forty participants completed two decision-making tasks, each characterised by probabilistic associations between stimuli and rewards, with probabilities changing over time to reflect environmental volatility. In each task, choices were reinforced either by monetary outcomes (win/lose) or by the endings of musical melodies (consonant/dissonant). We applied the Hierarchical Gaussian Filter, a validated hierarchical Bayesian framework, to model learning under these two conditions. Bayesian statistics provided evidence for similar learning patterns across both reward types, suggesting similar adaptability across individuals. However, within the musical task, individual preferences for consonance over dissonance explained some aspects of learning. Specifically, correlation analyses indicated that participants more tolerant of dissonance behaved more stochastically in their belief-to-response mappings and were less likely to choose the response associated with the current prediction for a consonant ending, driven by higher volatility estimates. By contrast, participants averse to dissonance showed increased tonic volatility, leading to larger updates in reward tendency beliefs.
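
    The task structure described (two options, reward probabilities reversing over time) can be sketched compactly. Since the Hierarchical Gaussian Filter's update equations are beyond a short example, a simple Rescorla-Wagner learner with a softmax choice rule stands in for it below; all parameter values (block length, learning rate, inverse temperature) are illustrative assumptions.

        set.seed(7)
        n_trials <- 240; block <- 60
        # Reward probability of option 1 reverses every `block` trials (volatility)
        p1 <- rep(rep(c(0.8, 0.2), each = block), length.out = n_trials)
        alpha <- 0.3                       # learning rate (assumed)
        beta  <- 5                         # softmax inverse temperature (assumed)
        v <- c(0.5, 0.5)                   # value estimates for the two options
        choices <- rewards <- numeric(n_trials)
        for (t in seq_len(n_trials)) {
          p_choose1 <- 1 / (1 + exp(-beta * (v[1] - v[2])))  # softmax, two options
          a <- if (runif(1) < p_choose1) 1 else 2
          p_r <- if (a == 1) p1[t] else 1 - p1[t]
          r <- as.numeric(runif(1) < p_r)                    # probabilistic outcome
          v[a] <- v[a] + alpha * (r - v[a])                  # prediction-error update
          choices[t] <- a; rewards[t] <- r
        }
        mean(rewards)                      # hit rate of the simulated learner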