    Monsanto Lecture: The Complicated Business of State Supreme Court Elections: An Empirical Perspective

    Proponents of judicial elections and related campaign activities emphasize existing First Amendment jurisprudence as well as similarities linking publicly elected state judges and other publicly elected state officials. Opponents focus on judicial campaign contributions’ corrosive effects, including their potential to unduly influence judicial outcomes. Using a comprehensive data set of 2,345 business-related cases decided by state supreme courts across all fifty states between 2010 and 2012, judicial election critics, including Professor Joanna Shepherd, emphasize the potential for bias and find that campaign contributions from business sources to state supreme court judicial candidates corresponded with candidates’ pro-business votes as justices. While Shepherd’s main findings generally replicate, additional (and alternative) analyses introduce new findings that raise complicating wrinkles for Shepherd’s strong normative claims. Findings from this study illustrate that efforts to influence judicial outcomes are not the exclusive domain of business interests. That is, judicial campaign contributions from non- (and anti-) business interests increase the probability of justices’ votes favoring non-business interests. As a result, critiques of judicial elections cannot properly rely exclusively on the influence of business interests. Moreover, that both business and non-business interests can successfully influence judicial outcomes through campaign contributions points in different (and possibly conflicting) normative directions. On the one hand, even if one agrees that the judicial branch qualitatively differs from the legislative and executive branches in terms of assessing campaign contributions’ proper role, the fact that the potential to influence judicial outcomes is available to any interest group (willing to invest campaign contributions) complicates popular critiques of judicial elections. On the other hand, the same empirical findings also plausibly strengthen critiques of judicial elections, especially for those who view the judicial domain differently than other political domains.
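
    As a rough illustration of the kind of vote-level analysis this abstract describes, the sketch below fits a logistic regression of a justice's pro-business vote on business and non-business campaign contributions. It is not the study's actual specification; the data file, column names, and controls are all hypothetical.

```python
# Hypothetical sketch: logit of a justice's pro-business vote on campaign
# contributions. File and column names are illustrative, not the study's.
import pandas as pd
import statsmodels.formula.api as smf

votes = pd.read_csv("justice_votes.csv")  # one row per justice-case vote

model = smf.logit(
    "pro_business_vote ~ log_business_contribs"
    " + log_nonbusiness_contribs + elected + C(state)",
    data=votes,
).fit()
print(model.summary())
```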

    Granular fuzzy models: a study in knowledge management in fuzzy modeling

    In system modeling, knowledge management comes vividly into the picture when dealing with a collection of individual models. These models, considered as sources of knowledge, are engaged in a collective pursuit of collaborative development to establish modeling outcomes of global character. The result comes in the form of a so-called granular fuzzy model, which directly reflects upon and quantifies the diversity of the available sources of knowledge (local models) involved in knowledge management. In this study, several detailed algorithmic schemes are presented along with related computational aspects associated with Granular Computing. It is also shown how the construction of information granules, completed through the use of the principle of justifiable granularity, becomes advantageous in the realization of granular fuzzy models and a quantification of the quality (specificity) of the modeling results. We focus on the design of granular fuzzy models in which the locally available models are fuzzy rule-based. The model is quantified in terms of two conflicting criteria: (a) a coverage criterion expressing the extent to which the resulting information granules “cover” (include) the data, and (b) a specificity criterion articulating how detailed (specific) the obtained information granules are. The overall quality of the granular model is also assessed by determining the area under the curve (AUC), where the curve is formed in the coverage-specificity coordinates. Numeric results are discussed with the intent of displaying the most essential features of the proposed methodology and algorithmic developments.
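
    The coverage and specificity criteria, and the AUC formed in coverage-specificity coordinates, can be made concrete with a small numeric sketch. The definitions below (coverage as the fraction of data inside an interval granule, specificity as one minus the interval's relative length) are common choices in the granular computing literature and are assumptions here; the paper's exact formulations may differ.

```python
# Minimal numeric sketch of coverage, specificity, and the AUC traced in
# coverage-specificity coordinates as an interval granule [a, b] widens.
import numpy as np

def coverage(data, a, b):
    """Fraction of data points falling inside the interval [a, b]."""
    return np.mean((data >= a) & (data <= b))

def specificity(a, b, lo, hi):
    """1 minus the interval's relative length over the data range [lo, hi]."""
    return 1.0 - (b - a) / (hi - lo)

rng = np.random.default_rng(0)
data = rng.normal(size=500)
lo, hi = data.min(), data.max()
med = np.median(data)

# Sweep symmetric intervals around the median; wider granules gain coverage
# but lose specificity, tracing the curve whose area summarizes quality.
half_widths = np.linspace(0.0, (hi - lo) / 2, 100)
cov = np.array([coverage(data, med - w, med + w) for w in half_widths])
spec = np.array([specificity(med - w, med + w, lo, hi) for w in half_widths])

order = np.argsort(cov)
c, s = cov[order], spec[order]
auc = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(c))  # trapezoidal rule
print(f"AUC in coverage-specificity coordinates: {auc:.3f}")
```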

    Essays in Firm Dynamics, Ownership and Aggregate Effects

    Administrative registers maintained by statistical offices on vastly heterogeneous firms have much untapped potential to reveal details on the sources of productivity of firms and economies alike. It has been proposed that firm-level shocks can go a long way in explaining aggregate fluctuations. Based on novel monthly-frequency data, idiosyncratic shocks are able to explain a sizable share of Finnish economic fluctuations, providing support to the granular hypothesis. The global financial crisis of 2007-2008 has challenged the field of economic forecasting, and nowcasting has become an active field. This thesis shows that the information content of firm-level sales and truck traffic can be used for nowcasting GDP figures, using a specific mixture of machine learning algorithms. The agency problem lies at the heart of much of economic theory. Based on a unique dataset linking owners, CEOs, and firms, and exploiting plausibly exogenous variation in the separation of ownership and control, agency costs appear to be an important determinant of firm productivity. Furthermore, the effect appears strongest in medium-sized firms. Enterprise group structures may have important implications for the voluminous literature on firm size, as a large share of SME employment can be attributed to affiliates of large business groups. Within-firm variation suggests that enterprise group affiliation has heterogeneous impacts depending on size, with a strong positive impact on the productivity of small firms and a negative impact on their growth. In terms of aggregate job creation, independent small firms are found to have contributed the most. The results in this thesis underline the benefits of paying attention to samples encompassing the total population of firms. Researchers should continue to explore the potential of rich administrative data sources at statistical offices and strive to strengthen ties with data producers.
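
    The granular hypothesis the abstract invokes is usually operationalised through a "granular residual": the sum of firms' idiosyncratic shocks weighted by their lagged sales shares. A schematic sketch, with a hypothetical firm-month panel and column names, might look as follows; the thesis's actual construction may differ.

```python
# Schematic sketch (not the thesis's code) of a granular residual:
# sales-share-weighted idiosyncratic firm shocks, aggregated by month.
import pandas as pd

# Hypothetical panel with columns: firm, month, sales, growth
firms = pd.read_csv("firm_panel.csv").sort_values(["firm", "month"])

# Idiosyncratic shock: a firm's growth minus the average growth that month.
firms["shock"] = firms["growth"] - firms.groupby("month")["growth"].transform("mean")

# Weight each shock by the firm's lagged share of total sales.
firms["lag_sales"] = firms.groupby("firm")["sales"].shift(1)
firms["lag_share"] = firms["lag_sales"] / firms.groupby("month")["lag_sales"].transform("sum")

granular_residual = (firms["lag_share"] * firms["shock"]).groupby(firms["month"]).sum()
print(granular_residual.head())  # can then be related to aggregate output growth
```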

    Simultaneous Learning of Fuzzy Sets

    We extend a procedure based on support vector clustering, devoted to inferring the membership function of a fuzzy set, to the case of a universe of discourse over which several fuzzy sets are defined. The extended approach learns these sets simultaneously without requiring prior knowledge of either their number or labels approximating membership values. This data-driven approach is complemented by the incorporation of expert knowledge in the form of predefined shapes for the membership functions. The procedure is successfully tested on a benchmark.
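
    The final step, incorporating expert knowledge as predefined membership-function shapes, can be sketched as a curve-fitting problem. The snippet below fits a Gaussian shape (an assumed choice; the paper may use other shapes) to hypothetical data-driven membership estimates.

```python
# Toy sketch: fit a predefined membership-function shape (here Gaussian)
# to membership estimates produced by a clustering stage (simulated below).
import numpy as np
from scipy.optimize import curve_fit

def gaussian_mf(x, c, sigma):
    """Gaussian membership function with center c and width sigma."""
    return np.exp(-((x - c) ** 2) / (2 * sigma**2))

# Hypothetical noisy membership estimates over the universe of discourse.
x = np.linspace(0, 10, 50)
mu_est = gaussian_mf(x, 4.0, 1.5) + np.random.default_rng(1).normal(0, 0.05, 50)
mu_est = np.clip(mu_est, 0.0, 1.0)

(c, sigma), _ = curve_fit(gaussian_mf, x, mu_est, p0=[5.0, 1.0])
print(f"fitted center={c:.2f}, width={sigma:.2f}")
```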

    Exploring clinical phenotypes of open-angle glaucoma and their significance in practice

    There are several enduring questions regarding the differentiation of clinical phenotypes of glaucoma, from which clinicians may derive clinical meaning directed towards patient management and prognostication. This thesis addresses the following issues relating to distinguishing clinical phenotypes of glaucoma: evaluating the impact of changing visual field test density on macular structure-function relationships to identify centrally-involving glaucoma phenotypes; and identifying quantitative structural and functional clinical parameters that may distinguish between intraocular pressure (IOP) defined glaucoma phenotypes. Two studies were undertaken to examine clinical phenotypes of glaucoma. The first study utilised a systematic approach to assess the impact of test point density in macular visual field (VF) testing on structure-function concordance for identifying centrally-involving glaucoma phenotypes. The second study used multivariate regression analysis and principal component analysis (PCA) to examine quantitative structural (optical coherence tomography) and functional (VF) clinical data of newly diagnosed glaucoma patients to determine whether there are clinically meaningful distinctions between IOP-defined phenotypes (i.e. low-tension vs high-tension glaucoma). In Study 1, using a systematic approach of test point addition and subtraction, we identified a critical number of test locations (8-14) in macular VF testing at which binarised structure-function concordance is maximised and discordance minimised. This methodology provides a framework for optimising macular VF test patterns for the detection of centrally-involving glaucoma phenotypes. In Study 2, despite statistically significant differences between low- and high-tension glaucoma, PCA applied to quantitative clinical structural and functional parameters returned no groups of clinical parameters that reliably distinguished between patients in IOP-defined glaucoma phenotypes. The present work provides a framework to identify phenotypic groups of glaucoma, the clinical significance of which may vary. We identified the minimum number of test points required to detect centrally-involving glaucoma in visual field testing. We also demonstrate that IOP-defined phenotypes are not clinically distinguishable at the point of diagnosis, suggesting that these phenotypes form part of a continuum of open-angle glaucoma. These findings have implications for disease staging and preferred treatment modality.
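
    The second study's PCA step can be sketched as follows: standardise the clinical parameters, project patients onto the leading components, and inspect whether IOP-defined groups separate. The data file and column names are hypothetical, not from the thesis.

```python
# Schematic sketch: PCA on standardised clinical parameters, then compare
# component scores across IOP-defined groups. File and columns are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

clinical = pd.read_csv("glaucoma_params.csv")    # one row per patient
features = clinical.drop(columns=["phenotype"])  # OCT and VF metrics

scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(features)
)
scores = pd.DataFrame(scores, columns=["PC1", "PC2"])
# Do low- vs high-tension groups separate along the leading components?
print(scores.groupby(clinical["phenotype"]).mean())
```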

    A burden of illness study for neuropathic pain in Europe

    Studies on large dimensional time series models in empirical macroeconomics

    In the last couple of decades, advances in information technology have led to a dramatic increase in the availability of economic data. This doctoral dissertation consists of a collection of articles studying various econometric methodologies that allow for the use of large datasets in macroeconomic applications. Chapters 2 and 5 present large dimensional models to nowcast and forecast macroeconomic variables of interest, such as Finnish real output and the binary recession indicator. In particular, in Chapter 2 I use microeconomic data to create timely estimates of the aggregate output indicator of the Finnish economy. In Chapter 5, I use a large dimensional probit model to compute short- and long-term forecasts of the United States recession indicator. Chapters 3 and 4 consist of studies related to Finnish enterprises. Specifically, in Chapter 3 I examine the employment behavior of small and large Finnish firms and analyze how their job creation and its cyclicality have differed over the last 16 years. In Chapter 4, I analyze the effect of shocks to large Finnish corporations on the aggregate business cycle, finding that shocks to a small number of companies are able to explain a substantial share of the fluctuations in aggregate output.
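
    A "mixture of machine learning algorithms" for nowcasting can be sketched as a simple average over several regressors trained on indicator data. The snippet below is illustrative only; the dissertation's actual model mix, inputs, and alignment of monthly indicators to quarterly GDP are not specified here.

```python
# Illustrative sketch: average the predictions of several regressors to
# nowcast current-quarter GDP growth. Inputs and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import ElasticNet

data = pd.read_csv("nowcast_panel.csv")  # indicators aligned to quarters
X, y = data.drop(columns=["gdp_growth"]), data["gdp_growth"]
X_train, y_train, X_now = X[:-1], y[:-1], X[-1:]  # last row: current quarter

models = [ElasticNet(), RandomForestRegressor(), GradientBoostingRegressor()]
nowcast = sum(m.fit(X_train, y_train).predict(X_now)[0] for m in models) / len(models)
print(f"nowcast of current-quarter GDP growth: {nowcast:.2f}")
```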

    Applications of deep learning and statistical methods for a systems understanding of convergence in immune repertoires

    Deep learning and adaptive immune receptor repertoire (AIRR) biology are two emerging fields that are highly compatible, owing to the inherent complexity of the immune system and the enormous amount of data produced in AIRR-sequencing research, combined with the revolutionary success of deep learning technology in making predictions about high-dimensional complex systems and data. We took steps towards the effective utilisation of deep learning and statistical methods in repertoire immunology by undertaking one of the central problems in immunology, i.e. immune repertoire convergence. First, we took part in developing and testing an array of summary statistics for immune repertoires, to gain insights into their descriptive features and to grant us the ability to compare repertoires. We collected the deepest sequencing datasets available to address whether the population-wide genomic convergence of immunoglobulin molecules can be predicted. The immunoglobulin molecules were labelled with their “degree of commonality” (DoC), defined as the number of times an immunoglobulin V3J clonotype is observed in a population, where a V3J clonotype is defined by its V and J genes and CDR3 sequence. We developed various bespoke data analytics methods, informed at different stages by the summary statistics we had previously implemented. Importantly, we demonstrated that machine learning (ML) predictions for immune repertoires can lead to misleadingly positive outcomes if data is processed inappropriately due to “data leakage”, and we addressed this issue by implementing a leak-free data processing pipeline. Here, data leakage refers to immunoglobulin sequences with the same clonotype definition spreading across the train-validation-test splits in the ML task. We designed a multitude of bespoke deep neural network architectures, implemented under various modelling approaches, including a customised squeeze-and-excitation temporal convolutional network (SE-TCN) and a Transformer model. Unsurprisingly, given the continuous spectrum of DoCs, regression modelling proved to be the best approach, both in the granularity of predictions and in error distribution. Finally, we report that our SE-TCN architecture under the regression modelling framework achieves state-of-the-art performance, with an overall mean absolute error (MAE) of 0.083 and per-DoC error distributions with reasonably small standard deviations.
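
    The leak-free splitting the abstract describes can be sketched with a group-aware split, where every sequence sharing a V3J clonotype is forced into the same partition. Column names are illustrative; the thesis's actual pipeline is not reproduced here.

```python
# Minimal sketch of leak-free splitting: all sequences sharing a V3J
# clonotype (V gene, J gene, CDR3) land in the same split, so no clonotype
# leaks across train and test. Column names are illustrative.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

seqs = pd.read_csv("repertoire.csv")  # columns include v_gene, j_gene, cdr3, doc
seqs["clonotype"] = seqs["v_gene"] + "|" + seqs["j_gene"] + "|" + seqs["cdr3"]

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(seqs, groups=seqs["clonotype"]))
train, test = seqs.iloc[train_idx], seqs.iloc[test_idx]

# Sanity check: no clonotype appears in both splits.
assert set(train["clonotype"]).isdisjoint(test["clonotype"])
```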