
    The Europeanization of Public Tasks (Die EuropÀisierung der öffentlichen Aufgaben)

    The increasing Europeanization of public tasks is one of the most important trends in the transformation of state activity in the Federal Republic of Germany and in other member states of the European Union. This essay traces the stages of the Europeanization of state activity, quantifies them in continuation of Lindberg/Scheingold (1970) and Schmitter (1996), and discusses their costs and benefits. Contents: Stages of the Europeanization of public tasks; The degree of Europeanization of public tasks from 1950 to the end of the 20th century; On the benefits and costs of the Europeanization of public affairs; List of cited literature.

    Study of the neutron quantum states in the gravity field

    We have studied neutron quantum states in the potential well formed by the Earth's gravitational field and a horizontal mirror. The estimated characteristic sizes of the neutron wave functions in the two lowest quantum states agree with expectations within the experimental accuracy. A position-sensitive neutron detector with an extra-high spatial resolution of ~2 microns was developed and tested for this particular experiment; it will be used to measure the spatial density distribution in a standing neutron wave above a mirror for a set of the lowest quantum states. The present experiment can be used to set an upper limit on an additional short-range fundamental force. We studied methodological uncertainties as well as the feasibility of further improving the accuracy of this experiment.
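    The micron-scale wave functions and peV-scale energies referred to above follow from textbook quantum mechanics for a particle bouncing on a mirror in a linear gravitational potential. A minimal sketch (not part of the experiment's own analysis) using the tabulated first zero of the Airy function:

```python
import math

# A neutron above a mirror in Earth's gravity sees V(z) = m*g*z with a hard
# wall at z = 0.  Bound-state energies are E_n = a_n * (hbar^2*m*g^2/2)**(1/3),
# where a_n is the magnitude of the n-th zero of the Airy function Ai.
HBAR = 1.054571817e-34   # J*s
M_N = 1.674927498e-27    # neutron mass, kg
G = 9.81                 # m/s^2
AIRY_ZERO_1 = 2.33811    # magnitude of the first zero of Ai, tabulated

# Characteristic length of the lowest wave function -- this sets the micron
# scale the position-sensitive detector must resolve.
z0 = (HBAR**2 / (2 * M_N**2 * G)) ** (1 / 3)

# Ground-state energy, converted to peV.
E1_joule = AIRY_ZERO_1 * (HBAR**2 * M_N * G**2 / 2) ** (1 / 3)
E1_peV = E1_joule / 1.602176634e-19 * 1e12

print(f"z0 ~ {z0 * 1e6:.2f} micron, E1 ~ {E1_peV:.2f} peV")
```

    The length scale comes out near 6 microns and the ground-state energy near 1.4 peV, which is why a detector resolution of ~2 microns is needed to resolve the standing-wave density.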

    International capital mobility in an era of globalisation: adding a political dimension to the 'Feldstein–Horioka Puzzle'

    The debate about the scope of feasible policy-making in an era of globalisation continues to be set within the context of an assumption that national capital markets are now perfectly integrated at the international level. However, the empirical evidence on international capital mobility contradicts such an assumption. As a consequence, a significant puzzle remains: why, in a world in which the observed pattern of capital flows is indicative of a far from globalised reality, does public policy continue to be constructed in line with more extreme variants of the globalisation hypothesis? I attempt to solve this puzzle by arguing that ideas about global capital market integration have an independent causal impact on political outcomes, one which extends beyond what can be attributed to the extent of actual integration.
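    The empirical core of the Feldstein–Horioka puzzle is a cross-country regression of investment rates on saving rates; a saving-retention coefficient near 1 is read as evidence of low capital mobility, near 0 as evidence of high mobility. A minimal sketch on invented figures (the numbers below are purely illustrative, not from the article or the original study):

```python
# Cross-section of (saving rate, investment rate) pairs; values are
# invented for illustration only.
data = [
    (0.18, 0.17), (0.22, 0.21), (0.25, 0.24),
    (0.30, 0.28), (0.15, 0.16), (0.27, 0.26),
]

def ols_slope(pairs):
    """OLS slope of y on x: beta = cov(x, y) / var(x)."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    var = sum((x - mx) ** 2 for x, _ in pairs)
    return cov / var

beta = ols_slope(data)
# Under perfect capital mobility domestic saving need not fund domestic
# investment, so beta should be near 0; Feldstein and Horioka famously
# found it near 1 for OECD countries.
print(f"saving-retention coefficient beta = {beta:.2f}")
```

    The puzzle is that measured coefficients stay high even as policy discourse assumes frictionless global capital markets.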

    Software comparison for evaluating genomic copy number variation for Affymetrix 6.0 SNP array platform

    Background: Copy number data are routinely extracted from genome-wide association study chips using a variety of software. We empirically evaluated and compared four freely available software packages designed for Affymetrix SNP chips to estimate copy number: Affymetrix Power Tools (APT), Aroma.Affymetrix, PennCNV and CRLMM. Our evaluation used 1,418 GENOA samples genotyped on the Affymetrix Genome-Wide Human SNP Array 6.0. We compared bias and variance in the locus-level copy number data, the concordance among regions of copy number gains/deletions, and the false-positive rate among deleted segments.
    Results: APT had median locus-level copy numbers closest to a value of two, whereas PennCNV and Aroma.Affymetrix had the smallest variability associated with the median copy number. Of the packages evaluated, only PennCNV provides copy-number-specific quality-control metrics; it identified 136 poor CNV samples. Regions of copy number variation (CNV) were detected using the hidden Markov models provided within PennCNV and CRLMM/VanillaIce. PennCNV detected more CNVs than CRLMM/VanillaIce (a median of 39 versus 30 CNVs per sample) and recovered most of the regions that CRLMM/VanillaIce found, along with additional CNV regions. The median concordance between PennCNV and CRLMM/VanillaIce was 47.9% for duplications and 51.5% for deletions. The estimated false-positive rate associated with deletions was similar for the two packages.
    Conclusions: If the objective is to perform statistical tests on the locus-level copy number data, our empirical results suggest that PennCNV or Aroma.Affymetrix is optimal. If the objective is to perform statistical tests on the summarized segmented data, PennCNV would be preferred over CRLMM/VanillaIce. Specifically, PennCNV allows the analyst to estimate locus-level copy number, perform segmentation and evaluate CNV-specific quality-control metrics within a single software package. PennCNV has relatively small bias and variability and detects more regions while maintaining an estimated false-positive rate similar to that of CRLMM/VanillaIce. More generally, we advocate that software developers provide guidance on evaluating and choosing optimal settings for an individual dataset. Until such guidance exists, we recommend trying multiple algorithms, evaluating concordance/discordance, and considering the union of regions for downstream association tests.
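    Concordance figures like those quoted above depend on how overlap between two call sets is defined. A toy sketch of one common convention (a query call counts as concordant if it overlaps any reference call on the same chromosome; the paper's exact criterion may differ, and the coordinates below are invented):

```python
# Each CNV call is (chromosome, start, end); half-open intervals.
# Toy call sets for illustration only.
calls_a = [("1", 100, 200), ("2", 500, 900), ("3", 50, 80)]
calls_b = [("1", 150, 250), ("2", 2000, 2100)]

def overlaps(a, b):
    """True if two calls share a chromosome and at least one base."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def concordance(query, reference):
    """Fraction of query calls with at least one overlapping reference call."""
    hits = sum(any(overlaps(q, r) for r in reference) for q in query)
    return hits / len(query)

print(f"A vs B: {concordance(calls_a, calls_b):.2f}")  # 1 of 3 calls overlap
```

    Note that the measure is asymmetric (A-vs-B need not equal B-vs-A), which is one reason reported duplication and deletion concordances can differ between tools.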

    Different paths to the modern state in Europe: the interaction between domestic political economy and interstate competition

    Theoretical work on state formation and capacity has focused mostly on early modern Europe and on the experience of western European states during this period. While a number of European states monopolized domestic tax collection and achieved gains in state capacity during the early modern era, for others revenues stagnated or even declined, and these variations motivated alternative hypotheses for the determinants of fiscal and state capacity. In this study we test the basic hypotheses in the existing literature using a large data set we compiled for all of the leading states across the continent. We find strong empirical support for two prevailing threads in the literature, which argue respectively that interstate wars and changes in economic structure towards an urbanized economy had a positive fiscal impact. Regarding the main point of contention in the theoretical literature, whether representative or authoritarian political regimes facilitated gains in fiscal capacity, we do not find conclusive evidence that one performed better than the other. Instead, the empirical evidence we have gathered lends support to the hypothesis that, under the pressure of war, the fiscal performance of representative regimes was better in more urbanized-commercial economies while that of authoritarian regimes was better in rural-agrarian economies.
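    The concluding hypothesis is an interaction effect: the fiscal payoff of war pressure depends jointly on regime type and economic structure. A toy sketch of how such an interaction shows up in subgroup comparisons (all values below are invented to illustrate the pattern, not drawn from the authors' data set):

```python
# Toy panel: (regime, economy, at_war, fiscal_capacity).
# Values are invented so that war helps representative regimes in urban
# economies and authoritarian regimes in rural ones.
obs = [
    ("representative", "urban", 1, 9.0), ("representative", "urban", 0, 6.0),
    ("representative", "rural", 1, 5.0), ("representative", "rural", 0, 4.5),
    ("authoritarian", "urban", 1, 6.0),  ("authoritarian", "urban", 0, 5.5),
    ("authoritarian", "rural", 1, 8.0),  ("authoritarian", "rural", 0, 5.0),
]

def war_effect(regime, economy):
    """Mean fiscal capacity at war minus at peace, within one subgroup."""
    war = [f for r, e, w, f in obs if (r, e) == (regime, economy) and w]
    peace = [f for r, e, w, f in obs if (r, e) == (regime, economy) and not w]
    return sum(war) / len(war) - sum(peace) / len(peace)

for regime in ("representative", "authoritarian"):
    for economy in ("urban", "rural"):
        print(regime, economy, war_effect(regime, economy))
```

    In a regression framework the same pattern would appear as a significant war Ă— regime Ă— economy interaction term rather than a main effect of regime type.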

    Branding the City: The Democratic Legitimacy of a New Mode of Governance

    Place branding has been used to influence ideas concerning communities and districts, especially in regeneration programmes. This article approaches branding as a new governance strategy for managing perceptions. Considering the popular criticism that branding is a form of spin that prevents the public from gaining a proper understanding of their government's policies, this article focuses on the democratic legitimacy of branding in urban governance. The branding of two urban communities in the Netherlands is examined empirically in terms of input legitimacy, throughput legitimacy and output legitimacy. The research shows how the democratic legitimacy of branding varies in the two cases. In one case, branding largely excluded citizens, whereas in the other case there was limited citizen participation. The article indicates that, although branding can potentially be a participatory process in which the feelings and emotions of citizens are included, this potential is not always fully realised in practice.

    The politicisation of evaluation: constructing and contesting EU policy performance

    Although systematic policy evaluation has been conducted for decades and has been growing strongly within the European Union (EU) institutions and in the member states, it remains largely underexplored in the political science literature. Extant work in political science and public policy typically focuses on elements such as agenda setting, policy shaping, decision making, or implementation rather than evaluation. Although individual pieces of research on evaluation in the EU have started to emerge, most often regarding policy “effectiveness” (one criterion among many in evaluation), a more structured approach is currently missing. This special issue aims to address this gap in political science by focusing on four key focal points: evaluation institutions (including rules and cultures), evaluation actors and interests (including competencies, power, roles and tasks), evaluation design (including research methods and theories, and their impact on policy design and legislation), and finally, evaluation purpose and use (including the relationships between discourse and scientific evidence, political attitudes and strategic use). The special issue considers how each of these elements contributes to an evolving governance system in the EU, where evaluation is playing an increasingly important role in decision making.

    Pre-processing Agilent microarray data

    Background: Pre-processing methods for two-sample long-oligonucleotide arrays, specifically the Agilent technology, have not been extensively studied. The goal of this study is to quantify some of the sources of error that affect measurement of expression using Agilent arrays and to compare Agilent's Feature Extraction software with pre-processing methods that have become the standard for normalization of cDNA arrays. These include log transformation followed by loess normalization, with or without background subtraction, and often a between-array scale normalization procedure. The larger goal is to define best study-design and pre-processing practices for Agilent arrays, and we offer some suggestions.
    Results: Simple loess normalization without background subtraction produced the lowest variability. However, without background subtraction, fold changes were biased towards zero, particularly at low intensities. ROC analysis of a spike-in experiment showed that differentially expressed genes are most reliably detected when background is not subtracted. Loess normalization without background subtraction yielded an AUC of 99.7%, compared with 88.8% for Agilent-processed fold changes. All methods performed well when error was taken into account by t- or z-statistics (AUCs ≄ 99.8%). A substantial proportion of genes, 43% (99% CI: 39%, 47%), showed dye effects; however, these effects were generally small regardless of the pre-processing method.
    Conclusion: Simple loess normalization without background subtraction resulted in low-variance fold changes that ranked gene expression more reliably than the other methods. While t-statistics and other measures that take variation into account, including Agilent's z-statistic, can also be used to reliably select differentially expressed genes, fold changes are a standard measure of differential expression for exploratory work, cross-platform comparison and biological interpretation, and cannot be entirely replaced. Although dye effects are small for most genes, many array features are affected. Therefore, an experimental design that incorporates dye swaps or a common reference could be valuable.
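    The loess step discussed above operates on MA-transformed intensities: M = log2(R/G), A = (log2 R + log2 G)/2, with an intensity-dependent trend fitted in A and subtracted from M. A minimal pure-Python sketch of that idea, substituting a sliding-window median for a true loess fit (a simplification, not the method used by any of the packages compared):

```python
import math

def ma_normalize(red, green, window=5):
    """Normalize two-channel log-ratios by subtracting an A-dependent trend.

    M = log2(R/G), A = (log2 R + log2 G) / 2; a sliding-window median of M
    over A-sorted spots stands in for the loess smoother used in practice.
    """
    m = [math.log2(r / g) for r, g in zip(red, green)]
    a = [(math.log2(r) + math.log2(g)) / 2 for r, g in zip(red, green)]
    order = sorted(range(len(m)), key=lambda i: a[i])  # spots by intensity
    half = window // 2
    normalized = [0.0] * len(m)
    for rank, i in enumerate(order):
        lo = max(0, rank - half)
        hi = min(len(order), rank + half + 1)
        neighbours = sorted(m[j] for j in order[lo:hi])
        trend = neighbours[len(neighbours) // 2]  # local median of M
        normalized[i] = m[i] - trend
    return normalized

# A dye bias that doubles every red intensity shifts all M values by +1;
# subtracting the fitted trend removes the shift, centring M near zero.
green = [100, 200, 400, 800, 1600, 3200, 6400]
red = [2 * g for g in green]
print(ma_normalize(red, green))
```

    A real pipeline would fit a smooth local regression (loess) rather than a windowed median, but the structure, transform to MA space, fit the trend, subtract it, is the same.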
    • 

    corecore