
    Automated Discovery in Econometrics

    Our subject is the notion of automated discovery in econometrics. Advances in computer power, electronic communication, and data collection processes have all changed the way econometrics is conducted. These advances have helped to elevate the status of empirical research within the economics profession in recent years, and they now open up new possibilities for empirical econometric practice. Of particular significance is the ability to build econometric models in an automated way according to an algorithm of decision rules that allow for (what we call here) heteroskedasticity and autocorrelation robust (HAR) inference. Computerized search algorithms may be implemented to seek out suitable models, thousands of regressions and model evaluations may be performed in seconds, statistical inference may be automated according to the properties of the data, and policy decisions can be made and adjusted in real time with the arrival of new data. We discuss some aspects and implications of these exciting, emergent trends in econometrics.
    Keywords: Automation, discovery, HAC estimation, HAR inference, model building, online econometrics, policy analysis, prediction, trends
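    The HAC estimation mentioned here can be made concrete with a small sketch. Below is a pure-Python Newey-West (Bartlett-kernel) estimator of a series' long-run variance, the quantity behind HAC/HAR standard errors; this is an illustrative fragment of ours, not the authors' code, and the function names are invented.

```python
def autocovariance(x, lag):
    """Sample autocovariance of x at the given lag (mean removed)."""
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, n)) / n

def newey_west_lrv(x, max_lag):
    """Newey-West long-run variance: gamma_0 + 2 * sum_j w_j * gamma_j,
    with Bartlett weights w_j = 1 - j / (max_lag + 1)."""
    lrv = autocovariance(x, 0)
    for j in range(1, max_lag + 1):
        w = 1.0 - j / (max_lag + 1)
        lrv += 2.0 * w * autocovariance(x, j)
    return lrv

series = [0.3, -1.1, 0.8, 0.2, -0.5, 1.4, -0.9, 0.1, 0.6, -0.4]
print(newey_west_lrv(series, max_lag=2))
```

    For serially uncorrelated data the estimate stays close to the ordinary sample variance; for autocorrelated data the weighted autocovariance terms correct the scale used in test statistics.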

    Learning Policy Levers: Toward Automated Policy Analysis Using Judicial Corpora

    To build inputs for end-to-end machine learning estimates of the causal impacts of law, we consider the problem of automatically classifying cases by their policy impact. We propose and implement a semi-supervised multi-class learning model, with the training set being a hand-coded dataset of thousands of cases in over 20 politically salient policy topics. Using opinion text features as a set of predictors, our model can classify labeled cases by topic correctly 91% of the time. We then take the model to the broader set of unlabeled cases and show that it can identify new groups of cases by shared policy impact.
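    The supervised core of such a classifier can be pictured with a toy sketch: a multinomial Naive Bayes model trained on hand-labeled token lists and applied to new text. The documents, topic labels, and function names below are all invented, and the paper's actual semi-supervised model is more elaborate.

```python
from collections import Counter, defaultdict
import math

def train_nb(docs, labels):
    """docs: list of token lists; labels: list of topic labels."""
    word_counts = defaultdict(Counter)   # per-topic word frequencies
    topic_counts = Counter(labels)
    vocab = set()
    for tokens, y in zip(docs, labels):
        word_counts[y].update(tokens)
        vocab.update(tokens)
    return word_counts, topic_counts, vocab

def predict(model, tokens):
    """Return the topic with the highest posterior log-probability."""
    word_counts, topic_counts, vocab = model
    total = sum(topic_counts.values())
    best, best_lp = None, float("-inf")
    for y, ny in topic_counts.items():
        lp = math.log(ny / total)
        denom = sum(word_counts[y].values()) + len(vocab)
        for w in tokens:
            lp += math.log((word_counts[y][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = y, lp
    return best

labeled = [(["minimum", "wage", "overtime"], "labor"),
           (["emission", "permit", "clean", "air"], "environment"),
           (["union", "strike", "wage"], "labor")]
model = train_nb([d for d, _ in labeled], [y for _, y in labeled])
print(predict(model, ["wage", "union"]))
```

    A semi-supervised variant would add the most confidently predicted unlabeled cases back into the training set and retrain.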

    NETQOS policy management architecture for flexible QOS provisioning in Future Internet

    This paper is focused on the NETQOS architecture for automated QoS policy provisioning, which can be used in Future Internet scenarios by the different actors (i.e. network operators, service providers, and users) for flexible QoS configuration over combinations of mobile, fixed, sensor, and broadcast networks. The NETQOS policy management architecture opens the possibility to specify QoS policies on a "business" level using ontology descriptions and policy management interfaces that are specific to the actors. The business-level policy specifications are translated by the NETQOS system into intermediate and operational QoS policies for automated QoS configuration at the managed heterogeneous network and transport entities. NETQOS allows QoS policy specification and dependency analysis considering Service Level Agreements (SLAs) between the actors, as well as automated policy provisioning and adaptation. The interaction of the NETQOS components is based on a common policy repository. The paper focuses in particular on ontology- and actor-oriented QoS policy specification and configuration for heterogeneous networks, as well as on the NETQOS QoS policy management interfaces at the business level and the automated translation of business-level QoS policies to the intermediate and operational policy levels.
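    The translation from business-level to operational policies can be pictured with a toy mapping. The field names, DSCP values, and thresholds below are our own invention for illustration, not the NETQOS schema.

```python
from dataclasses import dataclass

@dataclass
class BusinessPolicy:
    actor: str      # e.g. "user", "provider", "operator"
    service: str    # e.g. "video-call"
    quality: str    # coarse business-level term: "gold" / "silver"

@dataclass
class OperationalPolicy:
    dscp: int                 # DiffServ code point to mark packets with
    min_bandwidth_kbps: int
    max_delay_ms: int

# Invented lookup table standing in for the automated translation layer.
QUALITY_MAP = {
    "gold":   OperationalPolicy(dscp=46, min_bandwidth_kbps=2000, max_delay_ms=50),
    "silver": OperationalPolicy(dscp=26, min_bandwidth_kbps=500,  max_delay_ms=150),
}

def translate(policy: BusinessPolicy) -> OperationalPolicy:
    """Map a coarse business-level quality term to concrete network settings."""
    return QUALITY_MAP[policy.quality]

op = translate(BusinessPolicy("user", "video-call", "gold"))
print(op.dscp, op.max_delay_ms)
```

    In the real architecture, this mapping would be derived from the ontology descriptions and SLAs held in the shared policy repository rather than a hard-coded table.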

    A Code Policy Guaranteeing Fully Automated Path Analysis

    Calculating the worst-case execution time (WCET) of real-time tasks is still a tedious job. Programmers are required to provide additional information on the program flow, analyzing subtle, context-dependent loop bounds manually. In this paper, we propose to restrict written and generated code to the class of programs with input-data-independent loop counters. The proposed policy builds on the ideas of single-path code, but only requires partial input-data independence. It is always possible to find precise loop bounds for these programs, using an efficient variant of abstract execution. The systematic construction of tasks following the policy is facilitated by embedding knowledge on input-data dependence in function interfaces and types. Several algorithms and benchmarks are analyzed to show that this restriction is indeed a good candidate for removing the need for manual annotations.
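    The idea of input-data-independent loop counters can be illustrated with a tiny example of our own (not from the paper): the loop below always executes a constant number of iterations, so a bound analyzer can read off the trip count statically, while the input data only influences the values computed.

```python
N = 64  # compile-time capacity; the only quantity the loop bound depends on

def count_nonzero_single_path(values):
    """values has at most N entries; the loop runs exactly N times regardless."""
    padded = list(values) + [0] * (N - len(values))
    count = 0
    for i in range(N):                    # bound is the constant N: WCET-friendly
        count += int(padded[i] != 0)      # data affects the value, not the trip count
    return count

print(count_nonzero_single_path([3, 0, 7, 0, 1]))
```

    A loop such as `for v in values:` would instead have an input-dependent trip count, forcing the programmer to supply a manual bound annotation.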

    Measuring Fiscal Sustainability

    We propose an index of the fiscal stance that is convenient for practical use. It is based on a finite time horizon, not on an infinite time horizon like most tests. As it employs VAR analysis, it is simple to compute and easily automated. We also show how it is possible to analyse a change of policy within a VAR framework. We use this methodology to examine the effect on fiscal sustainability of a change in policy. We then conduct an empirical examination of the fiscal stances of the US, the UK and Germany over the last 25 or more years, and we carry out a counter-factual analysis of the likely consequences for fiscal sustainability of using a Taylor rule to set monetary policy over this period. Among our findings are that the recent fiscal stances of all three countries are not sustainable, and that using a Taylor rule in the past would have improved the fiscal stances of the US and UK, but not that of Germany.
    Keywords: Budget deficits; government debt; fiscal sustainability; VAR analysis; economic policy.
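    A stripped-down, finite-horizon reading of such an index can be sketched as follows; the formula, function name, and numbers are our own toy illustration, not the authors' VAR-based construction.

```python
def fiscal_gap(debt_today, projected_surpluses, discount_rate, terminal_debt=0.0):
    """Compare today's debt with the present value of projected primary
    surpluses over a finite horizon plus a discounted terminal debt target.
    A positive gap suggests an unsustainable stance over that horizon."""
    T = len(projected_surpluses)
    pv = sum(s / (1 + discount_rate) ** (t + 1)
             for t, s in enumerate(projected_surpluses))
    pv += terminal_debt / (1 + discount_rate) ** T
    return debt_today - pv

# Debt of 100 against ten years of surpluses of 10, discounted at 3%.
print(fiscal_gap(100.0, [10.0] * 10, 0.03))
```

    In the paper's framework, the projected surpluses would themselves come from VAR forecasts, which is what makes the index easy to automate.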

    A time series causal model

    Cause-effect relations are central in economic analysis. Uncovering empirical cause-effect relations is one of the main research activities of empirical economics. In this paper we develop a time series causal model to explore causal relations among economic time series. The time series causal model is grounded on the theory of inferred causation, which is a probabilistic and graph-theoretic approach to causality featuring automated learning algorithms. Applying our model, we are able to infer cause-effect relations that are implied by the observed time series data. The empirically inferred causal relations can then be used to test economic theoretical hypotheses, to provide evidence for the formulation of theoretical hypotheses, and to carry out policy analysis. Time series causal models are closely related to the popular vector autoregressive (VAR) models in time series analysis. They can be viewed as restricted structural VAR models identified by the inferred causal relations.
    Keywords: Inferred Causation, Automated Learning, VAR, Granger Causality, Wage-Price Spiral
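    The inferred-causation machinery rests on conditional-independence tests. The fragment below implements one such test in pure Python, the first-order partial correlation r(x, y | z): a causal-search step would drop the edge between x and y when it is near zero. This is a didactic sketch, not the paper's algorithm.

```python
import math

def pearson(a, b):
    """Plain Pearson correlation, no libraries."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((a[i] - ma) * (b[i] - mb) for i in range(n))
    sa = math.sqrt(sum((v - ma) ** 2 for v in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

def partial_corr(x, y, z):
    """First-order partial correlation r(x, y | z)."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# z drives both x and y: their raw correlation is high but vanishes once
# z is conditioned on, so a causal search would remove the x -- y edge.
z = [1.0, 2.0, 3.0, 4.0]
x = [1.5, 1.5, 2.5, 4.5]   # z plus a disturbance orthogonal to z
y = [0.9, 2.3, 2.7, 4.1]   # z plus an independent disturbance
print(round(pearson(x, y), 3), round(partial_corr(x, y, z), 3))
```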

    Semi-Automated Analysis of Large Privacy Policy Corpora

    Regulators, policy makers, and consumers are interested in proactively identifying services with acceptable or compliant data use policies, privacy policies, and terms of service. Academic requirements engineering researchers and legal scholars have developed qualitative, manual approaches to conducting requirements analysis of policy documents to identify concerns and compare services against preferences or standards. In this research, we develop and present an approach to conducting large-scale, qualitative, prospective analyses of policy documents with respect to the wide variety of normative concerns found in policy documents. Our approach uses techniques from natural language processing, including topic modeling and summarization. We evaluate our approach in an exploratory case study that attempts to replicate a manual legal analysis of roughly 200 privacy policies from seven domains in a semi-automated fashion at a larger scale. Our findings suggest that this approach is promising for some concerns.
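    One early step of such a pipeline can be caricatured in pure Python: scoring terms by tf-idf so that each policy document surfaces its most distinctive vocabulary. Real topic modeling (e.g. LDA) goes further; the documents and function name below are invented.

```python
import math
from collections import Counter

def tfidf_top_terms(docs, k=2):
    """docs: list of token lists. Returns the k highest tf-idf terms per doc."""
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for tokens in docs:
        df.update(set(tokens))
    result = []
    for tokens in docs:
        tf = Counter(tokens)
        scores = {w: tf[w] * math.log(n / df[w]) for w in tf}
        result.append(sorted(scores, key=scores.get, reverse=True)[:k])
    return result

policies = [
    ["we", "collect", "location", "location", "data"],
    ["we", "share", "email", "with", "advertisers"],
    ["we", "collect", "cookies", "and", "share", "cookies"],
]
print(tfidf_top_terms(policies))
```

    Terms common to every policy ("we") score zero, while terms concentrated in one document ("location", "cookies") rise to the top, which is the intuition behind grouping policies by shared concerns.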

    Facing Forward: Policy for Automated Facial Expression Analysis

    The human face is a powerful tool for nonverbal communication. Technological advances have enabled widespread and low-cost deployment of video capture and facial recognition systems, opening the door for automated facial expression analysis (AFEA). This paper summarizes current challenges to the reliability of AFEA systems and challenges that could arise as a result of reliable AFEA systems. The potential benefits of AFEA are considerable, but developers, prospective users, and policy makers should proceed with caution.

    Supervised scaling of semi-structured interview transcripts to characterize the ideology of a social policy reform

    Automated content analysis methods treat "text as data" and can therefore efficiently analyze large qualitative databases. Yet, despite their potential, these methods are rarely used to supplement qualitative analysis in small-N designs. We address this gap by replicating the qualitative findings of a case study of a social policy reform using automated content analysis. To characterize the ideology of this reform, we reanalyze the same interview data with Wordscores, using academic publications as reference texts. As expected, the reform's ideology is center/center-right, a result that we validate using content, convergent, and discriminant strategies. The validation evidence suggests not only that the ideological positioning of the policy reform is credible, but also that Wordscores' scope of application is greater than expected.
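    The core Wordscores computation (Laver, Benoit and Garry 2003) is compact enough to sketch directly: words receive scores from reference texts with known positions, and a new ("virgin") text is scored by the frequency-weighted average of those word scores. The reference texts and positions below are invented.

```python
from collections import Counter

def word_scores(ref_texts, ref_positions):
    """ref_texts: list of token lists; ref_positions: their known positions.
    Each word's score is the position average weighted by P(ref | word)."""
    rel = []
    for tokens in ref_texts:
        n = len(tokens)
        rel.append({w: c / n for w, c in Counter(tokens).items()})
    vocab = set()
    for r in rel:
        vocab.update(r)
    scores = {}
    for w in vocab:
        fs = [r.get(w, 0.0) for r in rel]
        total = sum(fs)
        scores[w] = sum(f / total * a for f, a in zip(fs, ref_positions))
    return scores

def score_text(tokens, scores):
    """Score a 'virgin' text using only words seen in the references."""
    known = [w for w in tokens if w in scores]
    if not known:
        return 0.0
    freq = Counter(known)
    n = len(known)
    return sum(freq[w] / n * scores[w] for w in freq)

left  = ["solidarity", "welfare", "public", "welfare"]
right = ["market", "private", "market", "competition"]
s = word_scores([left, right], [-1.0, +1.0])
print(round(score_text(["welfare", "market", "market"], s), 2))
```

    With interview transcripts as virgin texts and academic publications as references, this weighted average is essentially what places the reform on the ideological scale.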

    Beyond Algorithms: Toward a Normative Theory of Automated Regulation

    The proliferation of artificial intelligence in our daily lives has spawned a burgeoning literature on the dawn of dehumanized, algorithmic governance. Remarkably, the scholarly discourse overwhelmingly fails to acknowledge that automated, non-human governance has long been a reality. For more than a century, policy-makers have relied on regulations that automatically adjust to changing circumstances, without the need for human intervention. This Article surveys the track record of self-adjusting governance mechanisms to propose a normative theory of automated regulation. Effective policy-making frequently requires anticipation of future developments, from technology innovation to geopolitical change. Self-adjusting regulation offers an insurance policy against the well-documented inaccuracies of even the most expert forecasts, reducing the need for costly and time-consuming administrative proceedings. Careful analysis of empirical evidence, existing literature, and precedent reveals that the benefits of regulatory automation extend well beyond mitigating regulatory inertia. From a political economy perspective, automated regulation can accommodate a wide range of competing beliefs and assumptions about the future to serve as a catalyst for more consensual policy-making. Public choice theory suggests that the same innate diversity of potential outcomes makes regulatory automation a natural antidote to the domination of special interests in the policy-making process. Today's automated regulations rely on relatively simplistic algebra, a far cry from the multivariate calculus behind smart algorithms. Harnessing the advanced mathematics and greater predictive powers of artificial intelligence could provide a significant upgrade for the next generation of automated regulation.
Any gains in mathematical sophistication, however, will likely come at a cost if the widespread scholarly skepticism toward algorithmic governance is any indication of future backlash and litigation. Policy-makers should consider carefully whether their objectives may be served as well, if not better, through simpler but well-established methods of regulatory automation.
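The "relatively simplistic algebra" of existing automated regulation can be as plain as an indexed formula. The toy example below (our own numbers and function name, not from the Article) adjusts a statutory penalty cap to a price index each year without any new rulemaking.

```python
def adjusted_cap(base_cap, base_index, current_index, rounding=100):
    """Scale the statutory cap by realized inflation, rounding down
    to the nearest `rounding` dollars."""
    raw = base_cap * current_index / base_index
    return int(raw // rounding) * rounding

# Cap set at $10,000 when the price index stood at 250; index now 270.
print(adjusted_cap(10_000, 250.0, 270.0))   # 10800
```

The rule self-adjusts to circumstances unknown when it was written, which is precisely the insurance-against-forecast-error property the Article describes.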