
    Regularized parametric system identification: a decision-theoretic formulation

    Parametric prediction error methods constitute a classical approach to the identification of linear dynamic systems with excellent large-sample properties. A more recent regularized approach, inspired by machine learning and Bayesian methods, has also gained attention. Methods based on this approach estimate the system impulse response with excellent small-sample properties. In several applications, however, it is desirable to obtain a compact representation of the system in the form of a parametric model. By viewing the identification of such models as a decision, we develop a decision-theoretic formulation of the parametric system identification problem that bridges the gap between the classical and regularized approaches above. Using the output-error model class as an illustration, we show that this decision-theoretic approach leads to a regularized method that is robust to small sample sizes as well as overparameterization. Comment: 10 pages, 8 figures.
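    The abstract contrasts classical prediction error methods with kernel-based regularized impulse-response estimation. The following is a rough, hypothetical sketch of the latter (not the paper's decision-theoretic method): a ridge-style estimate under an assumed exponentially decaying kernel, with all signals, kernel parameters, and noise levels invented for illustration.

```python
import numpy as np

# Illustrative sketch of regularized (kernel-based) impulse-response
# estimation -- the "regularized approach" the abstract contrasts with
# classical prediction error methods. Everything here is assumed.
rng = np.random.default_rng(0)

n, m = 200, 30                       # samples, FIR length (assumed)
g_true = 0.8 ** np.arange(m)         # hypothetical decaying impulse response
u = rng.standard_normal(n)           # input signal

# Regression matrix Phi such that y = Phi @ g + noise
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(m)]
                for t in range(n)])
y = Phi @ g_true + 0.1 * rng.standard_normal(n)

# Prior kernel encoding exponential decay of the impulse response;
# alpha and lam are assumed hyperparameter values.
alpha, lam = 0.8, 1.0
K = np.array([[alpha ** max(i, j) for j in range(m)] for i in range(m)])

# Regularized (MAP-style) estimate vs. plain least squares
g_reg = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + lam * np.eye(n), y)
g_ls = np.linalg.lstsq(Phi, y, rcond=None)[0]
```

    In small-sample or overparameterized regimes (large m relative to n), the kernel prior shrinks the estimate toward smooth, decaying responses, which is the behavior the abstract credits the regularized approach with.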

    Authorial Voice, Implied Audiences and the Drafting of the 1988 AIDS National Mailing

    Dr. Veeder analyzes changes throughout many drafts of the 1988 ANM and finds that the process of negotiated drafting contributed to its success. She also concludes that risk communicators should focus attention on audience needs rather than competing truth claims.

    An Economic Analysis of Privacy Protection and Statistical Accuracy as Social Choices

    Statistical agencies face a dual mandate to publish accurate statistics while protecting respondent privacy. Increasing privacy protection requires decreased accuracy. Recognizing this as a resource allocation problem, we propose an economic solution: operate where the marginal cost of increasing privacy equals the marginal benefit. Our model of production, from computer science, assumes data are published using an efficient differentially private algorithm. Optimal choice weighs the demand for accurate statistics against the demand for privacy. Examples from U.S. statistical programs show how our framework can guide decision-making. Further progress requires a better understanding of willingness-to-pay for privacy and statistical accuracy.
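    The trade-off the abstract describes can be made concrete with a toy calculation (the functional forms below are assumed for illustration, not the authors' model): for a differentially private count query released with Laplace noise, expected absolute error falls as 1/ε, so accuracy benefit rises with ε while privacy cost rises too; the optimum is where the two marginal effects balance.

```python
import numpy as np

# Hypothetical illustration of "marginal cost of privacy = marginal
# benefit of accuracy". Both curves are assumed, not from the paper.
eps = np.linspace(0.01, 5.0, 2000)

# Laplace mechanism for a count query (sensitivity 1): expected
# absolute error is 1/eps, so accuracy benefit is -1/eps.
benefit = -1.0 / eps           # accuracy, in arbitrary utility units
cost = 0.5 * eps               # assumed linear privacy cost

welfare = benefit - cost
eps_star = eps[np.argmax(welfare)]

# Analytically: d/d eps (-1/eps - 0.5*eps) = 1/eps**2 - 0.5 = 0,
# so the optimum is eps* = sqrt(2) ~ 1.41.
```

    Under these assumed curves, the socially optimal privacy-loss parameter is an interior point: neither maximal privacy (ε → 0, statistics useless) nor maximal accuracy (ε large, privacy exhausted).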

    Systems and Other Minimalism in Britain

    This collection of essays represents the first international survey of minimalism and postminimalist music from a wide variety of analytical and historical perspectives; its authors include the central scholars in this area. This chapter is the first comprehensive study of the wide variety of minimalist styles in Britain, from the sparse, ‘minimal minimalist’ One Note 1966 by Christopher Hobbs, to repetitive and durational processes that were at first developed experimentally, using random processes (John White’s Machine music), to numerical systems processes derived from the work of the British Systems Art group. Although there are close ties between the British and American movements (perhaps strengthened by a shared language), the British movement is distinguished by its ties to British systems and op art, and to literature, as well as to the British folk practice of change-ringing. However, the most consistent trait in this music is a sense of play and playfulness.

    Tax Rationality and the Independence of Irrelevant Alternatives

    Patricia White recently asked a simple question: what is the value of systemic coherence, or rationality, in tax legislation? She asks this along the way of an errand that does not require its answer, but White supposes, in passing, that the value queried is relative: no doubt systemic coherence is desirable, but it may conflict with other values predicable of a tax system, values it cannot invariably trump. This entails, on the one hand, that normative claims for rationality in this sphere regularly imply assignments of weights and, on the other hand, that systemic coherence ought to be irresistible whenever it can be achieved without prejudice to competing values; whenever, that is, other things may be said to be equal. The Internal Revenue Code (hereinafter "IRC" or "the Code") displays cases of the latter description in which Congress has, nevertheless, contrived to resist rationality's appeal. These represent a highly refined strain of incoherence in the Code. In each of them, coherence could have been achieved without prejudice to any systemic value with which it might otherwise compete. These are not cases in which rationality has been sacrificed for the sake of simplicity, administrative convenience, taxpayer morale, or any other colorable desideratum. They are cases in which the only principled purchase of incoherence is incoherence. Yet they are evidently not inadvertent: at places in which Congress might easily have stumbled on coherence, it has drawn itself up and gone carefully around. Such 'refinement' suggests analogies to the kind of irrationality displayed by decision makers whose preferences violate the standard axioms of decision theory. The goal of this article is to explore one such analogy. It will be argued here that certain provisions of the Code (we shall focus on section 642(g) for illustration) are analogous to violations of what decision theorists sometimes call a "sure thing principle."
    The analogy yields both a precise account of what evidently goes wrong in these provisions and a straightforward general characterization of the sense in which they are irrational. The Code's treatment of an item is irrational in this sense if it would be possible to make a book against someone having the same pattern of preferences (for the treatment of that item) in such a way that she would lose out, by her own standards, no matter what happened. On this conception, tax rationality is a kind of formal coherence. It has nothing to say about the ends Congress pursues through taxation; it requires only that a tax scheme be suited to those ends (whatever they are) so as to promote, rather than frustrate, their achievement.
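    The "book" the abstract invokes can be illustrated with the standard money-pump device from decision theory (a textbook sketch, not an example from the article): an agent whose preferences cycle will pay a small fee for each "upgrade" and lose money no matter what happens.

```python
# Textbook money-pump sketch of "making a book" against incoherent
# preferences: with the cycle A > B > C > A, a bookmaker can cycle
# the agent through trades she welcomes, each for a small fee,
# guaranteeing she ends up worse off by her own standards.
preferences = {("A", "B"), ("B", "C"), ("C", "A")}   # (preferred, dispreferred)
fee = 1.0

holding, wealth = "C", 0.0
for offered in ["B", "A", "C", "B", "A", "C"]:       # bookmaker's offers
    if (offered, holding) in preferences:            # agent prefers the offer
        holding = offered
        wealth -= fee                                # pays to trade up
# wealth is now strictly negative: a guaranteed loss
```

    The analogy in the article is that a Code provision treating the same item inconsistently across equivalent framings is exploitable in just this formal sense.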

    Pacific Weekly, February 25, 1966

    https://scholarlycommons.pacific.edu/pacifican/2768/thumbnail.jp

    Law and the Art of Modeling: Are Models Facts?

    In 2013, the Supreme Court made the offhand comment that empirical models and their estimations or predictions are not "findings of fact" deserving of deference on appeal. The four Justices writing in dissent disagreed, insisting that an assessment of how a model works and its ability to measure what it claims to measure are precisely the kinds of factual findings that the Court, absent clear error, cannot disturb. Neither side elaborated on the controversy or defended its position doctrinally or normatively. That the highest Court could split 5-4 on such a crucial issue without even mentioning the stakes or the terms of the debate suggests that something is amiss in the legal understanding of models and modeling. This Article does what that case failed to do: it tackles the issue head-on, defining the legal status of a scientific model's results and of the assumptions and choices that go into its construction. I argue that, as a normative matter, models and their conclusions should not be treated like facts. Models are better evaluated by a judge, they do not merit total deference on appeal, and modeling choices are at least somewhat susceptible to analogical reasoning between cases. But I show that, as a descriptive matter, courts often treat models and their outcomes like issues of fact, despite doctrines like Daubert that encourage serious judicial engagement with modeling. I suggest that a perceived mismatch between ability and task leads judges to take the easier route of treating modeling issues as facts, and I caution that when judges avoid hard questions about modeling, they jeopardize their own power and influence.

    Probability and Statistics in the Legal Curriculum: A Case Study in Disciplinary Aspects of Interdisciplinarity

    This Article considers interdisciplinarity and the legal curriculum in the context of probability and statistics. Section D of Part II begins the discussion by sketching some multidisciplinary, pluridisciplinary, interdisciplinary, and transdisciplinary approaches. Part III is the workhorse of this Article. The particular example used here is the well-known jury discrimination case of Castaneda v. Partida, as described in Section A. This case study provides the basis for a crossdisciplinary experience that offers students an opportunity to think about law as a discipline. It is difficult for students to step back and look at law as a discipline when there is no back. The idea of the type of crossdisciplinary education described here is not to make law students intelligent consumers of another discipline, but to provide students with another vantage point for thinking about law. That is, the perspectives given by another discipline can be used to reinforce law as a discipline. This is what is meant by the phrase "disciplinary aspects of interdisciplinarity" appearing in the title. Studying the various mathematical techniques described in Sections B and C provides the vantage point here. As will be seen in Section D, students can be asked to think about the individual disciplinary components of law and their relationship, as well as the evolution of law as a discipline. Moreover, students can be asked to think about specific legal doctrine, such as the nature of the prima facie case, legal reasoning, and the presumption of innocence, in the disciplinary framework. This Article concludes with some brief observations on the importance of thinking about law as a discipline.
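    The statistical centerpiece of Castaneda v. Partida is a binomial calculation. Using the figures as they are usually reported from the case (79.1% of the county population Mexican-American; 339 Mexican-Americans among 870 grand jurors over eleven years), the disparity works out to roughly 29 standard deviations, far beyond the Court's oft-quoted "two or three standard deviations" benchmark:

```python
import math

# Binomial comparison commonly cited from Castaneda v. Partida,
# using the figures as usually reported from the case.
p, n, observed = 0.791, 870, 339

expected = n * p                      # expected Mexican-American jurors (~688)
sd = math.sqrt(n * p * (1 - p))       # binomial standard deviation (~12)
z = (expected - observed) / sd        # disparity in standard deviations (~29)
```

    Working through why the binomial model is (or is not) an appropriate idealization of the jury-selection process is exactly the kind of disciplinary reflection the Article has in mind.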