
    Subtyping with Generics: A Unified Approach

    Reusable software increases programmers' productivity and reduces repetitive code and software bugs. Variance is a key programming language mechanism for writing reusable software. Variance is concerned with the interplay of parametric polymorphism (i.e., templates, generics) and subtype (inclusion) polymorphism. Parametric polymorphism enables programmers to write abstract types and is known to enhance the readability, maintainability, and reliability of programs. Subtyping promotes software reuse by allowing code to be applied to a larger set of terms. Integrating parametric and subtype polymorphism while maintaining type safety is a difficult problem. Existing variance mechanisms enable greater subtyping between parametric types, but they suffer from severe deficiencies: they are unable to express several common type abstractions, they can cause a proliferation of types and redundant code, and they are difficult for programmers to use due to their inherent complexity. This dissertation aims to improve variance mechanisms in programming languages supporting parametric polymorphism. To address the shortcomings of current mechanisms, I combine two popular approaches, definition-site variance and use-site variance, in a single programming language. I have developed formal languages, or calculi, for reasoning about variance. The calculi are example languages supporting both notions of definition-site and use-site variance, and they enable stating precise properties that can be proved rigorously. The VarLang calculus demonstrates fundamental issues in variance from a language-neutral perspective. The VarJ calculus illustrates realistic complications by modeling a mainstream programming language, Java. VarJ not only supports both notions of use-site and definition-site variance but also language features with complex interactions with variance, such as F-bounded polymorphism and wildcard capture.
A mapping from Java to VarLang was implemented in software that infers definition-site variance for Java. Large, standard Java libraries (e.g., Oracle's JDK 1.6) were analyzed using the software to compute metrics measuring the benefits of adding definition-site variance to Java, which supports only use-site variance. Applying this technique to six Java generic libraries shows that 21-47% (depending on the library) of generic definitions are inferred to have single-variance; 7-29% of method signatures can be relaxed through this inference; and up to 100% of existing wildcard annotations are unnecessary and can be elided. Although the VarJ calculus proposes how to extend Java with definition-site variance, no mainstream language currently supports both definition-site and use-site variance. To assist programmers in utilizing both notions with existing technology, I developed a refactoring tool that refactors Java code by inferring definition-site variance and adding wildcard annotations. This tool is practical and immediately applicable: it assumes no changes to the Java type system while taking into account all its intricacies. The system allows users to select declarations (variables, method parameters, return types, etc.) to generalize and considers declarations not declared in available source code. I evaluated the technique on six Java generic libraries and found that 34% of available declarations of variant type signatures can be generalized, i.e., relaxed with more general wildcard types. On average, 146 other declarations need to be updated when a declaration is generalized, showing that this refactoring would be too tedious and error-prone to perform manually. The result of applying this refactoring is a more general interface that supports greater software reuse.
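    The distinction between the two notions can be sketched in plain Java, which supports only use-site variance (wildcards). This is an illustrative sketch, not code from the dissertation:

```java
import java.util.List;

public class VarianceDemo {
    // Java offers only use-site variance: the caller-side wildcard below is
    // what makes a List<Integer> acceptable where numbers are merely read.
    // With definition-site variance, a read-only list type could be declared
    // covariant once at its definition, and the wildcard here could be elided.
    static double sum(List<? extends Number> xs) {
        double total = 0;
        for (Number n : xs) {
            total += n.doubleValue();
        }
        return total;
    }

    public static void main(String[] args) {
        // Without the wildcard on sum's parameter this call would not compile:
        // List<Integer> is not a subtype of List<Number> in Java.
        System.out.println(sum(List.of(1, 2, 3))); // prints 6.0
    }
}
```

    The wildcard annotation is exactly the kind that the inference described above finds to be redundant once a definition's variance is known.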

    Suit the action to the word, the word to the action: Hypothetical choices and real decisions in Medicare Part D

    In recent years, consumer choice has become an important element of public policy. One reason is that consumers differ in their tastes and needs, which they can express most easily through their own choices. Elements that strengthen consumer choice feature prominently in the design of public insurance markets, for instance in the United States in the recent introduction of prescription drug coverage for older individuals via Medicare Part D. For policy makers who design such a market, an important practical question in the design phase of such a new program is how to deduce enrollment and plan selection preferences prior to its introduction. In this paper, we investigate whether hypothetical choice experiments can serve as a tool in this process. We combine data from hypothetical and real plan choices, elicited around the time of the introduction of Medicare Part D. We first analyze how well the hypothetical choice data predict willingness to pay and market shares at the aggregate level. We then analyze predictions at the individual level, in particular how insurance demand varies with observable characteristics. We also explore whether the extent of adverse selection can be predicted using hypothetical choice data alone.

    The C++0x "Concepts" Effort

    C++0x is the working title for the revision of the ISO standard of the C++ programming language that was originally planned for release in 2009 but was delayed to 2011. The largest language extension in C++0x was "concepts", that is, a collection of features for constraining template parameters. In September 2008, the C++ standards committee voted the concepts extension into C++0x, but then in July 2009 the committee voted it back out. This article is my account of the technical challenges and debates within the "concepts" effort in the years 2003 to 2009. To provide some background, the article also describes the design space for constrained parametric polymorphism, or what is colloquially known as constrained generics. While this article is meant to be generally accessible, the writing is aimed toward readers with a background in functional programming and programming language theory. This article grew out of a lecture at the Spring School on Generic and Indexed Programming at the University of Oxford, March 2010.
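    For readers outside C++, the flavor of constrained generics can be conveyed with a Java bounded type parameter, which plays a role loosely analogous to a concept: it states the operations a type argument must support so the generic body can be checked once, independently of any instantiation. This sketch is my own illustration, not from the article:

```java
public class ConstrainedGenerics {
    // The bound "T extends Comparable<T>" is the constraint: it guarantees
    // that compareTo exists on T, so the method body type-checks for every
    // admissible T before any particular instantiation is seen. A C++0x
    // concept would express a richer requirement set in a similar spirit.
    static <T extends Comparable<T>> T max(T a, T b) {
        return a.compareTo(b) >= 0 ? a : b;
    }

    public static void main(String[] args) {
        System.out.println(max(3, 7));       // prints 7
        System.out.println(max("ab", "cd")); // prints cd
    }
}
```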

    First-Class Subtypes

    First-class type equalities, in the form of generalized algebraic data types (GADTs), are commonly found in functional programs. However, first-class representations of other relations between types, such as subtyping, are not yet directly supported in most functional programming languages. We present several encodings of first-class subtypes using existing features of the OCaml language (made more convenient by the proposed modular implicits extension), show that any such encodings are interconvertible, and illustrate the utility of the encodings with several examples. (In Proceedings ML 2017, arXiv:1905.0590.)
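    The idea of a subtyping relation reified as an ordinary value can be approximated in a coercion style. The sketch below is a loose Java analogue written for illustration; the names `Sub`, `upcast`, `refl`, and `andThen` are my own, and the paper's actual encodings are in OCaml and rely on features Java lacks:

```java
public class SubtypeWitness {
    // A first-class subtyping witness: a value of type Sub<A, B> is evidence
    // that an A may be used as a B, carried as a coercion function.
    interface Sub<A, B> {
        B upcast(A a);

        // Reflexivity: every type is a subtype of itself.
        static <T> Sub<T, T> refl() {
            return t -> t;
        }

        // Transitivity: witnesses compose.
        default <C> Sub<A, C> andThen(Sub<B, C> next) {
            return a -> next.upcast(this.upcast(a));
        }
    }

    public static void main(String[] args) {
        // Integer <: Number already holds in Java, so this witness is just
        // the identity coercion.
        Sub<Integer, Number> intSubNum = i -> i;
        System.out.println(intSubNum.upcast(42)); // prints 42
    }
}
```

    Because the witness is an ordinary value, it can be stored, passed to functions, and composed, which is what "first-class" means here.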

    Why the Linear Utility Function is a Risky Choice in Discrete-Choice Experiments

    This article assesses how the form of the utility function in discrete-choice experiments (DCEs) affects estimates of willingness-to-pay (WTP). The utility function is usually assumed to be linear in its attributes. Non-linearities, in the guise of interactions and higher-order terms, are applied only rather ad hoc. This paper sheds some light on this issue by showing that the linear utility function can be a risky choice in DCEs. For this purpose, a DCE conducted in Switzerland to assess preferences for statutory social health insurance is estimated in two ways: first, using a linear utility function; and second, using a non-linear utility function specified according to model specification rules from the econometrics and statistics literature. The results show that not only does the non-linear function outperform the linear specification with regard to goodness-of-fit, but it also generates significantly different WTP. Hence, the functional form of the utility function may have a significant impact on estimated WTP. In order to produce unbiased estimates of preferences and to make adequate decisions based on DCEs, the form of the utility function should become more prominent in future experiments.
    Keywords: Discrete-Choice Experiment, Preference Measurement, Health Insurance, Model Specification

    Generalization bias in science

    Many scientists routinely generalize from study samples to larger populations. It is commonly assumed that this cognitive process of scientific induction is a voluntary inference in which researchers assess the generalizability of their data and then draw conclusions accordingly. We challenge this view and argue for a novel account. The account describes scientific induction as involving by default a generalization bias that operates automatically and frequently leads researchers to unintentionally generalize their findings without sufficient evidence. The result is unwarranted, overgeneralized conclusions. We support this account of scientific induction by integrating a range of disparate findings from across the cognitive sciences that have until now not been connected to research on the nature of scientific induction. The view that scientific induction involves by default a generalization bias calls for a revision of the current thinking about scientific induction and highlights an overlooked cause of the replication crisis in the sciences. Commonly proposed interventions to tackle scientific overgeneralizations that may feed into this crisis need to be supplemented with cognitive debiasing strategies against generalization bias to most effectively improve science.

    Hidden Type Variables and Conditional Extension for More Expressive Generic Programs

    Generic object-oriented programming languages combine parametric polymorphism and nominal subtype polymorphism, thereby providing better data abstraction, greater code reuse, and fewer run-time errors. However, most generic object-oriented languages provide a straightforward combination of the two kinds of polymorphism, which prevents the expression of advanced type relationships. Furthermore, most generic object-oriented languages have a type-erasure semantics: instantiations of type parameters are not available at run time, and thus may not be used by type-dependent operations. This dissertation shows that two features, which allow the expression of many advanced type relationships, can be added to a generic object-oriented programming language without type erasure: 1. type variables that are not parameters of the class that declares them, and 2. extension that is dependent on the satisfiability of one or more constraints. We refer to the first feature as hidden type variables and the second feature as conditional extension. Hidden type variables allow: covariance and contravariance without variance annotations or special type arguments such as wildcards; a single type to extend, and inherit methods from, infinitely many instantiations of another type; a limited capacity to augment the set of superclasses after that class is defined; and the omission of redundant type arguments. Conditional extension allows the properties of a collection type to be dependent on the properties of its element type. This dissertation describes the semantics and implementation of hidden type variables and conditional extension. A sound type system is presented. In addition, a sound and terminating type checking algorithm is presented. Although designed for the Fortress programming language, hidden type variables and conditional extension can be incorporated into other generic object-oriented languages. Many of the same problems would arise, and solutions analogous to those we present would apply.
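    To make "conditional extension" concrete for readers of mainstream languages: the feature would let a collection type implement an interface exactly when its element type satisfies a constraint, e.g. "a list is comparable iff its elements are". Java cannot attach such a condition to the list class itself; the closest approximation, sketched below, is a separate wrapper type whose bound imposes the condition. The wrapper and its name `LexList` are hypothetical, for illustration only:

```java
import java.util.List;

// With conditional extension, the list type itself could be declared
// Comparable only when T is Comparable. In Java the condition must instead
// be pushed into a dedicated wrapper's type bound.
class LexList<T extends Comparable<T>> implements Comparable<LexList<T>> {
    final List<T> elems;

    LexList(List<T> elems) {
        this.elems = elems;
    }

    // Lexicographic order: compare element-wise; on a common prefix,
    // the shorter list comes first.
    @Override
    public int compareTo(LexList<T> other) {
        int n = Math.min(elems.size(), other.elems.size());
        for (int i = 0; i < n; i++) {
            int c = elems.get(i).compareTo(other.elems.get(i));
            if (c != 0) return c;
        }
        return Integer.compare(elems.size(), other.elems.size());
    }
}
```

    The cost of the Java workaround is visible here: every use must wrap and unwrap, whereas conditional extension would make the capability available on the original type whenever the constraint holds.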

    Analyzing generic and branded substitution patterns in the Netherlands using prescription data

    BACKGROUND: As in other societies, pharmaceutical expenditures in the Netherlands are rising every year. As a consequence, needs for cost control are often expressed. One possible solution for cost control could come through increasing generic substitution by pharmacists. We aim to analyse the extent and nature of substitution in recent years and estimate the likelihood of generic or branded substitution in Dutch pharmacies in relation to various characteristics. METHODS: We utilized a linked prescription dataset originating from a general practitioner (GP) and a pharmacy database, both from the northern Netherlands. We selected specific drugs of interest, comprising about 55,000 prescriptions from 15 different classes. We used a crossed generalized linear mixed model to estimate the effects that certain patient and pharmacy characteristics as well as timing have on the likelihood that a prescription will eventually be substituted by the pharmacist. RESULTS: Generic substitution occurred in 25% of the branded prescriptions. Generic substitution was more likely to occur earlier in time after patent expiry and for patients who were older and more experienced in their drug use. Individually owned pharmacies had a lower probability of generic substitution compared to chain pharmacies. Conversely, branded substitution occurred in 10% of generic prescriptions and was positively related to the patients' experience in branded use. Individually owned pharmacies were more likely than other pharmacies to substitute a generic drug with a branded one. Antidepressant and PPI prescriptions were less prone to generic and more prone to branded substitution. CONCLUSION: Analysis of prescription substitution by the pharmacist revealed strong relations between substitution and patient experience with drug use, pharmacy status, and timing. These findings can be utilised to design further strategies to enhance generic substitution.

    Essays in empirical industrial organization

    The field of empirical industrial organization uses data to analyze the structure of industries in the economy by measuring the parameters that drive the behaviors of firms and consumers in these industries. Part of the literature focuses on markets in which firms interact in an imperfectly competitive setting. Research in this field heavily relies on models with a game-theoretic foundation. Many market structure models endogenize the number of firms entering a market. Not only industries but also other types of organizations operate through interactions among their members and can be analyzed with analogous game-theoretic models. The underlying hypothesis is that agents making a certain decision receive a non-negative payoff, conditional on the expectations or actions of other (potential and actual) agents acting in the same environment. These considerations have been crucial in shaping the first two self-contained chapters of this dissertation. In the first chapter, I study workers' decisions to join teams within an important scientific experiment. In the second chapter, joint with Laura Grigolon, we provide empirical evidence of the link between common ownership and firms' decisions to enter markets in the Ontarian cancer drug industry. The fact that decision-makers operating in a strategic environment have in expectation non-negative payoffs is parallel to the revealed preference arguments at the basis of discrete choice models of consumer behavior. As in the market entry literature, consumers' choices are interpreted as revealing something about an underlying latent utility. By observing how consumers' decisions change as their choice sets and market conditions change, one can gain insight into the underlying determinants of consumers' preferences.
In the third chapter, joint with Liana Jacobi and Michelle Sovinsky, we analyze the potential complementarities in consumption of the so-called sin goods (marijuana, alcohol, and tobacco), taking into account persistence in behavior. For the development of this dissertation, I rely on rigorous descriptive analyses and the development and estimation of structural models. With these approaches, it is possible to give informed assessments to policy-makers and, in the case of structural models, to quantify the impact of feasible policy changes. In Chapter 1, I present an empirical structural model that quantifies the main drivers of endogenous team formation and team performance when the allocation of individuals to teams is decentralized. Many companies currently adopt decentralized approaches to production. These arrangements are widespread in scientific institutions, as fellow researchers typically collaborate on a voluntary basis. The emergence of such arrangements poses several challenges to an economist. First, it is important to understand which elements drive the decision to join projects. Second, it becomes critical to develop tools to correctly measure the performance of teams when the decision to participate in projects is endogenous. These steps are fundamental to assess whether decentralization is desirable for obtaining successful outcomes with higher probability. To address these challenges, I use unique data from Virgo, an international collaborative experiment in science. Researchers involved in Virgo choose which project(s) to work on. For the analysis, I use the information on projects' characteristics, outcomes, and participants. I develop and estimate an entry game with incomplete information where heterogeneous agents decide simultaneously whether to join a project. The payoff of joining depends on exogenous project characteristics, including a measure of ex-ante quality, and the expectation of the actions of potential project-mates.
Strategic complementarities and substitutabilities can arise in this setting, as workers might find it beneficial or detrimental to work with others. I measure project outcome in terms of the probability of project completion. I find that the pool of expected project-mates drives the decision to join a project, while project quality is less important. The larger the pool, the lower the probability of joining a project, as a consequence of the congestion effect due to increasing coordination and communication costs. Heterogeneity in researchers' characteristics explains the selection into projects. I show that controlling for both projects' ex-ante quality and endogenous project participation matters for obtaining unbiased estimates of teams' performance. Finally, I consider a counterfactual centralized mechanism in which strategic interactions have no value. I find that this alternative allocation leads to excessive project participation and decreases the probability of project completion. Hence, adopting a decentralized mechanism of project allocation within a firm can be more efficient because workers internalize the costs and benefits of working with each other. In Chapter 2, joint with Laura Grigolon, we document the features of a highly innovative industry characterized by a concentrated ownership structure, the Ontarian cancer drug industry. The analysis has the objective of studying the effect of common ownership on the decision of generic producers to enter the market. Common ownership, namely the practice whereby large institutional investors own stakes in competing firms, has attracted the attention of antitrust scholars because the degree of common ownership has grown in recent years. Some empirical studies show that it has a large effect on the strategic behavior of companies held by institutional shareholders.
Common ownership linkages are a well-established feature of many industries, including the cancer drug industry, for which hospital and public drug program spending in Ontario is increasing dramatically over time. These factors make it an appealing setting for understanding the consequences of the common ownership phenomenon. We use unique data on the timing of cancer drug entry into the market (branded and generics) and collect information on patents, drug approvals, and drug indications. We complement our dataset by gathering ownership data mainly from 13F filings. With these data, we empirically assess the presence of common ownership and quantify which components mainly drive the link between common ownership and market entry. In particular, we show that investor concentration plays an important role in defining common ownership in the years before the entry of a generic into the market. Common ownership may have anticompetitive effects and be harmful to welfare. With the results of this paper, we take a first important step in identifying the target of possible policy interventions to reduce this practice, for this industry as well as for other innovative industries characterized by a high level of concentration. In Chapter 3, joint with Liana Jacobi and Michelle Sovinsky, we analyze the potential complementarities in use when individuals choose to consume bundles containing marijuana, alcohol, or cigarettes (sin goods), taking into account persistence in consumption for these substances. Two-thirds of Americans are in favor of marijuana legalization. This substance, however, might be consumed in combination with other substances, such as alcohol and tobacco. Moreover, past use of one of the substances might have consequences for the consumption of that substance and other sin goods, especially if one considers complementarity in consumption.
Therefore, it is important to understand whether consuming marijuana affects the consumption of other substances and what changes when one considers the potentially addictive nature of these products. We develop and estimate a dynamic model of multi-substance use allowing for persistence in behavior. For the empirical analysis, we uniquely combine data from two primary sources. The first is individual-level panel data from the Panel Study of Income Dynamics (PSID) survey, which contains information on demographics and consumption behaviors of young adolescents in the US. The second is pricing data for marijuana, alcohol, and cigarettes, collected from administrative tax data and transaction data. Our parameter estimates show that it is important to account for correlation across unobservables and persistence in behavior when analyzing the decision to use the sin goods in combination. Moreover, we find that the past use of a substance influences not only its current use but also the decision to use it together with other substances. Our results provide insightful information on the long-run effect of marijuana legalization on the concurrent and future consumption of potentially substitutable products.

    Custom Made Versus Ready to Wear Treatments; Behavioral Propensities in Physician's Choices

    To customize treatments to individual patients entails costs of coordination and cognition. Thus, providers sometimes choose treatments based on norms for broad classes of patients. We develop behavioral hypotheses explaining when and why doctors customize to the particular patient, and when instead they employ "ready-to-wear" treatments. Our empirical studies examining length of office visits and physician prescribing behavior find evidence of norm-following behavior. Some such behavior, from our studies and from the literature, proves sensible; but other behavior seems far from optimal.
