    An Empirical Model of Industry Dynamics with Common Uncertainty and Learning from the Actions of Competitors

    This paper advances our collective knowledge about the role of learning in retail agglomeration. Uncertainty about new markets provides an opportunity for sequential learning, where one firm's past entry decisions signal to others the potential profitability of risky markets. The setting is Canada's hamburger fast food industry from its early days in 1970 to 2005, for which simple analysis of my unique data reveals empirical patterns pointing towards retail agglomeration. The notion that uninformed potential entrants have an incentive to learn, but not informed incumbents, motivates an intuitive double-difference approach that separately identifies learning by exploiting differences in the way potential entrants and incumbents react to spillovers. This identification strategy confirms that information externalities are key drivers of agglomeration. Estimates from a dynamic oligopoly model of entry with information externalities provide further evidence of learning, as I show that common uncertainty matters. Counterfactual analysis reveals that an industry with uncertainty is initially less competitive than an industry with certainty, but catches up over time. Furthermore, there are many instances in which chains enter markets they would have avoided had they not faced uncertainty. Finally, consistent with the interpretation of uncertainty as an entry barrier, I find that chains place significant premiums on certainty at proportions beyond 2% of their total value from being monopolists.

    A comparative study of genotype imputation programs

    Background Genotype imputation infers missing genotypic data computationally, and has been reported to be highly useful in various genetic studies; e.g., genome-wide association studies and genomic selection. Motivation While various genotype imputation programs have been evaluated via different measurements, some, such as Pearson correlation, may not be appropriate for a given context and may result in misleading results. Further, most evaluations of genotype imputation programs are focused on human data. Finally, the most commonly used measurement, concordance, is unable to determine a difference in performance in some cases. Research Questions (1) How do popular genotype imputation programs (i.e., Minimac and Beagle) perform on plant data as compared to human data? (2) Can we find measures that better discriminate imputation performance when concordance does not? and (3) What do alternate measures indicate for the performance of these imputation programs? Methods Since Kullback-Leibler divergence (K-L divergence) and Hellinger distance can aid in ranking statistical inference methods, they can be highly useful in our study. To amplify signals from K-L divergence and Hellinger distance, we obtain their negative logarithmic values (i.e., negative logarithmic K-L divergence (NLKLD) and negative logarithmic Hellinger distance (NLHD)) so that larger values indicate better imputation results. With NLKLD and NLHD, we investigate the performance of two existing genotype imputation programs (i.e., Beagle and Minimac) on data from plants, specifically Arabidopsis thaliana and rice, as well as human. For each pair of organisms to be compared, we select data from one chromosome of each organism such that approximately the same number of samples/participants and SNPs are present for each organism. 
Finally, we apply different missing rates for target datasets and different sample size ratios between reference and target datasets for sensitivity analysis of the imputation programs. Results We demonstrate that in a general case where single nucleotide polymorphisms (SNPs) with different minor allele frequencies (MAFs) are imputed at the same concordance, both NLKLD and NLHD capture a difference in the imputation performance. Such a difference reflects not only the difference of correspondence between the known and imputed MAFs, but also the difference of chance agreement between the known and imputed genotypes. Additionally, neither Minimac nor Beagle performs better on either A. thaliana or human data. However, Beagle performs better on human data than on rice data. Finally, the majority of both NLKLD and NLHD results from all experimental data indicate that Minimac outperforms Beagle. Conclusions (1) Although neither Minimac nor Beagle consistently performs better on either plant or human data, Beagle evidently performs better on human data than on rice data; (2) NLKLD and NLHD can be more discriminating than concordance and should be considered in comparing different genotype imputation programs to determine superior imputation methods; and (3) the NLKLD and NLHD results suggest that Minimac's imputation method is superior to Beagle's. Further study can involve confirming these trends with runs on more experimental data.
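The NLKLD and NLHD measures described above can be sketched in a few lines: take the K-L divergence and Hellinger distance between the known and imputed genotype frequency distributions, then negate their logarithms so that larger values indicate better imputation. The genotype frequencies below are hypothetical illustration values, not data from the study.

```python
import math

def kl_divergence(p, q):
    # Kullback-Leibler divergence KL(P||Q) over discrete genotype categories.
    # Terms with p_i = 0 contribute zero by convention.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def hellinger(p, q):
    # Hellinger distance: (1/sqrt(2)) * Euclidean distance between
    # the elementwise square roots of the two distributions.
    s = sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q))
    return math.sqrt(s) / math.sqrt(2)

# Hypothetical genotype frequencies over {AA, Aa, aa} at one SNP:
known   = [0.64, 0.32, 0.04]   # true (masked) genotype distribution
imputed = [0.60, 0.35, 0.05]   # distribution produced by an imputation program

# Negative logs amplify small divergences; larger value = better imputation.
nlkld = -math.log(kl_divergence(known, imputed))
nlhd  = -math.log(hellinger(known, imputed))
```

Because both divergences shrink toward zero as the imputed distribution approaches the known one, the negative-log transform stretches that near-zero range apart, which is what lets the measures separate programs that concordance scores identically.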

    Exploiting the Choice-Consumption Mismatch: A New Approach to Disentangle State Dependence and Heterogeneity

    This paper offers a new identification strategy for disentangling structural state dependence from unobserved heterogeneity in preferences. Our strategy exploits market environments where there is a choice-consumption mismatch. We first demonstrate the effectiveness of our identification strategy in obtaining unbiased state dependence estimates via Monte Carlo analysis and highlight its superiority relative to the extant choice-set variation based approach. In an empirical application that uses data of repeat transactions from the car rental industry, we find evidence of structural state dependence, but show that state dependence effects may be overstated without exploiting the choice-consumption mismatches that materialize through free upgrades.