    IPOs and product quality

    Given the recent public attention paid to high-flying Internet IPOs such as Yahoo and Amazon.com, we explore a product-market motive for going public. We develop a model in which consumers discern product quality from the stock price. The model predicts that only better-quality firms will go public. The effects of IPO announcements on rival firms' stock prices are related to inferences about market size and market share. The model also predicts that the likelihood of "hot issue" markets depends on the distribution of market-size uncertainty and the degree of network externalities present in consumer preferences.

    Investment Efficiency and Product Market Competition

    Does more competition lead to more information production and greater investment efficiency? This question is largely unexplored in the finance literature. This article provides both a model and a series of extensive empirical tests. The model features a two-stage Bayesian game of differentiated-products market competition. We find that competition causes firms to acquire less information, and investments become more inefficient relative to a first-best case with the same market structure. Empirically, the panel regression analysis provides strong support for the theory and shows that investment is more efficient in concentrated industries.

    Shedding Light on the Galaxy Luminosity Function

    From as early as the 1930s, astronomers have tried to quantify the statistical nature of the evolution and large-scale structure of galaxies by studying their luminosity distribution as a function of redshift, known as the galaxy luminosity function (LF). Accurately constructing the LF remains a popular yet tricky pursuit in modern observational cosmology, where observational selection effects due to, e.g., detection thresholds in apparent magnitude, colour, surface brightness, or some combination thereof can render any given galaxy survey incomplete and thus introduce bias into the LF. Over the last seventy years, numerous sophisticated statistical approaches have been devised to tackle these issues; all have advantages, but not one is perfect. This review takes a broad historical look at the key statistical tools that have been developed over this period, discussing their relative merits and highlighting any significant extensions and modifications. In addition, the more generalised methods that have emerged within the last few years are examined. These methods propose a more rigorous statistical framework within which to determine the LF compared to some of the more traditional methods. I also look at how photometric redshift estimations are being incorporated into the LF methodology, as well as considering the construction of bivariate LFs. Finally, I review the ongoing development of completeness estimators, which test some of the fundamental assumptions going into LF estimators and can be powerful probes of any residual systematic effects inherent in magnitude-redshift data. Comment: 95 pages, 23 figures, 3 tables. Published in The Astronomy & Astrophysics Review.
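    To make the selection-effect problem concrete, here is a minimal sketch of the classic Schmidt (1968) 1/Vmax estimator, one of the traditional LF methods this kind of review surveys. It is illustrative only: it assumes a simple flux-limited survey with Euclidean volumes and no K-corrections or evolution, and the function name and signature are my own, not from the review.

    ```python
    import numpy as np

    def vmax_lf(abs_mag, m_lim, bin_edges, solid_angle=1.0):
        """Schmidt (1968) 1/Vmax luminosity-function estimator (sketch).

        Each galaxy is weighted by the inverse of the maximum volume within
        which it would still pass the survey's apparent-magnitude limit, which
        corrects for the bias toward luminous objects in a flux-limited sample.

        abs_mag     : absolute magnitudes of the observed galaxies
        m_lim       : survey apparent-magnitude limit
        bin_edges   : absolute-magnitude bin edges for the LF
        solid_angle : survey footprint in steradians (Euclidean volumes assumed)

        Returns (phi, err): number density per magnitude per Mpc^3 in each
        bin and its Poisson-style uncertainty.
        """
        M = np.asarray(abs_mag, dtype=float)
        # Distance out to which each galaxy stays above the flux limit:
        # m - M = 5 log10(d / 10 pc)  =>  d_max = 10**(0.2*(m_lim - M) + 1) pc
        d_max_mpc = 10.0 ** (0.2 * (m_lim - M) + 1.0) / 1.0e6
        v_max = (solid_angle / 3.0) * d_max_mpc ** 3      # Mpc^3
        w = 1.0 / v_max                                   # per-galaxy weight
        widths = np.diff(bin_edges)
        phi = np.histogram(M, bins=bin_edges, weights=w)[0] / widths
        err = np.sqrt(np.histogram(M, bins=bin_edges, weights=w**2)[0]) / widths
        return phi, err
    ```

    The estimator is unbiased for a uniform spatial distribution but, as the review's framing suggests, it is sensitive to density inhomogeneities, which motivated the maximum-likelihood alternatives developed later.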