
    Identifying hedonic models

    Economic models for hedonic markets characterize the pricing of bundles of attributes and the demand and supply of these attributes under different assumptions about market structure, preferences, and technology (see Jan Tinbergen, 1956, Sherwin Rosen, 1974, and Dennis Epple, 1987, for contributions to this literature). While the theory is well formulated and delivers some elegant analytical results, the empirical content of the model is under debate. It is widely believed that hedonic models fit to data from a single market are fundamentally underidentified and that any empirical content obtained from them is a consequence of arbitrary functional form assumptions. The problem of identification in hedonic models is a prototype for the identification problem in a variety of economic models in which agents sort on characteristics that are unobservable to the economist: models of monopoly pricing (Michael Mussa and Sherwin Rosen, 1978; Robert Wilson, 1993) and models of taxes and labor supply (James Heckman, 1974). Sorting is also an essential feature of econometric models of social interactions (see William Brock and Steven Durlauf, 2001). In this paper we address the sorting problem in hedonic models; Nesheim (2001) extends this analysis to a model with peer effects. We note that the linearization strategies commonly used to simplify estimation and to justify the application of instrumental variables methods themselves produce identification problems. The hedonic model is generically nonlinear, and it is the linearization of a fundamentally nonlinear model that produces the form of the identification problem that dominates discussion in the applied literature. Linearity is an arbitrary and misleading functional form assumption when applied to empirical hedonic models. Our research establishes that even though sorting equilibrium in a single market implies no exclusion restrictions, the hedonic model is generically nonparametrically identified. Instrumental variables and transformation model methods identify economically relevant parameters even without exclusion restrictions. Multimarket data, widely viewed as the most powerful source of identification, achieves this result only under implausible assumptions about why hedonic functions vary across markets.
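
    The source of the nonlinearity can be sketched with the consumer's first-order condition, in the spirit of Rosen (1974); the notation below is generic and illustrative, not the paper's:

        \max_z \; U(y - P(z), z, \theta) \quad \Longrightarrow \quad P'(z) = \frac{U_z(y - P(z), z, \theta)}{U_c(y - P(z), z, \theta)}

    Because a different taste type \theta sorts to each attribute level z in equilibrium, the marginal price P'(z) traces out the willingness to pay of changing types: P is generically nonlinear, and z is dependent on the unobserved \theta.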

    Identification and estimation of hedonic models

    This paper considers the identification and estimation of hedonic models. We establish that, in an additive version of the hedonic model, technology and preferences are generically nonparametrically identified from data on demand and supply in a single hedonic market. The empirical literature claiming that hedonic models estimated on data from a single market are fundamentally underidentified rests on arbitrary linearizations that do not use all the information in the model. The exact economic model that justifies linear approximations is unappealing. Nonlinearities are generic features of equilibrium in hedonic models and a fundamental, economically motivated source of identification.
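
    In the additive case, the first-order condition for attribute demand takes a form like the following (a stylized rendering with illustrative notation, not the paper's exact system):

        P'(z_i) = m(z_i) + \theta_i, \qquad E[\theta_i] = 0

    Sorting makes z_i dependent on the unobserved taste \theta_i, so a linearized version of this equation is the classic endogenous-regressor problem with no valid instrument; it is the generic nonlinearity of P' and m that separates m from the distribution of \theta without exclusion restrictions.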

    Interpreting the evidence on life cycle skill formation

    This paper presents economic models of child development that capture the essence of recent findings from the empirical literature on skill formation. The goal of this essay is to provide a theoretical framework for interpreting the evidence from a vast empirical literature, for guiding the next generation of empirical studies, and for formulating policy. Central to our analysis is the concept that childhood has more than one stage. We formalize the concepts of self-productivity and complementarity of human capital investments and use them to explain the evidence on skill formation; together, they explain why skill begets skill through a multiplier process. Skill formation is a life cycle process: it starts in the womb and goes on throughout life. Families play a role in this process that is far more important than the role of schools. There are multiple skills and multiple abilities that are important for adult success. Abilities are both inherited and created, and the traditional debate about nature versus nurture is scientifically obsolete. Human capital investment exhibits both self-productivity and complementarity. Skill attainment at one stage of the life cycle raises skill attainment at later stages of the life cycle (self-productivity). Early investment facilitates the productivity of later investment (complementarity). Early investments are not productive if they are not followed up by later investments (another aspect of complementarity). This complementarity explains why there is no equity-efficiency trade-off for early investment. The returns to investing early in the life cycle are high, and remediation of inadequate early investments is difficult and very costly as a consequence of both self-productivity and complementarity.
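
    The two central concepts can be written compactly; the CES technology below is a standard illustration from this literature, not a functional form asserted by the paper:

        \theta_{t+1} = f_t(\theta_t, I_t), \qquad \frac{\partial f_t}{\partial \theta_t} > 0 \ \text{(self-productivity)}, \qquad \frac{\partial^2 f_t}{\partial \theta_t \, \partial I_t} > 0 \ \text{(complementarity)}

        \text{e.g.} \quad \theta_{t+1} = \bigl[ \gamma \theta_t^{\phi} + (1 - \gamma) I_t^{\phi} \bigr]^{1/\phi}, \qquad 0 < \gamma < 1, \ \phi \leq 1

    Here \theta_t is the stock of skill and I_t the investment at stage t. Lower values of \phi make the skill stock and current investment stronger complements, so early investment, which builds \theta_t, raises the productivity of later investment; in the Leontief limit \phi \to -\infty, early investment that is not followed up is unproductive.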

    The Footprint of F-theory at the LHC

    Recent work has shown that compactifications of F-theory provide a potentially attractive phenomenological scenario. The low energy characteristics of F-theory GUTs consist of a deformation away from a minimal gauge mediation scenario with a high messenger scale. The soft scalar masses of the theory are all shifted by a stringy effect which survives to low energies; this shift can range from 0 GeV up to ~ 500 GeV. In this paper we study potential collider signatures of F-theory GUTs, focusing in particular on ways to distinguish this class of models from other theories with an MSSM spectrum. To accomplish this, we have adapted the general footprint method developed recently for distinguishing broad classes of string vacua to the specific case of F-theory GUTs. We show that with only 5 fb^(-1) of simulated LHC data, it is possible to distinguish many mSUGRA models and low messenger scale gauge mediation models from F-theory GUTs. Moreover, we find that at 5 fb^(-1), the stringy deformation away from minimal gauge mediation produces observable consequences which can be detected to a level of order ~ +/- 80 GeV. In this way, it is possible to distinguish between models with large and small stringy deformations. At 50 fb^(-1), this improves to ~ +/- 10 GeV.
    Comment: 85 pages, 37 figures

    Taking the Easy Way Out: How the GED Testing Program Induces Students to Drop Out

    We exploit an exogenous increase in General Educational Development (GED) testing requirements to determine whether raising the difficulty of the test causes students to finish high school rather than drop out and GED certify. We find that a six-point decrease in GED pass rates induces a 1.3-point decline in overall dropout rates. The effect size is also much larger for older students and minorities. Finally, a natural experiment based on the late introduction of the GED in California reveals that adopting the program increased the dropout rate by 3 points relative to other states during the mid-1970s.
    Keywords: GED, dropout
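
    The California experiment has the structure of a difference-in-differences design; the specification below is a stylized illustration, not the paper's estimating equation:

        D_{st} = \alpha + \beta \, \mathrm{GED}_{st} + \gamma_s + \delta_t + \varepsilon_{st}

    with D_{st} the dropout rate in state s and year t, \mathrm{GED}_{st} an indicator for program availability, and \gamma_s, \delta_t state and year effects; a 3-point differential of the kind reported would appear as the estimate of \beta.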

    The Discovery of an Active Galactic Nucleus in the Late-type Galaxy NGC 3621: Spitzer Spectroscopic Observations

    We report the discovery of an Active Galactic Nucleus (AGN) in the nearby SAd galaxy NGC 3621 using Spitzer high spectral resolution observations. These observations reveal [NeV] 14 um and 24 um emission that is centrally concentrated and peaks at the position of the near-infrared nucleus. Using the [NeV] line luminosity, we estimate that the nuclear bolometric luminosity of the AGN is ~ 5 X 10^41 ergs s^-1, which, via the Eddington limit, implies a lower limit on the black hole mass of ~ 4 X 10^3 Msun. Using an order of magnitude estimate for the bulge mass based on the Hubble type of the galaxy, we find that this lower mass limit does not strain the well-known relationship between black hole mass and the host galaxy's stellar velocity dispersion established in predominantly early-type galaxies. Multi-wavelength follow-up observations of NGC 3621 are required to obtain more precise estimates of the bulge mass, black hole mass, accretion rate, and nuclear bolometric luminosity. The discovery reported here adds to the growing evidence that a black hole can form and grow in a galaxy with no or minimal bulge.
    Comment: 5 pages, 7 figures, accepted for publication in ApJ Letters
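
    The quoted mass limit is the Eddington argument made explicit; the numerical coefficient below is the standard value for ionized hydrogen, not a number taken from the paper:

        L_{Edd} \simeq 1.26 \times 10^{38} \, (M_{BH}/M_{\odot}) \ \mathrm{ergs\ s^{-1}} \quad \Longrightarrow \quad M_{BH} \gtrsim \frac{5 \times 10^{41}}{1.26 \times 10^{38}} \, M_{\odot} \approx 4 \times 10^{3} \, M_{\odot}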

    On the Escape of Ionizing Radiation from Starbursts

    Far-ultraviolet spectra obtained with FUSE show that the strong C II λ1036 interstellar absorption line is essentially black in five of the UV-brightest local starburst galaxies. Since the opacity of the neutral ISM below the Lyman edge will be significantly larger than in the C II line, these data provide strong constraints on the escape of ionizing radiation from these starbursts. Interpreted as a uniform absorbing slab, the implied optical depth at the Lyman edge is huge (\tau_0 \geq 10^2). Alternatively, the areal covering factor of opaque material is typically \geq 94%. Thus, the fraction of ionizing stellar photons that escape the ISM of each galaxy is small: our conservative estimates typically yield f_esc \leq 6%. Inclusion of extinction due to dust will further decrease f_esc. An analogous analysis of the rest-UV spectrum of the star-forming galaxy MS 1512-CB58 at z = 2.7 leads to similar constraints on f_esc. These new results agree with the constraints provided by direct observations below the Lyman edge in a few other local starbursts. However, they differ from the recently reported properties of star-forming galaxies at z \geq 3. We assess the idea that the strong galactic winds seen in many powerful starbursts clear channels through their neutral ISM. We show empirically that such outflows may be a necessary - but not sufficient - part of the process for creating a relatively porous ISM. We note that observations will soon document the cosmic evolution in the contribution of star-forming galaxies to the metagalactic ionizing background, with important implications for the evolution of the IGM.
    Comment: 17 pages; ApJ, in press
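
    The step from the covering factor to the escape fraction is a one-line slab identity (the standard radiative-transfer relation, not quoted from the abstract): the residual intensity of a line from a partially covering absorber is

        I/I_0 = 1 - C_f + C_f \, e^{-\tau} \approx 1 - C_f \quad (\tau \gg 1), \qquad f_{esc} \leq 1 - C_f \leq 1 - 0.94 = 0.06

    so an essentially black line forces C_f near unity and caps f_esc at the quoted ~ 6%.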

    Causal Parameters and Policy Analysis in Economics: A Twentieth Century Retrospective

    The major contributions of twentieth-century econometrics to knowledge were the definition of causal parameters in settings where agents are constrained by resources and markets and causes are interrelated, the analysis of what is required to recover causal parameters from data (the identification problem), and the clarification of the role of causal parameters in policy evaluation and in forecasting the effects of policies never previously experienced. This paper summarizes the development of these ideas by the Cowles Commission, the response to their work by structural econometricians and VAR econometricians, and the response to structural and VAR econometrics by calibrators, by advocates of natural and social experiments, and by nonparametric econometricians and statisticians.