Financing Constraints and a Firm's Decision and Ability to Innovate: Establishing Direct and Reverse Effects.
The paper analyzes the existence and impact of financing constraints as a possibly serious obstacle to innovation by firms. The econometric framework we employ in our study is the simultaneous bivariate probit with mutual endogeneity of direct indicators of financial constraints and innovation decisions by firms. A novel method for establishing coherency conditions is used. It allows us for the first time to estimate models hitherto classified as incoherent through the use of prior sign restrictions on model parameters. We are thus able to quantify the interaction between financing constraints and a firm's decision and ability to innovate without forcing the econometric models to be recursive. Hence, we obtain direct as well as reverse interaction effects, leading us to conclude that binding financing constraints discourage innovation and, at the same time, innovative firms are more likely to face binding financing constraints.
Do the secondary markets believe in life after debt?
Using panel data econometric techniques to examine the case for external debt relief, this report explores the relations between measures of creditworthiness and debt discounts on the secondary markets. It finds, however, that secondary market values tend to reflect past difficulties rather than anticipate future ones, so they cannot be used to build a case for debt relief. The secondary markets, still in an early evolutionary stage, are quite "thin" and thus unable to exploit efficiently and quickly all available information on creditworthiness. Keywords: Environmental Economics & Policies; Strategic Debt Management; Economic Theory & Research; Banks & Banking Reform; Financial Intermediation.
Individual Characteristics and Stated Preferences for Alternative Energy Sources and Propulsion Technologies in Vehicles: A Discrete Choice Analysis
This paper empirically examines the determinants of the demand for alternative energy sources and propulsion technologies in vehicles. The data stem from a stated preference discrete choice experiment with 598 potential car buyers. In order to simulate a realistic automobile purchase situation, seven alternatives were incorporated in each of the six choice sets, i.e. hybrid, gas, biofuel, hydrogen, and electric as well as the common fuels gasoline and diesel. The vehicle types were additionally characterized by a set of attributes, such as purchase price or motor power. Besides these vehicle attributes, our study particularly considers a multitude of individual characteristics, such as socio-demographic and vehicle purchase variables. The econometric analysis with multinomial probit models identifies some population groups with a higher propensity for alternative energy sources or propulsion technologies in vehicles, which can be targeted by policy makers and automobile firms. For example, younger people and people who usually purchase environment-friendly products have a higher stated preference to purchase biofuel, hydrogen, and electric automobiles than other population groups. Methodologically, our study highlights the importance of the inclusion of taste persistence across the choice sets. Furthermore, it suggests a high number of random draws in the Geweke-Hajivassiliou-Keane simulator, which is incorporated in the simulated maximum likelihood estimation and the simulated testing of statistical hypotheses.
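The multinomial probit choice probabilities behind such an analysis can be sketched with a crude frequency simulator: alternative j is chosen when its systematic utility plus a correlated normal error is the maximum. This is only an illustrative toy (the study itself embeds the GHK simulator in simulated maximum likelihood, not this frequency approach), and all names below are ours:

```python
import numpy as np

def mnp_choice_probs(V, Sigma, n_draws=20000, seed=0):
    """Frequency-simulator estimate of multinomial probit choice
    probabilities: alternative j is chosen when V[j] + eps[j] is the
    maximum, with eps ~ N(0, Sigma) across alternatives."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    # draw correlated utility shocks for each simulated "respondent"
    eps = rng.standard_normal((n_draws, len(V))) @ L.T
    choices = np.argmax(V + eps, axis=1)
    return np.bincount(choices, minlength=len(V)) / n_draws
```

With equal systematic utilities and independent errors, each alternative is chosen about equally often; raising one alternative's utility raises its simulated share.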
The Multinomial Multiperiod Probit Model: Identification and Efficient Estimation
In this paper we discuss parameter identification and likelihood evaluation for multinomial multiperiod Probit models. It is shown in particular that the standard autoregressive specification used in the literature can be interpreted as a latent common factor model. However, this specification is not invariant with respect to the selection of the baseline category. Hence, we propose an alternative specification which is invariant with respect to such a selection and identifies coefficients characterizing the stationary covariance matrix which are not identified in the standard approach. For likelihood evaluation requiring high-dimensional truncated integration we propose to use a generic procedure known as Efficient Importance Sampling (EIS). A special case of our proposed EIS algorithm is the standard GHK probability simulator. To illustrate the relative performance of both procedures we perform a set of Monte Carlo experiments. Our results indicate substantial numerical efficiency gains of the ML estimates based on GHK-EIS relative to ML estimates obtained by using GHK.
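The GHK probability simulator that both approaches build on can be sketched as follows: factor the covariance matrix, then sample truncated normals dimension by dimension while accumulating the product of conditional interval probabilities. This is a minimal illustration of the generic simulator, not the authors' GHK-EIS implementation, and the function names are ours:

```python
import numpy as np
from scipy.stats import norm

def ghk_probability(mu, Sigma, lower, upper, n_draws=1000, seed=0):
    """Estimate P(lower < X < upper) for X ~ N(mu, Sigma) with the
    GHK simulator, the kind of high-dimensional truncated integral
    that arises in multiperiod probit likelihoods."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    L = np.linalg.cholesky(Sigma)
    probs = np.ones(n_draws)
    eta = np.zeros((n_draws, d))          # truncated standard-normal draws
    for j in range(d):
        # conditional mean given the draws for earlier dimensions
        cond = mu[j] + eta[:, :j] @ L[j, :j]
        a = norm.cdf((lower[j] - cond) / L[j, j])
        b = norm.cdf((upper[j] - cond) / L[j, j])
        probs *= (b - a)                  # conditional interval probability
        # inverse-CDF draw from the truncated standard normal
        u = rng.uniform(size=n_draws)
        eta[:, j] = norm.ppf(a + u * (b - a))
    return probs.mean()
```

In the independent case the simulator is exact (every draw contributes the same product of marginal probabilities); correlation is where the simulation error, and hence the efficiency comparison in the paper, comes in.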
Offsetting Versus Mitigation Activities to Reduce CO2 Emissions: A Theoretical and Empirical Analysis for the U.S. And Germany
This paper studies the voluntary provision of public goods that is partially driven by a desire to offset individual polluting activities. We first extend existing theory and show that offsets allow a reduction in effective environmental pollution levels while not necessarily extending the consumption of a polluting good. We further show a nonmonotonic income-pollution relationship and derive comparative static results for the impact of an increasing environmental preference on purchases of offsets and mitigation activities. Several theoretical results are then econometrically tested using a novel data set on activities to reduce CO2 emissions for the case of vehicle purchases in the U.S. and Germany. We show that an increased environmental preference triggers the use of CO2 offsetting and mitigation channels in both countries. However, we find strong country differences for the purchase of CO2 offsets. While such activities are already triggered by a high general awareness of the climate change problem in the U.S., driver's license holders in Germany need to additionally perceive road traffic as being responsible for CO2 emissions to a large extent.
Imputation of continuous variables missing at random using the method of simulated scores
For multivariate datasets with missing values, we present a procedure of statistical inference and state its "optimal" properties. Two main assumptions are needed: (1) data are missing at random (MAR); (2) the data generating process is a multivariate normal linear regression. Disentangling the problem of convergence of the iterative estimation/imputation procedure, we show that the estimator is a "method of simulated scores" (a particular case of McFadden's "method of simulated moments"); thus the estimator is equivalent to maximum likelihood if the number of replications is sufficiently large, and the whole procedure can be considered an optimal parametric technique for imputation of missing data.
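A toy version of such an iterative estimation/imputation scheme, for a single outcome missing at random given complete regressors, looks like this. The paper treats the general multivariate normal case and derives the method-of-simulated-scores interpretation; the simplifications and names below are ours:

```python
import numpy as np

def iterative_imputation(X, y, miss, n_iter=50, seed=0):
    """Alternate between (a) estimating y = X @ beta + N(0, sigma^2)
    on the current completed data and (b) stochastically imputing the
    missing y's from the fitted conditional distribution."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    y[miss] = y[~miss].mean()                       # crude initial fill
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y[~miss] - X[~miss] @ beta
        sigma = resid.std(ddof=X.shape[1])
        # draw imputations rather than plugging in the conditional mean
        y[miss] = X[miss] @ beta + sigma * rng.standard_normal(miss.sum())
    return beta, y
```

Drawing imputations (instead of plugging in conditional means) is what connects the procedure to simulated scores: each completed dataset is a simulation replication, and averaging over many draws recovers maximum-likelihood behavior.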
Rapid and Accurate Multiple Testing Correction and Power Estimation for Millions of Correlated Markers
With the development of high-throughput sequencing and genotyping technologies, the number of markers collected in genetic association studies is growing rapidly, increasing the importance of methods for correcting for multiple hypothesis testing. The permutation test is widely considered the gold standard for accurate multiple testing correction, but it is often computationally impractical for these large datasets. Recently, several studies proposed efficient alternative approaches to the permutation test based on the multivariate normal distribution (MVN). However, they cannot accurately correct for multiple testing in genome-wide association studies for two reasons. First, these methods require partitioning of the genome into many disjoint blocks and ignore all correlations between markers from different blocks. Second, the true null distribution of the test statistic often fails to follow the asymptotic distribution at the tails of the distribution. We propose an accurate and efficient method for multiple testing correction in genome-wide association studies, SLIDE. Our method accounts for all correlation within a sliding window and corrects for the departure of the true null distribution of the statistic from the asymptotic distribution. In simulations using the Wellcome Trust Case Control Consortium data, the error rate of SLIDE's corrected p-values is more than 20 times smaller than the error rate of the previous MVN-based methods' corrected p-values, while SLIDE is orders of magnitude faster than the permutation test and other competing methods. We also extend the MVN framework to the problem of estimating the statistical power of an association study with correlated markers and propose an efficient and accurate power estimation method, SLIP. SLIP and SLIDE are available at http://slide.cs.ucla.edu
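The basic MVN idea can be sketched as follows: sample null test-statistic vectors from a multivariate normal with the markers' correlation matrix and ask how often the most extreme draw beats the observed statistic. This toy version, with illustrative names, omits the sliding window and the tail correction that distinguish SLIDE from the earlier MVN-based methods:

```python
import numpy as np
from scipy.stats import norm

def mvn_corrected_pvalue(p_min, R, n_samples=50000, seed=0):
    """Multiple-testing-corrected p-value for the smallest observed
    per-marker p-value, accounting for marker correlation R, via
    Monte Carlo sampling of the global-null statistic vector."""
    rng = np.random.default_rng(seed)
    z_obs = norm.isf(p_min / 2)            # two-sided observed statistic
    L = np.linalg.cholesky(R)
    # each row is one null replicate of the m correlated z-statistics
    Z = rng.standard_normal((n_samples, R.shape[0])) @ L.T
    return np.mean(np.abs(Z).max(axis=1) >= z_obs)
```

With independent markers this reproduces the Sidak correction; with correlated markers it is less conservative than Bonferroni, which is the motivation for the MVN framework in the first place.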