
    Fictitious Play with Time-Invariant Frequency Update for Network Security

    We study two-player security games that can be viewed as sequences of nonzero-sum matrix games played by an Attacker and a Defender. The evolution of the game is based on a stochastic fictitious play process in which the players do not have access to each other's payoff matrix. Each must instead observe the other's actions up to the present and play a best response to these observations. In a regular fictitious play process, each player forms a maximum likelihood estimate of her opponent's mixed strategy, which results in a time-varying update based on the previous estimate and the current action. In this paper, we explore an alternative frequency update scheme whose mean dynamic is instead time-invariant. We examine convergence properties of the mean dynamic of the fictitious play process under this update scheme and establish local stability of the equilibrium point when both players are restricted to two actions. We also propose an adaptive algorithm based on this time-invariant frequency update. Comment: Proceedings of the 2010 IEEE Multi-Conference on Systems and Control (MSC10), September 2010, Yokohama, Japan.
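
    As a rough illustration of the two update rules, the sketch below runs classical fictitious play (step size 1/k, i.e., empirical frequencies of observed actions) against a constant-step variant whose mean dynamic is time-invariant, on a 2x2 game. The payoff matrices, step size eta, and horizon are placeholders of our own choosing, not taken from the paper, and the plain argmax best response stands in for the stochastic (smoothed) best response studied there.

        import numpy as np

        rng = np.random.default_rng(0)

        A = np.array([[2.0, 0.0], [1.0, 3.0]])   # Attacker payoffs (hypothetical)
        D = np.array([[1.0, 2.0], [3.0, 0.0]])   # Defender payoffs (hypothetical)

        def best_response(payoff, belief):
            # Pure best response to the opponent's estimated mixed strategy.
            return int(np.argmax(payoff @ belief))

        def play(T=5000, eta=None):
            qa = np.full(2, 0.5)   # Defender's estimate of the Attacker's strategy
            qd = np.full(2, 0.5)   # Attacker's estimate of the Defender's strategy
            for k in range(1, T + 1):
                a = best_response(A, qd)               # each side acts on its belief
                d = best_response(D, qa)
                step = eta if eta is not None else 1.0 / k   # constant vs. 1/k step
                qa += step * (np.eye(2)[a] - qa)       # frequency update from observed action
                qd += step * (np.eye(2)[d] - qd)
            return qa, qd

        print("classical 1/k update:", play())
        print("constant-step update:", play(eta=0.05))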

    Nanofiber Fabry-Perot microresonator for non-linear optics and cavity quantum electrodynamics

    We experimentally realize a Fabry-Perot-type optical microresonator near the cesium D2 line wavelength based on a tapered optical fiber, equipped with two fiber Bragg gratings which enclose a sub-wavelength diameter waist. Owing to the very low taper losses, the finesse of the resonator reaches F = 86 while the on-resonance transmission is T = 11 %. The characteristics of our resonator fulfill the requirements of non-linear optics and cavity quantum electrodynamics in the strong coupling regime. In combination with its demonstrated ease of use and its advantageous mode geometry, it thus opens a realm of applications. Comment: 4 pages, 3 figures.
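
    For orientation, the standard Airy model ties the quoted finesse and on-resonance transmission to mirror reflectivity and intracavity loss. The sketch below assumes a symmetric cavity with identical lossless Bragg mirrors (amplitude reflectivity r) and a single-pass amplitude loss factor a; these assumptions and the scanned numbers are ours, chosen only to show that values near F = 86 and T = 11 % correspond to percent-level single-pass losses.

        import numpy as np

        def finesse(g):
            # Finesse of an Airy resonance with round-trip amplitude factor g.
            return np.pi * np.sqrt(g) / (1.0 - g)

        def t_on_resonance(r, a):
            # On-resonance intensity transmission of a symmetric cavity with
            # lossless mirrors (intensity transmission 1 - r**2) and a
            # single-pass amplitude loss factor a.
            g = r * r * a * a                    # round-trip amplitude factor
            return ((1.0 - r**2) * a / (1.0 - g)) ** 2

        for r, a in [(0.994, 0.9875), (0.994, 0.999), (0.99, 0.9875)]:
            g = r * r * a * a
            print(f"r={r}, a={a}:  F = {finesse(g):5.0f},  T = {t_on_resonance(r, a):.1%}")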

    Estimation and Inference for Threshold Effects in Panel Data Stochastic Frontier Models

    One of the most enduring problems in cross-section or panel data models is heterogeneity among individual observations. Different approaches have been proposed to deal with this issue, but threshold regression models offer intuitively appealing econometric methods to account for heterogeneity. We propose three different estimators that can accommodate multiple thresholds. The first two, allowing respectively for fixed and random effects, assume that the firm-specific inefficiency scores are time-invariant, while the third allows for time-varying inefficiency scores. We rely on a likelihood ratio test with m − 1 regimes under the null against m regimes. Testing for threshold effects is problematic because of the presence of a nuisance parameter which is not identified under the null hypothesis; this is known as Davies' problem. We apply procedures pioneered by Hansen (1999) to test for the presence of threshold effects and to obtain a confidence set for the threshold parameter. These procedures specifically account for Davies' problem and are based on non-standard asymptotic theory. Finally, we perform an empirical application of the fixed effects model on a panel of Quebec dairy farms. The specifications involving a trend and the Cobb-Douglas and Translog functional forms support three thresholds, or four regimes, based on farm size. The efficiency scores vary between 0.95 and 1 in models with and without thresholds. Therefore, productivity differences across farm sizes are most likely due to technological heterogeneity.
    Keywords: stochastic frontier models, threshold regression, technical efficiency, bootstrap, dairy production; JEL: C12, C13, C23, C52; Research Methods/Statistical Methods.
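
    The essence of Hansen's (1999) approach, concentrating the threshold out by grid search and bootstrapping the likelihood-ratio statistic because the threshold is unidentified under the null, can be sketched in a few lines. The toy below uses a single threshold in a cross-sectional least-squares regression rather than the paper's panel frontier setting; the data, grid, and replication count are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 400
        x = rng.normal(size=n)
        q = rng.uniform(size=n)                          # threshold variable
        y = np.where(q <= 0.5, 1.0, 2.0) * x + rng.normal(scale=0.5, size=n)

        def ssr_null(y):
            b = (x @ y) / (x @ x)                        # one-regime OLS slope
            return np.sum((y - b * x) ** 2)

        def ssr_split(y, g):
            s = 0.0
            for m in (q <= g, q > g):                    # regime-specific OLS fits
                b = (x[m] @ y[m]) / (x[m] @ x[m])
                s += np.sum((y[m] - b * x[m]) ** 2)
            return s

        grid = np.quantile(q, np.linspace(0.15, 0.85, 50))   # trimmed candidate set

        def lr_stat(y):
            s0, s1 = ssr_null(y), min(ssr_split(y, g) for g in grid)
            return n * (s0 - s1) / s1                    # F-type LR statistic

        lr = lr_stat(y)

        # Bootstrap the null distribution: resample residuals from the
        # no-threshold fit, since the threshold is unidentified under H0.
        b0 = (x @ y) / (x @ x)
        resid = y - b0 * x
        boot = np.array([lr_stat(b0 * x + rng.choice(resid, size=n, replace=True))
                         for _ in range(199)])
        print(f"LR = {lr:.1f}, bootstrap p-value = {np.mean(boot >= lr):.3f}")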

    Pecuniary, Non-Pecuniary, and Downstream Research Spillovers: The Case of Canola

    This paper develops an empirical framework for estimating a number of inter-firm and downstream research spillovers in the canola crop research industry. The spillovers include basic research, human capital/knowledge (as measured through other-firm expenditures), and genetics (as measured through yields of other firms). The model used to examine spillover effects on research productivity provides evidence that there are many positive inter-firm non-pecuniary research spillovers, which is consistent with a research clustering effect. The second model, which examines spillovers at the level of firm revenue, shows that, while private firms tend to crowd one another out, public firm expenditure on basic and applied research creates a crowding-in effect for private firms. This model also shows that enhanced intellectual property rights have increased the revenues of private firms. The third model, which examines the social value of each firm's output, provides evidence that downstream research spillovers remain important in this modern crop research industry.
    Keywords: basic research, applied research, public research expenditures, private research expenditures, biotechnology; Research and Development/Tech Change/Emerging Technologies; JEL: O3.

    AN EMPIRICAL ANALYSIS OF PUBLIC AND PRIVATE SPILLOVERS WITHIN THE CANOLA BIOTECH INDUSTRY

    The study uses firm-specific data in the biotech canola industry to empirically examine research spillovers among public and private firms at the level of research output, research sales revenue, and research social revenue. The non-pecuniary spillovers that are examined include basic research, human capital/knowledge (as measured through other-firm expenditures), and genetics (as measured through yields of other firms). The results provide strong empirical evidence of several research spillovers in the biotech crop research industry: basic and applied public research creates a positive spillover for private firms at all levels; applied expenditure within-group reduces other-firm revenue while between-group expenditure increases revenue; and genetic spillovers within-group have a positive impact on yield but tend to have a negative impact on firm revenue.
    Keywords: Agribusiness.

    Mixtures, Moments And Information: Three Essays In Econometrics

    This thesis is a collection of three independent essays in econometrics. The first essay uses the empirical characteristic function (ECF) procedure to estimate the parameters of mixtures of normal distributions and switching regression models. The ECF procedure was formally proposed by Feuerverger and Mureika (1977) and Heathcote (1977). Since the characteristic function is uniformly bounded, the procedure gives estimates that are numerically stable. Furthermore, it is shown that the finite sample properties of the ECF estimator are very good, even in cases where the popular maximum likelihood estimator fails to exist. The second essay applies White's (1982) information matrix (IM) test to a stationary and invertible autoregressive moving average (ARMA) process. Our result indicates that, for the ARMA specification, the derived covariance matrix of the indicator vector is not block diagonal, implying that the algebraic structure of the IM test is more complicated than in other cases previously analyzed in the literature (see, for example, Hall (1987), Bera and Lee (1993)). Our derived IM test turns out to be a joint specification test of parameter heterogeneity (i.e., a test for random coefficients or conditional heteroskedasticity) of the specified model and normality. The final essay compares, using Monte Carlo simulation, the generalized method of moments (GMM) and quasi-maximum likelihood (QML) estimators of the parameters of a simple linear regression model with autoregressive conditional heteroskedastic (ARCH) disturbances. The results reveal that GMM estimates are often biased (apparently due to poor instruments), statistically insignificant, and dynamically unstable (especially the parameters of the ARCH process). On the other hand, QML estimates are generally unbiased, statistically significant, and dynamically stable. Asymptotic standard errors for QML are 2 to 6 times smaller than for GMM, depending on the choice of instruments.
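
    To make the first essay's ECF idea concrete: the estimator matches the model characteristic function to the empirical one on a grid of points, and, because characteristic functions are uniformly bounded, the objective stays well behaved even where the mixture likelihood is unbounded. A minimal two-component sketch follows; the grid, bounds, and starting values are our own illustrative choices.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        x = np.concatenate([rng.normal(-2, 1, 600), rng.normal(3, 0.5, 400)])

        t = np.linspace(0.05, 2.0, 30)                   # CF evaluation grid
        ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)   # empirical CF

        def mixture_cf(theta):
            # CF of a two-component normal mixture.
            p, m1, s1, m2, s2 = theta
            c1 = np.exp(1j * t * m1 - 0.5 * (t * s1) ** 2)
            c2 = np.exp(1j * t * m2 - 0.5 * (t * s2) ** 2)
            return p * c1 + (1 - p) * c2

        def objective(theta):
            # Squared distance between empirical and model CFs on the grid;
            # bounded because characteristic functions are bounded.
            return np.sum(np.abs(ecf - mixture_cf(theta)) ** 2)

        res = minimize(objective, x0=[0.5, -1.0, 1.0, 1.0, 1.0], method="L-BFGS-B",
                       bounds=[(0.01, 0.99), (-10, 10), (0.05, 5), (-10, 10), (0.05, 5)])
        print("ECF estimates (p, m1, s1, m2, s2):", np.round(res.x, 2))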

    A time-varying true individual effects model with endogenous regressors

    We propose a fairly general individual effects stochastic frontier model which allows both heterogeneity and inefficiency to change over time. Moreover, our model handles endogeneity problems that arise when at least one of the regressors or the one-sided error term is correlated with the two-sided error term. Our Monte Carlo experiments show that our estimator performs well. We applied our methodology to US banking data and found a negative relationship between return on revenue and cost efficiency. Estimators ignoring time-varying heterogeneity or endogeneity did not perform well and gave very different estimates compared to our estimator.

    Unknown Latent Structure and Inefficiency in Panel Stochastic Frontier Models

    This paper extends fixed-effects panel stochastic frontier models to allow group heterogeneity in the slope coefficients. We propose the first-difference penalized maximum likelihood (FDPML) and control function penalized maximum likelihood (CFPML) methods for classification and estimation of latent group structures in the frontier as well as in inefficiency. Monte Carlo simulations show that the proposed approach performs well in finite samples. An empirical application is presented to show the advantages of data-determined identification of the heterogeneous group structures in practice.

    On the estimation of zero-inefficiency stochastic frontier models with endogenous regressors

    In this paper, we investigate endogeneity issues in zero-inefficiency stochastic frontier (ZISF) models by means of a simultaneous-equation setting. Specifically, we allow for one or more regressors to be correlated with the statistical noise. A modified limited information maximum likelihood (LIML) approach is used to estimate the parameters of the model. Moreover, firm-specific inefficiency scores are also provided. Limited Monte Carlo simulations show that the proposed estimators perform well in finite samples.
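
    For reference, the ZISF model of Kumbhakar, Parmeter, and Tsionas (2013) that this line of work builds on treats each firm as fully efficient with probability p and as a standard half-normal stochastic frontier firm otherwise, so the likelihood is a two-component mixture. The sketch below estimates this exogenous baseline by maximum likelihood; the paper's LIML correction for endogenous regressors is not implemented here, and the data-generating values are hypothetical.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        n = 500
        x = rng.normal(size=n)
        efficient = rng.uniform(size=n) < 0.4        # 40% fully efficient firms
        u = np.where(efficient, 0.0, np.abs(rng.normal(0, 0.8, n)))
        y = 1.0 + 0.5 * x + rng.normal(0, 0.3, n) - u

        def negloglik(theta):
            b0, b1, log_sv, log_su, logit_p = theta
            sv, su = np.exp(log_sv), np.exp(log_su)
            p = 1.0 / (1.0 + np.exp(-logit_p))
            eps = y - b0 - b1 * x
            sigma, lam = np.hypot(sv, su), su / sv
            # Half-normal stochastic frontier density for inefficient firms:
            f_sf = (2.0 / sigma) * norm.pdf(eps / sigma) * norm.cdf(-lam * eps / sigma)
            f_eff = norm.pdf(eps / sv) / sv          # fully efficient: u = 0
            return -np.sum(np.log(p * f_eff + (1.0 - p) * f_sf))

        res = minimize(negloglik, x0=[0.0, 0.0, -1.0, -1.0, 0.0], method="BFGS")
        b0, b1, _, _, logit_p = res.x
        print(f"p_hat = {1/(1 + np.exp(-logit_p)):.2f}, beta = ({b0:.2f}, {b1:.2f})")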

    Zero-inefficiency stochastic frontier models with varying mixing proportion: a semiparametric approach

    In this paper, we propose a semiparametric version of the zero-inefficiency stochastic frontier model of Kumbhakar, Parmeter, and Tsionas (2013) by allowing the proportion of firms that are fully efficient to depend on a set of covariates via an unknown smooth function. We propose an (iterative) backfitting local maximum likelihood estimation procedure that achieves the optimal convergence rates of both the frontier parameters and the nonparametric function of the probability of being efficient. We derive the asymptotic bias and variance of the proposed estimator and establish its asymptotic normality. In addition, we discuss how to test for a parametric specification of the proportion of firms that are fully efficient, as well as how to test for the presence of fully inefficient firms, based on sieve likelihood ratio statistics. The finite sample behavior of the proposed estimation procedure and tests is examined using Monte Carlo simulations. An empirical application is further presented to demonstrate the usefulness of the proposed methodology.
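
    To illustrate the local-likelihood step in isolation (with the frontier parameters held fixed for brevity, whereas the paper's backfitting iterates between them and the smooth function): at each evaluation point z0 the ZISF log-likelihood contributions are kernel-weighted in z_i - z0, and the mixing proportion is modelled locally as a logistic function. All names and values below are illustrative.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(4)
        n = 800
        z = rng.uniform(size=n)                      # covariate driving p(z)
        p_true = 0.2 + 0.6 * z
        efficient = rng.uniform(size=n) < p_true
        u = np.where(efficient, 0.0, np.abs(rng.normal(0, 0.8, n)))
        eps = rng.normal(0, 0.3, n) - u              # composed error (frontier netted out)

        sv, su = 0.3, 0.8                            # frontier scales held fixed here
        sigma, lam = np.hypot(sv, su), su / sv
        f_sf = (2 / sigma) * norm.pdf(eps / sigma) * norm.cdf(-lam * eps / sigma)
        f_eff = norm.pdf(eps / sv) / sv              # fully efficient component

        def p_hat(z0, h=0.10):
            # Local-logit estimate of p(z0) by kernel-weighted likelihood.
            w = norm.pdf((z - z0) / h)               # Gaussian kernel weights
            def nll(ab):
                p = 1 / (1 + np.exp(-(ab[0] + ab[1] * (z - z0))))
                return -np.sum(w * np.log(p * f_eff + (1 - p) * f_sf))
            return 1 / (1 + np.exp(-minimize(nll, x0=[0.0, 0.0]).x[0]))

        for z0 in (0.2, 0.5, 0.8):
            print(f"p({z0}): true = {0.2 + 0.6 * z0:.2f}, local MLE = {p_hat(z0):.2f}")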