
    Some equalities for estimations of variance components in a general linear model and its restricted and transformed models

    For the unknown positive parameter σ² in a general linear model ℳ = {y, Xβ, σ²Σ}, two commonly used estimators are the simple estimator (SE) and the minimum norm quadratic unbiased estimator (MINQUE). In this paper, we derive necessary and sufficient conditions for the equivalence of the SEs and MINQUEs of the variance component σ² in the original model ℳ, the restricted model ℳr = {y, Xβ ∣ Aβ = b, σ²Σ}, the transformed model ℳt = {Ay, AXβ, σ²AΣA′}, and the misspecified model ℳm = {y, X₀β, σ²Σ₀}.
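    As a hypothetical illustration (the paper's exact definitions may differ), the sketch below takes the SE to be the usual quadratic form in the OLS residuals, normalised to be unbiased, and the MINQUE-type estimator to be the corresponding quadratic form in the GLS residuals. With Σ = I the two coincide, the simplest instance of the kind of equality the paper characterizes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.standard_normal((n, p))
Sigma = np.eye(n)              # with Sigma = I the two estimators coincide
sigma2_true = 2.0
y = X @ rng.standard_normal(p) + rng.multivariate_normal(
    np.zeros(n), sigma2_true * Sigma)

# "Simple" estimator: quadratic form in the OLS residuals, normalised so
# that it is unbiased under Var(y) = sigma2 * Sigma.
P = X @ np.linalg.solve(X.T @ X, X.T)      # orthogonal projector onto col(X)
M = np.eye(n) - P
se = (y @ M @ y) / np.trace(M @ Sigma)

# MINQUE-type estimator: quadratic form in the GLS residuals.
Si = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
r = y - X @ beta_gls
minque = (r @ Si @ r) / (n - np.linalg.matrix_rank(X))
```

    When Σ = I, GLS residuals equal OLS residuals and tr(MΣ) = n − rank(X), so `se` and `minque` agree to numerical precision; for a general Σ they differ unless the conditions studied in the paper hold.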

    Quasi-Monte Carlo Methods in Stochastic Simulations

    Different stochastic simulation methods are used to check the robustness of the outcome of policy simulations with a macroeconometric model. A macroeconometric disequilibrium model of the West German economy is used to analyze a reform proposal for the tax system. The model was estimated with quarterly data for the period 1960 to 1994, the longest sample available at the time. Because of nonlinearities, confidence intervals for the simulation results have to be obtained by means of stochastic simulations. The main contribution of this paper consists in presenting the simulation results, whose robustness is analyzed using different approaches to stochastic simulation. In particular, different methods for generating uniform error terms and converting them to normal variates are applied; these include standard approaches as well as quasi-Monte Carlo methods.
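    As a minimal sketch of the quasi-Monte Carlo ingredient described above (the paper's own generators and conversion schemes are not specified here), the following generates low-discrepancy Halton points and converts them to normal variates by inversion of the normal CDF, the conversion that preserves the low-discrepancy structure:

```python
from statistics import NormalDist

def van_der_corput(n, base=2):
    """First n points of the van der Corput sequence in the given base."""
    seq = []
    for i in range(1, n + 1):
        x, f = 0.0, 1.0 / base
        k = i
        while k > 0:
            x += (k % base) * f
            k //= base
            f /= base
        seq.append(x)
    return seq

def halton(n, bases=(2, 3)):
    """n points of a multi-dimensional Halton sequence (one base per dim)."""
    cols = [van_der_corput(n, b) for b in bases]
    return list(zip(*cols))

# Convert quasi-random uniforms in (0, 1) to normal error terms by inversion.
inv = NormalDist().inv_cdf
normals = [tuple(inv(u) for u in point) for point in halton(512)]
```

    Feeding such quasi-random error terms into repeated model solutions, instead of pseudo-random ones, is what distinguishes the quasi-Monte Carlo runs from the standard stochastic simulations.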

    A Structural Model of Tenure and Specific Investments

    Though a lot of work has been done on the distribution of job tenures, we are still uncertain about its main determinants. In this paper, we stress random shocks to match productivity after the start of an employment relation. The specificity of investment makes hiring and separation decisions irreversible. These decisions therefore have an option value. Assumptions of risk neutrality, efficient bargaining, and the efficient resolution of hold-up problems allow investment and separation decisions to be analyzed separately from wage setting. The tenure profiles in wages implied by the model fit the observed pattern quite well. The model yields a hump-shaped pattern in separation rates, similar to learning models, but with a slower decline after the peak. Estimation results using job tenure data from the NLSY support this hump-shaped pattern and favor this model over the learning model. We develop a methodology to decompose shocks to match productivity into idiosyncratic and macro-level shocks. By assuming a Last-In-First-Out (LIFO) separation rule, this model of individual employment relations is embedded in a model of firm-level employment that satisfies Gibrat's law. The LIFO rule is interpreted as an institution protecting the property rights on the specific investments of incumbent workers against the firm hiring new workers.
    Keywords: option value, job tenure, tenure profiles
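    A stylised toy version of the mechanism (not the paper's estimated model: the threshold, shock size, and horizon below are arbitrary assumptions) treats match productivity as a random walk after hiring, with irreversible separation the first time it falls below a reservation level; the resulting empirical separation hazard rises early in tenure before declining:

```python
import random

random.seed(1)

def simulate_match(horizon=40, sigma=0.4, threshold=-0.8):
    """One employment relation: match productivity follows a random walk
    after hiring; the match dissolves the first time productivity falls
    below a stylised reservation threshold. Returns completed tenure."""
    prod = 0.0
    for t in range(1, horizon + 1):
        prod += random.gauss(0.0, sigma)
        if prod < threshold:
            return t
    return horizon                 # right-censored at the horizon

tenures = [simulate_match() for _ in range(20_000)]

# Empirical separation hazard: share of surviving matches dissolving at t.
hazard = []
for t in range(1, 16):
    at_risk = sum(1 for d in tenures if d >= t)
    exits = sum(1 for d in tenures if d == t)
    hazard.append(exits / at_risk)
```

    Because productivity starts strictly above the threshold, very early separations are rare; accumulated shocks then push the hazard up before the surviving, better matches pull it back down, a hump broadly like the one the abstract describes.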

    Investigating complex networks with inverse models: analytical aspects of spatial leakage and connectivity estimation

    Network theory and inverse modeling are two standard tools of applied physics, whose combination is needed when studying the dynamical organization of spatially distributed systems from indirect measurements. However, the associated connectivity estimation may be affected by spatial leakage, an artifact of inverse modeling that limits the interpretability of network analysis. This paper investigates general analytical aspects of this issue. First, the existence of spatial leakage is derived from the topological structure of inverse operators. Then, the geometry of spatial leakage is modeled and used to define a geometric correction scheme, which limits spatial leakage effects in connectivity estimation. Finally, this new approach to network analysis is compared analytically to existing methods based on linear regressions, which are shown to yield biased coupling estimates.
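    A minimal sketch of why spatial leakage arises (a generic minimum-norm inverse on a random forward model, not the paper's specific operators or its correction scheme): in an underdetermined problem the resolution matrix R = WG is not the identity, so estimated activity at one source mixes in activity from the others.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 10, 30      # underdetermined, as in source imaging
G = rng.standard_normal((n_sensors, n_sources))   # hypothetical forward model

# Minimum-norm inverse operator with Tikhonov regularisation.
lam = 0.1
W = G.T @ np.linalg.inv(G @ G.T + lam * np.eye(n_sensors))

# Resolution matrix: estimated sources = R @ true sources.
R = W @ G

# Off-diagonal mass of R quantifies spatial leakage: activity at one
# source contaminates the estimates at all the others.
leakage = np.abs(R - np.diag(np.diag(R))).sum()
```

    Any connectivity measure computed on `R @ x` instead of `x` inherits these off-diagonal mixtures, which is the artifact that geometric correction schemes aim to limit.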

    Computation of restricted maximum likelihood estimates of variance components

    The method preferred by animal breeders for the estimation of variance components is restricted maximum likelihood (REML). Various iterative algorithms have been proposed for computing REML estimates. Five computational strategies for implementing such an algorithm were compared in terms of flops (floating-point operations). These strategies were based, respectively, on the LDL′ decomposition, the W transformation, the SWEEP method, tridiagonalization, and diagonalization of the coefficient matrix of the mixed-model equations.
    The computational requirements of the orthogonal transformations employed in tridiagonalization and diagonalization were found to be rather extensive. However, these transformations are performed prior to the initiation of the iterative estimation process and need not be repeated during the remainder of the process. Subsequent to either diagonalization or tridiagonalization, the flops required per iteration are minimal. Thus, for most applications of mixed-effects linear models with a single set of random effects, the use of an orthogonal transformation prior to the initiation of the iterative process is recommended. For most animal breeding applications, tridiagonalization will generally be more efficient than diagonalization.
    In most animal breeding applications, the coefficient matrix of the mixed-model equations is extremely sparse and of very large order. The use of sparse-matrix techniques for the numerical evaluation of the log-likelihood function and its first- and second-order partial derivatives was investigated in the case of the simple sire and animal models. Instead of applying these techniques directly to the coefficient matrix of the mixed-model equations to obtain the Cholesky factor, they were used to obtain the Cholesky factor indirectly by carrying out a QR decomposition of an augmented model matrix.
    The feasibility of the computational method for the simple sire model was investigated by carrying out its most computationally intensive part (the QR decomposition) for an animal breeding data set comprising 180,994 records and 1,264 sires. The total CPU time required for this part (using an NAS AS/9160 computer) was approximately 75,000 seconds.
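    The payoff from a one-off orthogonal transformation can be sketched for the simplest case of a single set of random effects (a toy model with made-up dimensions, not the sire or animal models of the text): after one eigen-decomposition, each evaluation of the REML log-likelihood inside the iterative maximisation costs only O(n), with no factorisation repeated per iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 60, 2, 8
X = rng.standard_normal((n, p))                 # fixed-effect design
Z = rng.standard_normal((n, q))                 # random-effect design
y = X @ np.array([1.0, -0.5]) + Z @ rng.normal(0, 1.0, q) + rng.normal(0, 0.7, n)

# Error contrasts: columns of A span the null space of X' (A'X = 0, A'A = I),
# so A'y has a distribution free of the fixed effects, as REML requires.
Q, _ = np.linalg.qr(X, mode="complete")
A = Q[:, p:]                                    # n x (n - p)

# One-off orthogonal diagonalisation, done before the iterations start.
d, U = np.linalg.eigh(A.T @ Z @ Z.T @ A)
v = U.T @ (A.T @ y)                             # fixed across all iterations

def reml_loglik(gamma, s2e):
    """REML log-likelihood (up to an additive constant) for
    Var(y) = s2e * (I + gamma * Z Z'). After diagonalisation each call is
    O(n): only elementwise operations, no matrix factorisation."""
    lam = s2e * (1.0 + gamma * d)               # eigenvalues of Var(A'y)
    return -0.5 * (np.log(lam).sum() + (v ** 2 / lam).sum())
```

    An iterative algorithm (EM, Newton, or a grid search over the variance ratio `gamma`) then calls `reml_loglik` repeatedly at trivial cost, which is the point the text makes about tridiagonalization and diagonalization.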

    Image Restoration

    This book represents a sample of recent contributions by researchers from around the world in the field of image restoration. The book consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). Topics cover different aspects of the theory of image restoration, but the book is also an occasion to highlight new research topics arising from the emergence of original imaging devices. From these arise some genuinely challenging problems of image reconstruction and restoration that open the way to new fundamental scientific questions closely related to the world we interact with.