
    Fast and optimal solution to the Rankine-Hugoniot problem

    A new, definitive, reliable, and fast iterative method is described for determining the geometrical properties of a shock (i.e., the shock-normal angle θ_Bn, the shock normal N̂, the shock speed V_s, and the Alfvén Mach number M_A), the conservation constants, and the self-consistent asymptotic magnetofluid variables from three-dimensional magnetic field and plasma observations. The method is well conditioned and reliable at all θ_Bn angles regardless of the shock strength or geometry. Explicit proof of uniqueness of the shock geometry solution, by either analytical or graphical methods, is given. The method is applied to synthetic and real shocks, including a bow shock event, and the results are then compared with those determined by preaveraging methods and other iterative schemes. A complete analysis of the confidence region and error bounds of the solution is also presented.
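
    The abstract does not reproduce the method's equations. As a point of reference, here is a minimal sketch of the standard magnetic-coplanarity estimate of the shock normal and of θ_Bn; this is a common first guess, not the paper's full iterative Rankine-Hugoniot fit, and the field vectors below are hypothetical.

    import numpy as np

    def coplanarity_normal(B_up, B_dn):
        """Shock normal from magnetic coplanarity: n is parallel to
        (B_dn x B_up) x (B_dn - B_up). A standard first guess, not the
        paper's full iterative Rankine-Hugoniot solution."""
        n = np.cross(np.cross(B_dn, B_up), B_dn - B_up)
        return n / np.linalg.norm(n)

    def theta_Bn(B_up, n):
        """Angle (degrees) between the upstream field and the shock normal."""
        c = abs(np.dot(B_up, n)) / np.linalg.norm(B_up)
        return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

    # Hypothetical upstream/downstream magnetic field vectors (nT)
    B_up = np.array([3.0, -2.0, 1.0])
    B_dn = np.array([6.0, -1.0, 4.0])
    n = coplanarity_normal(B_up, B_dn)
    print(f"theta_Bn = {theta_Bn(B_up, n):.1f} deg")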

    Application of flexible recipes for model building, batch process optimization and control

    Unlike the traditionally fixed recipes of batch process operation, flexible recipes allow some of their relevant recipe items to be adjusted. These adjustments can either be predefined, in the case of planned experimentation, or suggested by a formal process optimization or control algorithm on the basis of actual process information. Flexible recipes are involved in both the response surface methodology and simplex evolutionary operation (EVOP), two well-known methods for empirical model building and process optimization. Another application of flexible recipes arises in a feedforward quality control strategy for batch processes when variations in market or process conditions are known a priori. The experimental results of these strategies are presented for the batchwise production of benzyl alcohol on a pilot-plant scale. Experiments have been performed to obtain a reliable model of the yield. On the basis of this model, better process conditions have been suggested, which deviate substantially from the final simplex that resulted from the simplex EVOP experiments. Finally, an adaptive feedforward control strategy has been applied to a priori known disturbances in the process inputs.
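
    Since the abstract mentions simplex EVOP without detail, a minimal sketch of the basic reflection step such a scheme could use is given below; the factor names and numbers are hypothetical, not from the pilot-plant study.

    import numpy as np

    def simplex_evop_step(points, responses):
        """One reflection step of simplex EVOP: drop the worst recipe
        point and reflect it through the centroid of the remaining
        points. `points` is a (k+1, k) array of recipe settings,
        `responses` holds the measured yields (higher is better)."""
        worst = np.argmin(responses)
        centroid = np.delete(points, worst, axis=0).mean(axis=0)
        return 2.0 * centroid - points[worst]

    # Hypothetical two-factor recipe: (temperature in degC, reagent ratio)
    points = np.array([[60.0, 1.0], [65.0, 1.2], [62.0, 1.4]])
    yields = np.array([0.71, 0.78, 0.74])   # measured batch yields
    print("next batch settings:", simplex_evop_step(points, yields))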

    Estimation of Cost Allocation Coefficients at the Farm Level Using an Entropy Approach

    This paper aims to estimate farm-level cost allocation coefficients from whole-farm input costs. An entropy approach was developed under a Tobit formulation and applied to a sample of farms from the 2004 FADN database for the Alentejo region, Southern Portugal. A Generalized Maximum Entropy model and a Generalized Cross Entropy model were adapted to the sample conditions and tested. Model results were assessed in terms of their precision and estimation power and were compared with observed data. The entropy approach proved to be a flexible and valid tool for estimating incomplete information, namely farm costs.
    Keywords: generalized maximum entropy; costs; estimation; Alentejo; FADN.
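
    The paper's exact GME/GCE specification (including error supports and the Tobit censoring) is not given in the abstract. The sketch below shows only the core idea on a toy problem, assuming each allocation coefficient is a convex combination of known support points and that whole-farm costs equal activity levels times coefficients; all data are hypothetical.

    import numpy as np
    from scipy.optimize import minimize

    # Toy data: 2 farms, 2 activities, one input cost per farm.
    Z = np.array([[10.0, 5.0], [8.0, 9.0]])   # activity levels per farm
    x = Z @ np.array([2.0, 1.0])              # observed whole-farm input costs

    support = np.array([0.0, 1.5, 3.0])       # support points for each coefficient
    J, M = Z.shape[1], support.size

    def neg_entropy(p):
        return np.sum(p * np.log(p + 1e-12))  # maximize entropy = minimize this

    def coeffs(p):
        return p.reshape(J, M) @ support      # coefficients as expected support values

    cons = [{"type": "eq", "fun": lambda p: p.reshape(J, M).sum(axis=1) - 1.0},
            {"type": "eq", "fun": lambda p: Z @ coeffs(p) - x}]
    res = minimize(neg_entropy, np.full(J * M, 1.0 / M),
                   bounds=[(1e-9, 1.0)] * (J * M), constraints=cons, method="SLSQP")
    print("estimated allocation coefficients:", coeffs(res.x))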

    Should one compute the Temporal Difference fix point or minimize the Bellman Residual? The unified oblique projection view

    We investigate projection methods for evaluating a linear approximation of the value function of a policy in a Markov Decision Process. We consider two popular approaches: the one-step Temporal Difference fixed-point computation (TD(0)) and Bellman Residual (BR) minimization. We describe examples where each method outperforms the other. We highlight a simple relation between the objective functions they minimize, and show that while BR enjoys a performance guarantee, TD(0) does not in general. We then propose a unified view in terms of oblique projections of the Bellman equation, which substantially simplifies and extends the characterization of (Schoknecht, 2002) and the recent analysis of (Yu & Bertsekas, 2008). Finally, we describe simulations suggesting that although the TD(0) solution is usually slightly better than the BR solution, its inherent numerical instability makes it very poor in some cases, and thus worse on average.
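
    For concreteness, here is a sketch of the two estimators on a small known MDP. In practice both are estimated from samples; the model-based forms below simply make the contrast explicit, and all quantities are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    nS, k, gamma = 6, 2, 0.9
    P = rng.random((nS, nS)); P /= P.sum(axis=1, keepdims=True)  # policy transition matrix
    r = rng.random(nS)                       # expected one-step rewards
    Phi = rng.random((nS, k))                # linear value-function features
    D = np.eye(nS) / nS                      # state weighting (uniform here)

    # TD(0) fixed point: solve Phi^T D (Phi - gamma P Phi) theta = Phi^T D r
    A = Phi.T @ D @ (Phi - gamma * P @ Phi)
    theta_td = np.linalg.solve(A, Phi.T @ D @ r)

    # Bellman Residual minimization: weighted least squares on
    # (Phi - gamma P Phi) theta ~ r
    Mmat = Phi - gamma * P @ Phi
    theta_br = np.linalg.solve(Mmat.T @ D @ Mmat, Mmat.T @ D @ r)

    print("TD(0) weights:", theta_td, " BR weights:", theta_br)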

    From Fixed-X to Random-X Regression: Bias-Variance Decompositions, Covariance Penalties, and Prediction Error Estimation

    In statistical prediction, classical approaches for model selection and model evaluation based on covariance penalties are still widely used. Most of the literature on this topic is based on what we call the "Fixed-X" assumption, where covariate values are assumed to be nonrandom. By contrast, it is often more reasonable to take a "Random-X" view, where the covariate values are independently drawn for both training and prediction. To study the applicability of covariance penalties in this setting, we propose a decomposition of Random-X prediction error in which the randomness in the covariates contributes to both the bias and variance components. This decomposition is general, but we concentrate on the fundamental case of least squares regression. We prove that in this setting the move from Fixed-X to Random-X prediction results in an increase in both bias and variance. When the covariates are normally distributed and the linear model is unbiased, all terms in this decomposition are explicitly computable, which yields an extension of Mallows' Cp that we call RCp. RCp also holds asymptotically for certain classes of nonnormal covariates. When the noise variance is unknown, plugging in the usual unbiased estimate leads to an approach that we call RCp-hat, which is closely related to Sp (Tukey, 1967) and GCV (Craven and Wahba, 1978). For excess bias, we propose an estimate based on the "shortcut formula" for ordinary cross-validation (OCV), resulting in an approach we call RCp+. Theoretical arguments and numerical simulations suggest that RCp+ is typically superior to OCV, though the difference is small. We further examine the Random-X error of other popular estimators. The surprising result we get for ridge regression is that, in the heavily regularized regime, Random-X variance is smaller than Fixed-X variance, which can lead to smaller overall Random-X error.
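
    The exact RCp formulas are developed in the paper itself. As a grounded companion, the sketch below computes the classical Fixed-X Mallows' Cp and the OCV "shortcut formula" the abstract refers to, for ordinary least squares on simulated Random-X data; all settings are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 50, 3
    X = rng.normal(size=(n, p))               # Random-X design: covariates drawn at random
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

    H = X @ np.linalg.solve(X.T @ X, X.T)     # hat matrix of least squares
    resid = y - H @ y
    rss = resid @ resid

    # Leave-one-out (ordinary) cross-validation via the shortcut formula:
    ocv = np.mean((resid / (1.0 - np.diag(H))) ** 2)

    sigma2_hat = rss / (n - p)                # usual unbiased noise-variance estimate
    cp_fixed_x = rss / n + 2.0 * sigma2_hat * p / n   # Mallows' Cp (Fixed-X form)
    print(f"OCV = {ocv:.3f}   Fixed-X Cp = {cp_fixed_x:.3f}")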