1,504,395 research outputs found

    Analysis-of-marginal-Tail-Means (ATM): a robust method for discrete black-box optimization

    We present a new method, called Analysis-of-marginal-Tail-Means (ATM), for effective robust optimization of discrete black-box problems. ATM has important applications to many real-world engineering problems (e.g., manufacturing optimization, product design, molecular engineering), where the objective to optimize is black-box and expensive, and the design space is inherently discrete. One weakness of existing methods is that they are not robust: these methods perform well under certain assumptions, but yield poor results when such assumptions (which are difficult to verify in black-box problems) are violated. ATM addresses this via the use of marginal tail means for optimization, which combines both rank-based and model-based methods. The trade-off between rank- and model-based optimization is tuned by first identifying important main effects and interactions, then finding a good compromise which best exploits additive structure. By adaptively tuning this trade-off from data, ATM provides improved robust optimization over existing methods, particularly in problems with (i) a large number of factors, (ii) unordered factors, or (iii) experimental noise. We demonstrate the effectiveness of ATM in simulations and in two real-world engineering problems: the first on robust parameter design of a circular piston, and the second on product family design of a thermistor network.
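    The ATM estimator itself is not spelled out in the abstract; as a rough illustration of the underlying idea, a marginal tail mean for a discrete factor can be computed by averaging only the best q-fraction of observed responses at each factor level. The function name, the minimization convention, and the synthetic data below are all assumptions made for this sketch, not the paper's method:

    ```python
    import numpy as np

    def marginal_tail_means(X, y, q=0.25):
        """For each factor (column of X) and each of its levels, average the
        best q-fraction (lower tail, assuming minimization) of responses y.
        Returns {factor_index: {level: tail_mean}}. Illustrative sketch only."""
        out = {}
        for j in range(X.shape[1]):
            levels = {}
            for lev in np.unique(X[:, j]):
                vals = np.sort(y[X[:, j] == lev])
                k = max(1, int(np.ceil(q * len(vals))))
                levels[lev] = vals[:k].mean()  # mean of the best k responses
            out[j] = levels
        return out

    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(60, 2))             # two discrete factors, 3 levels each
    y = (X[:, 0] == 1) * -2.0 + rng.normal(size=60)  # level 1 of factor 0 is best
    mtm = marginal_tail_means(X, y)
    best = {j: min(d, key=d.get) for j, d in mtm.items()}
    print(best)
    ```

    Averaging only the tail rather than all responses is what makes the summary robust to how the rest of the response distribution behaves at each level.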

    Data Envelopment Analysis as a Complement to Marginal Analysis

    The consideration in the present study is mainly conceptual. The objective is to show how Data Envelopment Analysis (DEA) can be used to reveal the true input-output relations in an industry. In the estimation of a production function it is assumed that all firms use the existing technology efficiently. In the real world, however, the observed firms produce homogeneous outputs with differences in factor intensities and in managerial capacity; hence, inefficiencies are hidden in the estimated production functions. In order to overcome this drawback of the parametric approach and to reveal the true nature of the input-output relations in production, given the available technology, the DEA approach is applied. In this study DEA is applied to select the farms that utilize the existing technology efficiently, allowing the estimation of a production function that reveals the true input-output relations in sheep-goat farming, using farm accounting data from a sample of 108 sheep-goat farms.
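    The envelopment form of the standard input-oriented CCR DEA model can be solved as a small linear program per decision-making unit: minimize θ subject to Σλ_j x_j ≤ θ x_o and Σλ_j y_j ≥ y_o. The sketch below uses `scipy.optimize.linprog` on invented data, not the study's own implementation; efficient units (those the study would select) receive a score of 1:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def dea_ccr_input(X, Y):
        """Input-oriented CCR efficiency for each DMU.
        X: (n, m) inputs, Y: (n, s) outputs. Returns scores in (0, 1]."""
        n, m = X.shape
        s = Y.shape[1]
        scores = []
        for o in range(n):
            c = np.r_[1.0, np.zeros(n)]              # variables: [theta, lambda_1..n]
            # inputs:  sum_j lambda_j * x_j - theta * x_o <= 0
            A_in = np.c_[-X[o].reshape(m, 1), X.T]
            b_in = np.zeros(m)
            # outputs: y_o - sum_j lambda_j * y_j <= 0
            A_out = np.c_[np.zeros((s, 1)), -Y.T]
            b_out = -Y[o]
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[b_in, b_out],
                          bounds=[(None, None)] + [(0, None)] * n)
            scores.append(res.x[0])
        return np.array(scores)

    X = np.array([[2.0], [4.0], [3.0]])  # one input per farm (toy data)
    Y = np.array([[2.0], [2.0], [3.0]])  # one output per farm
    print(dea_ccr_input(X, Y))           # farms 0 and 2 are efficient here
    ```

    Restricting the subsequent production-function estimation to units scoring 1 is one way to realize the study's idea of fitting only to the efficient frontier.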

    A test of collusive behavior based on incentives

    This paper proposes a novel collusion test based on the analysis of incentives faced by each firm in a colluding coalition. Once collusion is in effect, each colluding firm has an incentive to deviate secretly from the agreement, since doing so increases its own profit even though the colluding firms' joint profit decreases. Thus, in a colluding coalition each firm's marginal revenue, calculated with Nash conjectures, exceeds its marginal cost. The collusion test is based on rejecting the null hypothesis that the firm's marginal revenue with Nash conjectures is equal to or less than its marginal cost.
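    The test logic can be sketched on synthetic data: compute each firm's marginal revenue under Nash (Cournot) conjectures, MR_i = P + P'(Q)·q_i, and run a one-sided test of H0: MR ≤ MC, rejecting in favor of MR > MC. All numbers below are invented for illustration; the paper's actual estimation strategy is not reproduced here:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    T = 200
    # Toy market with linear inverse demand P = a - b*Q (a, b assumed known
    # or pre-estimated); the test itself only needs MR and MC series.
    a, b = 100.0, 1.0
    q_i = rng.uniform(10, 12, T)        # one firm's output over T periods
    Q = q_i + rng.uniform(20, 24, T)    # industry output
    P = a - b * Q
    mc = 30.0 + rng.normal(0, 1, T)     # firm's (noisy) marginal cost
    mr_nash = P + (-b) * q_i            # marginal revenue with Nash conjectures

    # H0: MR(Nash) <= MC  vs  H1: MR(Nash) > MC; rejection signals collusion
    t, p = stats.ttest_1samp(mr_nash - mc, 0.0, alternative="greater")
    print(f"t = {t:.2f}, one-sided p = {p:.4f}")
    ```

    In this synthetic setup price is held well above the competitive level, so MR under Nash conjectures exceeds MC and the null is rejected, which is exactly the pattern the test reads as evidence of collusion.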

    Overlearning in marginal distribution-based ICA: analysis and solutions

    The present paper is written as a word of caution, with users of independent component analysis (ICA) in mind, about overlearning phenomena that are often observed. We consider two types of overlearning, typical of higher-order-statistics-based ICA. These algorithms can be seen as maximising the negentropy of the source estimates. The first kind of overlearning results in the generation of spike-like signals if there are not enough samples in the data or a considerable amount of noise is present. It is argued that, if the data has a power spectrum characterised by a 1/f curve, we face a more severe problem, which cannot be solved inside the strict ICA model. This overlearning is better characterised by bumps instead of spikes. Both overlearning types are demonstrated on artificial signals as well as magnetoencephalograms (MEG). Several methods are suggested to circumvent both types, either by making the estimation of the ICA model more robust or by including further modelling of the data.
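    A brute-force illustration of why spike-type overlearning happens (assuming nothing beyond the abstract, and substituting random search for an actual ICA algorithm): with few samples relative to the dimension, even pure Gaussian noise contains projections that look strongly super-Gaussian, exactly the spurious structure a negentropy-maximising ICA algorithm can latch onto:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_dim = 200, 50
    X = rng.normal(size=(n_samples, n_dim))  # pure noise, no sources at all

    def excess_kurtosis(u):
        u = (u - u.mean()) / u.std()
        return (u ** 4).mean() - 3.0

    # Compare one raw coordinate against the best of many random projections:
    # searching over directions "finds" spiky, high-kurtosis components
    # even though the data is Gaussian by construction.
    base = excess_kurtosis(X[:, 0])
    W = rng.normal(size=(2000, n_dim))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    proj_kurt = [excess_kurtosis(X @ w) for w in W]
    print(f"raw coordinate: {base:.2f}, best projection: {max(proj_kurt):.2f}")
    ```

    A real fixed-point ICA update optimises over directions far more efficiently than this random search, so with too few samples it overfits to such spurious projections even faster.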

    ALTERNATIVE PRICE SPECIFICATION FOR MUNICIPAL WATER DEMANDS: AN EMPIRICAL TEST

    Based on data from 92 Minnesota cities, the analysis shows that neither marginal price nor average price emerges as the better predictor of demand. The price elasticity of demand ranges from -0.17 for marginal price in the linear model to -0.27 for average price in the log-linear model. It appears from the analysis that many consumers are unaware of the marginal price of their water. Utilities should therefore simplify their pricing structures and present consumers with easy-to-understand costs, such as the cost of six hours of lawn watering.
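    The elasticity figures follow directly from the functional forms: in a log-linear (log-log) specification ln Q = α + β ln P, the slope β is itself the constant price elasticity, while in the linear model the elasticity is b·P/Q evaluated at the means. A minimal sketch on synthetic data (the -0.27 "true" value is borrowed from the abstract purely as the simulated truth; the data is not the Minnesota sample):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 92                                   # same sample size as the study; data synthetic
    price = rng.uniform(1.0, 5.0, n)
    # generate demand with a true constant price elasticity of -0.27
    log_q = 3.0 - 0.27 * np.log(price) + rng.normal(0, 0.05, n)

    # OLS of log quantity on log price: the slope is the elasticity
    A = np.c_[np.ones(n), np.log(price)]
    coef, *_ = np.linalg.lstsq(A, log_q, rcond=None)
    print(f"estimated elasticity: {coef[1]:.3f}")  # close to -0.27
    ```

    Both estimates being well inside (-1, 0) is what makes the abstract's demand "inelastic": a price increase reduces consumption, but less than proportionally.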

    Estimating Marginal Returns to Education

    This paper estimates the marginal returns to college for individuals induced to enroll in college by different marginal policy changes. The recent instrumental variables literature seeks to estimate this parameter, but in general it does so only under strong assumptions that are tested and found wanting. We show how to utilize economic theory and local instrumental variables estimators to estimate the effect of marginal policy changes. Our empirical analysis shows that returns are higher for individuals more likely to attend college. We contrast the returns to well-defined marginal policy changes with IV estimates of the return to schooling. Some marginal policy changes inducing students into college produce very low returns.
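    The local instrumental variables idea can be sketched in a few lines: the marginal treatment effect (MTE) is the derivative of E[Y | P(Z)=p] with respect to the propensity score p, so fitting that conditional mean flexibly and differentiating recovers MTE(p). The data-generating process below is invented; it merely reproduces the abstract's qualitative finding that returns are higher for individuals more likely to attend:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20000
    p = rng.uniform(0.05, 0.95, n)    # propensity score induced by instruments
    u = rng.uniform(0, 1, n)          # unobserved resistance to treatment
    d = (u < p).astype(float)         # enroll if resistance below propensity
    beta = 1.0 - u                    # returns decline with resistance, so
    y = 0.5 + d * beta + rng.normal(0, 0.1, n)  # likely enrollees gain most

    # Here E[Y|p] = 0.5 + p - p^2/2, so a degree-2 polynomial fit suffices;
    # its derivative is the MTE curve, MTE(p) = 1 - p in this design.
    c = np.polyfit(p, y, 2)
    mte = np.poly1d(c).deriv()
    print(f"MTE(0.2) ~ {mte(0.2):.2f}, MTE(0.8) ~ {mte(0.8):.2f}")
    ```

    The declining MTE curve is the sketch's analogue of the paper's finding: marginal policy changes that pull in students with high resistance (high p margins) yield much lower returns than those affecting likely attendees.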

    Bayesian Model Selection Based on Proper Scoring Rules

    Bayesian model selection with improper priors is not well-defined because of the dependence of the marginal likelihood on the arbitrary scaling constants of the within-model prior densities. We show how this problem can be evaded by replacing the marginal log-likelihood with a homogeneous proper scoring rule, which is insensitive to the scaling constants. Suitably applied, this will typically enable consistent selection of the true model. Comment: Published at http://dx.doi.org/10.1214/15-BA942 in Bayesian Analysis (http://projecteuclid.org/euclid.ba) by the International Society for Bayesian Analysis (http://bayesian.org/).
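    A standard example of such a homogeneous scoring rule is the Hyvärinen score, S(x, q) = (d²/dx²) log q(x) + ½ ((d/dx) log q(x))², which depends on the density only through derivatives of its logarithm, so a multiplicative scaling constant drops out. A finite-difference check of this invariance (illustrative; not the paper's code, and the constant 7.3 is arbitrary):

    ```python
    import numpy as np

    def hyvarinen_score(logq, x, h=1e-3):
        """Hyvarinen score via central finite differences of log q."""
        g = (logq(x + h) - logq(x - h)) / (2 * h)               # d/dx log q
        lap = (logq(x + h) - 2 * logq(x) + logq(x - h)) / h**2  # d2/dx2 log q
        return lap + 0.5 * g**2

    logq = lambda x: -0.5 * x**2 / 2.0             # unnormalized Gaussian, var 2
    logq_scaled = lambda x: logq(x) + np.log(7.3)  # same density times a constant

    x = np.linspace(-3, 3, 11)
    print(np.allclose(hyvarinen_score(logq, x), hyvarinen_score(logq_scaled, x)))
    ```

    Because the score never sees the normalizing (or improper-prior scaling) constant, comparing models by their accumulated scores sidesteps the arbitrariness that breaks the marginal likelihood.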