
    Piecewise linear approximations of the standard normal first order loss function

    The first order loss function and its complementary function are extensively used in practical settings. When the random variable of interest is normally distributed, the first order loss function can be easily expressed in terms of the standard normal cumulative distribution function and probability density function. However, the standard normal cumulative distribution function does not admit a closed-form expression and cannot be easily linearised. Several works in the literature discuss approximations for either the standard normal cumulative distribution function or the first order loss function and their inverses. However, a comprehensive study on piecewise linear upper and lower bounds for the first order loss function is still missing. In this work, we first summarise a number of distribution-independent results for the first order loss function and its complementary function. We then extend this discussion by focusing first on random variables featuring a symmetric distribution, and then on normally distributed random variables. For the latter, we develop effective piecewise linear upper and lower bounds that can be immediately embedded in MILP models. These linearisations rely on constant parameters that are independent of the mean and standard deviation of the normal distribution of interest. We finally discuss how to compute optimal linearisation parameters that minimise the maximum approximation error.

    Comment: 22 pages, 7 figures, working draft
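    As a quick, self-contained illustration (not the bounds developed in the paper), the sketch below evaluates the standard normal first order loss function L(z) = φ(z) − z(1 − Φ(z)) and a piecewise linear lower bound built from tangent lines; since L is convex (L''(z) = φ(z) > 0), the maximum over any set of tangents is a valid lower bound. The breakpoints used here are arbitrary, not the error-minimising parameters the paper derives.

```python
import math

def phi(z):
    """Standard normal probability density function."""
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def loss(z):
    """First order loss function: L(z) = E[max(Z - z, 0)] = phi(z) - z * (1 - Phi(z))."""
    return phi(z) - z * (1.0 - Phi(z))

def pw_lower_bound(z, breakpoints):
    """Piecewise linear lower bound: max of tangent lines L(b) + L'(b) * (z - b),
    with L'(b) = -(1 - Phi(b)); valid because L is convex (L''(z) = phi(z) > 0)."""
    return max(loss(b) - (1.0 - Phi(b)) * (z - b) for b in breakpoints)

# Arbitrary, evenly spaced breakpoints (hypothetical; the paper computes optimal ones).
bps = [-2.0, -1.0, 0.0, 1.0, 2.0]
# Worst-case gap of this bound over a grid on [-3, 3].
gap = max(loss(z) - pw_lower_bound(z, bps) for z in [i / 10.0 - 3.0 for i in range(61)])
```

    The tangent construction is what makes the bound MILP-friendly: each tangent line is a single linear constraint, and the maximum over them is modelled by imposing all the constraints simultaneously.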

    Towards Machine Wald

    The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed \emph{by humans} because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to \emph{think} as \emph{humans} do when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models has yet to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite dimensional, whereas calculus on a computer is necessarily discrete and finite. With this purpose, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification and Information Based Complexity.

    Comment: 37 pages

    Bounding separable recourse functions with limited distribution information

    The recourse function in a stochastic program with recourse can be approximated by separable functions of the original random variables or of linear transformations of them. The resulting bound then involves summing simple integrals. These integrals may themselves be difficult to compute or may require more information about the random variables than is available. In this paper, we show that a special class of functions has an easily computable bound that achieves the best upper bound when only first and second moment constraints are available.

    Peer Reviewed
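    The flavour of a first-and-second-moment bound can be illustrated with the classical Scarf (1958) result: for any random variable X with mean μ and standard deviation σ, E[(X − x)^+] ≤ (√((x − μ)² + σ²) − (x − μ))/2, and this bound is tight over the moment class. This is an illustrative sketch of a distribution-free moment bound, not necessarily the specific bound derived in the paper.

```python
import math

def scarf_upper_bound(x, mu, sigma):
    """Tight upper bound on E[(X - x)^+] over all distributions with mean mu
    and standard deviation sigma (classical Scarf 1958 moment bound)."""
    d = x - mu
    return (math.sqrt(d * d + sigma * sigma) - d) / 2.0

def normal_expected_shortage(x, mu, sigma):
    """Exact E[(X - x)^+] when X ~ N(mu, sigma^2), via the standard normal
    first order loss function."""
    z = (x - mu) / sigma
    pdf = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (pdf - z * (1.0 - cdf))

# The distribution-free bound must dominate the value under any specific
# distribution with the same two moments, e.g. the normal.
for x in (-1.0, 0.0, 2.5):
    assert normal_expected_shortage(x, 0.0, 1.0) <= scarf_upper_bound(x, 0.0, 1.0)
```

    Such bounds are useful exactly in the situation the abstract describes: when the integrals defining the recourse function are hard to compute, a closed-form expression in the first two moments can replace them.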

    Hire me: Computational Inference of Hirability in Employment Interviews Based on Nonverbal Behavior

    Understanding the basis on which recruiters form hirability impressions of a job applicant is a key issue in organizational psychology, and it can be addressed as a social computing problem. We approach the problem from a face-to-face, nonverbal perspective where behavioral feature extraction and inference are automated. This paper presents a computational framework for the automatic prediction of hirability. To this end, we collected an audio-visual dataset of real job interviews where candidates were applying for a marketing job. We automatically extracted audio and visual behavioral cues related to both the applicant and the interviewer. We then evaluated several regression methods for the prediction of hirability scores and showed the feasibility of the task, with ridge regression explaining 36.2% of the variance. Feature groups were analyzed, and two main groups of behavioral cues were predictive of hirability: applicant audio features and interviewer visual cues, showing the predictive validity of cues related not only to the applicant but also to the interviewer. As a last step, we analyzed the predictive validity of psychometric questionnaires often used in the personnel selection process, and found that these questionnaires were unable to predict hirability, suggesting that hirability impressions were formed based on the interaction during the interview rather than on questionnaire data.
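    A minimal sketch of the regression step described above, on synthetic stand-in data rather than the interview dataset (the feature matrix, dimensions, and regularisation strength here are hypothetical), using the closed-form ridge solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a behavioral feature matrix (n interviews x d cues);
# the real cues (prosody, visual activity, ...) are those described in the paper.
n, d = 60, 5
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = X @ true_w + rng.normal(scale=0.5, size=n)  # noisy stand-in "hirability scores"

alpha = 1.0  # illustrative regularisation strength
# Ridge closed form: w = (X^T X + alpha * I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

pred = X @ w
# Fraction of variance explained (R^2), the metric reported in the abstract.
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

    Ridge's L2 penalty is a natural choice here because behavioral cue groups tend to be correlated, and the penalty stabilises the coefficient estimates on a modest number of interviews.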