    Conditional Value-at-Risk Constraint and Loss Aversion Utility Functions

    Get PDF
    We provide an economic interpretation of the practice of incorporating risk measures as constraints in a classic expected return maximization problem. For what we call the infimum-of-expectations class of risk measures, we show that if the decision maker (DM) maximizes the expectation of a random return under the constraint that the risk measure is bounded above, then the DM behaves as a ``generalized expected utility maximizer'' in the following sense. The DM exhibits ambiguity with respect to a family of utility functions defined on a larger set of decisions than the original one: adopting a pessimistic stance, the DM first minimizes expected utility over this family and then maximizes over a new decision set. This economic behaviour, called ``Maxmin under risk'', was studied by Maccheroni (2002). The interpretation also allows us to exhibit a loss aversion factor when the risk measure is the Conditional Value-at-Risk.
    Keywords: Risk measures; Utility functions; Nonexpected utility theory; Maxmin; Conditional Value-at-Risk; Loss aversion
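
    For reference (the notation here is ours, not the paper's), the Conditional Value-at-Risk of a loss $L$ has the standard Rockafellar--Uryasev representation as an infimum of expectations, which is exactly the structure the abstract refers to, and the constrained problem reads:

        \max_{X \in \mathcal{X}} \; \mathbb{E}[X]
        \quad \text{s.t.} \quad \mathrm{CVaR}_\alpha(-X) \le c,
        \qquad \text{where} \qquad
        \mathrm{CVaR}_\alpha(L) = \inf_{t \in \mathbb{R}} \Big\{\, t + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(L-t)^{+}\big] \,\Big\}.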

    Capturing Risk in Capital Budgeting

    Get PDF
    NPS NRP Technical Report. This proposed research has the goal of proposing a novel, reusable, extensible, adaptable, and comprehensive advanced analytical process and Integrated Risk Management approach to help the Department of Defense (DOD) with risk-based capital budgeting, Monte Carlo risk simulation, predictive analytics, and stochastic optimization of acquisition and program portfolios with multiple competing stakeholders subject to budgetary, risk, schedule, and strategic constraints. The research covers traditional capital budgeting methodologies used in industry, including the market, cost, and income approaches, and explains how some of these traditional methods can be applied in the DOD by using DOD-centric non-economic, logistic, readiness, capabilities, and requirements variables. Stochastic portfolio optimization with dynamic simulations and investment efficient frontiers, run for the purpose of selecting the best combination of programs and capabilities, is also addressed, as are alternative methods such as average ranking, risk metrics, lexicographic methods, PROMETHEE, ELECTRE, and others. The results include actionable intelligence developed from an analytically robust case study that senior leadership at the DOD may use to make optimal decisions. The main deliverables will be a detailed written research report and a presentation brief on the approach of capturing risk and uncertainty in capital budgeting analysis. The report will detail the proposed methodology and applications, as well as a summary case study and examples of how the methodology can be applied. N8 - Integration of Capabilities & Resources. This research is supported by funding from the Naval Postgraduate School, Naval Research Program (PE 0605853N/2098), https://nps.edu/nrp. Chief of Naval Operations (CNO). Approved for public release. Distribution is unlimited.
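
    A minimal sketch of the kind of Monte Carlo portfolio selection the report describes (all program costs, value distributions, the budget, and the downside-risk penalty below are hypothetical, not the report's data):

        # Simulate uncertain program values and pick the affordable portfolio with the
        # best risk-adjusted expected value (brute force over all program combinations).
        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        costs = np.array([4.0, 2.5, 3.0, 1.5])                # hypothetical program costs ($M)
        values = rng.lognormal(mean=[1.6, 1.2, 1.4, 0.9],
                               sigma=0.5, size=(10_000, 4))   # simulated program values ($M)
        budget, risk_weight = 8.0, 0.5

        best_score, best_set = -np.inf, ()
        for k in range(1, len(costs) + 1):
            for subset in itertools.combinations(range(len(costs)), k):
                idx = list(subset)
                if costs[idx].sum() > budget:
                    continue                                  # violates the budget constraint
                totals = values[:, idx].sum(axis=1)
                # Penalize downside risk via the gap between the mean and the 5th percentile.
                score = totals.mean() - risk_weight * (totals.mean() - np.percentile(totals, 5))
                if score > best_score:
                    best_score, best_set = score, subset
        print(best_set, round(best_score, 2))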

    Numerical and Evolutionary Optimization 2020

    Get PDF
    This book grew out of the 8th International Workshop on Numerical and Evolutionary Optimization (NEO) and collects papers at the intersection of the two research areas covered at the workshop: numerical optimization and evolutionary search techniques. While the focus is on the design of fast and reliable methods lying across these two paradigms, the resulting techniques apply to a broad class of real-world problems, such as pattern recognition, routing, energy, production lines, prediction, and modeling, among others. This volume is intended to serve as a useful reference for mathematicians, engineers, and computer scientists exploring current issues and solutions emerging from these mathematical and computational methods and their applications.

    How (not) to Evaluate Passenger Routes, Timetables and Line Plans

    Get PDF
    Accurate evaluation of the service quality of public transport is imperative for public transport operators, providers of competing mobility services, and policy makers. However, there is no consensus on how public transport should be evaluated. We fill this research gap by presenting a structural approach to evaluating three common manifestations of public transport (route sets, timetables, and line plans), considering the two predominant route choice models (shortest path routing and logit routing). The measures for service quality that we derive are consistent with the underlying routing models, are easy to interpret, and can be computed efficiently, providing a ready-to-use framework for evaluating public transport. As a byproduct, our analysis reveals multiple managerial insights.
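
    In our own notation (the paper's exact measures may differ), the two routing models lead to two natural service-quality scores for a single origin-destination pair; the route costs and the logit scale parameter theta below are hypothetical:

        import numpy as np

        def shortest_path_cost(route_costs):
            """Quality under shortest-path routing: passengers take the cheapest route."""
            return min(route_costs)

        def logsum_cost(route_costs, theta=1.0):
            """Expected perceived cost under logit routing (the classical logsum)."""
            c = np.asarray(route_costs, dtype=float)
            return -np.log(np.exp(-theta * c).sum()) / theta

        routes = [32.0, 35.0, 41.0]   # perceived travel times of three routes (minutes)
        print(shortest_path_cost(routes), round(logsum_cost(routes, theta=0.3), 2))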

    Preference-Based Trajectory Generation

    Full text link
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/76820/1/AIAA-36214-892.pd

    DESIGN OPTIMIZATION FOR A SUPPORT FOR THE STORAGE RING QUADRUPOLE MAGNET IN A SYNCHROTRON RADIATION FACILITY

    Get PDF
    High-brilliance photon beam production requires high-gradient magnets. High-gradient magnets generate large magnetic forces that must be borne by supports to keep the magnets in position. The objective of this study was to design a support for the CLS 2.0 quadrupole magnet that suppresses vibrations, with the goals of using a minimal amount of material and achieving a lower cost than the existing system. The motivation of this study was the upgrade of the CLS 2.0 electron beam, specifically the beam size, which will be more than a hundred times smaller than that of the current CLS. The optimization goals of the support design were: (1) maximizing the natural frequency of the whole magnet system (magnet + supports) and (2) minimizing the weight of the frame, while meeting the constraints: (1) static deflection less than 10 microns; (2) stress less than the yield stress of the frame material; (3) natural frequency of the system above 50 Hz. When translated into an optimization problem, this is a large problem because too many design parameters are involved, which makes the "All-In-One (AIO)" optimization strategy infeasible. This study adopted a divide-and-conquer strategy, i.e., it decomposed the whole problem into a set of small problems and then optimized them separately. By applying this novel design process, the frame was successfully designed, and the verification showed satisfactory results. The contribution of this work lies in the field of computational design; specifically, it provides a case demonstration of the usefulness of the divide-and-conquer strategy when optimizing designs for large problems.
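
    As a toy illustration only (a single-column cantilever stand-in, not the thesis's model; the material properties, magnet mass, and load below are assumed numbers), the basic sizing problem can be posed as minimizing frame mass subject to the deflection and natural-frequency constraints:

        import numpy as np
        from scipy.optimize import minimize

        E, rho, L = 210e9, 7850.0, 0.8            # steel modulus (Pa), density (kg/m^3), column length (m)
        M_magnet, F_mag = 1200.0, 2000.0          # assumed magnet mass (kg) and lateral magnetic force (N)

        def stiffness(x):
            b, h = x                              # cross-section width and height (m)
            I = b * h**3 / 12.0                   # second moment of area of a solid section
            return 3.0 * E * I / L**3             # cantilever tip stiffness

        def mass(x):
            b, h = x
            return rho * b * h * L                # frame (column) mass to be minimized

        def natural_frequency(x):
            return np.sqrt(stiffness(x) / (M_magnet + mass(x))) / (2.0 * np.pi)

        cons = [
            {"type": "ineq", "fun": lambda x: 10e-6 - F_mag / stiffness(x)},   # deflection <= 10 microns
            {"type": "ineq", "fun": lambda x: natural_frequency(x) - 50.0},    # natural frequency >= 50 Hz
        ]
        res = minimize(mass, x0=[0.2, 0.2], bounds=[(0.05, 0.5), (0.05, 0.5)],
                       constraints=cons, method="SLSQP")
        print(res.x, natural_frequency(res.x), mass(res.x))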

    Parameters optimization of a charge transport model for the electrical characterization of dielectric materials

    Get PDF
    A mathematical model based on the physics of insulating materials has been developed in our laboratory to describe bipolar charge transport (BCT) in low-density polyethylene (LDPE) under DC stress. The phenomena of trapping and detrapping, the barrier height for injection, the mobility, and the recombination of positive and negative charges are considered. The model is based on the Poisson equation and the law of conservation of charge. It requires inputs related to the experimental conditions, such as temperature, applied voltage, and dielectric thickness, as well as a set of parameters such as the injection barrier, mobility, and trapping and detrapping coefficients. Most of these parameters cannot be predicted, observed, or estimated by independent experiments. For this reason, an optimization algorithm is used to fit the BCT model to the experimental measurements, whatever the experimental conditions, by minimizing the sum of squared deviations between the experimental data and the model output. The experimental data used are the net charge density measured by the pulsed electro-acoustic (PEA) method as well as the external charge current measurements. After testing five optimization algorithms, we selected the Trust Region Reflective algorithm, which best meets our criteria. This algorithm allowed us to find a set of parameters yielding a good correlation between the simulated current and charge densities and those obtained experimentally. The optimization was performed for different electric fields applied to the material in order to obtain a unique set of parameters that best characterizes the studied material. In addition, the optimization algorithm was used to analyze the injection barrier when the interfaces are of different natures.
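
    The fitting step itself can be sketched as follows; simulate_bct is only a stand-in for the actual charge transport solver and the parameter values are hypothetical, but SciPy's least_squares with method="trf" is the Trust Region Reflective solver named above:

        import numpy as np
        from scipy.optimize import least_squares

        def simulate_bct(params, t):
            """Placeholder for the BCT model: simulated external current for an
            injection barrier w (eV) and a mobility mu (m^2/Vs)."""
            w, mu = params
            return mu * np.exp(-w) * (1.0 + 0.1 * np.log1p(t))   # stand-in, not the real physics

        t_meas = np.linspace(1.0, 1e4, 200)                      # measurement times (s)
        j_true = simulate_bct([1.2, 2e-14], t_meas)              # synthetic "measured" current
        j_meas = j_true * (1.0 + 0.02 * np.random.default_rng(1).standard_normal(t_meas.size))

        def residuals(params):
            return simulate_bct(params, t_meas) - j_meas         # model minus experiment

        fit = least_squares(residuals, x0=[1.0, 1e-14], method="trf",
                            bounds=([0.5, 1e-16], [1.5, 1e-12]),
                            x_scale=[1.0, 1e-14])
        print(fit.x)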

    Combining Prior Knowledge and Data: Beyond the Bayesian Framework

    Get PDF
    For many tasks, such as text categorization and control of robotic systems, state-of-the-art learning systems can produce results comparable in accuracy to those of human subjects. However, the amount of training data needed for such systems can be prohibitively large for many practical problems. A text categorization system, for example, may need to see many text postings manually tagged with their subjects before it learns to predict the subject of the next posting with high accuracy. A reinforcement learning (RL) system learning how to drive a car needs a lot of experimentation with the actual car before acquiring the optimal policy. An optimizing compiler targeting a certain platform has to construct, compile, and execute many versions of the same code with different optimization parameters to determine which optimizations work best. Such extensive sampling can be time-consuming, expensive (in terms of both the human expertise needed to label data and the wear and tear on the robotic equipment used for exploration in the case of RL), and sometimes dangerous (e.g., an RL agent driving the car off the cliff to see if it survives the crash). The goal of this work is to reduce the amount of training data an agent needs in order to learn how to perform a task successfully. This is done by providing the system with prior knowledge about its domain. The knowledge is used to bias the agent towards useful solutions and limit the amount of training needed. We explore this task in three contexts: classification (determining the subject of a newsgroup posting), control (learning to perform tasks such as driving a car up the mountain in simulation), and optimization (optimizing performance of linear algebra operations on different hardware platforms). For the text categorization problem, we introduce a novel algorithm which efficiently integrates prior knowledge into large margin classification. We show that prior knowledge simplifies the problem by reducing the size of the hypothesis space. We also provide formal convergence guarantees for our algorithm. For reinforcement learning, we introduce a novel framework for defining planning problems in terms of qualitative statements about the world (e.g., ``the faster the car is going, the more likely it is to reach the top of the mountain''). We present an algorithm based on policy iteration for solving such qualitative problems and prove its convergence. We also present an alternative framework which allows the user to specify prior knowledge quantitatively in the form of a Markov Decision Process (MDP). This prior is used to focus exploration on those regions of the world in which the optimal policy is most sensitive to perturbations in transition probabilities and rewards. Finally, in the compiler optimization problem, the prior is based on an analytic model which determines good optimization parameters for a given platform. This model defines a Bayesian prior which, combined with empirical samples (obtained by measuring the performance of optimized code segments), determines the maximum-a-posteriori estimate of the optimization parameters.
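
    A minimal sketch of the prior-plus-data combination described for the compiler case, using a Gaussian conjugate model of our own choosing rather than the dissertation's (the prior mean, noise level, and samples are hypothetical):

        import numpy as np

        mu0, sigma0 = 64.0, 16.0                    # prior from the analytic model, e.g. a tile size
        samples = np.array([92.0, 88.0, 95.0])      # empirically best tile sizes from timed runs
        sigma_obs = 12.0                            # assumed measurement noise

        # With a Gaussian prior and Gaussian likelihood the posterior is Gaussian,
        # so the MAP estimate equals the posterior mean.
        precision = 1.0 / sigma0**2 + len(samples) / sigma_obs**2
        map_estimate = (mu0 / sigma0**2 + samples.sum() / sigma_obs**2) / precision
        print(round(map_estimate, 1))               # lies between the prior mean and the sample mean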

    Rain Removal in Traffic Surveillance: Does it Matter?

    Get PDF
    Varying weather conditions, including rainfall and snowfall, are generally regarded as a challenge for computer vision algorithms. One proposed solution to the challenges induced by rain and snowfall is to artificially remove the rain from images or video using rain removal algorithms. The promise of these algorithms is that the rain-removed image frames will improve the performance of subsequent segmentation and tracking algorithms. However, rain removal algorithms are typically evaluated on their ability to remove synthetic rain on a small subset of images, and their behavior on real-world videos, when integrated with a typical computer vision pipeline, is currently unknown. In this paper, we review the existing rain removal algorithms and propose a new dataset that consists of 22 traffic surveillance sequences under a broad variety of weather conditions that all include either rain or snowfall. We propose a new evaluation protocol that evaluates the rain removal algorithms on their ability to improve the performance of subsequent segmentation, instance segmentation, and feature tracking algorithms under rain and snow. If successful, the de-rained frames of a rain removal algorithm should improve segmentation performance and increase the number of accurately tracked features. The results show that a recent single-frame-based rain removal algorithm increases segmentation performance by 19.7% on our proposed dataset, but it decreases feature tracking performance and shows mixed results with recent instance segmentation methods. However, the best video-based rain removal algorithm improves the feature tracking accuracy by 7.72%. Comment: Published in IEEE Transactions on Intelligent Transportation Systems.
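
    Schematically (placeholders throughout, not the paper's code or models), the evaluation protocol amounts to comparing a downstream metric with and without the de-raining step:

        def evaluate_downstream(frames, segment, score):
            """Run a segmentation model on the frames and score its output
            against ground truth held inside the score callable."""
            return score([segment(f) for f in frames])

        def rain_removal_gain(frames, derain, segment, score):
            baseline = evaluate_downstream(frames, segment, score)
            derained = evaluate_downstream([derain(f) for f in frames], segment, score)
            return derained - baseline   # positive gap means rain removal helped

        # Usage: plug in a real rain removal model, a segmentation model, and a metric
        # (e.g. IoU against annotated masks), and aggregate over the 22 sequences.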