An Entropy Search Portfolio for Bayesian Optimization
Bayesian optimization is a sample-efficient method for black-box global
optimization. However, the performance of a Bayesian optimization method very
much depends on its exploration strategy, i.e. the choice of acquisition
function, and it is not clear a priori which choice will result in superior
performance. While portfolio methods provide an effective, principled way of
combining a collection of acquisition functions, they are often based on
measures of past performance which can be misleading. To address this issue, we
introduce the Entropy Search Portfolio (ESP): a novel approach to portfolio
construction which is motivated by information theoretic considerations. We
show that ESP outperforms existing portfolio methods on several real and
synthetic problems, including geostatistical datasets and simulated control
tasks. We not only show that ESP is able to offer performance as good as the
best, but unknown, acquisition function, but surprisingly it often gives better
performance. Finally, over a wide range of conditions we find that ESP is
robust to the inclusion of poor acquisition functions.
Comment: 10 pages, 5 figures
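The portfolio idea in this abstract can be illustrated with a minimal sketch: several acquisition functions each nominate a candidate, and the portfolio picks the nominee whose (fantasized) observation would leave the least entropy in the belief over the maximizer's location. This is a simplified, assumption-laden toy (a 1-D grid, a NumPy-only GP, two UCB variants as the portfolio members, and a single posterior-mean fantasy per nominee), not the paper's actual ESP algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel on 1-D inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    cov = rbf(Xs, Xs) - Ks.T @ sol
    return mu, cov

def argmax_entropy(mu, cov, n_draws=300):
    # Entropy of the empirical distribution of the posterior argmax location
    L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(mu)))
    draws = mu + rng.standard_normal((n_draws, len(mu))) @ L.T
    p = np.bincount(draws.argmax(axis=1), minlength=len(mu)) / n_draws
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def esp_step(X, y, Xs):
    mu, cov = gp_posterior(X, y, Xs)
    sd = np.sqrt(np.clip(np.diag(cov), 1e-12, None))
    # Portfolio members: two UCB acquisitions, one cautious, one exploratory
    nominees = [Xs[np.argmax(mu + beta * sd)] for beta in (0.5, 3.0)]
    scores = []
    for xn in nominees:
        # One-point fantasy: pretend we observed the posterior mean at the nominee
        yf, _ = gp_posterior(X, y, np.array([xn]))
        mf, cf = gp_posterior(np.append(X, xn), np.append(y, yf[0]), Xs)
        scores.append(argmax_entropy(mf, cf))
    # Pick the nominee whose observation leaves the least uncertainty
    # about where the maximum lies
    return nominees[int(np.argmin(scores))], scores

f = lambda x: -np.sin(3.0 * x) - x ** 2 + 0.7 * x  # toy objective
X = np.array([-0.9, 0.3, 1.1])
Xs = np.linspace(-1.5, 1.5, 101)
x_next, scores = esp_step(X, f(X), Xs)
print(round(float(x_next), 3))
```

The kernel length-scale, the UCB betas, and the single-fantasy approximation are all illustrative choices; the paper's method marginalizes properly over the unknown observation.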
Parallel ADMM for robust quadratic optimal resource allocation problems
An alternating direction method of multipliers (ADMM) solver is described for
optimal resource allocation problems with separable convex quadratic costs and
constraints and linear coupling constraints. We describe a parallel
implementation of the solver on a graphics processing unit (GPU) using a
bespoke quartic function minimizer. An application to robust optimal energy
management in hybrid electric vehicles is described, and the results of
numerical simulations comparing the computation times of the parallel GPU
implementation with those of an equivalent serial implementation are presented.
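The structure ADMM exploits here can be sketched on a simplified instance: separable quadratic costs, per-element box constraints, and a single linear coupling constraint (a fixed budget). The x-update decouples into independent scalar problems, which is exactly what makes a GPU implementation attractive; note the abstract's robust costs require a quartic minimizer in that step, whereas this toy keeps the costs quadratic so the update is closed-form. The penalty parameter and iteration count are illustrative assumptions.

```python
import numpy as np

def admm_alloc(a, c, b, lo, hi, rho=1.0, iters=300):
    """Allocate a budget b across n agents with costs (a_i/2) x_i^2 + c_i x_i,
    box constraints lo <= x <= hi, and coupling constraint sum(x) = b,
    via ADMM on the splitting x = z."""
    n = len(a)
    z = np.full(n, b / n)
    u = np.zeros(n)
    for _ in range(iters):
        # x-update: n independent scalar quadratics -> fully parallelizable
        x = np.clip((rho * (z - u) - c) / (a + rho), lo, hi)
        # z-update: Euclidean projection onto the hyperplane sum(z) = b
        v = x + u
        z = v + (b - v.sum()) / n
        # Scaled dual update
        u += x - z
    return x

a = np.array([1.0, 2.0, 4.0])
c = np.array([0.5, -1.0, 0.0])
x = admm_alloc(a, c, b=3.0, lo=np.zeros(3), hi=np.full(3, 2.0))
print(x.round(3), round(float(x.sum()), 3))
```

For this instance the KKT conditions give an interior solution x = ((λ - c_i)/a_i) with λ = 12/7, i.e. roughly (1.214, 1.357, 0.429), which the iteration approaches.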
A Robust Statistics Approach to Minimum Variance Portfolio Optimization
We study the design of portfolios under a minimum risk criterion. The
performance of the optimized portfolio relies on the accuracy of the estimated
covariance matrix of the portfolio asset returns. For large portfolios, the
number of available market returns is often of similar order to the number of
assets, so that the sample covariance matrix performs poorly as a covariance
estimator. Additionally, financial market data often contain outliers which, if
not correctly handled, may further corrupt the covariance estimation. We
address these shortcomings by studying the performance of a hybrid covariance
matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's
shrinkage estimator while assuming samples with heavy-tailed distribution.
Employing recent results from random matrix theory, we develop a consistent
estimator of (a scaled version of) the realized portfolio risk, which is
minimized by optimizing online the shrinkage intensity. Our portfolio
optimization method is shown via simulations to outperform existing methods
for both synthetic and real market data.
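The hybrid estimator described above can be sketched as a shrinkage-regularized Tyler fixed-point iteration followed by a minimum-variance weight computation. This is a simplification: the shrinkage intensity `alpha` is fixed here, whereas the paper optimizes it online from a random-matrix-theory estimate of the realized risk.

```python
import numpy as np

def tyler_shrinkage(X, alpha=0.3, iters=50):
    """Shrinkage-regularized Tyler scatter estimator (simplified sketch).
    X: (n, p) centered returns; alpha: fixed shrinkage toward the identity."""
    n, p = X.shape
    S = np.eye(p)
    for _ in range(iters):
        # Per-sample weights 1 / (x_t' S^{-1} x_t) downweight outliers
        w = 1.0 / np.einsum('ij,jk,ik->i', X, np.linalg.inv(S), X)
        S_new = (1 - alpha) * (p / n) * (X * w[:, None]).T @ X + alpha * np.eye(p)
        S = S_new * p / np.trace(S_new)  # fix Tyler's scale ambiguity
    return S

def min_variance_weights(S):
    # Global minimum-variance portfolio: w ∝ S^{-1} 1, normalized to sum to 1
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()

rng = np.random.default_rng(1)
p, n = 5, 200
# Heavy-tailed synthetic returns: Student-t(3) marginals via Gaussian / chi
Z = rng.standard_normal((n, p))
tail = np.sqrt(rng.chisquare(3, size=n) / 3)
X = Z / tail[:, None]

S = tyler_shrinkage(X)
w = min_variance_weights(S)
print(w.round(3), round(float(w.sum()), 3))
```

The trace normalization and the fixed iteration count are pragmatic choices for the sketch; only the shape of the scatter matrix matters for the minimum-variance weights, since any scaling of S cancels in the normalization.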
The role of learning on industrial simulation design and analysis
The capability of modeling real-world system operations has turned simulation into an indispensable problem-solving methodology for business system design and analysis. Today, simulation supports decisions ranging
from sourcing to operations to finance, starting at the strategic level and proceeding towards tactical and
operational levels of decision-making. In such a dynamic setting, the practice of simulation goes beyond
being a static problem-solving exercise and requires integration with learning. This article discusses the role
of learning in simulation design and analysis motivated by the needs of industrial problems and describes
how selected tools of statistical learning can be utilized for this purpose.
Basic Enhancement Strategies When Using Bayesian Optimization for Hyperparameter Tuning of Deep Neural Networks
Compared to traditional machine learning models, deep neural networks (DNNs) are known to be highly sensitive to the choice of hyperparameters. While the time and effort required for manual tuning have been decreasing rapidly for well-developed and commonly used DNN architectures, DNN hyperparameter optimization will undoubtedly continue to be a major burden whenever a new DNN architecture needs to be designed, a new task needs to be solved, a new dataset needs to be addressed, or an existing DNN needs to be improved further. For hyperparameter optimization of general machine learning problems, numerous automated solutions have been developed, some of the most popular of which are based on Bayesian Optimization (BO). In this work, we analyze four fundamental strategies for enhancing BO when it is used for DNN hyperparameter optimization. Specifically, diversification, early termination, parallelization, and cost function transformation are investigated. Based on the analysis, we provide a simple yet robust algorithm for DNN hyperparameter optimization - DEEP-BO (Diversified, Early-termination-Enabled, and Parallel Bayesian Optimization). When evaluated over six DNN benchmarks, DEEP-BO mostly outperformed well-known solutions including GP-Hedge, BOHB, and speed-up variants that use the Median Stopping Rule or Learning Curve Extrapolation. In fact, DEEP-BO consistently provided the top, or at least close to the top, performance over all the benchmark types that we tested. This indicates that DEEP-BO is a robust solution compared to existing solutions. The DEEP-BO code is publicly available at https://github.com/snu-adsl/DEEP-BO
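Of the speed-up variants mentioned, the Median Stopping Rule is simple enough to sketch: a trial is terminated at a step if its best result so far is worse than the median of the running averages of previously completed trials at the same step. The curves and function name below are illustrative, not taken from the DEEP-BO codebase.

```python
import numpy as np

def should_stop(partial, completed, step):
    """Median stopping rule (sketch): stop the partial run at `step` if its
    best metric so far is below the median of the running averages of the
    completed runs up to the same step. Metrics are higher-is-better."""
    running_avg = [np.mean(curve[:step + 1]) for curve in completed]
    best_so_far = max(partial[:step + 1])
    return bool(best_so_far < np.median(running_avg))

# Completed learning curves (e.g. validation accuracy per epoch)
completed = [
    [0.50, 0.60, 0.70, 0.75],
    [0.40, 0.55, 0.65, 0.70],
    [0.45, 0.50, 0.60, 0.68],
]
weak = [0.20, 0.25, 0.30, 0.32]    # clearly lagging -> should stop
strong = [0.55, 0.65, 0.72, 0.78]  # competitive -> should continue
print(should_stop(weak, completed, step=2),
      should_stop(strong, completed, step=2))  # -> True False
```

At step 2 the running averages are 0.60, 0.533, and 0.517, so the median is 0.533; the weak run's best (0.30) falls below it and the strong run's best (0.72) does not.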
Portfolio implementation risk management using evolutionary multiobjective optimization
Portfolio management based on mean-variance portfolio optimization is subject to different sources of uncertainty. In addition to those related to the quality of parameter estimates used in the optimization process, investors face a portfolio implementation risk. The potential temporary discrepancy between target and present portfolios, caused by trading strategies, may expose investors to undesired risks. This study proposes an evolutionary multiobjective optimization algorithm aiming at regions with solutions more tolerant to these deviations and, therefore, more reliable. The proposed approach incorporates a user's preference and seeks a fine-grained approximation of the most relevant efficient region. The computational experiments performed in this study are based on a cardinality-constrained problem with investment limits for eight broad-category indexes and 15 years of data. The obtained results show the ability of the proposed approach to address the robustness issue and to support decision making by providing a preferred part of the efficient set. The results reveal that the obtained solutions also exhibit a higher tolerance to prediction errors in asset returns and the variance-covariance matrix. Sandra Garcia-Rodriguez and David Quintana acknowledge financial support granted by the Spanish Ministry of Economy and Competitivity under grant ENE2014-56126-C2-2-R. Roman Denysiuk and Antonio Gaspar-Cunha were supported by the Portuguese Foundation for Science and Technology under grant PEst-C/CTM/LA0025/2013 (Projecto Estratégico-LA 25-2013-2014, Strategic Project-LA 25-2013-2014).
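The core mechanism, an evolutionary search that maintains a set of mutually nondominated portfolios on the mean-variance front, can be sketched minimally. This toy omits the paper's cardinality constraints, investment limits, implementation-risk objective, and preference articulation; it only shows the dominance-based survival step on long-only, fully-invested weights.

```python
import numpy as np

rng = np.random.default_rng(2)

def nondominated_mask(obj):
    """obj: (m, 2) objective vectors, both minimized.
    Returns True where no other point dominates the point."""
    le = np.all(obj[:, None, :] <= obj[None, :, :], axis=2)  # le[j, i]: j <= i everywhere
    lt = np.any(obj[:, None, :] < obj[None, :, :], axis=2)   # lt[j, i]: j < i somewhere
    return ~np.any(le & lt, axis=0)

def evolve_portfolios(mu, Sigma, pop=30, gens=60, step=0.05):
    """Tiny (mu+lambda)-style multiobjective search over long-only,
    fully-invested portfolios, minimizing (variance, -expected return)."""
    P = rng.dirichlet(np.ones(len(mu)), size=pop)
    for _ in range(gens):
        # Mutate: Gaussian perturbation projected back onto the simplex
        C = np.clip(P + step * rng.standard_normal(P.shape), 0.0, None)
        C /= np.maximum(C.sum(axis=1, keepdims=True), 1e-12)
        both = np.vstack([P, C])
        obj = np.column_stack([np.einsum('ij,jk,ik->i', both, Sigma, both),
                               -both @ mu])
        # Survivors: nondominated individuals first, then lowest-variance fillers
        nd = np.flatnonzero(nondominated_mask(obj))
        rest = np.setdiff1d(np.arange(len(both)), nd)
        order = np.concatenate([nd, rest[np.argsort(obj[rest, 0])]])
        P = both[order[:pop]]
    return P

mu = np.array([0.05, 0.08, 0.12])          # illustrative expected returns
Sigma = np.array([[0.04, 0.01, 0.00],       # illustrative covariance matrix
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
front = evolve_portfolios(mu, Sigma)
print(front.shape)
```

A preference-guided variant like the one in the abstract would replace the variance-based filler ranking with a measure of closeness to the investor's preferred region of the front.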