A Cost-based Optimizer for Gradient Descent Optimization
As the use of machine learning (ML) permeates into diverse application
domains, there is an urgent need to support a declarative framework for ML.
Ideally, a user will specify an ML task in a high-level and easy-to-use
language and the framework will invoke the appropriate algorithms and system
configurations to execute it. An important observation towards designing such a
framework is that many ML tasks can be expressed as mathematical optimization
problems, which take a specific form. Furthermore, these optimization problems
can be efficiently solved using variations of the gradient descent (GD)
algorithm. Thus, to decouple a user specification of an ML task from its
execution, a key component is a GD optimizer. We propose a cost-based GD
optimizer that selects the best GD plan for a given ML task. To build our
optimizer, we introduce a set of abstract operators for expressing GD
algorithms and propose a novel approach to estimate the number of iterations a
GD algorithm requires to converge. Extensive experiments on real and synthetic
datasets show that our optimizer not only chooses the best GD plan but also
allows for optimizations that achieve orders of magnitude performance speed-up.Comment: Accepted at SIGMOD 201
A hybrid GA–PS–SQP method to solve power system valve-point economic dispatch problems
This study presents a new approach based on a hybrid algorithm combining Genetic Algorithm (GA), Pattern Search (PS) and Sequential Quadratic Programming (SQP) techniques to solve the well-known power system economic dispatch (ED) problem. GA is the main optimizer of the algorithm, whereas PS and SQP are used to fine-tune the results of GA and increase confidence in the solution. For illustrative purposes, the algorithm has been applied to various test systems to assess its effectiveness. Furthermore, the convergence characteristics and robustness of the proposed method have been explored through comparison with results reported in the literature. The outcome is very encouraging and suggests that the hybrid GA–PS–SQP algorithm is very efficient in solving power system economic dispatch problems.
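The global-search-then-refine structure can be sketched on a 1-D valve-point-style cost curve. The cost coefficients below are made up, and a plain coordinate pattern search stands in for the PS and SQP refinement stages; none of this reproduces the study's actual test systems:

```python
import math
import random

def cost(p):
    # Toy valve-point-style curve: smooth quadratic plus a rectified ripple.
    return 0.01 * p ** 2 + abs(5 * math.sin(0.5 * p))

def genetic_search(lo, hi, pop_size=30, gens=40, seed=1):
    """Tiny real-coded GA: two-way tournament selection + Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)               # tournament of two
            parent = a if cost(a) < cost(b) else b
            child = min(hi, max(lo, parent + rng.gauss(0, 2.0)))
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)

def pattern_search(p, step=1.0, tol=1e-6):
    """Shrinking-step pattern search used to fine-tune the GA result."""
    while step > tol:
        moved = False
        for cand in (p + step, p - step):
            if cost(cand) < cost(p):
                p, moved = cand, True
        if not moved:
            step /= 2
    return p

p_ga = genetic_search(-50, 50)   # global phase
p_ref = pattern_search(p_ga)     # local refinement phase
```

The GA locates a promising basin of the non-smooth ripple landscape, and the refinement phase then polishes the point to local optimality, mirroring the division of labour described in the abstract.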
Stochastic Optimization in Econometric Models – A Comparison of GA, SA and RSG
This paper shows that, in the case of an econometric model with high sensitivity to data, stochastic optimization algorithms outperform classical gradient techniques. In addition, we show that the Repetitive Stochastic Guesstimation (RSG) algorithm, invented by Charemza, is closer to Simulated Annealing (SA) than to Genetic Algorithms (GAs), so we produced hybrids between RSG and SA to study their joint behavior. All the algorithms involved were evaluated on a short form of the Romanian macro model, derived from Dobrescu (1996). The subject of optimization was the model's solution, as a function of the initial values (in the first stage) and of the objective functions (in the second stage). We show that a priori information helps "elitist" algorithms (like RSG and SA) obtain the best results; on the other hand, when one has equal belief in the different objective functions, GA gives a straight answer. Analyzing the average relative bias of the model's solution proved the efficiency of the stochastic optimization methods presented.
Keywords: underground economy, Laffer curve, informal activity, fiscal policy, transition, macroeconomic model, stochastic optimization, evolutionary algorithms, Repetitive Stochastic Guesstimation
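The simulated-annealing side of the comparison can be sketched in a few lines. The toy multimodal objective and the geometric cooling schedule below are illustrative assumptions, not the paper's macro-model calibration:

```python
import math
import random

def objective(x):
    # Toy non-convex objective with several local minima.
    return x ** 2 + 10 * math.sin(3 * x)

def simulated_annealing(x0, temp=10.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0, 1)
        fc = objective(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability,
        # which lets the search escape local minima early on.
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling
    return best, fbest

x_best, f_best = simulated_annealing(5.0)
```

The acceptance rule is what distinguishes SA (and, per the abstract, RSG) from a pure gradient technique: worse candidates are sometimes kept while the temperature is high, then the search hardens into greedy descent as it cools.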
Scenario trees and policy selection for multistage stochastic programming using machine learning
We propose a hybrid algorithmic strategy for complex stochastic optimization
problems, which combines the use of scenario trees from multistage stochastic
programming with machine learning techniques for learning a policy in the form
of a statistical model, in the context of constrained vector-valued decisions.
Such a policy allows one to run out-of-sample simulations over a large number
of independent scenarios, and obtain a signal on the quality of the
approximation scheme used to solve the multistage stochastic program. We
propose to apply this fast simulation technique to choose the best tree from a
set of scenario trees. A solution scheme is introduced, where several scenario
trees with random branching structure are solved in parallel, and where the
tree from which the best policy for the true problem could be learned is
ultimately retained. Numerical tests show that excellent trade-offs can be
achieved between run times and solution quality.
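A heavily simplified version of the "solve several trees, keep the best policy" idea can be shown on a newsvendor problem. Here each "scenario tree" collapses to a small demand sample, the "policy" learned from a tree is just its best fixed order quantity, and out-of-sample simulation selects the winner; all the numbers are illustrative, and the real method handles multistage trees and statistical policy models:

```python
import random

COST, PRICE = 1.0, 3.0  # assumed unit cost and selling price

def profit(order, demand):
    return PRICE * min(order, demand) - COST * order

def policy_from_tree(sample):
    """Best fixed order for the sampled demands (scan the sample points)."""
    return max(sorted(sample), key=lambda q: sum(profit(q, d) for d in sample))

rng = random.Random(42)
true_demand = lambda: rng.gauss(100, 20)

# Several small "trees" with random branching (here: random sample sizes).
trees = [[true_demand() for _ in range(rng.choice([5, 10, 20]))]
         for _ in range(8)]
policies = [policy_from_tree(t) for t in trees]

# Out-of-sample simulation over many independent scenarios.
test_scenarios = [true_demand() for _ in range(2000)]

def oos_value(q):
    return sum(profit(q, d) for d in test_scenarios) / len(test_scenarios)

best_policy = max(policies, key=oos_value)
```

Because the out-of-sample simulation is cheap relative to solving a stochastic program, it provides exactly the kind of fast quality signal the abstract describes for ranking competing trees.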
Approximate Dynamic Programming: An Efficient Machine Learning Algorithm
We propose an efficient machine learning algorithm for two-stage stochastic programs. This algorithm, termed the projected stochastic hybrid learning algorithm, consists of stochastic sub-gradient and piecewise linear approximation methods. We use the stochastic sub-gradient and sample information to update the piecewise linear approximation of the objective function. We then introduce a projection step, which implements the sub-gradient method, to jump out of a local optimum so that we can reach a global optimum. Using this projection step, we show the convergence of the algorithm for general two-stage stochastic programs. Furthermore, for the network recourse problem, our algorithm can drop the projection steps yet still maintain the convergence property. Thus, if we properly construct the initial piecewise linear functions, the pure piecewise linear approximation method is convergent for general two-stage stochastic programs. The proposed approximate dynamic programming algorithm handles high-dimensional state variables using methods from machine learning, and its logic captures the ability of the network structure to anticipate the impact of current decisions on the future. The optimization framework, carefully calibrated against historical performance, makes it possible to introduce changes in the decisions and to capture the collective intelligence of experienced decisions. Computational results indicate that the algorithm exhibits rapid convergence.
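The interplay of sub-gradient steps and a piecewise linear model can be sketched on a 1-D convex stochastic objective, E[|x - D|], standing in for a recourse function. The cut bookkeeping, step sizes, and jump schedule below are illustrative assumptions, not the paper's algorithm:

```python
import random

rng = random.Random(7)
sample = lambda: rng.uniform(2.0, 8.0)   # random "demand" D ~ U(2, 8)

def subgrad(x, d):
    # Subgradient of |x - d| with respect to x.
    return 1.0 if x > d else -1.0

cuts = []   # (slope, intercept) pairs: linear lower bounds from samples
x = 0.0
for k in range(1, 2001):
    d = sample()
    g = subgrad(x, d)
    # Add a stochastic cut through the sampled function value at x.
    cuts.append((g, abs(x - d) - g * x))
    # Stochastic sub-gradient step with diminishing step size; this plays
    # the role of the projection step that lets the iterate escape a
    # poor region of the piecewise linear model.
    x -= g / k
    # Periodically jump to the minimizer of the piecewise linear model,
    # found here by a crude grid search over [0, 10].
    if k % 100 == 0:
        grid = [i * 0.01 for i in range(1001)]
        x = min(grid, key=lambda t: max(s * t + b for s, b in cuts))
```

The true minimizer of E[|x - D|] for D uniform on (2, 8) is the median, 5; as cuts accumulate, the piecewise linear model's minimizer approaches it, illustrating how sampled sub-gradient information refines the approximation.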