Algorithm Portfolios for Noisy Optimization
Noisy optimization is the optimization of objective functions corrupted by
noise. A portfolio of solvers is a set of solvers equipped with an algorithm
selection tool for distributing the computational power among them. Portfolios
are widely and successfully used in combinatorial optimization. In this work,
we study portfolios of noisy optimization solvers. We obtain mathematically
proven performance guarantees (in the sense that the portfolio performs nearly
as well as the best of its solvers) with an ad hoc portfolio algorithm
dedicated to noisy optimization. A somewhat surprising result is that it is
better to compare solvers with some lag, i.e., to select the current
recommendation of the best solver based on its performance earlier in the run.
An additional finding is a principled method for distributing the
computational power among the solvers in the portfolio.
Comment: in Annals of Mathematics and Artificial Intelligence, Springer
Verlag, 201
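The lag-based comparison described in the abstract can be illustrated with a minimal Python sketch. This is not the paper's algorithm: the solver interface (`solver(step) -> (recommendation, noisy score)`), the uniform budget split, and the scoring are assumptions made for the example; the paper derives a principled budget distribution instead of the uniform one used here.

```python
def lagged_portfolio(solvers, budget, lag=10):
    """Run a portfolio of noisy-optimization solvers and return the
    recommendation of the solver that looked best `lag` steps before
    the end of the run (lag-based comparison)."""
    histories = {name: [] for name in solvers}     # noisy scores per solver
    recommendations = {name: None for name in solvers}
    steps_each = budget // len(solvers)            # uniform split (assumption;
                                                   # the paper is principled)
    for step in range(steps_each):
        for name, solver in solvers.items():
            # Assumed solver interface: returns its current recommendation
            # and a noisy evaluation of that recommendation.
            rec, score = solver(step)
            histories[name].append(score)
            recommendations[name] = rec
    # Compare solvers with a lag: judge them on scores observed `lag`
    # steps earlier, not on the very latest (noisier) comparison point.
    index = max(0, steps_each - 1 - lag)
    best = min(histories, key=lambda name: histories[name][index])
    return recommendations[best]
```

The point of the lag is that recent noisy comparisons are less reliable than slightly older, better-averaged ones; here the lag simply moves the comparison index back in each solver's history.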
Solving the G-problems in less than 500 iterations: Improved efficient constrained optimization by surrogate modeling and adaptive parameter control
Constrained optimization of high-dimensional numerical problems plays an
important role in many scientific and industrial applications. Function
evaluations in many industrial applications are severely limited and no
analytical information about objective function and constraint functions is
available. For such expensive black-box optimization tasks, the constrained
optimization algorithm COBRA was proposed, making use of RBF surrogate modeling
for both the objective and the constraint functions. COBRA has shown remarkable
success in solving reliably complex benchmark problems in less than 500
function evaluations. Unfortunately, COBRA requires careful adjustment of
parameters in order to do so.
In this work we present a new self-adjusting algorithm, SACOBRA, which is
based on COBRA and capable of achieving high-quality results with very few
function evaluations and no parameter tuning. It is shown with the help of
performance profiles on a set of benchmark problems (G-problems, MOPTA08) that
SACOBRA consistently outperforms any COBRA algorithm with a fixed parameter
setting. We analyze the importance of the new elements in SACOBRA and find
that each of them contributes to the overall optimization performance. We
discuss the reasons behind this and thereby gain a better understanding of
high-quality RBF surrogate modeling.
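The surrogate step that COBRA-style methods build on can be sketched in a few lines of numpy. This is an illustrative simplification, not the published algorithm: it assumes Gaussian RBFs, a single inequality constraint g(x) <= 0, and a naive candidate-grid search in place of COBRA's internal optimization; the function names are hypothetical.

```python
import numpy as np

def rbf_fit(X, y, gamma=1.0):
    """Fit a Gaussian RBF interpolant to points X (n, d) with values y (n,)."""
    K = np.exp(-gamma * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    # Small ridge term for numerical stability of the linear solve.
    return np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)

def rbf_predict(X, w, Xq, gamma=1.0):
    """Evaluate the fitted RBF interpolant at query points Xq (m, d)."""
    K = np.exp(-gamma * np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    return K @ w

def propose_next(X, f_vals, g_vals, candidates, gamma=1.0):
    """One surrogate iteration: fit RBF models of the objective f and the
    constraint g(x) <= 0 from the evaluated points, then pick the candidate
    minimizing the surrogate objective among surrogate-feasible points."""
    wf = rbf_fit(X, f_vals, gamma)
    wg = rbf_fit(X, g_vals, gamma)
    f_hat = rbf_predict(X, wf, candidates, gamma)
    g_hat = rbf_predict(X, wg, candidates, gamma)
    feasible = g_hat <= 0.0
    if not feasible.any():                 # fall back to least-violating point
        return candidates[np.argmin(g_hat)]
    return candidates[np.argmin(np.where(feasible, f_hat, np.inf))]
```

In a full loop, the proposed point would be evaluated on the true (expensive) objective and constraints, appended to `X`, and the surrogates refitted; SACOBRA's contribution is to self-adjust the parameters this kind of loop depends on.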
Life cycle assessment (LCA) applied to the process industry: a review
Purpose: Life cycle assessment (LCA) is a well-established analytical method to quantify environmental impacts, which has mainly been applied to products. However, recent literature suggests that it also has potential as an analysis and design tool for processes, and stresses that one of the biggest challenges of this decade in the field of process systems engineering (PSE) is the development of tools for environmental considerations.
Method: This article gives an overview of the integration of LCA methodology in the context of industrial ecology, and focuses on the use of this methodology for environmental considerations in process design and optimization.
Results: The review identifies that LCA is often used within multi-objective optimization of processes: practitioners use LCA to obtain the inventory and inject the results into the optimization model. It also shows that most LCA studies on process analysis treat the unit processes as black boxes and build the inventory analysis on fixed operating conditions.
Conclusions: The article highlights the value of better integrating PSE tools with LCA methodology in order to produce a more detailed analysis. This would make it possible to optimize the influence of process operating conditions on environmental impacts and to include detailed environmental results in the process industry.
Modeling Magnetic Field Structure of a Solar Active Region Corona using Nonlinear Force-Free Fields in Spherical Geometry
We test a nonlinear force-free field (NLFFF) optimization code in spherical
geometry using an analytical solution from Low and Lou. Several tests are run,
ranging from idealized cases where exact vector field data are provided on all
boundaries, to cases where noisy vector data are provided on only the lower
boundary (approximating the solar problem). Analytical tests also show that the
NLFFF code in spherical geometry performs better than its Cartesian
counterpart when the field of view of the bottom boundary is large.
Additionally, we apply the NLFFF model to an active region
observed by the Helioseismic and Magnetic Imager (HMI) on board the Solar
Dynamics Observatory (SDO) both before and after an M8.7 flare. For each
observation time, we initialize the models using potential field source surface
(PFSS) extrapolations based on either a synoptic chart or a flux-dispersal
model, and compare the resulting NLFFF models. The results show that NLFFF
extrapolations using the flux-dispersal model as the boundary condition have
slightly lower, therefore better, force-free and divergence-free metrics, and
contain larger free magnetic energy. By comparing the extrapolated magnetic
field lines with the extreme ultraviolet (EUV) observations by the Atmospheric
Imaging Assembly (AIA) on board SDO, we find that the NLFFF performs better
than the PFSS not only for the core field of the flare productive region, but
also for large EUV loops higher than 50 Mm.
Comment: 34 pages, 8 figures, accepted for publication in Ap
A Parallel Divide-and-Conquer based Evolutionary Algorithm for Large-scale Optimization
Large-scale optimization problems involving thousands of decision variables
arise extensively in various industrial areas. Although evolutionary
algorithms (EAs) are a powerful optimization tool for many real-world
applications, they fail to solve such emerging large-scale problems both
effectively and efficiently. In this paper, we propose a novel
Divide-and-Conquer (DC) based EA that not only produces high-quality solutions
by solving sub-problems separately, but also exploits the power of parallel
computing by solving the sub-problems simultaneously. Existing DC-based EAs
that were deemed to enjoy the same advantages as the proposed algorithm are
shown to be practically incompatible with the parallel computing scheme,
unless trade-offs are made that compromise solution quality.
Comment: 12 pages, 0 figures
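The divide-and-conquer idea behind such EAs can be sketched in Python. This is an illustrative toy, not the paper's algorithm: it assumes a (near-)separable objective, uses a naive mutation-based search per group in place of a real EA, and holds the other variables fixed in a zero context vector while each sub-problem is solved in parallel.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def dc_ea(objective, dim, n_groups=4, pop=20, gens=30, seed=0):
    """Divide-and-conquer sketch: split the decision variables into disjoint
    groups, improve each group with a tiny mutation-based search, and solve
    the independent sub-problems simultaneously."""
    rng = np.random.default_rng(seed)
    groups = list(enumerate(np.array_split(rng.permutation(dim), n_groups)))
    context = np.zeros(dim)              # non-optimized variables held fixed

    def solve_group(args):
        gid, idx = args
        grng = np.random.default_rng(seed + 1 + gid)   # per-group RNG
        best = context[idx].copy()
        best_f = objective(context)
        for _ in range(gens):
            for _ in range(pop):
                cand = best + grng.normal(0.0, 0.3, size=len(idx))
                trial = context.copy()
                trial[idx] = cand        # evaluate against the context vector
                f = objective(trial)
                if f < best_f:
                    best, best_f = cand, f
        return idx, best

    # Sub-problems share no state while running, so they can run in parallel;
    # the context vector is only updated after all of them finish.
    with ThreadPoolExecutor() as ex:
        results = list(ex.map(solve_group, groups))
    for idx, sub in results:
        context[idx] = sub
    return context
```

On a non-separable objective this naive decomposition degrades, which is exactly the tension between solution quality and parallelism that the paper addresses.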
Optimisation of Mobile Communication Networks - OMCO NET
The mini conference “Optimisation of Mobile Communication Networks” focuses on advanced methods for search and optimisation applied to wireless communication networks. It is sponsored by the Research & Enterprise Fund of Southampton Solent University.
The conference strives to widen knowledge of advanced search methods capable of optimising wireless communication networks. The aim is to provide a forum for the exchange of recent knowledge, new ideas and trends in this progressive and challenging area. The conference will popularise successful new approaches to hard tasks such as minimising transmit power and cooperative and optimal routing.