Leveraging Continuous Material Averaging for Inverse Electromagnetic Design
Inverse electromagnetic design has emerged as a way of efficiently designing
active and passive electromagnetic devices. This maturing strategy involves
optimizing the shape or topology of a device in order to improve a figure of
merit--a process which is typically performed using some form of steepest
descent algorithm. Naturally, this requires that we compute the gradient of a
figure of merit which describes device performance, potentially with respect to
many design variables. In this paper, we introduce a new strategy based on
smoothing abrupt material interfaces which enables us to efficiently compute
these gradients with high accuracy irrespective of the resolution of the
underlying simulation. This has advantages over previous approaches to shape
and topology optimization in nanophotonics which are either prone to gradient
errors or place important constraints on the shape of the device. As a
demonstration of this new strategy, we optimize a non-adiabatic waveguide taper
between a narrow and wide waveguide. This optimization leads to a non-intuitive
design with a very low insertion loss of only 0.041 dB at 1550 nm.
Comment: 20 pages, 9 figures
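The role of continuous material averaging in making these gradients well defined can be sketched in one dimension. In the toy example below (a hypothetical illustration: the permittivity values, the grid, and the trivial figure of merit are assumptions, not the paper's actual electromagnetic setup), a cell cut by a material interface receives a fill-fraction-weighted permittivity, so the figure of merit varies smoothly with the interface position; a hard-thresholded ("staircased") assignment makes the gradient zero almost everywhere.

```python
import numpy as np

EPS_CLAD, EPS_CORE = 1.0, 12.0   # hypothetical cladding/core permittivities
DX = 0.05                        # hypothetical grid spacing

def eps_smoothed(boundary, cells):
    """Continuous material averaging: a cell cut by the interface gets a
    fill-fraction-weighted permittivity, so eps varies smoothly with `boundary`."""
    frac = np.clip((boundary - cells) / DX, 0.0, 1.0)  # fraction of each cell inside the core
    return frac * EPS_CORE + (1.0 - frac) * EPS_CLAD

def eps_staircased(boundary, cells):
    """Hard thresholding at cell centers: eps jumps as the interface moves."""
    return np.where(cells + DX / 2 < boundary, EPS_CORE, EPS_CLAD)

def fom_gradient(eps_fn, boundary, cells, h=1e-4):
    """Central-difference gradient of a toy figure of merit (here simply
    sum(eps), an assumption) with respect to the interface position."""
    fom = lambda b: eps_fn(b, cells).sum()
    return (fom(boundary + h) - fom(boundary - h)) / (2 * h)

cells = np.arange(0.0, 1.0, DX)                      # left edges of 20 grid cells
g_smooth = fom_gradient(eps_smoothed, 0.52, cells)   # well defined: (EPS_CORE-EPS_CLAD)/DX
g_hard = fom_gradient(eps_staircased, 0.52, cells)   # zero almost everywhere
```

With averaging, the derivative of the cut cell's permittivity with respect to the boundary position is the constant (EPS_CORE − EPS_CLAD)/DX, independent of where the interface sits inside the cell; the staircased version gives no gradient information between cell centers, which is the accuracy problem the smoothing addresses.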
Decentralized spectral resource allocation for OFDMA downlink of coexisting macro/femto networks using filled function method
For an orthogonal frequency division multiple access (OFDMA) downlink of a spectrally coexisting macro and femto network, a resource allocation scheme would aim to maximize the area spectral efficiency (ASE) subject to constraints on the radio resources per transmission interval accessible by each femtocell. An optimal resource allocation scheme for completely decentralized deployments, however, leads to a nonconvex optimization problem. In this paper, a filled function method is employed to find the global maximum of the optimization problem. Simulation results show that our proposed method is efficient and effective.
Index Information Algorithm with Local Tuning for Solving Multidimensional Global Optimization Problems with Multiextremal Constraints
Multidimensional optimization problems where the objective function and the
constraints are multiextremal non-differentiable Lipschitz functions (with
unknown Lipschitz constants) and the feasible region is a finite collection of
robust nonconvex subregions are considered. Both the objective function and the
constraints may be partially defined. To solve such problems an algorithm is
proposed, that uses Peano space-filling curves and the index scheme to reduce
the original problem to a Hölder one-dimensional one. Local tuning on the
behaviour of the objective function and constraints is used during the work of
the global optimization procedure in order to accelerate the search. The method
neither uses penalty coefficients nor additional variables. Convergence
conditions are established. Numerical experiments confirm the good performance
of the technique.
Comment: 29 pages, 5 figures
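The space-filling-curve reduction at the heart of such schemes can be illustrated numerically (a minimal sketch: the Hilbert-curve decoder and the toy objective are assumptions, and the paper's index scheme for constraints and its local tuning are not reproduced). Minimizing a bivariate Lipschitz function along the one-dimensional curve parameter recovers the two-dimensional minimum, because points close along the curve are close in space:

```python
def hilbert_d2xy(order, d):
    """Map curve index d in [0, 4**order) to integer (x, y) on a 2**order grid
    (standard Hilbert-curve decoding)."""
    x = y = 0
    s = 1
    while s < 2 ** order:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:                          # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

def minimize_along_curve(f, order=6):
    """Evaluate f on the unit square along the Hilbert curve of the given order;
    return the best curve parameter t in [0, 1] and the best value found."""
    n = 4 ** order
    side = 2 ** order - 1
    best_t, best_val = 0.0, float("inf")
    for d in range(n):
        x, y = hilbert_d2xy(order, d)
        val = f(x / side, y / side)
        if val < best_val:
            best_t, best_val = d / (n - 1), val
    return best_t, best_val

# Toy Lipschitz objective on the unit square (an assumed test function).
f = lambda x, y: (x - 0.35) ** 2 + (y - 0.7) ** 2
t_best, v_best = minimize_along_curve(f)
```

Exhaustive scanning is used here only to show the reduction is faithful; the algorithms described in these abstracts instead search adaptively along the curve, exploiting the Hölder continuity that the Lipschitz property of f induces in the curve parameter.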
Deterministic global optimization using space-filling curves and multiple estimates of Lipschitz and Hölder constants
In this paper, the global optimization problem of minimizing an objective
function f(x), with the feasible region being a hyperinterval in R^N and f(x)
satisfying the Lipschitz condition with an unknown Lipschitz constant, is
considered. It is supposed that the function can be multiextremal,
non-differentiable, and given as a
`black-box'. To attack the problem, a new global optimization algorithm based
on the following two ideas is proposed and studied both theoretically and
numerically. First, the new algorithm uses numerical approximations to
space-filling curves to reduce the original Lipschitz multi-dimensional problem
to a univariate one satisfying the Hölder condition. Second, at each iteration
the algorithm applies a new geometric technique that works with a number of
possible Hölder constants chosen from a set of values varying from zero to
infinity, so that ideas introduced in the popular DIRECT method can be used in
Hölder global optimization. Convergence conditions of the
resulting deterministic global optimization method are established. Numerical
experiments carried out on several hundreds of test functions show quite a
promising performance of the new algorithm in comparison with its direct
competitors.
Comment: 26 pages, 10 figures, 4 tables
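How multiple Hölder constants combine exploration and exploitation, DIRECT-style, can be sketched in one dimension (an illustrative reconstruction, not the authors' algorithm: the candidate constants, the trisection rule, and the test function are all assumptions). Each interval carries the lower bound f(m) − H·(len/2)^α implied by the Hölder condition at its midpoint m; every interval that minimizes this bound for at least one candidate H is subdivided, so small constants refine locally while large ones keep exploring globally:

```python
import math

def direct_holder_minimize(f, a, b, alpha=0.5,
                           hs=(0.1, 1.0, 10.0, 100.0), iters=60):
    """DIRECT-flavored 1D minimization with several candidate Hölder constants.
    Each interval is stored as (left, right, f(midpoint))."""
    intervals = [(a, b, f(0.5 * (a + b)))]
    for _ in range(iters):
        chosen = set()
        for h_const in hs:
            # Lower bound implied by |f(x) - f(m)| <= H * |x - m|**alpha.
            bound = lambda iv: iv[2] - h_const * (0.5 * (iv[1] - iv[0])) ** alpha
            chosen.add(min(range(len(intervals)),
                           key=lambda i: bound(intervals[i])))
        refined = []
        for i, (l, r, fm) in enumerate(intervals):
            if i in chosen:                  # trisect each selected interval
                w = (r - l) / 3.0
                for k in range(3):
                    ll = l + k * w
                    refined.append((ll, ll + w, f(ll + 0.5 * w)))
            else:
                refined.append((l, r, fm))
        intervals = refined
    l, r, fm = min(intervals, key=lambda iv: iv[2])
    return 0.5 * (l + r), fm

# Toy multiextremal objective (an assumed test function, not from the paper).
f = lambda x: 0.1 * x * x + math.sin(3.0 * x)
x_best, f_best = direct_holder_minimize(f, -3.0, 3.0)
```

As in DIRECT, no single constant has to be guessed correctly: the largest constants force subdivision of big unexplored intervals, while the smallest drill into the best basin found so far.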
A Sequential Descent Method for Global Optimization
In this paper, a sequential search method for finding the global minimum of
an objective function is presented. The gradient descent search is repeated
until the global minimum is obtained. The global minimum is located by a
process of finding progressively better local minima. We determine the set of
points of intersection between the curve of the function and the horizontal
plane which contains the local minima previously found. Then, a point in this
set with the greatest descent slope is chosen as the initial point for a new
gradient descent search. The method has the descent property and the
convergence is monotonic. To demonstrate the effectiveness of the proposed
sequential descent method, several non-convex multidimensional optimization
problems are solved. Numerical examples show that the global minimum can be
found by the proposed method of sequential descent.
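The loop described above can be sketched in one dimension with a grid approximation of the intersection set (a hypothetical illustration: the double-well test function, the grid, and the finite-difference slope estimate are assumptions, and the paper treats multidimensional problems):

```python
def gradient_descent(f, x, lr=0.01, iters=5000):
    """Plain gradient descent with a numerical derivative."""
    for _ in range(iters):
        g = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x_new = x - lr * g
        if abs(x_new - x) < 1e-10:
            break
        x = x_new
    return x

def sequential_descent(f, x0, lo=-2.0, hi=2.0, n=2001, rounds=10):
    """Repeated gradient descent: after each local minimum, restart from the
    sampled point below the current level that has the steepest slope."""
    x = gradient_descent(f, x0)
    step = (hi - lo) / (n - 1)
    for _ in range(rounds):
        level = f(x)
        # Grid approximation of the intersection of the function's curve with
        # the horizontal plane through the current minimum: keep points below it.
        below = [lo + i * step for i in range(n) if f(lo + i * step) < level - 1e-9]
        if not below:
            break                         # no lower basin on the grid: stop
        slope = lambda t: abs((f(t + 1e-6) - f(t - 1e-6)) / 2e-6)
        x = gradient_descent(f, max(below, key=slope))
    return x

# Toy double-well objective (an assumed example, not from the paper).
f = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
x_min = sequential_descent(f, 1.5)        # ends in the lower (global) well
```

Each round either terminates or restarts strictly below the previous local minimum, which is the monotonic-descent property the abstract claims; the grid scan is the simplest stand-in for computing the intersection set exactly.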