Index Information Algorithm with Local Tuning for Solving Multidimensional Global Optimization Problems with Multiextremal Constraints
Multidimensional optimization problems where the objective function and the
constraints are multiextremal non-differentiable Lipschitz functions (with
unknown Lipschitz constants) and the feasible region is a finite collection of
robust nonconvex subregions are considered. Both the objective function and the
constraints may be partially defined. To solve such problems, an algorithm is
proposed that uses Peano space-filling curves and the index scheme to reduce
the original problem to a one-dimensional H\"{o}lder one. Local tuning on the
behaviour of the objective function and constraints is used during the work of
the global optimization procedure in order to accelerate the search. The method
neither uses penalty coefficients nor additional variables. Convergence
conditions are established. Numerical experiments confirm the good performance
of the technique.
Comment: 29 pages, 5 figures
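The general scheme, reducing to a one-dimensional problem and adaptively estimating the unknown Lipschitz constant from observed data, can be illustrated in miniature. The sketch below is not the authors' index/local-tuning method: it is a basic Piyavskii–Shubert-style one-dimensional minimizer with a global adaptive Lipschitz estimate, and the safety factor `r` and iteration count are illustrative choices.

```python
import math

def shubert_minimize(f, a, b, n_iter=200, r=1.5):
    """Minimize a Lipschitz function on [a, b] without knowing the
    Lipschitz constant: estimate it adaptively from observed divided
    differences (a globally, not locally, tuned variant)."""
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n_iter):
        # Adaptive Lipschitz estimate from the observed slopes.
        L = max(abs(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
                for i in range(len(xs) - 1))
        L = r * max(L, 1e-8)  # safety margin r > 1 avoids underestimation
        # Characteristic of each subinterval: minimum of the sawtooth
        # lower bound built from the endpoint values and slope L.
        best_i, best_R = 0, float("inf")
        for i in range(len(xs) - 1):
            R = 0.5 * (ys[i] + ys[i + 1]) - 0.5 * L * (xs[i + 1] - xs[i])
            if R < best_R:
                best_R, best_i = R, i
        i = best_i
        # New trial point where the lower bound attains its minimum.
        x_new = 0.5 * (xs[i] + xs[i + 1]) - (ys[i + 1] - ys[i]) / (2 * L)
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, f(x_new))
    k = min(range(len(ys)), key=ys.__getitem__)
    return xs[k], ys[k]
```

The local-tuning idea in the paper replaces the single global estimate `L` with per-interval estimates that adapt to the local behaviour of the function, which accelerates the search; the constrained, index-scheme machinery is likewise omitted here.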
A comparison of computational methods and algorithms for the complex gamma function
A survey and comparison of computational methods and algorithms for the gamma and log-gamma functions of complex argument are presented. The methods and algorithms covered include Chebyshev approximations, Padé expansions, and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421, published in the Communications of the ACM by H. Kuki, is the best program either for individual applications or for inclusion in subroutine libraries.
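Of the surveyed approaches, Stirling's asymptotic series is the simplest to sketch. The following is a minimal illustration, not Kuki's Algorithm 421: it combines the recurrence lgamma(z) = lgamma(z+1) - log z, the reflection formula for the left half-plane, and a six-term Stirling series; the shift threshold of 10 and the number of coefficients are illustrative choices.

```python
import cmath

# Stirling-series coefficients B_{2n} / (2n (2n - 1)) for n = 1..6.
_COEF = [1/12, -1/360, 1/1260, -1/1680, 1/1188, -691/360360]

def clgamma(z, shift=10):
    """Log-gamma of a complex argument via Stirling's asymptotic series:
    lgamma(z) ~ (z - 1/2) log z - z + log(2*pi)/2 + sum_n c_n / z^{2n-1},
    after shifting z so the series converges rapidly."""
    z = complex(z)
    if z.real < 0.5:
        # Reflection formula: Gamma(z) Gamma(1 - z) = pi / sin(pi z).
        return cmath.log(cmath.pi / cmath.sin(cmath.pi * z)) - clgamma(1 - z)
    # Recurrence: push Re(z) up into the asymptotic region.
    acc = 0j
    while z.real < shift:
        acc -= cmath.log(z)
        z += 1
    s = (z - 0.5) * cmath.log(z) - z + 0.5 * cmath.log(2 * cmath.pi)
    zk = z
    for c in _COEF:
        s += c / zk
        zk *= z * z
    return s + acc
```

With six series terms and a shift threshold of 10, the truncation error of the asymptotic series is far below double-precision rounding error for arguments on the shifted line.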
A New Algorithm for Computing the Actions of Trigonometric and Hyperbolic Matrix Functions
A new algorithm is derived for computing the actions $f(tA)B$ and
$f(tA^{1/2})B$, where $f$ is the cosine, sinc, sine, hyperbolic cosine,
hyperbolic sinc, or hyperbolic sine function, $A$ is an $n\times n$ matrix,
and $B$ is $n\times n_0$ with $n_0 \ll n$. Here $A^{1/2}$ denotes any matrix
square root of $A$, and it is never required to be computed. The algorithm
offers six independent output options given $t$, $A$, $B$, and a tolerance.
For each option, actions of a pair of trigonometric or hyperbolic matrix
functions are simultaneously computed. The algorithm scales the matrix down
by a positive integer $s$, approximates $f(s^{-1}tA)B$ by a truncated Taylor
series, and finally uses the recurrences of the Chebyshev polynomials of the
first and second kind to recover $f(tA)B$. The selection of the scaling
parameter and the degree of the Taylor polynomial are based on a forward
error analysis and a sequence of the form $\|A^k\|^{1/k}$, in such a way that
the overall computational cost of the algorithm is optimized. Shifting is
used where applicable as a preprocessing step to reduce the scaling
parameter. The algorithm works for any matrix $A$, and its computational cost
is dominated by the formation of products of $A$ with $n\times n_0$ matrices,
which can take advantage of level-3 BLAS implementations. Our numerical
experiments show that the new algorithm behaves in a forward stable fashion
and in most problems outperforms the existing algorithms in terms of CPU
time, computational cost, and accuracy.
Comment: 4 figures, 16 pages
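The key structural point, that the action of an even function of a matrix square root can be formed without ever computing the square root because it has a Taylor expansion in the matrix itself, can be shown with a bare-bones sketch. This is not the paper's algorithm (no scaling, shifting, error analysis, or Chebyshev recovery); it is just a fixed-degree truncated Taylor series for the cosine action, with the degree `m` an illustrative choice.

```python
import numpy as np

def cos_sqrt_action(A, B, t=1.0, m=20):
    """Approximate cos(t*sqrt(A)) @ B by a truncated Taylor series.
    Since cos is even, cos(t*sqrt(A)) = sum_k (-1)^k t^{2k} A^k / (2k)!,
    a series in A itself, so no matrix square root is ever formed.
    The cost is m products of A with an n-by-n0 block, which maps
    directly onto level-3 BLAS."""
    Y = B.astype(float).copy()
    term = B.astype(float).copy()
    for k in range(1, m + 1):
        # term_k = term_{k-1} * (-t^2 A) / ((2k-1)(2k))
        term = -(t * t) * (A @ term) / ((2 * k - 1) * (2 * k))
        Y += term
    return Y
```

Note that the same series automatically delivers the hyperbolic cosine action when the spectrum of $A$ is negative, since $\cos(\mathrm{i}x) = \cosh(x)$; a production implementation would choose the degree and scaling adaptively from the tolerance, as the paper describes.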