Solving the Optimal Mistuning Problem by Symmetry: A General Framework for Extending Flutter Boundaries in Turbomachines via Mistuning
A general framework is presented for analyzing and optimizing the stability increases due to mistuning. The framework is model independent and is based primarily on symmetry arguments. Difficult practical issues are transformed into tractable mathematical questions. It is shown that mistuning analysis reduces to a block circulant matrix eigenvalue/eigenvector problem, which can be solved efficiently even for large problems. Similarly, the optimization becomes a standard linearly constrained quadratic programming problem and can be solved numerically. Since the methods given are model independent, they can be applied to various models and allow the researcher to easily conclude which models accurately capture mistuning and which do not. A simple quasi-steady model for flutter in a cascade is used to illustrate and validate the results in this paper.
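The key computational fact behind the block circulant reduction is that circulant matrices are diagonalized by the discrete Fourier transform, so their eigenvalues are the DFT of the first row. A minimal sketch of the scalar case (the function name and example row are invented for illustration; the block case applies the same transform block-wise):

```python
import cmath

def circulant_eigenvalues(first_row):
    """Eigenvalues of the circulant matrix with the given first row,
    obtained as the DFT of that row (O(n^2) here; an FFT gives
    O(n log n), which is what makes large problems tractable)."""
    n = len(first_row)
    return [sum(c * cmath.exp(-2j * cmath.pi * m * k / n)
                for m, c in enumerate(first_row))
            for k in range(n)]

# First row (2, -1, 0, -1): eigenvalues 2 - 2*cos(pi*k/2) = 0, 2, 4, 2
eigs = circulant_eigenvalues([2.0, -1.0, 0.0, -1.0])
```

Because the transform is fixed and known, no general eigensolver is needed; only the (block) spectra along the transformed diagonal must be computed.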
Perron vector optimization applied to search engines
In recent years, Google's PageRank optimization problems have been
extensively studied. In that case, the ranking is given by the invariant
measure of a stochastic matrix. In this paper, we consider the more general
situation in which the ranking is determined by the Perron eigenvector of a
nonnegative, but not necessarily stochastic, matrix, in order to cover
Kleinberg's HITS algorithm. We also give some results for Tomlin's HOTS
algorithm. The problem then consists in finding an optimal outlink strategy
subject to design constraints and for a given search engine.
We study relaxed versions of these problems, in which weighted hyperlinks
are allowed. We provide an efficient algorithm for the
computation of the matrix of partial derivatives of the criterion, which
exploits the low-rank property of this matrix. We give a scalable algorithm that couples
gradient and power iterations and gives a local minimum of the Perron vector
optimization problem. We prove convergence by considering it as an approximate
gradient method.
We then show that optimal linkage strategies for the HITS and HOTS optimization
problems satisfy a threshold property. We report numerical results on fragments
of the real web graph for these search engine optimization problems.
Comment: 28 pages, 5 figures
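The inner step of the coupled gradient and power iterations is the classical power method for the Perron eigenvector of a nonnegative matrix. A minimal pure-Python sketch of that step (the example matrix, iteration count, and 1-norm normalization are illustrative assumptions, not the paper's algorithm):

```python
def perron_vector(A, iters=200):
    """Power iteration for the Perron root and eigenvector of a
    nonnegative matrix A (list of rows). Assumes A is irreducible
    and aperiodic, so the iteration converges to the unique
    positive eigenvector."""
    n = len(A)
    x = [1.0 / n] * n
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        s = sum(y)                       # normalize in the 1-norm
        x = [v / s for v in y]
    lam = sum(A[0][j] * x[j] for j in range(n)) / x[0]  # eigenvalue estimate
    return lam, x

A = [[1.0, 2.0], [3.0, 1.0]]             # Perron root 1 + sqrt(6)
lam, x = perron_vector(A)
```

For a stochastic matrix the Perron vector is the invariant measure (the PageRank case); the nonnegative case above covers HITS-style rankings as described in the abstract.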
Differentiable Programming Tensor Networks
Differentiable programming is an emerging programming paradigm that composes
parameterized algorithmic components and trains them using automatic
differentiation (AD). The concept emerged from deep learning but is not
limited to training neural networks. We present theory and practice of
programming tensor network algorithms in a fully differentiable way. By
formulating the tensor network algorithm as a computation graph, one can
compute higher order derivatives of the program accurately and efficiently
using AD. We present essential techniques to differentiate through tensor
network contractions, including stable AD for tensor decompositions and
efficient backpropagation through fixed point iterations. As a demonstration,
we compute the specific heat of the Ising model directly by taking the second
order derivative of the free energy obtained in the tensor renormalization
group calculation. Next, we perform gradient-based variational optimization of
infinite projected entangled pair states for the quantum antiferromagnetic
Heisenberg model and obtain state-of-the-art variational energy and
magnetization with moderate effort. Differentiable programming removes
laborious human effort in deriving and implementing analytical gradients for
tensor network programs, which opens the door to more innovations in tensor
network algorithms and applications.
Comment: Typos corrected, discussion and refs added; revised version accepted
for publication in PRX. Source code available at
https://github.com/wangleiphy/tensorgra
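The specific-heat-from-second-derivative idea can be illustrated on the exactly solvable 1D Ising chain, where beta*f = -ln(2 cosh(beta*J)) with J = 1. Below is a toy forward-mode (truncated Taylor) AD sketch standing in for the reverse-mode AD through tensor renormalization used in the paper; the `Jet` class and all names are invented for illustration:

```python
import math

class Jet:
    """Second-order Taylor number (value, f', f'') -- a minimal
    forward-mode AD sketch sufficient for second derivatives."""
    def __init__(self, v, d1=0.0, d2=0.0):
        self.v, self.d1, self.d2 = v, d1, d2
    def __add__(self, o):
        o = o if isinstance(o, Jet) else Jet(o)
        return Jet(self.v + o.v, self.d1 + o.d1, self.d2 + o.d2)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Jet) else Jet(o)
        return Jet(self.v * o.v,
                   self.d1 * o.v + self.v * o.d1,                 # product rule
                   self.d2 * o.v + 2 * self.d1 * o.d1 + self.v * o.d2)
    __rmul__ = __mul__
    def __neg__(self):
        return Jet(-self.v, -self.d1, -self.d2)

def cosh(x):  # chain rule for cosh: (cosh u)'' = cosh(u) u'^2 + sinh(u) u''
    c, s = math.cosh(x.v), math.sinh(x.v)
    return Jet(c, s * x.d1, c * x.d1 ** 2 + s * x.d2)

def log(x):   # chain rule for log: (ln u)'' = u''/u - (u'/u)^2
    r = x.d1 / x.v
    return Jet(math.log(x.v), r, x.d2 / x.v - r * r)

def beta_f(beta):
    # beta * f(beta) = -ln(2 cosh(beta)) for the 1D Ising chain, J = 1
    return -log(2 * cosh(beta))

b = Jet(0.7, 1.0, 0.0)        # seed d/dbeta at beta = 0.7
out = beta_f(b)
C = -0.7 ** 2 * out.d2        # specific heat per site, (beta*sech(beta))^2
```

The same pattern, propagating derivatives through every primitive of the computation graph, is what AD frameworks automate at scale.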
Optimization of Gaussian Random Fields
Many engineering systems are subject to spatially distributed uncertainty,
i.e. uncertainty that can be modeled as a random field. Altering the mean or
covariance of this uncertainty will in general change the statistical
distribution of the system outputs. We present an approach for computing the
sensitivity of the statistics of system outputs with respect to the parameters
describing the mean and covariance of the distributed uncertainty. This
sensitivity information is then incorporated into a gradient-based optimizer to
optimize the structure of the distributed uncertainty to achieve desired output
statistics. This framework is applied to perform variance optimization for a
model problem and to optimize the manufacturing tolerances of a gas turbine
compressor blade.
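The sensitivity computation can be sketched in a scalar toy setting with a pathwise (reparameterization) Monte Carlo estimator: writing X = mu + sigma*Z with Z ~ N(0, 1) gives d/dsigma E[g(X)] = E[g'(X) * Z]. The function names, sample size, and test integrand below are illustrative assumptions, not the paper's method:

```python
import random

def sensitivity_to_sigma(g_prime, mu, sigma, n=100_000, seed=0):
    """Pathwise Monte Carlo estimate of d E[g(X)] / d sigma for
    X = mu + sigma * Z, Z ~ N(0, 1): average g'(X) * dX/dsigma,
    where dX/dsigma = Z (a toy stand-in for the random-field
    covariance sensitivities described above)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        acc += g_prime(mu + sigma * z) * z   # chain rule through the sample
    return acc / n

# g(x) = x^2: E[g(X)] = mu^2 + sigma^2, so the exact sensitivity is 2*sigma
est = sensitivity_to_sigma(lambda x: 2 * x, mu=1.0, sigma=0.5)
```

Such gradients of output statistics with respect to the parameters of the input distribution are exactly what a gradient-based optimizer over the uncertainty structure needs.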