Sequential minimal optimization for quantum-classical hybrid algorithms
We propose a sequential minimal optimization method for quantum-classical
hybrid algorithms, which converges faster, is robust against statistical error,
and is hyperparameter-free. Specifically, the optimization problem of the
parameterized quantum circuits is divided into solvable subproblems by
considering only a subset of the parameters. In fact, if we choose a single
parameter, the cost function becomes a simple sine curve with period 2π,
and hence we can minimize it exactly with respect to the chosen parameter.
Furthermore, even in general cases, the cost function is given by a simple sum
of trigonometric functions with certain periods and hence can be minimized by
using a classical computer. By repeatedly performing this procedure, we can
optimize the parameterized quantum circuits so that the cost function becomes
as small as possible. We perform numerical simulations and compare the proposed
method with existing gradient-free and gradient-based optimization algorithms.
We find that the proposed method substantially outperforms the existing
optimization algorithms and converges to a solution almost independent of the
initial choice of the parameters. This accelerates almost all quantum-classical
hybrid algorithms readily and would be a key tool for harnessing near-term
quantum devices.
Comment: 11 pages, 4 figures
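The single-parameter step described above can be sketched in a few lines: if the cost along one parameter is exactly a sinusoid a·sin(θ + b) + c, three evaluations determine it completely and the minimizer follows in closed form. This is an illustrative sketch under that assumption, not the authors' implementation; all names are mine.

```python
import math

def rotostep(cost, theta):
    """One exact single-parameter minimization step.

    Assumes the cost along this parameter is a*sin(theta + b) + c,
    which three evaluations pin down completely (illustrative sketch,
    not the authors' implementation)."""
    m0 = cost(theta)
    mp = cost(theta + math.pi / 2)
    mm = cost(theta - math.pi / 2)
    # 2*m0 - mp - mm = 2a*sin(theta + b) and mp - mm = 2a*cos(theta + b),
    # so atan2 recovers theta + b; the minimum of a*sin(.) sits at -b - pi/2.
    theta_min = theta - math.pi / 2 - math.atan2(2 * m0 - mp - mm, mp - mm)
    # wrap into [-pi, pi)
    return (theta_min + math.pi) % (2 * math.pi) - math.pi

# Toy sinusoidal cost with a known minimum at theta = -0.3 - pi/2.
cost = lambda t: 1.5 * math.sin(t + 0.3) + 2.0
```

Sweeping this step over the parameters one at a time gives the sequential coordinate-wise optimization the abstract describes; each step is exact regardless of where the other parameters currently sit.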
Sensitivity analysis of expensive black-box systems using metamodeling
Simulations are becoming ever more common as a tool for designing complex
products. Sensitivity analysis techniques can be applied to these simulations
to gain insight, or to reduce the complexity of the problem at hand. However,
these simulators are often expensive to evaluate, and sensitivity analysis
typically requires a large number of evaluations. Metamodeling has been
successfully applied in the past to reduce the number of evaluations required
for design tasks such as optimization and design space exploration. In this
paper, we propose a novel sensitivity analysis algorithm for variance and
derivative based indices using sequential sampling and metamodeling. Several
stopping criteria are proposed and investigated to keep the total number of
evaluations minimal. The results show that both variance and derivative based
techniques can be accurately computed with a minimal number of evaluations
using fast metamodels and FLOLA-Voronoi or density sequential sampling
algorithms.
Comment: proceedings of the Winter Simulation Conference 201
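The workflow the abstract describes can be sketched as: fit a cheap metamodel on the few expensive simulator runs, then estimate variance-based indices by Monte Carlo on the metamodel alone. The pick-freeze estimator and the toy surrogate below are my own illustrative stand-ins; the paper's FLOLA-Voronoi sampling and fast metamodel fitting are not shown.

```python
import random

def sobol_first_order(model, d, i, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index
    S_i = Var(E[Y | X_i]) / Var(Y) for uniform inputs on [0, 1]^d.

    This many evaluations is affordable only because `model` is a cheap
    metamodel, not the expensive simulator (illustrative sketch)."""
    rng = random.Random(seed)
    ya, yb = [], []
    for _ in range(n):
        x = [rng.random() for _ in range(d)]
        xp = [rng.random() for _ in range(d)]
        xp[i] = x[i]                      # "freeze" coordinate i
        ya.append(model(x))
        yb.append(model(xp))
    mean = sum(ya) / n
    var = sum((v - mean) ** 2 for v in ya) / n
    # Cov(Y_A, Y_B) equals Var(E[Y | X_i]) under the frozen design.
    cov = sum((a - mean) * (b - mean) for a, b in zip(ya, yb)) / n
    return cov / var

# Cheap additive stand-in surrogate: Y = X0 + 0.5*X1. For uniform
# inputs the exact first-order indices are S0 = 0.8 and S1 = 0.2.
surrogate = lambda x: x[0] + 0.5 * x[1]
```

In practice the surrogate would itself be refined by sequential sampling until the estimated indices stabilize, which is where the stopping criteria investigated in the paper come in.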
A Novel Model of Working Set Selection for SMO Decomposition Methods
In the process of training Support Vector Machines (SVMs) by decomposition
methods, working set selection is an important technique, and several
effective schemes have been applied in this field. To improve working set
selection, we propose a new model for working set selection in sequential
minimal optimization (SMO) decomposition methods. In this model, the working
set B is selected without reselection. Several properties are established with
simple proofs, and experiments demonstrate that the proposed method is
generally faster than existing methods.
Comment: 8 pages, 12 figures; submitted to the IEEE International Conference on Tools with Artificial Intelligence
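For context, the classical maximal-violating-pair rule that SMO-type solvers use to pick a two-element working set can be sketched as below. This is the baseline style of selection the paper improves upon, not its proposed B-selection model; function and variable names are my own.

```python
def select_working_set(grad, y, alpha, C, tol=1e-3):
    """Maximal-violating-pair selection for the dual SVM problem:
    the classical two-element working-set rule in SMO-type solvers
    (baseline sketch; the paper proposes a different selection model)."""
    i, m = -1, float("-inf")   # argmax of -y_t*grad_t over I_up
    j, M = -1, float("inf")    # argmin of -y_t*grad_t over I_low
    for t in range(len(y)):
        in_up = (alpha[t] < C and y[t] == 1) or (alpha[t] > 0 and y[t] == -1)
        in_low = (alpha[t] < C and y[t] == -1) or (alpha[t] > 0 and y[t] == 1)
        v = -y[t] * grad[t]
        if in_up and v > m:
            i, m = t, v
        if in_low and v < M:
            j, M = t, v
    if m - M < tol:
        return None            # KKT conditions satisfied to tolerance
    return i, j
```

Each SMO iteration then optimizes the dual objective analytically over the two selected multipliers, so the cost of selection itself, and of any reselection, is what schemes like the one proposed here aim to reduce.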
A Unifying Framework in Vector-valued Reproducing Kernel Hilbert Spaces for Manifold Regularization and Co-Regularized Multi-view Learning
This paper presents a general framework in vector-valued reproducing kernel
Hilbert spaces (RKHS) for the problem of learning an unknown functional dependency
between a structured input space and a structured output space. Our formulation
encompasses both Vector-valued Manifold Regularization and Co-regularized
Multi-view Learning, providing in particular a unifying framework linking these
two important learning approaches. In the case of the least squares loss
function, we provide a closed-form solution, obtained by solving a
system of linear equations. In the case of Support Vector Machine (SVM)
classification, our formulation generalizes in particular both the binary
Laplacian SVM to the multi-class, multi-view settings and the multi-class
Simplex Cone SVM to the semi-supervised, multi-view settings. The solution is
obtained by solving a single quadratic optimization problem, as in standard
SVM, via the Sequential Minimal Optimization (SMO) approach. Empirical results
obtained on the task of object recognition, using several challenging datasets,
demonstrate the competitiveness of our algorithms compared with other
state-of-the-art methods.
Comment: 72 pages
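The "closed form solution ... obtained by solving a system of linear equations" can be illustrated in its simplest corner: scalar-valued, fully supervised, single-view regularized least squares, where the representer theorem reduces the problem to one linear solve. This is only a degenerate special case of the vector-valued framework; the kernel choice and names below are my own.

```python
import numpy as np

def kernel_ridge_fit(X, y, lam=0.1, gamma=1.0):
    """Closed-form regularized least-squares solve in a scalar-valued
    RKHS: solve (K + lam*n*I) a = y for the expansion coefficients a.

    Degenerate single-view, fully supervised corner of the vector-valued
    framework (illustrative sketch; Gaussian kernel is my choice)."""
    n = len(X)
    # Gaussian kernel Gram matrix K[p, q] = exp(-gamma * ||x_p - x_q||^2)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)
    a = np.linalg.solve(K + lam * n * np.eye(n), y)
    return K, a

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 0.0])
K, a = kernel_ridge_fit(X, y, lam=1e-6)   # near-interpolating fit
```

In the full framework the scalar outputs become vectors, extra regularization terms encode the manifold and multi-view structure, and the linear system grows accordingly, but the solution remains a single linear solve of the same shape.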
