Single shot parameter estimation via continuous quantum measurement
We present filtering equations for single shot parameter estimation using
continuous quantum measurement. By embedding parameter estimation in the
standard quantum filtering formalism, we derive the optimal Bayesian filter for
cases when the parameter takes on a finite range of values. Leveraging recent
convergence results [van Handel, arXiv:0709.2216 (2008)], we give a condition
which determines the asymptotic convergence of the estimator. For cases when
the parameter is continuous valued, we develop quantum particle filters as a
practical computational method for quantum parameter estimation.
Comment: 9 pages, 5 images
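As an illustrative aside (not from the paper): the sketch below runs the same bootstrap particle-filter machinery on a purely classical analogue, estimating an unknown frequency from a noisy measurement record. The paper's quantum particle filter instead propagates conditional density operators; all names and numbers here are assumptions for the demo.

```python
# Hypothetical classical analogue: estimate an unknown frequency `omega`
# from noisy samples of cos(omega * t) with a bootstrap particle filter.
# This illustrates only the particle-filter machinery; the paper's quantum
# particle filter propagates conditional quantum states instead.
import numpy as np

rng = np.random.default_rng(0)

true_omega = 1.3          # parameter to be estimated (assumed, for the demo)
dt, n_steps = 0.05, 400   # sampling interval and record length (assumed)
noise_std = 0.5           # measurement noise level (assumed)

# Simulated measurement record y_k = cos(omega * t_k) + noise
t = np.arange(n_steps) * dt
y = np.cos(true_omega * t) + noise_std * rng.normal(size=n_steps)

# Particles: candidate parameter values with uniform prior weights
n_particles = 2000
omega = rng.uniform(0.0, 3.0, size=n_particles)
w = np.full(n_particles, 1.0 / n_particles)

for k in range(n_steps):
    # Bayes update: reweight each particle by its measurement likelihood
    resid = y[k] - np.cos(omega * t[k])
    w *= np.exp(-0.5 * (resid / noise_std) ** 2)
    w /= w.sum()
    # Resample when the effective sample size collapses
    if 1.0 / np.sum(w ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=w)
        omega = omega[idx] + 0.01 * rng.normal(size=n_particles)  # jitter
        w.fill(1.0 / n_particles)

print(f"posterior mean omega = {np.dot(w, omega):.3f}")  # expect close to 1.3
```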
Stochastic Approximations and Perturbations in Forward-Backward Splitting for Monotone Operators
We investigate the asymptotic behavior of a stochastic version of the
forward-backward splitting algorithm for finding a zero of the sum of a
maximally monotone set-valued operator and a cocoercive operator in Hilbert
spaces. Our general setting features stochastic approximations of the
cocoercive operator and stochastic perturbations in the evaluation of the
resolvents of the set-valued operator. In addition, relaxations and not
necessarily vanishing proximal parameters are allowed. Weak and strong almost
sure convergence properties of the iterates are established under mild
conditions on the underlying stochastic processes. Leveraging these results, we
also establish the almost sure convergence of the iterates of a stochastic
variant of a primal-dual proximal splitting method for composite minimization
problems.
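For intuition (not the paper's general Hilbert-space setting), the following minimal sketch instantiates the stochastic forward-backward iteration on a lasso problem: the cocoercive operator is approximated by minibatch least-squares gradients, and the resolvent of the l1 subdifferential is soft-thresholding. The data `A`, `b`, the weight `mu`, and all step-size choices are illustrative assumptions.

```python
# A minimal sketch of the stochastic forward-backward iteration
#   x_{n+1} = x_n + lam_n * ( prox_{gam*g}( x_n - gam*(B x_n + noise_n) ) - x_n )
# specialized to lasso: B = grad of 0.5*||Ax - b||^2 (cocoercive), estimated
# by minibatch gradients, and g = mu*||x||_1, whose resolvent is
# soft-thresholding. All problem data below are made up for the demo.
import numpy as np

rng = np.random.default_rng(1)
m, d = 200, 50
A = rng.normal(size=(m, d))
x_true = np.zeros(d); x_true[:5] = 1.0
b = A @ x_true + 0.1 * rng.normal(size=m)
mu = 0.5

def soft_threshold(v, tau):
    # prox of tau*||.||_1 (resolvent of the l1 subdifferential)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the full gradient
gam = 1.0 / L                        # non-vanishing proximal parameter
lam = 1.0                            # relaxation parameter (none, for simplicity)
x = np.zeros(d)
for n in range(5000):
    batch = rng.choice(m, size=20, replace=False)
    # Unbiased stochastic approximation of B x = A^T (A x - b)
    grad = A[batch].T @ (A[batch] @ x - b[batch]) * (m / len(batch))
    x = x + lam * (soft_threshold(x - gam * grad, gam * mu) - x)

# Indices of large entries; expect roughly the first five
print("support recovered:", np.nonzero(np.abs(x) > 0.1)[0])
```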
Frank-Wolfe Algorithms for Saddle Point Problems
We extend the Frank-Wolfe (FW) optimization algorithm to solve constrained
smooth convex-concave saddle point (SP) problems. Remarkably, the method only
requires access to linear minimization oracles. Leveraging recent advances in
FW optimization, we provide the first proof of convergence of a FW-type saddle
point solver over polytopes, thereby partially answering a 30-year-old
conjecture. We also survey other convergence results and highlight gaps in the
theoretical underpinnings of FW-style algorithms. Motivating applications
without known efficient alternatives are explored through structured prediction
with combinatorial penalties as well as games over matching polytopes involving
an exponential number of constraints.
Comment: Appears in: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS 2017). 39 pages
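To make the oracle model concrete (illustrative only, not the paper's exact algorithm), here is a plain simultaneous saddle-point Frank-Wolfe update on a bilinear matrix game over two probability simplices, where the linear minimization oracle just returns a vertex. The paper's convergence guarantees rely on structure beyond this bare update (step-size schedules, away steps, etc.).

```python
# A minimal sketch of a simultaneous saddle-point Frank-Wolfe update on the
# bilinear matrix game  min_{x in simplex} max_{y in simplex} x^T M y.
# Both feasible sets are polytopes whose linear minimization oracle (LMO)
# simply picks a vertex (a coordinate basis vector). Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))          # payoff matrix (made up for the demo)

def lmo_simplex(grad):
    # LMO over the probability simplex: the vertex minimizing <grad, s>
    s = np.zeros_like(grad)
    s[np.argmin(grad)] = 1.0
    return s

n = M.shape[0]
x = np.full(n, 1.0 / n)
y = np.full(n, 1.0 / n)
x_avg, y_avg = np.zeros(n), np.zeros(n)

n_iter = 2000
for k in range(n_iter):
    gamma = 2.0 / (k + 2)            # standard FW step-size schedule
    s_x = lmo_simplex(M @ y)         # descent vertex for the x-player
    s_y = lmo_simplex(-(M.T @ x))    # ascent vertex for the y-player
    x = (1 - gamma) * x + gamma * s_x
    y = (1 - gamma) * y + gamma * s_y
    x_avg += x; y_avg += y

x_avg /= n_iter; y_avg /= n_iter
# Duality gap of the averaged pair: max_y' x^T M y' - min_x' x'^T M y
gap = np.max(x_avg @ M) - np.min(M @ y_avg)
print(f"duality gap of averaged iterates = {gap:.3f}")
```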
Convergence Analysis of Blurring Mean Shift
The blurring mean shift (BMS) algorithm, a variant of the mean shift algorithm,
is a kernel-based iterative method for data clustering, where data points are
clustered according to their convergent points via iterative blurring. In this
paper, we analyze convergence properties of the BMS algorithm by leveraging its
interpretation as an optimization procedure, which is known but has been
underutilized in existing convergence studies. Whereas existing results on
convergence properties applicable to multi-dimensional data only cover the case
where all the blurred data point sequences converge to a single point, this
study provides a convergence guarantee even when those sequences can converge
to multiple points, yielding multiple clusters. This study also shows that
the convergence of the BMS algorithm is fast, by further leveraging a
geometrical characterization of the convergent points.
Comment: Blurring mean shift, mean shift, clustering, convergence, kernel.
arXiv admin note: text overlap with arXiv:2305.0846
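A minimal sketch of the BMS iteration itself, assuming a Gaussian kernel and toy 2-D data (bandwidth and data sizes are made up): every data point is replaced by the kernel-weighted mean of all points, and points that collapse to the same convergent point form a cluster.

```python
# Blurring mean shift (BMS) with a Gaussian kernel: unlike plain mean shift,
# every point is replaced by its kernel-weighted mean at each iteration, so
# the whole dataset is "blurred" until points collapse onto one or more
# convergent points, which define the clusters.
import numpy as np

rng = np.random.default_rng(3)
# Toy data: two 2-D Gaussian blobs (illustrative)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(3.0, 0.3, (50, 2))])

h = 0.8  # kernel bandwidth (assumed)
for _ in range(30):
    # Pairwise squared distances and Gaussian kernel weights
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-0.5 * d2 / h ** 2)
    # Blur: every point moves to the weighted mean of ALL points
    X = (W @ X) / W.sum(axis=1, keepdims=True)

# Group points that have collapsed (numerically) to the same location
labels = np.unique(np.round(X, 3), axis=0, return_inverse=True)[1]
print("number of clusters:", labels.max() + 1)  # expect 2
```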
Kervolutional Neural Networks
Convolutional neural networks (CNNs) have enabled the state-of-the-art
performance in many computer vision tasks. However, little effort has been
devoted to establishing convolution in non-linear space. Existing works mainly
leverage activation layers, which can only provide point-wise non-linearity.
To solve this problem, a new operation, kervolution (kernel convolution), is
introduced to approximate complex behaviors of human perception systems by
leveraging the kernel trick. It generalizes convolution,
enhances the model capacity, and captures higher order interactions of
features, via patch-wise kernel functions, but without introducing additional
parameters. Extensive experiments show that kervolutional neural networks (KNN)
achieve higher accuracy and faster convergence than baseline CNNs.
Comment: oral paper in CVPR 2019
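As a concrete illustration (assumed details, not the authors' implementation): a single-channel 2-D kervolution with a polynomial kernel, where the usual patch/filter inner product of convolution is replaced by a patch-wise kernel evaluation, adding non-linearity without extra learned parameters.

```python
# A minimal NumPy sketch of 2-D kervolution with a polynomial kernel: the
# inner product <w, p> between filter w and image patch p is replaced by
# kappa(w, p) = (<w, p> + c)^d. Single channel, no padding; illustrative
# only. With degree=1 and c=0 it reduces to ordinary (cross-correlation
# style) convolution.
import numpy as np

def kervolve2d(image, kernel_weights, c=1.0, degree=3):
    kh, kw = kernel_weights.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            ip = np.sum(patch * kernel_weights)   # ordinary convolution term
            out[i, j] = (ip + c) ** degree        # polynomial kernel lift
    return out

rng = np.random.default_rng(4)
img = rng.normal(size=(8, 8))
w = rng.normal(size=(3, 3))
print(kervolve2d(img, w).shape)  # (6, 6)
```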