Probabilistic Error Analysis for Inner Products
Probabilistic models are proposed for bounding the forward error in the
numerically computed inner product (dot product, scalar product) of two
real vectors. We derive probabilistic perturbation bounds, as well as
probabilistic roundoff error bounds for the sequential accumulation of the
inner product. These bounds are non-asymptotic, explicit, and make minimal
assumptions on perturbations and roundoffs.
The perturbations are represented as independent, bounded, zero-mean random
variables, and the probabilistic perturbation bound is based on Azuma's
inequality. The roundoffs are also represented as bounded, zero-mean random
variables. The first probabilistic bound assumes that the roundoffs are
independent, while the second one does not. For the latter, we construct a
martingale that mirrors the sequential order of computations.
Numerical experiments confirm that our bounds are more informative, often by
several orders of magnitude, than traditional deterministic bounds -- even for
small vector dimensions and very stringent success probabilities. In
particular, the probabilistic roundoff error bounds grow with the square root
of the vector dimension rather than with the dimension itself, thus giving a
quantitative confirmation of Wilkinson's intuition. The paper concludes with a
critical assessment of the probabilistic approach.
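As a rough numerical illustration (not the paper's experiments), the gap between the two pictures can be observed directly: accumulate an inner product sequentially in single precision, compare against a double-precision reference, and set the observed error against a deterministic n·u-style bound and a probabilistic sqrt(n)·u-style bound. The exact bound expressions below are simplifications with the constants omitted, not the ones derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
u = np.finfo(np.float32).eps / 2          # unit roundoff of single precision

x = rng.uniform(-1, 1, n)
y = rng.uniform(-1, 1, n)
exact = np.dot(x, y)                      # double-precision reference

# Sequential accumulation of the inner product in single precision.
s = np.float32(0.0)
for xi, yi in zip(x.astype(np.float32), y.astype(np.float32)):
    s = np.float32(s + xi * yi)
err = abs(float(s) - exact)

# Classical worst-case bound grows like n*u, while the probabilistic
# analysis predicts growth like sqrt(n)*u (up to modest constants).
abs_sum = np.sum(np.abs(x) * np.abs(y))
det_bound = n * u * abs_sum
prob_bound = np.sqrt(n) * u * abs_sum

print(err < prob_bound < det_bound)       # → True
```

For random data the observed error typically sits orders of magnitude below the deterministic bound, while the sqrt(n)-type bound tracks it far more closely.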
Probabilistic Polynomials and Hamming Nearest Neighbors
We show how to compute any symmetric Boolean function on n variables over
any field (as well as the integers) with a probabilistic polynomial of degree
O(sqrt(n log(1/eps))) and error at most eps. The degree
dependence on n and eps is optimal, matching a lower bound of Razborov
(1987) and Smolensky (1987) for the MAJORITY function. The proof is
constructive: a low-degree polynomial can be efficiently sampled from the
distribution.
This polynomial construction is combined with other algebraic ideas to give
the first subquadratic time algorithm for computing a (worst-case) batch of
Hamming distances in superlogarithmic dimensions, exactly. To illustrate,
suppose we are given a database of n vectors in {0,1}^d and a collection of
n query vectors in the same dimension. For each query vector, we wish to
compute a database vector with minimum Hamming distance from it. We solve
this problem in randomized subquadratic time; in particular, the problem is
in "truly subquadratic" time for dimensions logarithmic in n, and in
subquadratic time for somewhat larger, superlogarithmic dimensions. We apply
the algorithm to computing pairs with maximum inner product, closest pair
for vectors with bounded integer entries, and pairs with maximum Jaccard
coefficients.
Comment: 16 pages. To appear in 56th Annual IEEE Symposium on Foundations of
Computer Science (FOCS 2015)
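For background (this is the standard folklore reduction, not the paper's subquadratic algorithm): a whole batch of Hamming distances between 0/1 vectors can be expressed through inner products, so all pairwise distances come out of a single matrix product.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 8, 16
D = rng.integers(0, 2, size=(n, d))   # database: n binary vectors
Q = rng.integers(0, 2, size=(n, d))   # n query vectors

# For 0/1 vectors, Hamming(u, v) = |u| + |v| - 2 * <u, v>, so all n^2
# pairwise distances follow from one n x n matrix product.
H = Q.sum(1)[:, None] + D.sum(1)[None, :] - 2 * (Q @ D.T)
nearest = H.argmin(axis=1)            # closest database vector per query

# Sanity check against direct bitwise comparison.
brute = np.array([[np.sum(q != v) for v in D] for q in Q])
print(np.array_equal(H, brute))  # → True
```

This reduction still costs quadratic time overall; the point of the paper is to beat that with probabilistic polynomials and fast algebraic techniques.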
Probabilistic cloning and identification of linearly independent quantum states
We construct a probabilistic quantum cloning machine by a general
unitary-reduction operation. With a postselection of the measurement results,
the machine yields faithful copies of the input states. It is shown that
states secretly chosen from a certain set can be
probabilistically cloned if and only if they are linearly independent. We
derive the best possible cloning efficiencies. Probabilistic cloning has a
close connection with the problem of identification of a set of states, which
is a type of measurement on linearly independent states. The optimal
efficiencies for this type of measurement are obtained.
Comment: Extension of quant-ph/9705018, 12 pages, LaTeX, to appear in Phys.
Rev. Lett.
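The linear-independence condition is easy to test numerically for concretely given state vectors, e.g. via the rank of their Gram matrix of pairwise inner products; a small illustrative sketch (not from the paper):

```python
import numpy as np

def linearly_independent(states, tol=1e-10):
    """states: list of complex state vectors (need not be orthogonal).
    The Gram matrix G[i, j] = <psi_i | psi_j> has full rank exactly
    when the states are linearly independent."""
    V = np.array(states)
    G = V.conj() @ V.T
    return np.linalg.matrix_rank(G, tol=tol) == len(states)

# Two non-orthogonal but independent qubit states: |0> and |+>.
zero = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

print(linearly_independent([zero, plus]))   # → True (clonable set)
print(linearly_independent([zero, zero]))   # → False
```

Non-orthogonality is the interesting regime here: orthogonal states can be cloned deterministically, while linearly independent but non-orthogonal ones can only be cloned probabilistically.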
Randomized Local Model Order Reduction
In this paper we propose local approximation spaces for localized model order
reduction procedures such as domain decomposition and multiscale methods. Those
spaces are constructed from local solutions of the partial differential
equation (PDE) with random boundary conditions, yield an approximation that
converges provably at a nearly optimal rate, and can be generated at close to
optimal computational complexity. In many localized model order reduction
approaches like the generalized finite element method, static condensation
procedures, and the multiscale finite element method local approximation spaces
can be constructed by approximating the range of a suitably defined transfer
operator that acts on the space of local solutions of the PDE. Optimal local
approximation spaces that yield in general an exponentially convergent
approximation are given by the left singular vectors of this transfer operator
[I. Babu\v{s}ka and R. Lipton 2011, K. Smetana and A. T. Patera 2016]. However,
the direct calculation of these singular vectors is computationally very
expensive. In this paper, we propose an adaptive randomized algorithm based on
methods from randomized linear algebra [N. Halko et al. 2011], which constructs
a local reduced space approximating the range of the transfer operator and thus
the optimal local approximation spaces. The adaptive algorithm relies on a
probabilistic a posteriori error estimator for which we prove that it is both
efficient and reliable with high probability. Several numerical experiments
confirm the theoretical findings.
Comment: 31 pages, 14 figures, 1 table, 1 algorithm
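The range-finder idea can be sketched in a few lines, assuming (for illustration only) that the operator is available as an explicit matrix; in the paper it is a transfer operator accessible only through local PDE solves. The adaptive block loop and the random-probe error estimate below follow the spirit of Halko et al. (2011), not the paper's exact algorithm or estimator.

```python
import numpy as np

def randomized_range(A, tol=1e-6, block=5, seed=0):
    """Adaptively build an orthonormal basis Q with A ≈ Q (Q^T A)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Q = np.zeros((m, 0))
    while Q.shape[1] < min(m, n):
        # Apply the operator to a block of random Gaussian vectors,
        # remove what the current basis already captures, orthonormalize.
        Y = A @ rng.standard_normal((n, block))
        Y -= Q @ (Q.T @ Y)
        Qnew, _ = np.linalg.qr(Y)
        Q = np.hstack([Q, Qnew])
        # Probabilistic a posteriori error estimate: residual norm of the
        # operator applied to fresh random test vectors.
        W = A @ rng.standard_normal((n, block))
        est = np.max(np.linalg.norm(W - Q @ (Q.T @ W), axis=0))
        if est < tol:
            break
    return Q

# Rank-10 test operator: the adaptive loop should stop after ~10 vectors.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 300))
Q = randomized_range(A)
print(Q.shape[1], np.linalg.norm(A - Q @ (Q.T @ A)) < 1e-6)
```

The key property mirrored here is that the basis size adapts to the (unknown) numerical rank of the operator, and the stopping test only requires further applications of the operator to random vectors.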
A study of pattern recovery in recurrent correlation associative memories
In this paper, we analyze the recurrent correlation associative memory (RCAM) model of Chiueh and Goodman. This is an associative memory in which stored binary memory patterns are recalled via an iterative update rule. The update of the individual pattern-bits is controlled by an excitation function, which takes as its argument the inner product between the stored memory patterns and the input patterns. Our contribution is to analyze the dynamics of pattern recall when the input patterns are corrupted by noise of a relatively unrestricted class. We make three contributions. First, we show how to identify the excitation function which maximizes the separation (the Fisher discriminant) between the uncorrupted realization of the noisy input pattern and the remaining patterns residing in the memory. Moreover, we show that the excitation function which gives maximum separation is exponential when the input bit-errors follow a binomial distribution. Our second contribution is to develop an expression for the expectation value of bit-error probability on the input pattern after one iteration. We show how to identify the excitation function which minimizes the bit-error probability. However, there is no closed-form solution and the excitation function must be recovered numerically. The relationship between the excitation functions which result from the two different approaches is examined for a binomial distribution of bit-errors. The final contribution is to develop a semiempirical approach to the modeling of the dynamics of the RCAM. This provides us with a numerical means of predicting the recall error rate of the memory. It also allows us to develop an expression for the storage capacity for a given recall error rate.
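A minimal sketch of such a recall loop, using bipolar patterns and the exponential excitation function singled out above; the parameters (lam, pattern and memory sizes, noise level) are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 5                                  # bits per pattern, stored patterns
patterns = rng.choice([-1, 1], size=(M, N))   # bipolar memory patterns

def rcam_recall(x, patterns, lam=0.5, iters=20):
    """RCAM-style recall: each update weights every stored pattern by an
    excitation f(s) = exp(lam * s) of its inner product with the current
    state, then thresholds the weighted sum bitwise."""
    for _ in range(iters):
        excitation = np.exp(lam * (patterns @ x))   # f(<pattern_m, x>)
        s = excitation @ patterns                   # weighted pattern sum
        x_new = np.where(s >= 0, 1, -1)
        if np.array_equal(x_new, x):                # fixed point reached
            break
        x = x_new
    return x

# Corrupt 10 of 64 bits of a stored pattern; recall should restore it.
target = patterns[0]
noisy = target.copy()
flipped = rng.choice(N, size=10, replace=False)
noisy[flipped] *= -1

recalled = rcam_recall(noisy, patterns)
print(np.array_equal(recalled, target))  # → True
```

Because the excitation is exponential in the overlap, the stored pattern closest to the noisy input dominates the weighted sum, which is the separation effect the analysis above quantifies.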
Supervised Classification: Quite a Brief Overview
The original problem of supervised classification considers the task of
automatically assigning objects to their respective classes on the basis of
numerical measurements derived from these objects. Classifiers are the tools
that implement the actual functional mapping from these measurements---also
called features or inputs---to the so-called class label---or output. The
fields of pattern recognition and machine learning study ways of constructing
such classifiers. The main idea behind supervised methods is that of learning
from examples: given a number of example input-output relations, to what extent
can the general mapping be learned that takes any new and unseen feature vector
to its correct class? This chapter provides a basic introduction to the
underlying ideas of how to come to a supervised classification problem. In
addition, it provides an overview of some specific classification techniques,
delves into the issues of object representation and classifier evaluation, and
(very) briefly covers some variations on the basic supervised classification
task that may also be of interest to the practitioner.
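The learning-from-examples idea can be made concrete with a deliberately simple classifier, here a nearest-mean rule on synthetic two-class data; this is one of many possible choices for illustration, not one the chapter singles out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class problem: 2-D feature vectors drawn around two class means.
X0 = rng.normal(loc=-2.0, scale=1.0, size=(50, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# "Learning from examples": estimate one mean per class from the labeled
# training data; the learned mapping assigns any new, unseen feature
# vector to the class with the nearest mean.
means = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(np.linalg.norm(means - x, axis=1)))

print(predict(np.array([-1.5, -2.5])), predict(np.array([2.5, 1.0])))  # → 0 1
```

Even this tiny example already exhibits the chapter's core questions: how the objects are represented as features, what functional form the classifier takes, and how well the learned mapping generalizes beyond the training examples.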