Application of Statistical Learning Control to the Design of a Fixed-Order Controller for a Flexible Beam
This paper shows how probabilistic methods and statistical learning theory can provide approximate solutions to “difficult” control problems. The paper also introduces bootstrap learning methods to drastically reduce the bound on the number of samples required to achieve a given performance level. These results are then applied to obtain more efficient algorithms that probabilistically guarantee stability and robustness levels when designing controllers for uncertain systems. The paper includes examples of applications of these methods.
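The sample-complexity reasoning behind such probabilistic guarantees can be illustrated with the classical additive Chernoff bound: roughly ln(2/δ)/(2ε²) i.i.d. samples of the uncertainty suffice to estimate a stability probability to within ε at confidence 1 − δ. The sketch below shows this baseline Monte Carlo test (the bootstrap methods in the paper tighten the bound); the toy plant, the sampler, and all names are hypothetical.

```python
import math
import random

def chernoff_samples(eps, delta):
    """Additive Chernoff bound: i.i.d. samples needed to estimate a
    probability to within eps with confidence 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def estimate_stability(is_stable, sample_uncertainty, eps=0.05, delta=0.01):
    """Monte Carlo estimate of the probability that a fixed controller
    stabilizes the uncertain plant; is_stable(q) reports closed-loop
    stability for one sampled uncertainty q."""
    n = chernoff_samples(eps, delta)
    hits = sum(is_stable(sample_uncertainty()) for _ in range(n))
    return hits / n, n

# Toy usage: uncertain closed-loop pole p ~ U(-1.2, 0.1); stable iff p < 0.
prob, n = estimate_stability(lambda p: p < 0.0,
                             lambda: random.uniform(-1.2, 0.1))
print(f"{n} samples, estimated stability probability {prob:.3f}")
```

For ε = 0.05 and δ = 0.01 this baseline already demands 1060 samples, which is what motivates the reduced bounds studied in the paper.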
Statistical learning control of delay systems: theory and algorithms
Recently, probabilistic methods and statistical learning theory have been shown to provide approximate solutions to “difficult” control problems. Unfortunately, the number of samples required to guarantee stringent performance levels may be prohibitively large. In this paper, using recent results by the authors, a more efficient statistical algorithm is presented. Using this algorithm, we design static output controllers for a nonlinear plant with uncertain delay.
Rank penalized estimation of a quantum system
We introduce a new method to reconstruct the density matrix of a system of n qubits and estimate its rank from data obtained by quantum state tomography measurements repeated m times. The procedure consists in minimizing the risk of a linear estimator of the state, penalized by a given rank (ranging from 1 to the dimension 2^n), where the linear estimator is previously obtained by the moment method. We obtain simultaneously an estimator of the rank and the resulting density matrix associated with this rank. We establish an upper bound for the error of the penalized estimator, evaluated with the Frobenius norm, and prove consistency of the estimator of the rank. The proposed methodology is computationally efficient and is illustrated with example states and real experimental data sets.
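As a rough illustration of the rank-selection principle, the sketch below truncates a pilot estimator to each candidate rank and keeps the rank minimizing a penalized Frobenius risk. The quadratic loss, the linear penalty pen(k) = λk, and all names are our assumptions, not the paper's exact penalty.

```python
import numpy as np

def rank_penalized_estimate(rho_hat, pen):
    """For each candidate rank k, keep the k leading eigen-components of the
    Hermitian pilot estimate rho_hat and score the truncation by squared
    Frobenius loss plus a rank penalty pen(k); return the best (k, estimate)."""
    w, v = np.linalg.eigh(rho_hat)          # ascending eigenvalues
    order = np.argsort(w)[::-1]             # reorder, largest first
    w, v = w[order], v[:, order]
    best = None
    for k in range(1, len(w) + 1):
        rho_k = (v[:, :k] * w[:k]) @ v[:, :k].conj().T   # rank-k truncation
        risk = np.linalg.norm(rho_k - rho_hat, "fro") ** 2 + pen(k)
        if best is None or risk < best[0]:
            best = (risk, k, rho_k)
    return best[1], best[2]

# Toy usage with a hypothetical penalty 0.01 * k on a noisy rank-2 state.
rng = np.random.default_rng(0)
rho = np.diag([0.7, 0.3, 0.0, 0.0])
noise = 0.02 * rng.normal(size=(4, 4))
pilot = rho + (noise + noise.T) / 2
k, rho_pen = rank_penalized_estimate(pilot, pen=lambda k: 0.01 * k)
print("selected rank:", k)
```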
Statistical controller design for the linear benchmark problem
In this paper some fixed-order controllers are designed via statistical methods for the benchmark problem originally presented at the 1990 American Control Conference. Based on some recent results by the authors, it is shown that the statistical approach is a valid method for designing robust controllers. Two different controllers are proposed, and their performance is compared with that of controllers of the same structure designed using different techniques.
MCRapper: Monte-Carlo Rademacher Averages for Poset Families and Approximate Pattern Mining
We present MCRapper, an algorithm for efficient computation of Monte-Carlo
Empirical Rademacher Averages (MCERA) for families of functions exhibiting
poset (e.g., lattice) structure, such as those that arise in many pattern
mining tasks. The MCERA allows us to compute upper bounds to the maximum
deviation of sample means from their expectations; thus it can be used to find
both statistically-significant functions (i.e., patterns) when the available
data is seen as a sample from an unknown distribution, and approximations of
collections of high-expectation functions (e.g., frequent patterns) when the
available data is a small sample from a large dataset. This feature is a strong
improvement over previously proposed solutions that could only achieve one of
the two. MCRapper uses upper bounds to the discrepancy of the functions to
efficiently explore and prune the search space, a technique borrowed from
pattern mining itself. To show the practical use of MCRapper, we employ it to
develop TFP-R, an algorithm for the task of True Frequent Pattern (TFP) mining.
TFP-R gives guarantees on the probability of including any false positives
(precision) and exhibits higher statistical power (recall) than existing
methods offering the same guarantees. We evaluate MCRapper and TFP-R and show
that they outperform the state-of-the-art for their respective tasks.
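For intuition, the n-trial MCERA itself can be computed by brute force: draw n Rademacher vectors and average, over the draws, the supremum of the signed empirical means across the family. The sketch below shows this exhaustive baseline; MCRapper's contribution is computing the same quantity while pruning the family through its poset structure, which the sketch does not attempt. The toy family and all names are hypothetical.

```python
import numpy as np

def mcera(F, X, n_trials=10, seed=None):
    """Brute-force n-trial Monte-Carlo Empirical Rademacher Average:
    (1/n) * sum over trials of sup_f (1/m) * sum_i sigma_i * f(x_i),
    enumerating the whole function family F on the sample X."""
    rng = np.random.default_rng(seed)
    m = len(X)
    vals = np.array([[f(x) for x in X] for f in F])   # |F| x m value matrix
    total = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=m)       # one Rademacher vector
        total += np.max(vals @ sigma) / m             # supremum over F
    return total / n_trials

# Toy usage: indicator functions of single items over tiny transactions.
X = ["ab", "bc", "abd", "cd", "ac"]
F = [lambda t, i=i: 1.0 if i in t else 0.0 for i in "abcd"]
print("MCERA estimate:", mcera(F, X, n_trials=100, seed=0))
```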
Large-scale Nonlinear Variable Selection via Kernel Random Features
We propose a new method for input variable selection in nonlinear regression.
The method is embedded into a kernel regression machine that can model general
nonlinear functions, not being a priori limited to additive models. This is the
first kernel-based variable selection method applicable to large datasets. It
sidesteps the typical poor scaling properties of kernel methods by mapping the
inputs into a relatively low-dimensional space of random features. The
algorithm discovers the variables relevant for the regression task together
with learning the prediction model through learning the appropriate nonlinear
random feature maps. We demonstrate the outstanding performance of our method
on a set of large-scale synthetic and real datasets.
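To make the random-feature construction concrete, here is a hedged sketch: a Gaussian-kernel approximation via random Fourier features, a ridge fit in the feature space, and, as a simple stand-in for the paper's jointly learned feature maps, permutation importance used to score the input variables. All names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def rff(X, W, b):
    """Random Fourier features z(x) = sqrt(2/D) * cos(W x + b), which
    approximate a Gaussian (RBF) kernel in expectation."""
    return np.sqrt(2.0 / W.shape[0]) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
n, d, D = 500, 5, 200
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=n)  # only x0, x1 matter

W = rng.normal(size=(D, d))                 # RBF frequencies, bandwidth 1
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
Z = rff(X, W, b)
alpha = np.linalg.solve(Z.T @ Z + 1e-2 * np.eye(D), Z.T @ y)  # ridge fit

base = np.mean((Z @ alpha - y) ** 2)
for j in range(d):                          # permutation importance per input
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    print(f"x{j}: {np.mean((rff(Xp, W, b) @ alpha - y) ** 2) - base:+.4f}")
```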
Robustness and Generalization
We derive generalization bounds for learning algorithms based on their
robustness: the property that if a testing sample is "similar" to a training
sample, then the testing error is close to the training error. This provides a
novel approach, different from the complexity or stability arguments, to study
generalization of learning algorithms. We further show that a weak notion of
robustness is both sufficient and necessary for generalizability, which implies
that robustness is a fundamental property for learning algorithms to work.
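Stated up to the paper's constants: an algorithm is (K, ε(s))-robust if the sample space can be partitioned into K disjoint sets such that the loss on a training point and on a test point falling in the same set differ by at most ε(s). For losses bounded by M, the resulting generalization bound takes roughly the following form (our transcription; see the paper for the exact statement):

```latex
% Robustness-based generalization bound, up to constants: for a
% (K, \epsilon(s))-robust algorithm with losses bounded by M, with
% probability at least 1 - \delta over the n-point training sample s,
\[
  \bigl| L(\mathcal{A}_s) - L_{\mathrm{emp}}(\mathcal{A}_s) \bigr|
  \;\le\; \epsilon(s) + M \sqrt{\frac{2K \ln 2 + 2 \ln(1/\delta)}{n}} .
\]
```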
Robust Matrix Completion
This paper considers the problem of recovery of a low-rank matrix in the
situation when most of its entries are not observed and a fraction of observed
entries are corrupted. The observations are noisy realizations of the sum of a
low rank matrix, which we wish to recover, with a second matrix having a
complementary sparse structure such as element-wise or column-wise sparsity. We
analyze a class of estimators obtained by solving a constrained convex
optimization problem that combines the nuclear norm and a convex relaxation for
a sparse constraint. Our results are obtained for the simultaneous presence of
random and deterministic patterns in the sampling scheme. We provide guarantees
for recovery of low-rank and sparse components from partial and corrupted
observations in the presence of noise and show that the obtained rates of
convergence are minimax optimal.
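A minimal sketch of this kind of estimator, written in penalized rather than constrained form (an assumption; the paper analyzes constrained programs): alternating proximal steps with singular-value thresholding for the nuclear norm and entrywise soft-thresholding for an element-wise sparse corruption. All hyperparameters are illustrative.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft-thresholding: proximal operator of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def robust_mc(Y, mask, lam, mu, step=0.5, iters=300):
    """Minimize 0.5 * ||P_Omega(L + S - Y)||_F^2 + lam*||L||_* + mu*||S||_1
    by proximal gradient steps on the low-rank part L and sparse part S."""
    L, S = np.zeros_like(Y), np.zeros_like(Y)
    for _ in range(iters):
        R = mask * (L + S - Y)              # gradient of the data-fit term
        L, S = svt(L - step * R, step * lam), soft(S - step * R, step * mu)
    return L, S

# Toy usage: rank-2 matrix, half the entries observed, a few gross outliers.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 40))
mask = rng.random(A.shape) < 0.5
Y = (A + 0.01 * rng.normal(size=A.shape)) * mask
Y[(rng.random(A.shape) < 0.02) & mask] += 5.0   # element-wise corruptions
L, S = robust_mc(Y, mask, lam=1.0, mu=0.2)
print("relative error on L:", np.linalg.norm(L - A) / np.linalg.norm(A))
```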
Spectral thresholding quantum tomography for low rank states
The estimation of high dimensional quantum states is an important statistical problem arising in current quantum technology applications. A key example is the tomography of multiple-ion states, employed in the validation of state preparation in ion trap experiments (Häffner et al 2005 Nature 438 643). Since full tomography becomes unfeasible even for a small number of ions, there is a need to investigate lower dimensional statistical models which capture prior information about the state, and to devise estimation methods tailored to such models. In this paper we propose several new methods aimed at the efficient estimation of low rank states and analyse their performance for multiple-ion tomography. All methods consist in first computing the least squares estimator, followed by its truncation to an appropriately chosen smaller rank. The latter is done by setting eigenvalues below a certain 'noise level' to zero, while keeping the rest unchanged, or normalizing them appropriately. We show that (up to logarithmic factors in the space dimension) the mean square error of the resulting estimators scales as rd/N, where r is the rank, d is the dimension of the Hilbert space, and N is the number of quantum samples. Furthermore, we establish a lower bound for the asymptotic minimax risk which shows that the above scaling is optimal. The performance of the estimators is analysed in an extensive simulation study, with emphasis on the dependence on the state rank and the number of measurement repetitions. We find that all estimators perform significantly better than the least squares estimator, with the 'physical' estimator (which is a bona fide density matrix) slightly outperforming the others.
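A minimal sketch of the thresholding step, assuming a Hermitian least-squares estimate rho_ls and a user-supplied noise level (both names are ours): eigenvalues below the noise level are zeroed and, for the 'physical' variant, the remaining spectrum is renormalized to trace one.

```python
import numpy as np

def spectral_threshold(rho_ls, noise_level, normalize=True):
    """Truncate a least-squares tomography estimate: zero out eigenvalues
    below noise_level; optionally renormalize so the result is a bona fide
    (unit-trace, positive semidefinite) density matrix."""
    w, v = np.linalg.eigh(rho_ls)
    w = np.where(w > noise_level, w, 0.0)   # drop small/negative eigenvalues
    if normalize:
        w = w / w.sum() if w.sum() > 0 else np.full_like(w, 1.0 / len(w))
    return (v * w) @ v.conj().T             # sum_i w_i |v_i><v_i|

# Toy usage: noisy least-squares estimate of a rank-1 qubit state.
rng = np.random.default_rng(0)
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
E = 0.05 * rng.normal(size=(2, 2))
print(np.round(spectral_threshold(rho + (E + E.T) / 2, 0.1), 3))
```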