
    Online Learning of Quantum States

    Suppose we have many copies of an unknown $n$-qubit state $\rho$. We measure some copies of $\rho$ using a known two-outcome measurement $E_1$, then other copies using a measurement $E_2$, and so on. At each stage $t$, we generate a current hypothesis $\sigma_t$ about the state $\rho$, using the outcomes of the previous measurements. We show that it is possible to do this in a way that guarantees that $|\operatorname{Tr}(E_i \sigma_t) - \operatorname{Tr}(E_i \rho)|$, the error in our prediction for the next measurement, is at least $\varepsilon$ at most $O(n/\varepsilon^2)$ times. Even in the "non-realizable" setting---where there could be arbitrary noise in the measurement outcomes---we show how to output hypothesis states that do significantly worse than the best possible states at most $O(\sqrt{Tn})$ times on the first $T$ measurements. These results generalize a 2007 theorem by Aaronson on the PAC-learnability of quantum states to the online and regret-minimization settings. We give three different ways to prove our results---using convex optimization, quantum postselection, and sequential fat-shattering dimension---which have different advantages in terms of parameters and portability. Comment: 18 pages
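
    The abstract mentions a convex-optimization route to these regret bounds. Below is a minimal, hedged sketch (in Python) of one such online learner: an RFTL-style matrix-exponentiated-gradient update over density matrices. The function names, the learning rate eta, and the choice of absolute loss are illustrative assumptions, not the paper's exact procedure.

    import numpy as np
    from scipy.linalg import expm

    def density_from_hamiltonian(H):
        # Map a Hermitian matrix to a density matrix (positive semidefinite, trace one).
        S = expm(H)
        return S / np.trace(S).real

    def online_learn(measurements, outcomes, eta=0.1):
        # measurements: Hermitian matrices E_t with 0 <= E_t <= I (numpy arrays).
        # outcomes: noisy estimates b_t of Tr(E_t rho).
        # Yields the prediction Tr(E_t sigma_t) made before b_t is revealed.
        d = measurements[0].shape[0]
        G = np.zeros((d, d), dtype=complex)             # accumulated subgradients
        for E, b in zip(measurements, outcomes):
            sigma = density_from_hamiltonian(-eta * G)  # current hypothesis state
            pred = np.trace(E @ sigma).real
            yield pred
            # Subgradient of the absolute loss |Tr(E sigma) - b| with respect to sigma.
            G = G + np.sign(pred - b) * E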

    Efficiently Learning from Revealed Preference

    In this paper, we consider the revealed preferences problem from a learning perspective. Every day, a price vector and a budget are drawn from an unknown distribution, and a rational agent buys his most preferred bundle according to some unknown utility function, subject to the given prices and budget constraint. We wish not only to find a utility function which rationalizes a finite set of observations, but also to produce a hypothesis valuation function which accurately predicts the behavior of the agent in the future. We give efficient algorithms with polynomial sample complexity for agents with linear valuation functions, as well as for agents with linearly separable, concave valuation functions with bounded second derivative. Comment: Extended abstract appears in WINE 201
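
    For intuition about the linear-valuation case, here is a hedged sketch (not the paper's algorithm): with divisible goods, a budget-constrained agent with linear valuation v spends the whole budget on a good maximizing v_i / p_i, so each observed purchase yields linear constraints on u = log v, and any feasible u gives a hypothesis that predicts future choices. All names and the use of scipy's linprog are illustrative assumptions.

    import numpy as np
    from scipy.optimize import linprog

    def fit_log_valuation(prices, chosen, n_goods):
        # prices: list of positive price vectors; chosen: index of the good bought each day.
        # Observing good j at prices p implies u_k - u_j <= log p_k - log p_j for all k.
        A_ub, b_ub = [], []
        for p, j in zip(prices, chosen):
            for k in range(n_goods):
                if k == j:
                    continue
                row = np.zeros(n_goods)
                row[k], row[j] = 1.0, -1.0
                A_ub.append(row)
                b_ub.append(np.log(p[k]) - np.log(p[j]))
        res = linprog(c=np.zeros(n_goods), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(-10.0, 10.0)] * n_goods, method="highs")
        return res.x                                    # a consistent hypothesis u = log v

    def predict_choice(u, p):
        # Predict which good the agent buys at new prices p under hypothesis u.
        return int(np.argmax(u - np.log(p)))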

    Efficient Classification for Metric Data

    Recent advances in large-margin classification of data residing in general metric spaces (rather than Hilbert spaces) enable classification under various natural metrics, such as string edit and earthmover distance. A general framework developed for this purpose by von Luxburg and Bousquet [JMLR, 2004] left open the questions of computational efficiency and of providing direct bounds on generalization error. We design a new algorithm for classification in general metric spaces, whose runtime and accuracy depend on the doubling dimension of the data points, and can thus achieve superior classification performance in many common scenarios. The algorithmic core of our approach is an approximate (rather than exact) solution to the classical problems of Lipschitz extension and of Nearest Neighbor Search. The algorithm's generalization performance is guaranteed via the fat-shattering dimension of Lipschitz classifiers, and we present experimental evidence of its superiority to some common kernel methods. As a by-product, we offer a new perspective on the nearest neighbor classifier, which yields significantly sharper risk asymptotics than the classic analysis of Cover and Hart [IEEE Trans. Info. Theory, 1967]. Comment: This is the full version of an extended abstract that appeared in Proceedings of the 23rd COLT, 201
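
    To make the Lipschitz-extension idea concrete, here is a simplified sketch that classifies by the midpoint of the least and greatest L-Lipschitz extensions of the training labels (exact McShane-Whitney extensions, so it omits the approximate nearest-neighbor machinery the paper uses for efficiency); the metric, the constant L, and all names are illustrative assumptions.

    import numpy as np

    def lipschitz_classifier(points, labels, dist, L):
        # points: training examples; labels: +/-1 labels; dist: the metric; L: Lipschitz constant.
        labels = np.asarray(labels, dtype=float)
        def classify(x):
            d = np.array([dist(x, p) for p in points])
            upper = np.min(labels + L * d)   # greatest L-Lipschitz extension evaluated at x
            lower = np.max(labels - L * d)   # least L-Lipschitz extension evaluated at x
            return 1 if (upper + lower) / 2.0 >= 0 else -1
        return classify

    # Toy usage with a one-dimensional metric standing in for, e.g., edit distance.
    f = lipschitz_classifier(points=[np.array([0.0]), np.array([1.0])],
                             labels=[-1, 1],
                             dist=lambda a, b: float(abs(a[0] - b[0])),
                             L=2.0)
    print(f(np.array([0.8])))                # prints 1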