
    Optimal classical simulation of state-independent quantum contextuality

    Simulating quantum contextuality with classical systems requires memory. A fundamental yet open question is what the minimum memory is and, therefore, in which precise sense quantum systems outperform classical ones. Here, we make rigorous the notion of classically simulating quantum state-independent contextuality (QSIC) in the case of a single quantum system submitted to an infinite sequence of measurements randomly chosen from a finite QSIC set. We obtain the minimum memory needed to simulate arbitrary QSIC sets via classical systems under the assumption that the simulation should not contain any oracular information. In particular, we show that, while classically simulating two qubits tested with the Peres-Mermin set requires $\log_2 24 \approx 4.585$ bits, simulating a single qutrit tested with the Yu-Oh set requires at least 5.740 bits. Comment: 7 pages, 4 figures
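    The memory figures quoted above follow the usual accounting that a classical machine with M internal states stores log2(M) bits. The small sketch below only makes that arithmetic explicit; reading the log2 24 figure as a 24-state machine is an inference from the expression itself, not a claim taken from the paper's proof.

```python
# Memory accounting sketch (assumption: memory = log2 of the number of
# internal states of the simulating classical machine).
from math import ceil, log2

# Peres-Mermin figure from the abstract: log2(24) bits, i.e. a 24-state machine.
print(f"Peres-Mermin: log2(24) = {log2(24):.3f} bits")

# Yu-Oh lower bound of 5.740 bits rules out any machine with fewer states than:
print(f"Yu-Oh: at least 5.740 bits -> at least {ceil(2 ** 5.740)} states")
```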

    Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

    Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks. Despite its success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets by extrapolating to the full dataset. We construct a Bayesian optimization procedure, dubbed Fabolas, which models loss and training time as a function of dataset size and automatically trades off high information gain about the global optimum against computational cost. Experiments optimizing support vector machines and deep neural networks show that Fabolas often finds high-quality solutions 10 to 100 times faster than other state-of-the-art Bayesian optimization methods or the recently proposed bandit strategy Hyperband.
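    A rough sense of the idea fits in a short Python sketch. This is not the authors' Fabolas implementation: it fits independent Gaussian processes for validation error and (log) training time over (hyperparameter, dataset-fraction) pairs and scores candidates by a crude improvement-per-cost ratio rather than Fabolas's information-gain acquisition. The SVM-on-digits task, the single log10(C) hyperparameter, and the subset fractions are illustrative assumptions.

```python
# Simplified sketch of dataset-size-aware Bayesian optimization (not Fabolas itself):
# model error and cost over (hyperparameter, dataset fraction), evaluate cheaply on
# subsets, and pick candidates by predicted full-dataset improvement per unit cost.
import time
import numpy as np
from sklearn.datasets import load_digits
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def evaluate(log_C, frac):
    """Train an SVM with penalty 10**log_C on a subset of size frac;
    return (validation error, wall-clock training time)."""
    n = max(50, int(frac * len(X_tr)))
    idx = np.random.RandomState(0).choice(len(X_tr), n, replace=False)
    t0 = time.time()
    clf = SVC(C=10.0 ** log_C).fit(X_tr[idx], y_tr[idx])
    return 1.0 - clf.score(X_val, y_val), time.time() - t0

# Initial design: random configurations evaluated only on small subsets.
rng = np.random.default_rng(0)
obs = []  # rows: (log_C, frac, error, cost)
for _ in range(8):
    c, s = rng.uniform(-3, 3), rng.choice([0.1, 0.25])
    err, cost = evaluate(c, s)
    obs.append((c, s, err, cost))

for _ in range(10):
    data = np.array(obs)
    gp_err = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(data[:, :2], data[:, 2])
    gp_cost = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(data[:, :2], np.log(data[:, 3]))
    # Score candidates by predicted improvement at the FULL dataset (frac = 1),
    # divided by the predicted cost of the (possibly cheap) evaluation.
    cand_c = rng.uniform(-3, 3, 200)
    cand_s = rng.choice([0.1, 0.25, 0.5, 1.0], 200)
    mu_full, sd_full = gp_err.predict(np.c_[cand_c, np.ones(200)], return_std=True)
    pred_cost = np.exp(gp_cost.predict(np.c_[cand_c, cand_s]))
    best_err = data[:, 2].min()
    score = np.maximum(best_err - mu_full, 0.0) + sd_full  # crude optimism bonus
    i = np.argmax(score / pred_cost)
    err, cost = evaluate(cand_c[i], cand_s[i])
    obs.append((cand_c[i], cand_s[i], err, cost))

best = min(obs, key=lambda r: r[2])
print(f"best log10(C)={best[0]:.2f} (evaluated at frac={best[1]}) -> val error {best[2]:.3f}")
```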

    Kernel Bayes' rule

    A nonparametric kernel-based method for realizing Bayes' rule is proposed, based on representations of probabilities in reproducing kernel Hilbert spaces. Probabilities are uniquely characterized by the mean of the canonical map to the RKHS. The prior and conditional probabilities are expressed in terms of RKHS functions of an empirical sample: no explicit parametric model is needed for these quantities. The posterior is likewise an RKHS mean of a weighted sample. The estimator for the expectation of a function of the posterior is derived, and rates of consistency are shown. Some representative applications of the kernel Bayes' rule are presented, including Bayesian computation without likelihood and filtering with a nonparametric state-space model. Comment: 27 pages, 5 figures
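    The estimator lends itself to a compact numpy sketch: a weighted prior sample and a joint sample are turned into posterior weights through regularized Gram-matrix inversions, and posterior expectations become weighted sums over the sample. The Gaussian kernels, bandwidths, regularization constants, and the toy Gaussian-location example below are illustrative assumptions rather than the paper's choices.

```python
# Minimal sketch of a kernel Bayes' rule estimator with Gaussian kernels.
import numpy as np

def gram(A, B, sigma):
    """Gaussian kernel Gram matrix k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def kbr_weights(X, Y, U, gamma, y_obs, sigma_x=1.0, sigma_y=1.0, eps=0.1, delta=0.1):
    """Posterior weights w such that E[f(X) | Y = y_obs] ≈ sum_i w_i f(X_i).

    (X, Y): joint sample encoding the likelihood; (U, gamma): weighted sample
    whose kernel mean embedding represents the prior on X."""
    n = len(X)
    G_X, G_Y = gram(X, X, sigma_x), gram(Y, Y, sigma_y)
    m_pi = gram(X, U, sigma_x) @ gamma               # prior embedding at the X_i
    mu = n * np.linalg.solve(G_X + n * eps * np.eye(n), m_pi)
    L = np.diag(mu)
    LG = L @ G_Y
    R = LG @ np.linalg.solve(LG @ LG + n * delta * np.eye(n), L)
    k_y = gram(Y, y_obs[None, :], sigma_y).ravel()   # k_Y(Y_i, y_obs)
    return R @ k_y

# Toy check: X ~ N(0, 1) prior, Y = X + noise likelihood; the estimated posterior
# mean given an observation should sit between 0 and the observed value.
rng = np.random.default_rng(0)
X = rng.normal(0, 1, (300, 1))
Y = X + 0.3 * rng.normal(size=(300, 1))
U, gamma = rng.normal(0, 1, (200, 1)), np.full(200, 1.0 / 200)
w = kbr_weights(X, Y, U, gamma, y_obs=np.array([1.0]))
print("posterior mean estimate:", float(w @ X[:, 0]))
```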