    Synchronisation and ejection timing of the AD

    Update of transfer timing survey module (TTSM)

    PAC-Bayesian Bounds for Randomized Empirical Risk Minimizers

    The aim of this paper is to generalize the PAC-Bayesian theorems proved by Catoni in the classification setting to more general problems of statistical inference. We show how to control the deviations of the risk of randomized estimators. Particular attention is paid to randomized estimators drawn from a small neighborhood of classical estimators, whose study leads to a control of the risk of the latter. These results make it possible to bound the risk of very general estimation procedures, as well as to perform model selection.
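
    For orientation, one classical PAC-Bayesian deviation bound of the kind this paper generalizes (stated here in McAllester's form for a loss bounded in $[0,1]$; the paper's Catoni-style theorems differ in constants and generality) reads: with $\pi$ a fixed prior, and $r_n$ and $R$ the empirical and true risks over $n$ observations, with probability at least $1-\varepsilon$, simultaneously for all posteriors $\rho$,

        $$\mathbb{E}_{\theta\sim\rho}\,R(\theta) \;\le\; \mathbb{E}_{\theta\sim\rho}\,r_n(\theta) + \sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi) + \log\frac{2\sqrt{n}}{\varepsilon}}{2n}}.$$

    Choosing $\rho$ concentrated in a small neighborhood of a classical estimator, as the paper does, turns such an inequality into a risk bound for that estimator at the price of the KL term.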

    Learning with Biased Complementary Labels

    In this paper, we study the classification problem in which we have access to an easily obtainable surrogate for true labels, namely complementary labels, which specify classes that observations do \textbf{not} belong to. Let $Y$ and $\bar{Y}$ be the true and complementary labels, respectively. We first model the annotation of complementary labels via transition probabilities $P(\bar{Y}=i \mid Y=j)$, $i \neq j \in \{1, \cdots, c\}$, where $c$ is the number of classes. Previous methods implicitly assume that $P(\bar{Y}=i \mid Y=j)$, $\forall i \neq j$, are identical, which is not true in practice because humans are biased toward their own experience. For example, if an annotator is more familiar with monkeys than prairie dogs when providing complementary labels for meerkats, she is more likely to employ "monkey" as a complementary label. We therefore reason that the transition probabilities will be different. In this paper, we propose a framework that contributes three main innovations to learning with \textbf{biased} complementary labels: (1) it estimates transition probabilities with no bias; (2) it provides a general method to modify traditional loss functions and extends standard deep neural network classifiers to learn with biased complementary labels; (3) it theoretically ensures that the classifier learned with complementary labels converges to the optimal one learned with true labels. Comprehensive experiments on several benchmark datasets validate the superiority of our method over current state-of-the-art methods. Comment: ECCV 2018 Oral.
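
    A minimal sketch of the loss-modification idea in NumPy (the function name and the assumption that the transition matrix $Q$, with $Q_{ji} = P(\bar{Y}=i \mid Y=j)$, has already been estimated are ours, not the paper's exact implementation): push the classifier's probabilities over true labels through $Q$ to obtain probabilities over complementary labels, then score the observed complementary label.

        import numpy as np

        def complementary_loss(probs, comp_labels, Q):
            # probs:       (batch, c) softmax outputs over *true* classes
            # comp_labels: (batch,) observed complementary-label indices
            # Q:           (c, c) transition matrix, Q[j, i] = P(Ybar=i | Y=j),
            #              zero diagonal, rows summing to one (assumed pre-estimated)
            comp_probs = probs @ Q  # P(Ybar=i | x) = sum_j P(Y=j | x) * Q[j, i]
            picked = comp_probs[np.arange(len(comp_labels)), comp_labels]
            return -np.mean(np.log(picked + 1e-12))  # NLL of the complementary labels

    Under the identical-probabilities assumption of previous methods, $Q_{ji} = 1/(c-1)$ for all $i \neq j$; the paper's point is that a biased, estimated $Q$ should be used instead.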

    Sparsity and Incoherence in Compressive Sampling

    We consider the problem of reconstructing a sparse signal $x^0 \in \mathbb{R}^n$ from a limited number of linear measurements. Given $m$ randomly selected samples of $Ux^0$, where $U$ is an orthonormal matrix, we show that $\ell_1$ minimization recovers $x^0$ exactly when the number of measurements exceeds $m \geq \mathrm{Const} \cdot \mu^2(U) \cdot S \cdot \log n$, where $S$ is the number of nonzero components in $x^0$ and $\mu$ is the largest entry in $U$ properly normalized: $\mu(U) = \sqrt{n} \cdot \max_{k,j} |U_{k,j}|$. The smaller $\mu$, the fewer samples needed. The result holds for "most" sparse signals $x^0$ supported on a fixed (but arbitrary) set $T$. Given $T$, if the sign of $x^0$ for each nonzero entry on $T$ and the observed values of $Ux^0$ are drawn at random, the signal is recovered with overwhelming probability. Moreover, there is a sense in which this is nearly optimal, since any method succeeding with the same probability would require just about this many samples.
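
    As a small illustration of the quantity driving the bound (the constant $\mathrm{Const}$ is unspecified in the abstract, so this sketch only computes $\mu$): the orthonormal DFT matrix has every entry of magnitude $1/\sqrt{n}$, hence $\mu(U) = 1$, the most favorable case for the sample count.

        import numpy as np

        def coherence(U):
            # mu(U) = sqrt(n) * max_{k,j} |U[k, j]| for an n x n orthonormal U
            n = U.shape[0]
            return np.sqrt(n) * np.max(np.abs(U))

        n = 512
        F = np.fft.fft(np.eye(n)) / np.sqrt(n)  # unitary DFT matrix
        print(coherence(F))  # ~1.0, so on the order of S * log(n) samples suffice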

    Restricted Isometries for Partial Random Circulant Matrices

    In the theory of compressed sensing, restricted isometry analysis has become a standard tool for studying how efficiently a measurement matrix acquires information about sparse and compressible signals. Many recovery algorithms are known to succeed when the restricted isometry constants of the sampling matrix are small. Many potential applications of compressed sensing involve a data-acquisition process that proceeds by convolution with a random pulse followed by (nonrandom) subsampling. At present, the theoretical analysis of this measurement technique is lacking. This paper demonstrates that the $s$th-order restricted isometry constant is small when the number $m$ of samples satisfies $m \gtrsim (s \log n)^{3/2}$, where $n$ is the length of the pulse. This bound improves on previous estimates, which exhibit quadratic scaling.
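
    A sketch of the measurement process analyzed here (the Rademacher pulse and the variable names are illustrative choices, not mandated by the paper): circularly convolve the signal with a random pulse via the FFT, then keep a fixed subset of $m$ samples.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, s = 1024, 128, 10  # pulse length, number of samples, sparsity

        pulse = rng.choice([-1.0, 1.0], size=n)      # random pulse (Rademacher here)
        keep = rng.choice(n, size=m, replace=False)  # subsampling set: nonrandom in the
                                                     # paper, fixed arbitrarily for the sketch

        def measure(x):
            # circular convolution with the pulse, then (nonrandom) subsampling
            return np.fft.ifft(np.fft.fft(x) * np.fft.fft(pulse)).real[keep]

        x = np.zeros(n)
        x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)  # s-sparse signal
        y = measure(x)  # the m samples from which x is to be recovered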