
    Final measurement of Bs0B^0_s mixing phase in the full CDF Run II data set

    We report the final CDF measurement of the B^0_s mixing phase, mean lifetime, and decay-width difference through a fit of the time evolution of flavor-tagged B^0_s → J/ψ φ decays. The measurement is based on the full data set of 1.96 TeV p p̄ collisions collected between February 2002 and September 2011 by the CDF experiment. The results are consistent with the standard model and with other experimental determinations, and are among the most precise to date.
    Comment: 3 pages, 2 figures, conference IFAE 201

    Active Nearest-Neighbor Learning in Metric Spaces

    We propose a pool-based non-parametric active learning algorithm for general metric spaces, called MArgin Regularized Metric Active Nearest Neighbor (MARMANN), which outputs a nearest-neighbor classifier. We give prediction error guarantees that depend on the noisy-margin properties of the input sample and are competitive with those obtained by previously proposed passive learners. We prove that the label complexity of MARMANN is significantly lower than that of any passive learner with similar error guarantees. MARMANN is based on a generalized sample compression scheme and a new label-efficient active model-selection procedure.
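
    The abstract's setting, pool-based active learning that outputs a nearest-neighbor classifier, can be illustrated with a minimal sketch. This is not MARMANN itself (the paper's compression scheme and model selection are not reproduced); it is a hypothetical toy that queries the unlabeled point with the smallest margin, here taken as the gap between its distances to the nearest labeled point of each class, and then predicts with 1-NN. All function names are illustrative, and binary labels and a seed containing both classes are assumed.

    ```python
    import math

    def nn_label(x, labeled):
        # 1-NN prediction: label of the closest labeled point
        return min(labeled, key=lambda p: math.dist(x, p[0]))[1]

    def margin(x, labeled):
        # Gap between distances to the nearest point of each class;
        # a small gap means x sits near the decision boundary.
        d_pos = min(math.dist(x, p) for p, y in labeled if y == 1)
        d_neg = min(math.dist(x, p) for p, y in labeled if y == 0)
        return abs(d_pos - d_neg)

    def active_1nn(pool, oracle, seed, budget):
        # Label the seed, then spend the budget on the lowest-margin
        # (most ambiguous) points in the pool, querying the oracle.
        labeled = [(x, oracle(x)) for x in seed]
        unlabeled = [x for x in pool if x not in seed]
        for _ in range(budget):
            x = min(unlabeled, key=lambda x: margin(x, labeled))
            labeled.append((x, oracle(x)))
            unlabeled.remove(x)
        return lambda x: nn_label(x, labeled)
    ```

    The label savings come from spending queries only near the boundary, where 1-NN predictions are least certain, rather than labeling the whole pool.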

    Multiclass Learning Approaches: A Theoretical Comparison with Implications

    We theoretically analyze and compare the following five popular multiclass classification methods: One vs. All, All Pairs, Tree-based classifiers, Error Correcting Output Codes (ECOC) with randomly generated code matrices, and Multiclass SVM. In the first four methods, classification is based on a reduction to binary classification. We consider the case where the binary classifier comes from a class of VC dimension d, and in particular from the class of halfspaces over ℝ^d. We analyze both the estimation error and the approximation error of these methods. Our analysis reveals interesting conclusions of practical relevance regarding the success of the different approaches under various conditions. Our proof technique employs tools from VC theory to analyze the approximation error of hypothesis classes. This is in sharp contrast to most, if not all, previous uses of VC theory, which deal only with estimation error.
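
    The reduction-to-binary idea behind the first of these methods, One vs. All, can be sketched concretely: train one real-valued binary scorer per class (that class against the rest) and predict the class whose scorer fires highest. The sketch below is illustrative only and not from the paper; the toy centroid-based binary learner stands in for whatever VC class (e.g. halfspaces) the analysis assumes, and all names are hypothetical.

    ```python
    import math

    def centroid(points):
        n = len(points)
        return tuple(sum(coord) / n for coord in zip(*points))

    def train_binary(pos, neg):
        # Toy binary learner: score x by how much closer it lies to the
        # positive-class centroid than to the negative-class centroid.
        cp, cn = centroid(pos), centroid(neg)
        return lambda x: math.dist(x, cn) - math.dist(x, cp)

    def one_vs_all(X, y):
        # One binary problem per class: that class (pos) vs. all others (neg).
        scorers = {}
        for c in set(y):
            pos = [x for x, yi in zip(X, y) if yi == c]
            neg = [x for x, yi in zip(X, y) if yi != c]
            scorers[c] = train_binary(pos, neg)
        # Predict the class whose binary scorer gives the highest score.
        return lambda x: max(scorers, key=lambda c: scorers[c](x))
    ```

    All Pairs and ECOC follow the same pattern with different binary subproblems: one per class pair, or one per column of a code matrix.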