
    A Unifying View of Multiple Kernel Learning

    Recent research on multiple kernel learning has led to a number of approaches for combining kernels in regularized risk minimization. The proposed approaches include different formulations of objectives and varying regularization strategies. In this paper we present a unifying general optimization criterion for multiple kernel learning and show how existing formulations are subsumed as special cases. We also derive the criterion's dual representation, which is suitable for general smooth optimization algorithms. Finally, we evaluate multiple kernel learning in this framework analytically, using a Rademacher complexity bound on the generalization error, and empirically in a set of experiments.
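    The basic MKL primitive discussed here, a convex combination of base kernels used inside a regularized risk minimizer, can be sketched as follows. This is a generic illustration with RBF base kernels and fixed weights, not the paper's unified criterion or its dual:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between row-vector sets X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def combined_kernel(X, Y, gammas, weights):
    """Convex combination K = sum_m beta_m K_m with beta_m >= 0, sum beta_m = 1."""
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return sum(w * rbf_kernel(X, Y, g) for w, g in zip(weights, gammas))

X = np.array([[0.0], [1.0], [2.0]])
K = combined_kernel(X, X, gammas=[0.1, 1.0, 10.0], weights=[0.5, 0.3, 0.2])
# A convex combination of positive semi-definite kernels is itself PSD:
min_eig = np.linalg.eigvalsh(K).min()
```

    In the actual MKL setting the weights are learned jointly with the classifier; the choice of regularizer on the weights is exactly where the formulations unified by the paper differ.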

    Finite-Temperature Transport in Finite-Size Hubbard Rings in the Strong-Coupling Limit

    We study the current, the curvature of levels, and the finite temperature charge stiffness, D(T,L), in the strongly correlated limit, U>>t, for Hubbard rings of L sites, with U the on-site Coulomb repulsion and t the hopping integral. Our study is done for finite-size systems and any band filling. Up to order t we derive our results following two independent approaches, namely, using the solution provided by the Bethe ansatz and the solution provided by an algebraic method, where the electronic operators are represented in a slave-fermion picture. We find that, in the U=\infty case, the finite-temperature charge stiffness is finite for electronic densities, n, smaller than one. These results are essentially those of spinless fermions in a lattice of size L, apart from small corrections coming from a statistical flux due to the spin degrees of freedom. Up to order t, the Mott-Hubbard gap is \Delta_{MH}=U-4t, and we find that D(T) is finite for n<1 but zero at half-filling. This result comes from the effective flux felt by the holon excitations, which, due to the presence of doubly occupied sites, is renormalized to \Phi^{eff}=\phi(N_h-N_d)/(N_d+N_h), and which is zero at half-filling, with N_d and N_h being the number of doubly occupied and empty lattice sites, respectively. Further, at half-filling the current transported by any eigenstate of the system is zero and, therefore, D(T) is also zero. Comment: 15 pages and 6 figures; accepted for PR
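    The vanishing of the stiffness at half-filling follows directly from the renormalized flux formula quoted in the abstract. A minimal numeric check (illustrative values only, the function names are our own):

```python
def effective_flux(phi, n_holes, n_doubly):
    """Effective flux felt by holon excitations:
    Phi_eff = phi * (N_h - N_d) / (N_d + N_h)."""
    if n_holes + n_doubly == 0:
        return 0.0
    return phi * (n_holes - n_doubly) / (n_doubly + n_holes)

# At half-filling every doubly occupied site is balanced by an empty
# one (N_h = N_d), so the effective flux vanishes:
at_half_filling = effective_flux(phi=0.5, n_holes=3, n_doubly=3)   # 0.0
# Away from half-filling (more holes than doublons) it is finite:
doped = effective_flux(phi=0.5, n_holes=4, n_doubly=1)
```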

    Inelastic Rescattering and CP Asymmetries in D -> pi+ pi-, pi0 pi0

    We study the direct CP violation induced by inelastic final state interaction (FSI) rescattering in D -> pi pi modes, and find that the resultant CP asymmetry is about 10^-4, which is larger than epsilon' in the K system. Our estimate is based on well-established theories and experimentally measured data, so there are almost no free parameters except the weak phase delta_13 in the CKM matrix. Comment: 9 page

    Search for a W' boson decaying to a bottom quark and a top quark in pp collisions at sqrt(s) = 7 TeV

    Results are presented from a search for a W' boson using a dataset corresponding to 5.0 inverse femtobarns of integrated luminosity collected during 2011 by the CMS experiment at the LHC in pp collisions at sqrt(s)=7 TeV. The W' boson is modeled as a heavy W boson, but different scenarios for the couplings to fermions are considered, involving both left-handed and right-handed chiral projections of the fermions, as well as an arbitrary mixture of the two. The search is performed in the decay channel W' to t b, leading to a final state signature with a single lepton (e, mu), missing transverse energy, and jets, at least one of which is tagged as a b-jet. A W' boson that couples to fermions with the same coupling constant as the W, but to the right-handed rather than left-handed chiral projections, is excluded for masses below 1.85 TeV at the 95% confidence level. For the first time using LHC data, constraints have been placed on the W' gauge coupling for a set of left- and right-handed coupling combinations. These results represent a significant improvement over previously published limits. Comment: Submitted to Physics Letters B. Replaced with version published

    Search for the standard model Higgs boson decaying into two photons in pp collisions at sqrt(s)=7 TeV

    A search for a Higgs boson decaying into two photons is described. The analysis is performed using a dataset recorded by the CMS experiment at the LHC from pp collisions at a centre-of-mass energy of 7 TeV, which corresponds to an integrated luminosity of 4.8 inverse femtobarns. Limits are set on the cross section of the standard model Higgs boson decaying to two photons. The expected exclusion limit at 95% confidence level is between 1.4 and 2.4 times the standard model cross section in the mass range between 110 and 150 GeV. The analysis of the data excludes, at 95% confidence level, the standard model Higgs boson decaying into two photons in the mass range 128 to 132 GeV. The largest excess of events above the expected standard model background is observed for a Higgs boson mass hypothesis of 124 GeV with a local significance of 3.1 sigma. The global significance of observing an excess with a local significance greater than 3.1 sigma anywhere in the search range 110-150 GeV is estimated to be 1.8 sigma. More data are required to ascertain the origin of this excess. Comment: Submitted to Physics Letters
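    The distinction between the local 3.1 sigma and global 1.8 sigma significances quoted above is the look-elsewhere effect: scanning many mass hypotheses inflates the chance of a fluctuation somewhere. A quick sketch of the conversion from sigma to one-sided tail probability (the crude trials-factor reading below is our own illustration, not the paper's method, which uses a proper correction):

```python
import math

def one_sided_p(z):
    """One-sided Gaussian tail probability for a z-sigma excess."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p_local = one_sided_p(3.1)   # roughly 1e-3
p_global = one_sided_p(1.8)  # roughly 3.6e-2
# Their ratio gives an effective number of independent mass hypotheses
# in the 110-150 GeV scan, in a rough Bonferroni-style reading:
trials = p_global / p_local
```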

    A competitive scheme for storing sparse representation of X-Ray medical images

    A competitive scheme for economical storage of the informational content of an X-Ray image, as it can be used for further processing, is presented. It is demonstrated that a sparse representation of that type of data can be encapsulated in a small file without affecting the quality of the recovered image. The proposed representation, which is inscribed within the context of data reduction, provides a format for saving the image information in a way that could assist methodologies for analysis and classification. The competitiveness of the resulting file is compared against the compression standards JPEG and JPEG2000
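    The general idea behind this kind of scheme, storing only the few transform-domain coefficients that carry the image's information, can be sketched generically. This toy uses a 2-D Fourier transform and top-k magnitude selection; the paper's actual dictionary and encoding are not specified here:

```python
import numpy as np

def sparse_store(img, k):
    """Keep only the k largest-magnitude 2-D Fourier coefficients
    (a generic transform-domain sparsification, not the paper's scheme)."""
    coeffs = np.fft.fft2(img)
    flat = np.abs(coeffs).ravel()
    keep = np.argsort(flat)[-k:]          # indices of the k largest coefficients
    sparse = np.zeros_like(coeffs).ravel()
    sparse[keep] = coeffs.ravel()[keep]   # in practice, store (index, value) pairs
    return sparse.reshape(coeffs.shape)

def reconstruct(sparse_coeffs):
    """Invert the transform; imaginary residue is numerical noise."""
    return np.real(np.fft.ifft2(sparse_coeffs))

rng = np.random.default_rng(0)
img = rng.random((16, 16))
approx = reconstruct(sparse_store(img, k=64))
err = np.linalg.norm(img - approx) / np.linalg.norm(img)
```

    The storage saving comes from writing only the k (index, value) pairs instead of all pixels; image quality degrades gracefully as k shrinks.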

    Measurement of the Lambda(b) cross section and the anti-Lambda(b) to Lambda(b) ratio with Lambda(b) to J/Psi Lambda decays in pp collisions at sqrt(s) = 7 TeV

    The Lambda(b) differential production cross section and the cross section ratio anti-Lambda(b)/Lambda(b) are measured as functions of transverse momentum pt(Lambda(b)) and rapidity abs(y(Lambda(b))) in pp collisions at sqrt(s) = 7 TeV using data collected by the CMS experiment at the LHC. The measurements are based on Lambda(b) decays reconstructed in the exclusive final state J/Psi Lambda, with the subsequent decays J/Psi to an opposite-sign muon pair and Lambda to proton pion, using a data sample corresponding to an integrated luminosity of 1.9 inverse femtobarns. The product of the cross section times the branching ratio for Lambda(b) to J/Psi Lambda versus pt(Lambda(b)) falls faster than that of b mesons. The measured value of the cross section times the branching ratio for pt(Lambda(b)) > 10 GeV and abs(y(Lambda(b))) < 2.0 is 1.06 +/- 0.06 +/- 0.12 nb, and the integrated cross section ratio for anti-Lambda(b)/Lambda(b) is 1.02 +/- 0.07 +/- 0.09, where the uncertainties are statistical and systematic, respectively. Comment: Submitted to Physics Letters
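    The results above quote statistical and systematic uncertainties separately. When a single overall uncertainty is wanted, independent components are conventionally added in quadrature (a standard convention, not something the abstract itself prescribes):

```python
import math

def combine_in_quadrature(stat, syst):
    """Total uncertainty assuming independent statistical and systematic parts."""
    return math.hypot(stat, syst)

# sigma x BR = 1.06 +/- 0.06 (stat) +/- 0.12 (syst) nb:
total_xsec = combine_in_quadrature(0.06, 0.12)
# anti-Lambda(b)/Lambda(b) ratio = 1.02 +/- 0.07 (stat) +/- 0.09 (syst):
total_ratio = combine_in_quadrature(0.07, 0.09)
```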

    Search for new physics in events with opposite-sign leptons, jets, and missing transverse energy in pp collisions at sqrt(s) = 7 TeV

    A search is presented for physics beyond the standard model (BSM) in final states with a pair of opposite-sign isolated leptons accompanied by jets and missing transverse energy. The search uses LHC data recorded at a center-of-mass energy sqrt(s) = 7 TeV with the CMS detector, corresponding to an integrated luminosity of approximately 5 inverse femtobarns. Two complementary search strategies are employed. The first probes models with a specific dilepton production mechanism that leads to a characteristic kinematic edge in the dilepton mass distribution. The second strategy probes models of dilepton production with heavy, colored objects that decay to final states including invisible particles, leading to very large hadronic activity and missing transverse energy. No evidence for an event yield in excess of the standard model expectations is found. Upper limits on the BSM contributions to the signal regions are deduced from the results, which are used to exclude a region of the parameter space of the constrained minimal supersymmetric extension of the standard model. Additional information related to detector efficiencies and response is provided to allow testing specific models of BSM physics not considered in this paper. Comment: Replaced with published version. Added journal reference and DOI

    Measurement of isolated photon production in pp and PbPb collisions at sqrt(sNN) = 2.76 TeV

    Isolated photon production is measured in proton-proton and lead-lead collisions at nucleon-nucleon centre-of-mass energies of 2.76 TeV in the pseudorapidity range |eta|<1.44 and transverse energies ET between 20 and 80 GeV with the CMS detector at the LHC. The measured ET spectra are found to be in good agreement with next-to-leading-order perturbative QCD predictions. The ratio of PbPb to pp isolated photon ET-differential yields, scaled by the number of incoherent nucleon-nucleon collisions, is consistent with unity for all PbPb reaction centralities. Comment: Submitted to Physics Letters
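    The Ncoll-scaled yield ratio described above is the nuclear modification factor, often written R_AA; a value of one means the heavy-ion yield is just an incoherent superposition of nucleon-nucleon collisions. A minimal sketch with made-up numbers (none are from the paper):

```python
def r_aa(yield_pbpb, yield_pp, n_coll):
    """Nuclear modification factor: PbPb yield divided by the
    Ncoll-scaled pp yield; R_AA = 1 means no medium modification."""
    return yield_pbpb / (n_coll * yield_pp)

# Illustrative: a centrality class with Ncoll = 1024 whose photon yield
# is exactly 1024 times the pp yield shows no modification:
unmodified = r_aa(yield_pbpb=512.0, yield_pp=0.5, n_coll=1024)  # 1.0
```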

    Direct Learning of Sparse Changes in Markov Networks by Density Ratio Estimation

    We propose a new method for detecting changes in Markov network structure between two sets of samples. Instead of naively fitting two Markov network models separately to the two data sets and computing their difference, we directly learn the change in network structure by estimating the ratio of the two Markov network models. This density-ratio formulation naturally allows us to impose sparsity on the network structure change, which greatly improves interpretability. Furthermore, the computation of the normalization term, a critical computational bottleneck of the naive approach, is substantially mitigated. Through experiments on gene expression and Twitter data analysis, we demonstrate the usefulness of our method.
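    Why modeling the ratio isolates the change can be seen in a toy Gaussian case: for two zero-mean Gaussian Markov networks with precision matrices Theta_P and Theta_Q, the log density ratio is quadratic with coefficient matrix (Theta_Q - Theta_P)/2, so only the changed edges appear in the ratio even when both networks are dense. This sketch illustrates that observation only, not the paper's actual density-ratio estimator:

```python
import numpy as np

# Two Gaussian Markov networks that differ in a single edge, (0, 1):
theta_p = np.array([[2.0, 0.0, 0.3],
                    [0.0, 2.0, 0.3],
                    [0.3, 0.3, 2.0]])
theta_q = theta_p.copy()
theta_q[0, 1] = theta_q[1, 0] = 0.6   # the only structural change

# The log density ratio log p(x)/q(x) depends on the precision matrices
# only through their difference, which is sparse even though both
# networks have many nonzero entries:
delta = theta_q - theta_p
```

    A sparsity-inducing penalty on this difference is what lets the method recover the changed edges directly, without estimating either full network.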