
    Multiphonon Raman Scattering in Graphene

    We report multiphonon Raman scattering in graphene samples. Higher-order combination modes involving 3 phonons and 4 phonons are observed in single-layer (SLG), bi-layer (BLG), and few-layer (FLG) graphene samples prepared by mechanical exfoliation. The intensity of the higher-order phonon modes (relative to the G peak) is highest in SLG and decreases with increasing number of layers. In addition, all higher-order modes are observed to upshift in frequency almost linearly with increasing number of graphene layers, betraying the underlying interlayer van der Waals interactions.
    Comment: Accepted for publication in Phys. Rev.

    Cabibbo-suppressed non-leptonic B- and D-decays involving tensor mesons

    The Cabibbo-suppressed non-leptonic decays of B (and D) mesons to final states involving tensor mesons are computed using the non-relativistic quark model of Isgur-Scora-Grinstein-Wise together with the factorization hypothesis. We find that some of these B decay modes, such as $B \to (K^*, D^*) D^*_2$, can have branching ratios as large as $6 \times 10^{-5}$, which seems to be within the reach of future B factories.
    Comment: Latex, 11 pages, to appear in Phys. Rev.
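    For orientation, a minimal sketch of what the factorization hypothesis asserts in such calculations (standard naive factorization, written schematically here rather than taken from the paper): the two-body weak matrix element splits into a decay constant times a B-to-meson transition form factor, the latter being what the Isgur-Scora-Grinstein-Wise quark model supplies.

    % Schematic naive-factorization form of a two-body non-leptonic amplitude.
    % V_CKM and a_eff stand in for the quark-mixing elements and the
    % Wilson-coefficient combination relevant to the specific mode.
    \begin{equation}
      \langle M_1 M_2 | \mathcal{H}_{\mathrm{eff}} | B \rangle
      \;\approx\; \frac{G_F}{\sqrt{2}}\, V_{\mathrm{CKM}}\, a_{\mathrm{eff}}\,
      \langle M_2 | J^{\mu} | 0 \rangle\,
      \langle M_1 | J'_{\mu} | B \rangle ,
    \end{equation}
    % <M_2|J^mu|0> reduces to the decay constant of M_2, while <M_1|J'_mu|B>
    % is the B -> M_1 transition form factor evaluated in the quark model.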

    Non-leptonic B decays involving tensor mesons

    Two-body non-leptonic decays of B mesons into PT and VT modes are calculated using the non-relativistic quark model of Isgur et al. The predictions obtained for $B \to \pi D^*_2, \rho D^*_2$ are a factor of $3 \sim 5$ below present experimental upper limits. Interesting patterns are obtained for ratios of B decays involving mesons with different spin excitations, and their relevance for additional tests of form factor models is briefly discussed.
    Comment: 11 pages, Latex, to appear in Phys. Rev.

    Automated Domain Discovery from Multiple Sources to Improve Zero-Shot Generalization

    Domain generalization (DG) methods aim to develop models that generalize to settings where the test distribution differs from the training data. In this paper, we focus on the challenging problem of multi-source zero-shot DG (MDG), where labeled training data from multiple source domains is available but no data from the target domain can be accessed. A wide range of solutions has been proposed for this problem, including state-of-the-art multi-domain ensembling approaches. Despite these advances, the naïve ERM solution of pooling all source data together and training a single classifier is surprisingly effective on standard benchmarks. We hypothesize that, in order to explain this behavior, it is important to elucidate the link between pre-specified domain labels and MDG performance. More specifically, we consider two popular classes of MDG algorithms -- distributionally robust optimization (DRO) and multi-domain ensembles -- and demonstrate how inferring custom domain groups can lead to consistent improvements over the original domain labels that come with the dataset. To this end, we propose (i) Group-DRO++, which incorporates an explicit clustering step to identify custom domains in an existing DRO technique, and (ii) DReaME, which produces effective multi-domain ensembles through implicit domain re-labeling with a novel meta-optimization algorithm. Through empirical studies on multiple standard benchmarks, we show that our variants consistently outperform ERM by significant margins (1.5% - 9%) and produce state-of-the-art MDG performance. Our code can be found at https://github.com/kowshikthopalli/DREAM
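    As a rough illustration of the explicit-clustering idea behind Group-DRO++ (a hypothetical sketch, not the authors' released code; the repository linked above is the reference implementation), one can cluster source examples in feature space and then feed the discovered cluster ids to a group-robust objective in place of the dataset's original domain labels:

    # Hypothetical sketch: discover "custom domains" by k-means clustering of features,
    # then use the cluster ids as group labels in a Group-DRO-style worst-group objective.
    # All names (featurizer, loader, num_domains) are illustrative assumptions.
    import numpy as np
    import torch
    from sklearn.cluster import KMeans

    def infer_domain_groups(featurizer, loader, num_domains, device="cpu"):
        """Assign every training example to a discovered domain via k-means on its features."""
        feats = []
        featurizer.eval()
        with torch.no_grad():
            for x, _ in loader:
                feats.append(featurizer(x.to(device)).cpu().numpy())
        feats = np.concatenate(feats, axis=0)
        return torch.as_tensor(KMeans(n_clusters=num_domains, n_init=10).fit_predict(feats))

    def worst_group_loss(per_example_loss, group_ids, num_groups):
        """Simplest group-robust objective: the largest per-group mean loss.

        group_ids must be on the same device as per_example_loss. The full Group DRO
        method of Sagawa et al. uses exponentiated-gradient group weights instead of
        a hard max; the max is kept here only to keep the sketch short.
        """
        group_losses = [
            per_example_loss[group_ids == g].mean()
            for g in range(num_groups)
            if (group_ids == g).any()
        ]
        return torch.stack(group_losses).max()

    DReaME, by contrast, replaces this hard, one-off clustering with an implicit re-labeling that is updated jointly with the ensemble via meta-optimization; the sketch above covers only the explicit-clustering variant.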