    B_d mixing and prospects for B_s mixing at D0

    Measurement of the B_s oscillation frequency via B_s mixing analyses provides a powerful constraint on the CKM matrix elements. The study of B_d oscillations is an important step towards B_s mixing, and a preliminary measurement of Delta m_d has been made with ~250 pb^-1 of data collected with the upgraded Run II D0 detector. Different flavor tagging algorithms have been developed and are being optimized for use on a large set of B_s mesons that have been reconstructed in different semileptonic decay modes. Comment: Poster presented at the XXIV Physics in Collisions Conference (PIC04), Boston, USA, June 2004; 4 pages, LaTeX, 2 EPS figures.

    Search for top partners with charge 5e/3

    A feasibility study of searches for top partners with charge 5e/3 at the upgraded Large Hadron Collider is performed. The discovery potential and exclusion limits are presented using integrated luminosities of 300 fb^{-1} and 3000 fb^{-1} at center-of-mass energies of 14 and 33 TeV.

    Domain Classification-based Source-specific Term Penalization for Domain Adaptation in Hate-speech Detection

    State-of-the-art approaches for hate-speech detection usually exhibit poor performance in out-of-domain settings. This occurs, typically, because classifiers overemphasize source-specific information, which negatively impacts their domain invariance. Prior work has attempted to penalize terms related to hate speech from manually curated lists using feature attribution methods, which quantify the importance assigned to input terms by the classifier when making a prediction. We instead propose a domain adaptation approach that automatically extracts and penalizes source-specific terms using a domain classifier, which learns to differentiate between domains, together with feature-attribution scores for the hate-speech classes, yielding consistent improvements in cross-domain evaluation. Comment: COLING 2022 pre-print.
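    The source-specific-term penalization idea can be sketched roughly as follows. This is a minimal stand-in, not the paper's implementation: it scores terms by raw domain log-odds rather than a trained domain classifier combined with feature attribution, and the counts, threshold, and scaling factor are all hypothetical.

```python
import math

def domain_log_odds(counts_src, counts_tgt, smoothing=1.0):
    """Score how strongly a term discriminates the source domain
    from the target domain (higher = more source-specific)."""
    total_src = sum(counts_src.values()) + smoothing * len(counts_src)
    total_tgt = sum(counts_tgt.values()) + smoothing * len(counts_tgt)
    scores = {}
    for term in set(counts_src) | set(counts_tgt):
        p_src = (counts_src.get(term, 0) + smoothing) / total_src
        p_tgt = (counts_tgt.get(term, 0) + smoothing) / total_tgt
        scores[term] = math.log(p_src / p_tgt)
    return scores

def penalize(features, scores, threshold=1.0, factor=0.1):
    """Down-weight features for terms flagged as source-specific."""
    return {t: (v * factor if scores.get(t, 0.0) > threshold else v)
            for t, v in features.items()}
```

    Here the down-weighting of input features stands in for the attribution-based training penalty described in the abstract; the point is only that the flagged term list is derived automatically rather than manually curated.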

    Unsupervised Domain Adaptation in Cross-corpora Abusive Language Detection

    The state-of-the-art abusive language detection models report strong in-corpus performance but underperform when evaluated on abusive comments that differ from the training scenario. As human annotation involves substantial time and effort, models that can adapt to newly collected comments can prove useful. In this paper, we investigate the effectiveness of several Unsupervised Domain Adaptation (UDA) approaches for the task of cross-corpora abusive language detection. For comparison, we adapt a variant of the BERT model, trained on large-scale abusive comments, using Masked Language Model (MLM) fine-tuning. Our evaluation shows that the UDA approaches result in sub-optimal performance, while the MLM fine-tuning does better in the cross-corpora setting. Detailed analysis reveals the limitations of the UDA approaches and emphasizes the need to build efficient adaptation methods for this task.
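    The MLM fine-tuning step relies on randomly masking tokens in unlabeled target-corpus comments and training the model to recover them. A minimal sketch of the masking procedure (the 15% mask probability and `[MASK]` token are the usual BERT defaults, assumed here rather than taken from the paper):

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", mask_prob=0.15, seed=0):
    """Randomly replace tokens with the mask token; return the
    masked sequence plus per-position labels for the MLM loss."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(mask_token)
            labels.append(tok)   # model must predict the original token
        else:
            masked.append(tok)
            labels.append(None)  # position ignored in the MLM loss
    return masked, labels
```

    Fine-tuning on such masked target-corpus text adapts the encoder's representations without needing any abuse labels, which is what makes it attractive in this cross-corpora setting.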

    Generalisability of Topic Models in Cross-corpora Abusive Language Detection

    Rapidly changing social media content calls for robust and generalisable abuse detection models. However, state-of-the-art supervised models display degraded performance when evaluated on abusive comments that differ from the training corpus. We investigate whether the performance of supervised models for cross-corpora abuse detection can be improved by incorporating additional information from topic models, as the latter can infer the latent topic mixtures of unseen samples. In particular, we combine topical information with representations from a model tuned for classifying abusive comments. Our performance analysis reveals that topic models are able to capture abuse-related topics that can transfer across corpora, resulting in improved generalisability.
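    The combination step can be sketched as below. The crude "inference" here just averages per-word topic probabilities and is a hypothetical stand-in for proper topic-model inference (e.g. LDA); the topic-word probabilities are invented for illustration.

```python
def topic_mixture(tokens, topic_word_probs):
    """Crude topic mixture for an unseen document: sum per-word
    topic probabilities and normalize (stand-in for LDA inference)."""
    k = len(next(iter(topic_word_probs.values())))
    totals = [0.0] * k
    for tok in tokens:
        # unseen words contribute a uniform distribution over topics
        for i, p in enumerate(topic_word_probs.get(tok, [1.0 / k] * k)):
            totals[i] += p
    norm = sum(totals) or 1.0
    return [t / norm for t in totals]

def combine(clf_embedding, mixture):
    """Concatenate the tuned classifier's representation with the
    inferred topic mixture before the final classification layer."""
    return list(clf_embedding) + list(mixture)
```

    The appeal of the topic features is that they can be inferred for any corpus, so the concatenated representation carries domain-transferable topical signal alongside the classifier's own features.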

    Transferring Knowledge via Neighborhood-Aware Optimal Transport for Low-Resource Hate Speech Detection

    The concerning rise of hateful content on online platforms has increased the attention towards automatic hate speech detection, commonly formulated as a supervised classification task. State-of-the-art deep learning-based approaches usually require a substantial amount of labeled resources for training. However, annotating hate speech resources is expensive, time-consuming, and often harmful to the annotators. This creates a pressing need to transfer knowledge from the existing labeled resources to low-resource hate speech corpora with the goal of improving system performance. For this, neighborhood-based frameworks have been shown to be effective; however, they have limited flexibility. In our paper, we propose a novel training strategy that allows flexible modeling of the relative proximity of neighbors retrieved from a resource-rich corpus to learn the amount of transfer. In particular, we incorporate neighborhood information with Optimal Transport, which permits exploiting the geometry of the data embedding space. By aligning the joint embedding and label distributions of neighbors, we demonstrate substantial improvements over strong baselines, in low-resource scenarios, on different publicly available hate speech corpora.
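    The Optimal Transport component can be illustrated with a plain Sinkhorn iteration, which computes an entropic-regularized transport plan between source-neighbor weights and target weights. This is a generic textbook sketch, not the paper's training objective; the cost matrix, marginal weights, and regularization strength are hypothetical.

```python
import math

def sinkhorn(cost, a, b, reg=0.1, n_iter=200):
    """Entropic-regularized OT: find a plan P with row sums ~a and
    column sums ~b that minimizes <P, cost> + reg * entropy term."""
    K = [[math.exp(-c / reg) for c in row] for row in cost]
    u = [1.0] * len(a)
    v = [1.0] * len(b)
    for _ in range(n_iter):
        # alternate scaling to match the two marginals
        u = [a[i] / sum(K[i][j] * v[j] for j in range(len(b)))
             for i in range(len(a))]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(len(a)))
             for j in range(len(b))]
    return [[u[i] * K[i][j] * v[j] for j in range(len(b))]
            for i in range(len(a))]
```

    In the paper's setting the cost would be a distance in the joint embedding-and-label space, so the resulting plan concentrates transfer on neighbors that are geometrically and label-wise close to the low-resource instances.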

    Dynamically Refined Regularization for Improving Cross-corpora Hate Speech Detection

    Hate speech classifiers exhibit substantial performance degradation when evaluated on datasets different from the source. This is due to learning spurious correlations between words that are not necessarily relevant to hateful language and the hate speech labels from the training corpus. Previous work has attempted to mitigate this problem by regularizing specific terms from pre-defined static dictionaries. While this has been demonstrated to improve the generalizability of classifiers, the coverage of such methods is limited and the dictionaries require regular manual updates from human experts. In this paper, we propose to automatically identify and reduce spurious correlations using attribution methods, with dynamic refinement of the list of terms that need to be regularized during training. Our approach is flexible and improves the cross-corpora performance over previous work, both independently and in combination with pre-defined dictionaries.
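    The dynamic-refinement loop can be sketched as: periodically re-rank terms by their current attribution scores, rebuild the penalty list, and regularize the flagged terms. The scores, `top_k`, and penalty weight below are hypothetical placeholders, not values from the paper.

```python
def refine_penalty_list(attributions, top_k=2):
    """Rebuild the list of terms to regularize from the current
    attribution scores (replaces a static, hand-curated dictionary)."""
    ranked = sorted(attributions.items(), key=lambda kv: kv[1], reverse=True)
    return {term for term, _ in ranked[:top_k]}

def regularization_loss(attributions, penalty_terms, weight=0.01):
    """L2 penalty on the attribution scores of the flagged terms,
    pushing the classifier away from those spurious cues."""
    return weight * sum(attributions[t] ** 2 for t in penalty_terms)
```

    Because the list is recomputed as training progresses, newly emerging spurious terms get penalized without any manual dictionary update, which is the coverage advantage the abstract claims over static lists.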

    Design, Performance, and Calibration of the CMS Hadron-Outer Calorimeter

    The CMS hadron calorimeter is a sampling calorimeter with brass absorber and plastic scintillator tiles, with wavelength-shifting fibres carrying the light to the readout device. The barrel hadron calorimeter is complemented with an outer calorimeter to ensure high-energy shower containment in the calorimeter. Fabrication, testing, and calibration of the outer hadron calorimeter are carried out keeping in mind its importance for the energy measurement of jets in terms of linearity and resolution. It will provide a net improvement in missing E_T measurements at LHC energies. The outer hadron calorimeter will also be used for the muon trigger in coincidence with other muon chambers in CMS.