
    Curriculum CycleGAN for Textual Sentiment Domain Adaptation with Multiple Sources

    Sentiment analysis of user-generated reviews or comments on products and services in social networks helps enterprises analyze customer feedback and take corresponding actions for improvement. To avoid large-scale annotation of the target domain, domain adaptation (DA) provides an alternative by learning a transferable model from other, labeled source domains. Existing multi-source domain adaptation (MDA) methods either fail to extract some discriminative sentiment-related features in the target domain, neglect the correlations of different sources and the distribution differences among sub-domains within the same source, or cannot reflect the varying optimal weighting of sources during different training stages. In this paper, we propose a novel instance-level MDA framework, named curriculum cycle-consistent generative adversarial network (C-CycleGAN), to address these issues. Specifically, C-CycleGAN consists of three components: (1) a pre-trained text encoder that maps textual input from different domains into a continuous representation space, (2) an intermediate domain generator with curriculum instance-level adaptation that bridges the gap between source and target domains, and (3) a task classifier trained on the intermediate domain for the final sentiment classification. C-CycleGAN transfers source samples at the instance level to an intermediate domain that is closer to the target domain, preserving sentiment semantics without losing discriminative features. Further, our dynamic instance-level weighting mechanisms assign the optimal weights to different source samples in each training stage. We conduct extensive experiments on three benchmark datasets and achieve substantial gains over state-of-the-art DA approaches. Our source code is released at: https://github.com/WArushrush/Curriculum-CycleGAN. Comment: Accepted by WWW 2021.
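
    To make the three-component pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the intermediate domain generator and task classifier; the pre-trained text encoder is assumed to be an external model producing fixed-size embeddings, and the cycle-consistency and adversarial losses of the actual method are omitted. All names, dimensions, and the way the curriculum weights are obtained are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the C-CycleGAN-style pipeline described above.
# Dimensions, layer sizes, and the weighting scheme are assumptions.
import torch
import torch.nn as nn

EMB_DIM = 768  # assumed output size of the pre-trained text encoder

class IntermediateDomainGenerator(nn.Module):
    """Maps a source-domain embedding toward an intermediate domain
    that is closer to the target domain."""
    def __init__(self, dim: int = EMB_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

class TaskClassifier(nn.Module):
    """Sentiment classifier trained on the intermediate domain."""
    def __init__(self, dim: int = EMB_DIM, n_classes: int = 2):
        super().__init__()
        self.head = nn.Linear(dim, n_classes)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.head(z)

def training_step(src_emb, src_labels, instance_weights, generator, classifier):
    """One step: transfer source embeddings, weight each instance, classify.

    `instance_weights` stands in for the curriculum instance-level weights;
    how they are computed in each training stage is not shown here.
    """
    criterion = nn.CrossEntropyLoss(reduction="none")
    intermediate = generator(src_emb)            # instance-level transfer
    logits = classifier(intermediate)
    per_sample_loss = criterion(logits, src_labels)
    return (instance_weights * per_sample_loss).mean()
```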

    Controllable Mind Visual Diffusion Model

    Brain signal visualization has emerged as an active research area, serving as a critical interface between the human visual system and computer vision models. Although diffusion models have shown promise in analyzing functional magnetic resonance imaging (fMRI) data, including reconstructing high-quality images consistent with the original visual stimuli, their accuracy in extracting semantic and silhouette information from brain signals remains limited. To this end, we propose a novel approach, referred to as the Controllable Mind Visual Diffusion Model (CMVDM). CMVDM extracts semantic and silhouette information from fMRI data using attribute alignment and assistant networks. Additionally, a residual block is incorporated to capture information beyond semantic and silhouette features. We then leverage a control model to fully exploit the extracted information for image synthesis, resulting in generated images that closely resemble the visual stimuli in terms of semantics and silhouette. Through extensive experimentation, we demonstrate that CMVDM outperforms existing state-of-the-art methods both qualitatively and quantitatively. Comment: 16 pages, 11 figures.
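
    The following is a schematic, hypothetical PyTorch sketch of the information-extraction side of such a pipeline. The fMRI dimensionality, layer shapes, and module names are assumptions, and the controllable diffusion model itself is not implemented; only the conditioning signals it would consume are produced.

```python
# Schematic, hypothetical sketch of extracting semantic, silhouette, and
# residual conditions from fMRI data for a downstream control model.
# Shapes and names are assumptions; no diffusion model is implemented here.
import torch
import torch.nn as nn

class SemanticHead(nn.Module):
    """Maps an fMRI vector to a semantic embedding."""
    def __init__(self, fmri_dim: int = 4096, sem_dim: int = 768):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(fmri_dim, 1024), nn.ReLU(),
                                 nn.Linear(1024, sem_dim))

    def forward(self, x):
        return self.net(x)

class SilhouetteHead(nn.Module):
    """Maps an fMRI vector to a coarse silhouette map used as spatial control."""
    def __init__(self, fmri_dim: int = 4096, size: int = 64):
        super().__init__()
        self.size = size
        self.net = nn.Linear(fmri_dim, size * size)

    def forward(self, x):
        return self.net(x).view(-1, 1, self.size, self.size)

class ResidualBlock(nn.Module):
    """Captures residual information beyond semantics and silhouette."""
    def __init__(self, fmri_dim: int = 4096):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(fmri_dim, fmri_dim), nn.ReLU(),
                                 nn.Linear(fmri_dim, fmri_dim))

    def forward(self, x):
        return x + self.net(x)

def extract_conditions(fmri, semantic_head, silhouette_head, residual_block):
    """Return the three conditioning signals a control model would consume."""
    return {
        "semantic": semantic_head(fmri),
        "silhouette": silhouette_head(fmri),
        "residual": residual_block(fmri),
    }
```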

    Joint optimization of product service system configuration and delivery with learning-based valid cut selection and a tailored heuristic

    Most previous work on product service system configuration aims to meet the functionality need or to ensure a cost-effective delivery separately, overlooking the mutual impact between the configuration and delivery procedures. In contrast, we jointly optimize the configuration scheme and the delivery plan to increase customer satisfaction through a two-stage decision framework. However, this integration significantly increases the model's complexity because the two stages are interdependent. To address this challenge, we introduce an exact algorithm for finding globally optimal solutions, as well as an efficient two-stage heuristic aimed at enhancing efficiency. The exact algorithm is built upon branch and bound, which, however, becomes less efficient as the problem size increases. To counteract this, we devise a series of valid cuts to speed up convergence. Additionally, recognizing that the optimal bundle of valid cuts may vary from case to case, we adopt artificial intelligence techniques to adaptively select valid cuts. This reduces unnecessary search effort when tackling new cases and further enhances computational performance. Even so, efficiently handling large-scale cases in real-world applications remains a challenge, so we customize an efficient two-stage heuristic to ensure practical applicability. In the first stage, an effective local search identifies an appropriate configuration scheme, which then serves as a hyperparameter for the second stage, in a manner inspired by machine learning. The second stage focuses on optimizing the delivery plan. To obtain this plan, we develop a modified adaptive large neighborhood search algorithm, equipped with tailored operators and selection methods to enrich its search capabilities. Furthermore, a feasibility protection procedure is specialized to repair infeasible solutions while preserving the diversity of the solution pool. Our numerical experiments underscore the importance of the two-stage optimization framework, demonstrate the effectiveness of adaptive valid cut selection, and highlight the superiority of our heuristic in handling complex optimization tasks.
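
    As a rough illustration of the kind of search loop the second stage builds on, here is a generic adaptive large neighborhood search skeleton in Python. The destroy/repair operators, weight-update rule, and simulated-annealing-style acceptance are standard textbook placeholders; the paper's tailored operators, selection methods, and feasibility protection procedure are not reproduced.

```python
# Generic ALNS skeleton: adaptively weighted destroy/repair operators with a
# simulated-annealing acceptance rule. All parameters are illustrative.
import math
import random

def alns(initial_solution, cost, destroy_ops, repair_ops,
         iters=1000, temp=1.0, decay=0.8):
    best = current = initial_solution
    weights_d = [1.0] * len(destroy_ops)   # adaptive operator weights
    weights_r = [1.0] * len(repair_ops)
    for _ in range(iters):
        di = random.choices(range(len(destroy_ops)), weights=weights_d)[0]
        ri = random.choices(range(len(repair_ops)), weights=weights_r)[0]
        candidate = repair_ops[ri](destroy_ops[di](current))
        delta = cost(candidate) - cost(current)
        # accept improving moves always, worsening moves with a probability
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        # reward operators that produced a new best or an improvement
        score = 0.0
        if cost(candidate) < cost(best):
            best, score = candidate, 1.0
        elif delta < 0:
            score = 0.5
        weights_d[di] = decay * weights_d[di] + (1 - decay) * score
        weights_r[ri] = decay * weights_r[ri] + (1 - decay) * score
    return best
```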

    Measurement of the top quark forward-backward production asymmetry and the anomalous chromoelectric and chromomagnetic moments in pp collisions at √s = 13 TeV

    The parton-level top quark (t) forward-backward asymmetry and the anomalous chromoelectric (d̂t) and chromomagnetic (μ̂t) moments have been measured using LHC pp collisions at a center-of-mass energy of 13 TeV, collected in the CMS detector in a data sample corresponding to an integrated luminosity of 35.9 fb⁻¹. The linearized variable AFB(1) is used to approximate the asymmetry. Candidate tt̄ events decaying to a muon or electron and jets in final states with low and high Lorentz boosts are selected and reconstructed using a fit of the kinematic distributions of the decay products to those expected for tt̄ final states. The values found for the parameters are AFB(1) = 0.048 +0.095/−0.087 (stat) +0.020/−0.029 (syst) and μ̂t = −0.024 +0.013/−0.009 (stat) +0.016/−0.011 (syst), and a limit is placed on the magnitude of the chromoelectric moment, |d̂t| < 0.03, at 95% confidence level.
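
    For orientation, the textbook forward-backward asymmetry is the normalized difference between "forward" and "backward" event counts; the sketch below computes it from the per-event rapidity difference Δ|y| = |y_t| − |y_t̄|. This is only the standard definition shown for illustration, not the linearized AFB(1) estimator or the kinematic fit used in the analysis.

```python
# Textbook forward-backward asymmetry from a list of per-event
# rapidity differences Δ|y| = |y_t| − |y_tbar| (illustrative only).
def forward_backward_asymmetry(delta_abs_y):
    n_forward = sum(1 for d in delta_abs_y if d > 0)
    n_backward = sum(1 for d in delta_abs_y if d < 0)
    return (n_forward - n_backward) / (n_forward + n_backward)

# e.g. forward_backward_asymmetry([0.3, -0.1, 0.7, -0.4, 0.2]) -> 0.2
```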

    Measurement of tt̄ normalised multi-differential cross sections in pp collisions at √s = 13 TeV, and simultaneous determination of the strong coupling strength, top quark pole mass, and parton distribution functions


    An embedding technique to determine ττ backgrounds in proton-proton collision data

    An embedding technique is presented to estimate standard model ττ backgrounds from data with minimal simulation input. In the data, the muons are removed from reconstructed μμ events and replaced with simulated tau leptons with the same kinematic properties. In this way, a set of hybrid events is obtained that does not rely on simulation except for the decay of the tau leptons. The challenges in describing the underlying event or the production of associated jets in the simulation are thereby avoided. The technique described in this paper was developed for CMS. Its validation and the inherent uncertainties are also discussed. The demonstration of the performance of the technique is based on a sample of proton-proton collisions collected by CMS in 2017 at √s = 13 TeV, corresponding to an integrated luminosity of 41.5 fb⁻¹.
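
    The following is a purely conceptual Python sketch of how such a hybrid event could be assembled: the muons are dropped, their kinematics are reused for simulated tau leptons, and everything else in the event is kept from data. The Particle/Event classes and the simulate_tau_decay helper are hypothetical stand-ins for the actual reconstruction objects and the full generator and detector simulation chain.

```python
# Conceptual sketch of hybrid-event construction for a tau embedding.
# All classes and the simulate_tau_decay callable are hypothetical.
from dataclasses import dataclass, replace
from typing import Callable, List

@dataclass
class Particle:
    pt: float
    eta: float
    phi: float
    pdg_id: int

@dataclass
class Event:
    muons: List[Particle]
    other: List[Particle]   # jets, underlying event, etc., taken from data

def embed_taus(event: Event,
               simulate_tau_decay: Callable[[Particle], List[Particle]]) -> Event:
    """Replace the two muons by simulated taus with identical kinematics."""
    decay_products: List[Particle] = []
    for mu in event.muons:
        # same pt/eta/phi as the muon, but a tau (pdg_id +/-15)
        tau = replace(mu, pdg_id=15 if mu.pdg_id > 0 else -15)
        decay_products.extend(simulate_tau_decay(tau))
    # everything else in the event remains as recorded in data
    return Event(muons=[], other=event.other + decay_products)
```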

    Search for Physics beyond the Standard Model in Events with Overlapping Photons and Jets

    Results are reported from a search for new particles that decay into a photon and two gluons, in events with jets. Novel jet substructure techniques are developed that allow photons to be identified in an environment densely populated with hadrons. The analyzed proton-proton collision data were collected by the CMS experiment at the LHC in 2016, at √s = 13 TeV, and correspond to an integrated luminosity of 35.9 fb⁻¹. The spectra of total transverse hadronic energy of candidate events are examined for deviations from the standard model predictions. No statistically significant excess is observed over the expected background. The first cross section limits on new physics processes resulting in such events are set. The results are interpreted as upper limits on the rate of gluino pair production, using a simplified stealth supersymmetry model. The excluded gluino masses extend up to 1.7 TeV, for a neutralino mass of 200 GeV, and exceed previous mass constraints set by analyses targeting events with isolated photons.