50 research outputs found

    An embedding technique to determine ττ backgrounds in proton-proton collision data

    An embedding technique is presented to estimate standard model ττ backgrounds from data with minimal simulation input. In the data, the muons are removed from reconstructed μμ events and replaced with simulated τ leptons with the same kinematic properties. In this way, a set of hybrid events is obtained that does not rely on simulation except for the decay of the τ leptons. The challenges in describing the underlying event or the production of associated jets in the simulation are avoided. The technique described in this paper was developed for CMS. Its validation and the inherent uncertainties are also discussed. The demonstration of the performance of the technique is based on a sample of proton-proton collisions collected by CMS in 2017 at √s = 13 TeV, corresponding to an integrated luminosity of 41.5 fb⁻¹.
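
    For illustration, the replacement step at the heart of the technique can be sketched as follows. The toy event model and names below are hypothetical, not the CMS implementation; only the idea of copying the muon kinematics onto τ candidates, whose decays would then be simulated, is shown.

```python
# Toy sketch of the embedding replacement step: copy the reconstructed muon
# kinematics onto tau candidates with identical pT, eta and phi. Everything
# here (event model, names, numbers) is hypothetical; in the real workflow
# the tau decays are subsequently simulated and re-embedded into the event.
from dataclasses import dataclass, replace

TAU_MASS = 1.777   # GeV
MUON_MASS = 0.106  # GeV

@dataclass
class Lepton:
    pt: float    # transverse momentum [GeV]
    eta: float   # pseudorapidity
    phi: float   # azimuthal angle [rad]
    mass: float  # [GeV]

def embed_taus(mumu_event: list[Lepton]) -> list[Lepton]:
    """Replace each muon with a tau candidate carrying the same kinematics."""
    return [replace(mu, mass=TAU_MASS) for mu in mumu_event]

# A selected Z -> mumu candidate from data (toy numbers).
muons = [Lepton(45.2, 0.31, 1.20, MUON_MASS), Lepton(38.7, -1.05, -1.85, MUON_MASS)]
print(embed_taus(muons))
```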

    Performance of missing transverse momentum reconstruction in proton-proton collisions at √s = 13 TeV using the CMS detector

    The performance of missing transverse momentum (p_T^miss) reconstruction algorithms for the CMS experiment is presented, using proton-proton collisions at a center-of-mass energy of 13 TeV, collected at the CERN LHC in 2016. The data sample corresponds to an integrated luminosity of 35.9 fb⁻¹. The results include measurements of the scale and resolution of p_T^miss, and detailed studies of events identified with anomalous p_T^miss. The performance is presented of a p_T^miss reconstruction algorithm that mitigates the effects of multiple proton-proton interactions, using the "pileup per particle identification" method. The performance is shown of an algorithm used to estimate the compatibility of the reconstructed p_T^miss with the hypothesis that it originates from resolution effects.
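
    As background, missing transverse momentum is conventionally defined as the negative vector sum of the transverse momenta of all reconstructed particles in an event. The sketch below illustrates only this textbook definition; the CMS algorithms additionally apply pileup mitigation (e.g. the pileup-per-particle identification weights mentioned above) and detector-level corrections.

```python
# Minimal sketch of the standard definition of missing transverse momentum:
# the negative vector sum of the transverse momenta of all reconstructed
# particles. This illustrates the quantity only, not the CMS reconstruction.
import numpy as np

def met(pt: np.ndarray, phi: np.ndarray) -> tuple[float, float]:
    """Return (p_T^miss magnitude, azimuthal angle) from particle pT and phi."""
    px = np.sum(pt * np.cos(phi))
    py = np.sum(pt * np.sin(phi))
    mex, mey = -px, -py                      # negative vector sum
    return float(np.hypot(mex, mey)), float(np.arctan2(mey, mex))

# Toy event: four reconstructed particles (pT in GeV, phi in rad).
pt  = np.array([55.0, 32.1, 18.4, 7.9])
phi = np.array([0.40, -2.75, 1.90, 3.05])
print(met(pt, phi))
```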

    Multi-resolution shadow mapping using CUDA rasterizer

    Shadow mapping is a fast and easy-to-use method for producing hard shadows. However, it introduces aliasing due to its uniform sampling strategy and limited shadow map resolution. In this paper, we propose a memory-efficient algorithm to render high-quality shadows. Our algorithm is based on a multi-resolution shadow map structure, which comprises a conventional shadow map for scene regions where a low-resolution shadow map is sufficient, and a high-resolution patch buffer that captures scene regions susceptible to aliasing. With this data structure, we are able to capture shadow details with a far smaller memory footprint than conventional shadow mapping. To maintain performance comparable to conventional shadow mapping, we designed a customized CUDA rasterizer to render the high-resolution patches. © 2013 IEEE.
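
    A hedged sketch of the two-level lookup implied by this data structure is given below: depth queries prefer a high-resolution patch where one exists and fall back to the coarse base shadow map otherwise. The tiling scheme, resolutions, and names are assumptions made for illustration; the paper itself renders the patches with a custom CUDA rasterizer rather than on the CPU.

```python
# Hedged sketch of a two-level shadow map lookup: a coarse base depth map
# plus high-resolution depth patches for aliasing-prone light-space tiles.
# Names, tiling scheme and resolutions are illustrative assumptions.
import numpy as np

class MultiResShadowMap:
    def __init__(self, base: np.ndarray, tiles: int = 16):
        self.base = base                  # coarse depth map, shape (H, W)
        self.tiles = tiles                # tiles per axis in light space
        self.patches = {}                 # (tx, ty) -> high-res depth tile

    def add_patch(self, tx: int, ty: int, depth_tile: np.ndarray):
        """Store a refined depth tile for an aliasing-prone region."""
        self.patches[(tx, ty)] = depth_tile

    def sample_depth(self, u: float, v: float) -> float:
        """Sample light-space depth at (u, v) in [0, 1)^2, preferring patches."""
        tx, ty = int(u * self.tiles), int(v * self.tiles)
        patch = self.patches.get((tx, ty))
        if patch is not None:             # refined region: sample the patch
            lu = (u * self.tiles - tx) * patch.shape[1]
            lv = (v * self.tiles - ty) * patch.shape[0]
            return float(patch[int(lv), int(lu)])
        h, w = self.base.shape            # otherwise fall back to the base map
        return float(self.base[int(v * h), int(u * w)])

# Usage: one refined tile, everything else served by the coarse map.
base = np.full((64, 64), 1.0, dtype=np.float32)
smap = MultiResShadowMap(base)
smap.add_patch(3, 5, np.full((256, 256), 0.42, dtype=np.float32))
print(smap.sample_depth(0.22, 0.36), smap.sample_depth(0.9, 0.9))
```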

    Synthesis of SrFexTi1-xO3-δ nanocubes with tunable oxygen vacancies for selective and efficient photocatalytic NO oxidation

    Oxygen vacancies in metal oxides play critical roles in tuning the activity and selectivity of many photocatalysis-mediated reactions, yet the mechanism of NO oxidation on defect-enriched photocatalyst surfaces is seldom discussed. Herein, we provide detailed insight into the relationship between oxygen vacancy manipulation by extrinsic Fe3+ substitution in the SrTiO3 host lattice and the photocatalytic performance of NO abatement. In particular, the hydrothermally synthesized SrFexTi1-xO3-δ nanocubes (denoted as the SFTO-hyd sample), rather than the impregnated and post-annealed sample, enabled oxygen vacancy formation and promoted O2 adsorption and the formation of superoxide anion radicals (O2−). The SFTO-hyd (x = 5%) sample showed remarkably higher NO removal activity and selectivity under a Xe lamp (λ > 420 nm) in comparison with pristine SrTiO3, P25, and the impregnation-doped SFTO sample, underlining the important roles played by the coexisting Fe3+ sites and oxygen vacancies. In situ diffuse reflectance IR spectroscopy (DRIFTS) mechanistically revealed that SrTiO3 provided Lewis acidic sites for NO dark adsorption and photoreaction with nitrates as the final products, while the substitutional Fe3+ sites provided more active sites for NO adsorption and photoreaction with an enhanced number of radicals. This study deepens the understanding of photocatalytic NO abatement on defective surfaces and may also provide a simple and cost-effective strategy for synthesizing efficient and selective photocatalysts for environmental remediation.

    Mining streams of short text for analysis of world-wide event evolutions

    Streams of short text, such as news titles, enable us to effectively and efficiently learn about real-world events that occur anywhere and anytime. Short text messages, which are accompanied by timestamps and typically describe events in only a few words, differ from longer text documents such as web pages, news stories, blogs, technical papers, and books. For example, few words repeat within a news title, so term frequency (TF) is not as important in a short-text corpus as in a longer-text corpus. Analysis of short text therefore faces new challenges. Also, detecting and tracking events through short-text analysis requires reliably identifying events from stable topic clusters; however, existing methods such as Latent Dirichlet Allocation (LDA) generate different topic results for the same corpus across executions. In this paper, we provide a Finding Topic Clusters using Co-occurring Terms (FTCCT) algorithm to automatically generate topics from a short-text corpus, and develop an Event Evolution Mining (EEM) algorithm to discover hot events and their evolutions (i.e., how the popularity of events changes over time). In FTCCT, a term (i.e., a single word or a multi-word phrase) belongs to only one topic in a corpus. Experiments on news titles from 157 countries over 4 months (July to October 2013) demonstrate that our FTCCT-based method (combining FTCCT and EEM) achieves far higher quality of event content and description words than the LDA-based method (combining LDA and EEM) for the analysis of streams of short text. Our method also visualizes the evolutions of the hot events. The discovered world-wide event evolutions reveal some interesting correlations among world-wide events; for example, successive extreme weather events occurred in different locations - typhoons in Hong Kong and the Philippines followed a hurricane and storm flooding in Mexico in September 2013. © 2014 Springer Science+Business Media New York.
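
    In the spirit of FTCCT as described above, a co-occurrence-based clustering step might look like the sketch below: terms that co-occur frequently in titles are merged into disjoint clusters, so each term ends up in exactly one cluster. The thresholding and union-find grouping are illustrative assumptions, not the published algorithm.

```python
# Hedged sketch of co-occurrence-based topic clustering: terms that co-occur
# in at least `min_count` titles are merged into disjoint clusters. Terms
# whose pairs never reach the threshold are left out of the result.
from collections import Counter
from itertools import combinations

def cooccurrence_topics(titles: list[list[str]], min_count: int = 2) -> list[set[str]]:
    """Group terms into disjoint clusters via frequent pairwise co-occurrence."""
    pair_counts = Counter()
    for terms in titles:
        pair_counts.update(combinations(sorted(set(terms)), 2))

    parent: dict[str, str] = {}
    def find(t: str) -> str:
        parent.setdefault(t, t)
        while parent[t] != t:            # path halving
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t

    for (a, b), n in pair_counts.items():
        if n >= min_count:               # merge terms that co-occur often
            parent[find(a)] = find(b)

    clusters: dict[str, set[str]] = {}
    for term in list(parent):
        clusters.setdefault(find(term), set()).add(term)
    return list(clusters.values())

# Toy titles echoing the weather example in the abstract.
titles = [["typhoon", "hong", "kong"], ["typhoon", "philippines"],
          ["hurricane", "mexico"], ["storm", "flood", "mexico"],
          ["typhoon", "hong", "kong"], ["hurricane", "mexico"]]
print(cooccurrence_topics(titles))
```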

    Interplay between the Westerlies and Asian monsoon recorded in Lake Qinghai sediments since 32 ka

    Two atmospheric circulation systems, the mid-latitude Westerlies and the Asian summer monsoon (ASM), play key roles in Northern Hemisphere climatic changes. However, the variability of the Westerlies in Asia and their relationship to the ASM remain unclear. Here, we present the longest and highest-resolution drill core from Lake Qinghai on the northeastern Tibetan Plateau (TP), which uniquely records the variability of both the Westerlies and the ASM since 32 ka, reflecting the interplay of these two systems. These records document the anti-phase relationship of the Westerlies and the ASM on both glacial-interglacial and glacial-millennial timescales. During the last glaciation, the influence of the Westerlies dominated; prominent dust-rich intervals, correlated with Heinrich events, reflect intensified Westerlies linked to northern high-latitude climate. During the Holocene, the dominant ASM circulation, punctuated by weak events, indicates linkages of the ASM to orbital forcing, North Atlantic abrupt events, and perhaps solar activity changes.

    Categorize Radio Interference using component and temporal analysis

    Radio frequency interference (RFI) is a significant challenge faced by today's radio astronomers. While most past efforts were devoted to cleaning RFI from the data, we develop a novel method for categorizing and cataloguing RFI for forensic purposes. We present a classifier that categorizes RFI into different types based on features extracted using Principal Component Analysis (PCA) and Fourier analysis. The classifier can identify narrowband non-periodic RFI above 2 sigma, narrowband periodic RFI above 3 sigma, and wideband impulsive RFI above 5 sigma, with F1 scores between 0.87 and 0.91 in simulation. This classifier could be used to identify the sources of RFI as well as to clean RFI contamination (particularly in pulsar searches). In a long-term analysis of the categorized RFI, we found a special type of drifting periodic RFI that is detrimental to pulsar searches. We also found evidence of an increased rate of impulsive RFI when the telescope points toward cities. These results demonstrate the classifier's potential as a forensic tool for monitoring the RFI environment of radio telescopes.
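
    A hedged sketch of the described pipeline, combining PCA components with Fourier-based features before training a classifier, is shown below. The feature choices, data shapes, and the random forest model are assumptions made for illustration, not the authors' implementation.

```python
# Hedged sketch of an RFI categorization pipeline: reduce each snippet with
# PCA, add Fourier-magnitude features to capture periodicity, then train a
# classifier over labelled categories. Shapes and model are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy data: 200 snippets flattened to 1024 samples each, with integer labels
# standing in for RFI categories (e.g. narrowband periodic, impulsive, ...).
X = rng.normal(size=(200, 1024))
y = rng.integers(0, 3, size=200)

def fourier_features(x: np.ndarray, k: int = 8) -> np.ndarray:
    """Magnitudes of the k strongest Fourier components of each snippet."""
    spec = np.abs(np.fft.rfft(x, axis=1))
    return np.sort(spec, axis=1)[:, -k:]

pca = PCA(n_components=16).fit(X)                    # component analysis
features = np.hstack([pca.transform(X), fourier_features(X)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, y)
print(clf.predict(features[:5]))
```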