
    Development of a parallel trigger framework for rare decay searches

    The NA62 experiment is the current programme of Standard Model tests through kaon physics at CERN in Geneva, and offers an approach complementary to that of the energy-frontier experiments at the Large Hadron Collider. The goal of NA62 is to measure the branching ratio of the decay K+ → π+νν̄ with a precision of ∼10%. Since the Standard Model prediction for this quantity is known to high precision, its measurement is an excellent probe for new physics. Complementary to this main programme, the simplicity of K+ decays (few decay channels and low final-state multiplicity) makes it possible to reach excellent sensitivity in searches for decays that violate lepton flavour conservation. Decays such as K+ → π−μ+μ+ have very clean experimental signatures that allow effective background rejection. Measuring events of this kind, however, requires producing a very large number of K+ decays. The bandwidth currently available for writing to disk or magnetic tape does not allow every produced event to be stored, so a multi-stage selection of potentially interesting events (trigger) is necessary. In NA62 a first selection is performed in real time (response times below 1 ms) by the so-called level-0 trigger, based on programmable logic (FPGAs), which does not offer the flexibility of the software-programmable processors used in ordinary computers. The performance of parallel architectures such as multi-core CPUs and the GPUs (Graphics Processing Units) found on computer graphics cards makes these platforms promising for recognising more elaborate patterns, for example the reconstruction of the circles produced by Čerenkov light inside the NA62 RICH detector. In the first part of my thesis I carried out a feasibility study on the use of GPUs in a high event-rate, low-latency context such as the real-time trigger. For NA62 this study required developing several increasingly complex parallel algorithms, in order to measure performance and identify the possible bottlenecks of such a system. I then describe the development of a high-performance software framework that uses multithreaded programming techniques and fast network drivers to transport trigger primitives from the front-end electronics to GPU memory for event processing and selection. Finally, I describe the use of the developed system for the selection of K+ → π−μ+μ+ decays by means of a multi-ring recognition algorithm in the RICH detector. To determine the selection efficiency for this decay, I studied the background rejection efficiency and the signal acceptance as a function of several selection parameters, establishing the advantages of this innovative approach
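    The pattern-recognition task singled out above, finding rings of Čerenkov light among the RICH photomultiplier hits, reduces to fitting circles to hit coordinates. As a point of reference only (the thesis develops GPU multi-ring algorithms, not this code), the following minimal sketch fits a single ring with the algebraic Kåsa least-squares method; the function name and toy data are illustrative assumptions.

```python
import numpy as np

def fit_circle_kasa(x, y):
    """Algebraic (Kasa) least-squares circle fit.

    Writes the circle as x^2 + y^2 + D*x + E*y + F = 0, solves the
    linear system for (D, E, F), then converts to centre and radius.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Toy event: 20 hits on a ring of radius 9.5, with Gaussian smearing.
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2.0 * np.pi, 20)
x = 11.0 + 9.5 * np.cos(phi) + rng.normal(0.0, 0.1, 20)
y = -3.0 + 9.5 * np.sin(phi) + rng.normal(0.0, 0.1, 20)
print(fit_circle_kasa(x, y))  # expect roughly (11.0, -3.0, 9.5)
```

    Because each candidate ring amounts to one small independent linear solve, many such fits can run concurrently, which is what makes the problem a natural match for GPU parallelism.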

    Heterogeneous reconstruction of tracks and primary vertices with the CMS pixel tracker

    The High-Luminosity upgrade of the LHC will see the accelerator reach an instantaneous luminosity of 7×10^34 cm^-2 s^-1 with an average pileup of 200 proton-proton collisions. These conditions will pose an unprecedented challenge to the online and offline reconstruction software developed by the experiments. The computational complexity will far exceed the expected increase in processing power of conventional CPUs, demanding an alternative approach. Industry and High-Performance Computing (HPC) centres are successfully using heterogeneous computing platforms to achieve higher throughput and better energy efficiency by matching each job to the most appropriate architecture. In this paper we describe the results of a heterogeneous implementation of the pixel track and vertex reconstruction chain on Graphics Processing Units (GPUs). The framework has been designed and developed to be integrated in the CMS reconstruction software, CMSSW. The speed-up achieved by leveraging GPUs allows more complex algorithms to be executed, obtaining better physics output and a higher throughput

    NaNet: a low-latency NIC enabling GPU-based, real-time low level trigger systems

    We implemented NaNet, an FPGA-based PCIe Gen2 GbE/APElink NIC featuring GPUDirect RDMA capabilities and UDP protocol management offloading. NaNet is able to receive a UDP input data stream from its GbE interface and redirect it, without any intermediate buffering or CPU intervention, to the memory of a Fermi/Kepler GPU hosted on the same PCIe bus, provided that the two devices share the same upstream root complex. Synthetic benchmarks for latency and bandwidth are presented. We describe how NaNet can be employed in the prototype of the GPU-based RICH low-level trigger processor of the NA62 CERN experiment, implementing the data link between the TEL62 readout boards and the low-level trigger processor. Results for the throughput and latency of the integrated system are presented and discussed. (Proceedings for the 20th International Conference on Computing in High Energy and Nuclear Physics, CHEP.)
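    For context, the sketch below shows the conventional receive path that NaNet is designed to eliminate: each datagram crossing the kernel socket layer into a host buffer before any copy to the GPU. The port number and batch size are assumptions made for illustration; NaNet performs this step in FPGA hardware, writing payloads directly into GPU memory via GPUDirect RDMA with no CPU involvement.

```python
import socket

PORT = 6666   # hypothetical port for the TEL62 primitive stream
MTU = 1500    # maximum datagram size read per call
BATCH = 1024  # datagrams gathered before handing a buffer to the GPU

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))

def receive_batch():
    """Software baseline: every datagram is copied kernel -> user space,
    and the joined buffer would still need a host-to-GPU transfer."""
    batch = []
    while len(batch) < BATCH:
        data, _addr = sock.recvfrom(MTU)
        batch.append(data)
    return b"".join(batch)  # this host-side staging is what NaNet removes
```

    Each hop in this path (kernel, user space, host-to-device copy) adds latency, which is what motivates the zero-copy hardware route in a real-time trigger.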

    Irreversible AE1 tyrosine phosphorylation leads to membrane vesiculation in G6PD deficient red cells

    Background. While G6PD deficiency is one of the major causes of acute hemolytic anemia, the membrane changes leading to red cell lysis have not been extensively studied. New findings concerning the mechanisms of G6PD deficient red cell destruction may facilitate our understanding of the large individual variations in susceptibility to pro-oxidant compounds and aid the prediction of the hemolytic activity of new drugs. Methodology/Principal Findings. Our results show that treatment of G6PD deficient red cells with diamide (0.25 mM) or divicine (0.5 mM) causes: (1) an increase in the oxidation and tyrosine phosphorylation of AE1; (2) progressive recruitment of phosphorylated AE1 in large membrane complexes which also contain hemichromes; (3) parallel red cell lysis and a massive release of vesicles containing hemichromes. We have observed that inhibition of AE1 phosphorylation by Syk kinase inhibitors prevented its clustering and the membrane vesiculation while increases in AE1 phosphorylation by tyrosine phosphatase inhibitors increased both red cell lysis and vesiculation rates. In control RBCs we observed only transient AE1 phosphorylation. Conclusions/Significance. Collectively, our findings indicate that persistent tyrosine phosphorylation produces extensive membrane destabilization leading to the loss of vesicles which contain hemichromes. The proposed mechanism of hemolysis may be applied to other hemolytic diseases characterized by the accumulation of hemoglobin denaturation products

    qCLUE: a quantum clustering algorithm for multi-dimensional datasets

    Clustering algorithms underpin many technological applications and are fueling the development of rapidly evolving fields such as machine learning. In the recent past, however, it has become apparent that they face challenges stemming from datasets spanning ever more spatial dimensions. In fact, the best-performing clustering algorithms scale linearly in the number of points, but quadratically with respect to the local density of points. In this work, we introduce qCLUE, a quantum clustering algorithm that scales linearly in both the number of points and their density. qCLUE is inspired by CLUE, an algorithm developed to address the challenging time and memory budgets of Event Reconstruction (ER) in future High-Energy Physics experiments. As such, qCLUE marries decades of development with the quadratic speedup provided by quantum computers. We numerically test qCLUE in several scenarios, demonstrating its effectiveness and showing it to be a promising route for handling complex data analysis tasks, especially in high-dimensional datasets with high densities of points
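    The quantum circuits themselves cannot be shown compactly, but the classical CLUE scheme that qCLUE builds on can be: compute each point's local density within a cutoff radius, link each point to its nearest higher-density neighbour, and promote dense, well-separated points to cluster seeds. The NumPy sketch below is an illustrative reading of that scheme; the parameter names, cut values, and outlier handling are assumptions, not the published implementation.

```python
import numpy as np

def clue_like(points, dc=1.0, rhoc=4.0, deltac=2.0):
    """Toy CLUE-style clustering.

    rho[i]   = number of neighbours within dc (local density);
    delta[i] = distance to the nearest point of higher density.
    Dense points far from any denser point become seeds; the rest
    inherit the label of their nearest higher-density neighbour,
    and unassigned points are reported as outliers (-1).
    """
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    rho = (d < dc).sum(axis=1).astype(float)  # the density-dependent
    # step whose cost qCLUE reduces quadratically
    order = np.argsort(-rho)                  # indices by falling density
    n = len(points)
    nh = np.full(n, -1)                       # nearest higher-density point
    delta = np.full(n, np.inf)
    for rank, i in enumerate(order[1:], start=1):
        higher = order[:rank]
        j = higher[np.argmin(d[i, higher])]
        nh[i], delta[i] = j, d[i, j]
    labels = np.full(n, -1)
    seeds = np.where((rho >= rhoc) & (delta >= deltac))[0]
    labels[seeds] = np.arange(len(seeds))
    for i in order:                           # propagate labels downhill
        if labels[i] < 0 and nh[i] >= 0 and delta[i] < deltac:
            labels[i] = labels[nh[i]]
    return labels

# Two well-separated blobs should come back as two clusters.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
                 rng.normal(5.0, 0.3, (30, 2))])
print(clue_like(pts))
```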

    Evaluating Performance Portability with the CMS Heterogeneous Pixel Reconstruction code

    In recent years the landscape of tools for expressing parallel algorithms in a portable way across various compute accelerators has continued to evolve significantly. Many technologies provide portability between CPUs, GPUs from several vendors, and in some cases even FPGAs. These include C++ libraries such as Alpaka and Kokkos, compiler directives such as OpenMP, the SYCL open specification that can be implemented as a library or in a compiler, and standard C++ where the compiler is solely responsible for the offloading. Given this developing landscape, users have to choose the technology that best fits their applications and constraints. For example, in the CMS experiment the experience so far with heterogeneous reconstruction algorithms suggests that the full application contains a large number of relatively short computational kernels and memory transfer operations. In this work we use a stand-alone version of the CMS heterogeneous pixel reconstruction code as a realistic use case of HEP reconstruction software that is capable of leveraging GPUs effectively. We summarize the experience of porting this code base from CUDA to Alpaka, Kokkos, SYCL, std::par, and OpenMP offloading. We compare the event processing throughput achieved by each version on NVIDIA and AMD GPUs as well as on a CPU, and compare those to what a native version of the code achieves on each platform
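    The portability layers compared here are C++ technologies, but the underlying pattern, writing a kernel once against a common array interface and letting a backend choice decide where it runs, can be illustrated compactly. The sketch below uses NumPy as the CPU backend and CuPy (which mirrors the NumPy API on NVIDIA GPUs) standing in for the GPU backend; the kernel is a toy stand-in, not CMS code.

```python
import numpy as np

try:                    # pick the GPU backend when CuPy is installed...
    import cupy as xp
    ON_GPU = True
except ImportError:     # ...otherwise fall back to the CPU backend
    xp = np
    ON_GPU = False

def count_doublets(hits, max_dr=0.2):
    """Toy stand-in for a short reconstruction kernel: count pairs of
    hits closer than max_dr. Written once against the shared array API
    and executed on whichever backend `xp` resolved to."""
    d = xp.linalg.norm(hits[:, None, :] - hits[None, :, :], axis=2)
    within = int((d < max_dr).sum())      # includes the n self-pairs
    return (within - len(hits)) // 2      # each true pair counted twice

hits = xp.asarray(np.random.default_rng(1).random((512, 3)))
print("backend:", "GPU" if ON_GPU else "CPU",
      "doublets:", count_doublets(hits))
```

    A single-source kernel like this captures the goal of Alpaka, Kokkos, SYCL, std::par and OpenMP offloading in miniature: the algorithm is written once, and the choice of where it executes is made at build or run time.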

    Cardiovascular Risk Perception and Knowledge among Italian Women: Lessons from IGENDA Protocol

    A multicenter, cross-sectional observational study (Italian GENder Differences in Awareness of Cardiovascular risk, IGENDA study) was carried out to evaluate the perception and knowledge of cardiovascular risk among Italian women. An anonymous questionnaire was completed by 4454 women (44.3 ± 14.1 years). Seventy percent of respondents correctly identified cardiovascular disease (CVD) as the leading cause of death. More than half of respondents cited cancer as the greatest current and future health problem of women of the same age. Sixty percent of the interviewed women considered CVD an almost exclusively male condition. Although respondents showed good knowledge of the major cardiovascular risk factors, the presence of cardiovascular risk factors was not associated with higher odds of identifying CVD as the biggest cause of death. Less than 10% of respondents perceived themselves as being at high CVD risk, and increased CVD risk perception was associated with ageing, a higher frequency of cardiovascular risk factors and disease, and poorer self-rated health status. The findings of this study highlight the low perception of cardiovascular risk in Italian women and suggest an urgent need to enhance knowledge and perception of CVD risk in women as a real health problem and not just as a life-threatening event

    Risk factors associated with adverse fetal outcomes in pregnancies affected by Coronavirus disease 2019 (COVID-19): a secondary analysis of the WAPM study on COVID-19.

    Objectives. To evaluate the strength of association between maternal and pregnancy characteristics and the risk of adverse perinatal outcomes in pregnancies with laboratory-confirmed COVID-19. Methods. Secondary analysis of a multinational cohort study of all consecutive pregnant women with laboratory-confirmed COVID-19 from February 1, 2020 to April 30, 2020, from 73 centers in 22 different countries. A confirmed case of COVID-19 was defined as a positive result on real-time reverse-transcriptase polymerase-chain-reaction (RT-PCR) assay of nasal and pharyngeal swab specimens. The primary outcome was a composite adverse fetal outcome, defined as the presence of either abortion (pregnancy loss before 22 weeks of gestation), stillbirth (intrauterine fetal death after 22 weeks of gestation), neonatal death (death of a live-born infant within the first 28 days of life), or perinatal death (either stillbirth or neonatal death). Logistic regression analysis was performed to evaluate parameters independently associated with the primary outcome; results are reported as odds ratios (OR) with 95% confidence intervals (CI). Results. Mean gestational age at diagnosis was 30.6 ± 9.5 weeks, with 8.0% of women diagnosed in the first, 22.2% in the second and 69.8% in the third trimester of pregnancy. There were six miscarriages (2.3%), six intrauterine fetal deaths (IUD) (2.3%) and five (2.0%) neonatal deaths, with an overall rate of perinatal death of 4.2% (11/265), resulting in 17 cases experiencing and 226 not experiencing the composite adverse fetal outcome. No congenital anomalies were found at antenatal or postnatal evaluation in either the stillbirths or the neonatal deaths. Furthermore, none of the cases of IUD had signs of impending demise at arterial or venous Doppler. The neonatal deaths were all considered prematurity-related adverse events. Of the 250 live-born neonates, one (0.4%) tested positive on RT-PCR pharyngeal swabs performed after delivery; the mother had tested positive during the third trimester of pregnancy. The newborn was asymptomatic and had a negative RT-PCR test after 14 days of life. At logistic regression analysis, gestational age at diagnosis (OR: 0.85, 95% CI 0.8-0.9 per week increase) was independently associated with the composite adverse fetal outcome.

    Optimization of a Risk Portfolio Using the Markowitz MVO Model, in Relation to Human Limitations in Predicting the Future from the Perspective of the Al-Qur`an

    Risk portfolio management in modern finance has become increasingly technical, requiring the use of sophisticated mathematical tools in both research and practice. Since companies cannot insure themselves completely against risk, owing to the human inability to predict the future precisely, as written in Al-Qur`an surah Luqman, verse 34, they have to manage it to obtain an optimal portfolio. The objective is to minimize the variance among all portfolios that achieve at least a certain expected return, or alternatively, to maximize the expected return among all portfolios whose variance does not exceed a given level. This study focuses on optimizing the risk portfolio with the Markowitz MVO (Mean-Variance Optimization) model. The theoretical tools used in the analysis are the arithmetic mean, geometric mean, variance, covariance, linear programming, and quadratic programming. Finding a minimum-variance portfolio leads to a convex quadratic program: minimize the objective function x^T Q x subject to the constraints μ^T x ≥ r and Ax = b. The outcome of this research is the optimal risk portfolio for a set of investments, computed with MATLAB R2007b software together with graphical analysis
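    As a concrete reading of the optimization stated in the abstract (the paper works in MATLAB R2007b; this is an equivalent sketch in Python, and the three-asset data are invented for illustration), the Markowitz MVO problem minimizes the portfolio variance x^T Q x subject to a minimum expected return and a full-investment budget constraint:

```python
import numpy as np
from scipy.optimize import minimize

# Invented inputs: expected returns mu and covariance matrix Q
# for three hypothetical assets, and a required return r.
mu = np.array([0.08, 0.12, 0.15])
Q = np.array([[0.10, 0.02, 0.01],
              [0.02, 0.12, 0.03],
              [0.01, 0.03, 0.20]])
r = 0.10

# Convex quadratic program: minimise x'Qx subject to mu'x >= r and
# sum(x) = 1, with no short selling (0 <= x_i <= 1).
res = minimize(
    fun=lambda x: x @ Q @ x,
    x0=np.full(3, 1.0 / 3.0),
    method="SLSQP",
    bounds=[(0.0, 1.0)] * 3,
    constraints=[{"type": "eq",   "fun": lambda x: x.sum() - 1.0},
                 {"type": "ineq", "fun": lambda x: mu @ x - r}],
)
print("weights:", res.x.round(3), "variance:", round(float(res.fun), 5))
```

    SLSQP treats inequality constraints as fun(x) ≥ 0, so `mu @ x - r` encodes the minimum-return requirement; sweeping r and re-solving traces out the efficient frontier that mean-variance analysis is built around.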

    Search for heavy resonances decaying to two Higgs bosons in final states containing four b quarks