
    The JEM-EUSO time synchronization system

    JEM-EUSO is a wide-angle refractive UV telescope proposed to be attached to the International Space Station. The tracks generated by Extensive Air Showers (EAS), produced by Ultra-High-Energy Cosmic Rays (UHECR), are reconstructed by recording the data coming from 4932 64-pixel MAPMTs and retrieving the events of interest upon the occurrence of second-level triggers. To guarantee correct time alignment of the events and to measure the event time with a precision of a few microseconds, a time synchronization system for the focal surface electronics has been developed. Here we present the status of the system and the technical solutions adopted so far.

    Upgrade of the TOTEM DAQ using the Scalable Readout System (SRS)

    The main goals of the TOTEM experiment at the LHC are the measurement of the elastic and total p-p cross sections and the study of diffractive dissociation processes. At the LHC, collisions are produced at a rate of 40 MHz, imposing strong requirements on the Data Acquisition System (DAQ) in terms of trigger rate and data throughput. The TOTEM DAQ adopts a modular approach that, in standalone mode, is based on the VME bus. The VME-based Front End Driver (FED) modules host mezzanines that receive data through optical fibres directly from the detectors. After data checks and formatting are applied in the mezzanine, the data are retransmitted to the VME interface and to another mezzanine card plugged into the FED module. The maximum bandwidth of the VME bus limits the first-level trigger (L1A) rate to 1 kHz. To remove the VME bottleneck and improve the scalability and overall capabilities of the DAQ, a new system was designed and built based on the Scalable Readout System (SRS), developed in the framework of the RD51 Collaboration. The project aims to increase the efficiency of the current readout system by providing higher bandwidth and improved data filtering, implementing a second-level trigger event selection based on hardware pattern-recognition algorithms. This goal is to be achieved while preserving maximum backward compatibility with the LHC Timing, Trigger and Control (TTC) system as well as with the CMS DAQ. The results obtained and the perspectives of the project are reported. In particular, we describe the system architecture and the new Opto-FEC adapter card developed to connect the SRS with the FED mezzanine modules. A first test bench was built and validated during the last TOTEM data-taking period (February 2013). A set of 3 TOTEM Roman Pot silicon detectors was read out to verify performance in the real LHC environment. In addition, the test allowed a check of data consistency and quality.

    Observation of proton-tagged, central (semi)exclusive production of high-mass lepton pairs in pp collisions at 13 TeV with the CMS-TOTEM precision proton spectrometer

    The process pp → p ℓ⁺ℓ⁻ p(*), with ℓ⁺ℓ⁻ a muon or electron pair produced at midrapidity with mass larger than 110 GeV, has been observed for the first time at the LHC in pp collisions at √s = 13 TeV. One of the two scattered protons is measured in the CMS-TOTEM precision proton spectrometer (CT-PPS), which operated for the first time in 2016. The second proton either remains intact or is excited and then dissociates into a low-mass state p*, which is undetected. The measurement is based on an integrated luminosity of 9.4 fb⁻¹ collected during standard, high-luminosity LHC operation. A total of 12 μ⁺μ⁻ and 8 e⁺e⁻ pairs with m(ℓ⁺ℓ⁻) > 110 GeV, and matching forward proton kinematics, are observed, with expected backgrounds of 1.49 ± 0.07 (stat) ± 0.53 (syst) and 2.36 ± 0.09 (stat) ± 0.47 (syst), respectively. This corresponds to an excess of more than five standard deviations over the expected background. The present result constitutes the first observation of proton-tagged γγ collisions at the electroweak scale. This measurement also demonstrates that CT-PPS performs according to the design specifications.
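    As a rough illustration of where a five-standard-deviation excess comes from, a single-bin Poisson counting estimate for the dimuon channel alone, ignoring systematic uncertainties (which the published analysis does treat), already gives roughly five sigma. The snippet below is a minimal back-of-the-envelope sketch, not the statistical procedure used in the paper.

```python
# Minimal sketch: single-bin Poisson significance for the dimuon channel,
# using only the statistical component of the background and ignoring
# systematics (the published analysis treats these properly).
from scipy.stats import norm, poisson

n_obs = 12    # observed mu+mu- pairs with m(ll) > 110 GeV
b_exp = 1.49  # expected background in the same channel

# p-value: probability that background alone produces >= n_obs events
p_value = poisson.sf(n_obs - 1, b_exp)

# convert the one-sided p-value into a Gaussian significance (z-score)
z = norm.isf(p_value)
print(f"p = {p_value:.2e}, significance ~ {z:.1f} sigma")
```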

    Measurement of single-diffractive dijet production in proton-proton collisions at root s=8 TeV with the CMS and TOTEM experiments

    A Publisher's Erratum to this article was published on 03 May 2021: https://doi.org/10.1140/epjc/s10052-021-08863-w

    Measurement of single-diffractive dijet production in proton–proton collisions at √s = 8 TeV with the CMS and TOTEM experiments

    Measurements are presented of the single-diffractive dijet cross section and the diffractive cross section as a function of the proton fractional momentum loss ξ and the four-momentum transfer squared t. Both processes pp → pX and pp → Xp, i.e. with the proton scattering to either side of the interaction point, are measured, where X includes at least two jets; the results of the two processes are averaged. The analyses are based on data collected simultaneously with the CMS and TOTEM detectors at the LHC in proton–proton collisions at √s = 8 TeV during a dedicated run with β* = 90 m at low instantaneous luminosity, corresponding to an integrated luminosity of 37.5 nb⁻¹. The single-diffractive dijet cross section σ_jj^pX, in the kinematic region ξ < 0.1, 0.03 < |t| < 1 GeV², with at least two jets with transverse momentum p_T > 40 GeV and pseudorapidity |η| < 4.4, is 21.7 ± 0.9 (stat) +3.0/−3.3 (syst) ± 0.9 (lumi) nb. The ratio of the single-diffractive to inclusive dijet yields, normalised per unit of ξ, is presented as a function of x, the longitudinal momentum fraction of the proton carried by the struck parton. The ratio in the kinematic region defined above, for x values in the range −2.9 ≤ log₁₀ x ≤ −1.6, is R = (σ_jj^pX/Δξ)/σ_jj = 0.025 ± 0.001 (stat) ± 0.003 (syst), where σ_jj^pX and σ_jj are the single-diffractive and inclusive dijet cross sections, respectively. The results are compared with predictions from models of diffractive and nondiffractive interactions. Monte Carlo predictions based on the HERA diffractive parton distribution functions agree well with the data when corrected for the effect of soft rescattering between the spectator partons.
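    As a quick arithmetic consequence of the quoted numbers (our own back-of-the-envelope, not a figure stated in the paper): with σ_jj^pX ≈ 21.7 nb measured over Δξ = 0.1 and R = 0.025, the implied inclusive dijet cross section in the same kinematic region is σ_jj = (σ_jj^pX/Δξ)/R ≈ (21.7 nb/0.1)/0.025 ≈ 8.7 μb.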

    Erratum to: Measurement of single-diffractive dijet production in proton–proton collisions at √s = 8 TeV with the CMS and TOTEM experiments


    EVALITA Evaluation of NLP and Speech Tools for Italian - December 17th, 2020

    Welcome to EVALITA 2020! EVALITA is the evaluation campaign of Natural Language Processing and Speech Tools for Italian. EVALITA is an initiative of the Italian Association for Computational Linguistics (AILC, http://www.ai-lc.it) and is endorsed by the Italian Association for Artificial Intelligence (AIxIA, http://www.aixia.it) and the Italian Association for Speech Sciences (AISV, http://www.aisv.it).

    Disjoint interval partitioning

    In databases with time-interval attributes, query processing techniques based on sort-merge or sort-aggregate deteriorate. This happens because no total order exists for intervals, and either the start or the end point must be used for the sorting. Doing so leads to inefficient solutions with many unproductive comparisons that do not produce an output tuple. Even if just one tuple with a long interval is present in the data, the number of unproductive comparisons of sort-merge and sort-aggregate becomes quadratic. In this paper we propose disjoint interval partitioning (DIP), a technique to efficiently perform sort-based operators on interval data. DIP divides an input relation into the minimum number of partitions such that all tuples in a partition are non-overlapping. The absence of overlapping tuples guarantees efficient sort-merge computations without backtracking. With DIP the number of unproductive comparisons is linear in the number of partitions. In contrast to current solutions, which perform inefficient random accesses to the active tuples, DIP fetches the tuples in a partition sequentially. We illustrate the generality and efficiency of DIP by describing and evaluating three basic database operators over interval data: join, anti-join, and aggregation.
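    The partitioning step itself can be realised greedily: scan the tuples in start-time order and append each one to a partition whose latest interval has already ended, opening a new partition only when none qualifies; a min-heap over partition end times keeps this O(n log n) and, as in classic interval scheduling, yields the minimum number of partitions. The snippet below is a minimal illustration of this idea under those assumptions, not the implementation from the paper; all names are ours.

```python
# Minimal sketch of disjoint interval partitioning (DIP): greedily assign each
# tuple (sorted by start) to a partition whose last interval has already ended.
# A min-heap keyed on each partition's current end time makes this O(n log n).
import heapq

def dip_partition(tuples):
    """tuples: iterable of (start, end) with start <= end. Returns a list of
    partitions, each a list of pairwise disjoint intervals sorted by start."""
    heap = []        # entries: (end time of partition's last interval, index)
    partitions = []
    for start, end in sorted(tuples):
        if heap and heap[0][0] < start:   # some partition ended before this start
            _, idx = heapq.heappop(heap)  # reuse the partition that ended earliest
        else:
            idx = len(partitions)         # no disjoint partition: open a new one
            partitions.append([])
        partitions[idx].append((start, end))
        heapq.heappush(heap, (end, idx))
    return partitions

# Example: the single long interval (0, 10) forces a second partition.
print(dip_partition([(0, 10), (1, 2), (3, 4), (5, 6)]))
# -> [[(0, 10)], [(1, 2), (3, 4), (5, 6)]]
```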

    Leveraging sort-merge for processing temporal data

    Sorting is, together with partitioning and indexing, one of the core paradigms on which current Database Management System implementations base their query processing. It can be applied to efficiently compute joins, anti-joins, nearest neighbour joins (NNJs), aggregations, etc. It is efficient because, after the sorting, it makes one sequential scan of both inputs and does not redundantly fetch tuples that do not appear in the result. However, sort-based approaches lose their efficiency in the presence of temporal data: i) when dealing with time intervals, backtracking to previously scanned tuples that are still valid also refetches, in vain, tuples that are no longer valid and will not appear in the result; ii) when dealing with timestamps, in computing NNJs with grouping attributes, blocks storing tuples of different groups are refetched multiple times. The goal of this thesis is to provide database systems with support for efficient sort-merge computations in the above cases.

    We first introduce a new operator for computing NNJ queries with integrated support for grouping attributes and selection predicates. Its evaluation tree avoids false hits and redundant fetches, which are major performance bottlenecks in current NNJ solutions. We then show that, in contrast to current solutions, which are not group- and selection-enabled, our approach does not constrain the scope of the query optimizer: query trees using our solution can take advantage of any optimization based on the groups and any optimization on the selection predicates. For example, with our approach the Database Management System can use a sorted index scan to fetch at once all the blocks of the fact table storing tuples with the groups of the outer relation, thus reducing the number of tuples to sort. With lateral NNJs, instead, groups are processed individually, and blocks storing tuples of different groups are fetched multiple times. With our approach the selection can be pushed down before the join if it is selective, or evaluated on the fly while computing the join if it is not. With an indexed NNJ, instead, selection push-down causes a nested loop, which makes the NNJ inefficient due to the quadratic number of pairs checked. We applied our findings and implemented our approach in the kernel of the open-source database system PostgreSQL.

    We then introduce a novel partitioning technique, namely Disjoint Interval Partitioning (DIP), for efficiently performing sort-merge computations on interval data. While current partitioning techniques try to place tuples with similar intervals into the same partitions, DIP does exactly the opposite: it puts tuples that do not overlap into the same partitions. This yields more merges between partitions, but each of them no longer requires a nested loop and can be performed more efficiently using sort-merge. Since DIP outputs the partitions with their elements already sorted, applying a temporal operator to two DIP partitions takes linear time, in contrast to the quadratic time of state-of-the-art solutions (a minimal sketch of such a merge is given below). We illustrate the generality of our approach by describing the implementation of three basic database operators: join, anti-join, and aggregation. Extensive analytical evaluations confirm the efficiency of the solutions presented in this thesis, and we experimentally compare them to state-of-the-art approaches using real-world and synthetic temporal data.
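    Because the tuples inside each DIP partition are pairwise disjoint and sorted by start, an overlap join of two partitions reduces to a single forward two-pointer pass. The sketch below is our own illustration of that per-partition-pair merge, not the thesis code: it reports every overlapping pair in linear time and never backtracks, since whichever interval ends first cannot overlap any later interval on the other side.

```python
# Minimal sketch: overlap join of two DIP partitions. Each input is a list of
# (start, end) intervals that are pairwise disjoint and sorted by start, so a
# single two-pointer pass finds all overlapping pairs without backtracking.
def dip_merge_join(left, right):
    results, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        ls, le = left[i]
        rs, re = right[j]
        if ls <= re and rs <= le:  # the two intervals overlap
            results.append((left[i], right[j]))
        # Advance the side whose interval ends first: within a DIP partition
        # every later interval starts after the current one ends, so the
        # earlier-ending interval can never match anything further on.
        if le < re:
            i += 1
        else:
            j += 1
    return results

print(dip_merge_join([(1, 3), (5, 9)], [(2, 6), (8, 12)]))
# -> [((1, 3), (2, 6)), ((5, 9), (2, 6)), ((5, 9), (8, 12))]
```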