
    Computational Particle Physics for Event Generators and Data Analysis

    High-energy physics data analysis relies heavily on the comparison between experimental and simulated data, as stressed lately by the Higgs search at the LHC and the recent identification of a Higgs-like new boson. The first link in the full simulation chain is event generation, both for backgrounds and for expected signals. Nowadays event generators are based on the automatic computation of the matrix element, or amplitude, for each process of interest. Moreover, recent analysis techniques based on the matrix element likelihood method assign to every event a probability of belonging to each of a given set of possible processes. This method, originally used for the top mass measurement, is computing-intensive but has shown its power at the LHC in extracting the new boson signal from the background. Serving both needs, the automatic calculation of matrix elements is therefore more than ever of prime importance for particle physics. Initiated in the eighties, the techniques have matured for the lowest-order (tree-level) calculations, but they become complex and CPU-time consuming when higher-order calculations involving loop diagrams are necessary, as for QCD processes at the LHC. New calculation techniques for next-to-leading order (NLO) have surfaced, making possible the generation of processes with many final-state particles (up to 6). While NLO calculations are in many cases under control, although not yet fully automatic, even higher-precision calculations involving processes at two loops or more remain a big challenge. After a short introduction to particle physics and the related theoretical framework, we review some of the computing techniques that have been developed to make these calculations automatic. The main available packages and some of the most important applications for simulation and data analysis, in particular at the LHC, are also summarized.
    Comment: 19 pages, 11 figures. Proceedings of CCP (Conference on Computational Physics), Oct. 2012, Osaka (Japan), in IOP Journal of Physics: Conference Series
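    As a rough illustration of the matrix-element likelihood idea described above, the sketch below assigns each event a probability of belonging to a signal or background hypothesis from per-process likelihoods. The Gaussian and exponential densities standing in for the true matrix-element integrals, and the parameter values, are hypothetical placeholders, not anything from the paper.

```python
import numpy as np

# Hypothetical stand-ins for the per-process densities P(x | process).
# In a real matrix-element analysis these come from integrating the squared
# matrix element over transfer functions; here simple one-dimensional shapes
# in a reconstructed observable x are used purely for illustration.
def p_signal(x, mass=125.0, width=2.0):
    return np.exp(-0.5 * ((x - mass) / width) ** 2) / (width * np.sqrt(2 * np.pi))

def p_background(x, slope=0.02):
    return slope * np.exp(-slope * x)  # falling exponential background

def signal_probability(x, f_signal=0.1):
    """Per-event probability of the signal hypothesis, with prior fraction f_signal."""
    ls = f_signal * p_signal(x)
    lb = (1.0 - f_signal) * p_background(x)
    return ls / (ls + lb)

events = np.array([118.0, 124.8, 125.3, 140.0, 200.0])  # toy observable values
print(signal_probability(events))
```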

    On confinement in a light-cone Hamiltonian for QCD

    The canonical front-form Hamiltonian for non-Abelian SU(N) gauge theory in 3+1 dimensions and in the light-cone gauge is mapped non-perturbatively onto an effective Hamiltonian which acts only in the Fock space of a quark and an antiquark. Emphasis is put on the many-body aspects of gauge field theory, and it is shown explicitly how the higher Fock-space amplitudes can be retrieved self-consistently from solutions in the $q\bar q$ space. The approach is based on the novel method of iterated resolvents and on discretized light-cone quantization driven to the continuum limit. It is free of the usual perturbative Tamm-Dancoff truncations in particle number and coupling constant, and it respects all symmetries of the Lagrangian, including covariance and gauge invariance. Approximations are made to the non-truncated formalism. Together with vertex regularization, as opposed to Fock-space regularization, the method allows the renormalization programme to be applied non-perturbatively to a Hamiltonian. The conventional QCD scale is found to arise from regulating the transverse momenta. It conspires with additional mass scales to possibly produce confinement.
    Comment: 15 pages, LaTeX2e, macro svjour included in uu-file; 5 figures, ps-files included in uu-file
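    The projection onto the $q\bar q$ sector rests on resolvents. As a reminder of the generic step only (a standard Feshbach-type effective-interaction identity, which the method of iterated resolvents applies sector by sector, not the paper's full construction), with P projecting onto the $q\bar q$ sector and Q = 1 - P:

```latex
% Effective Hamiltonian acting only in the q\bar q sector, obtained by
% eliminating the higher Fock sectors through the resolvent of QHQ.
\begin{equation}
  H_{\mathrm{eff}}(\omega)\,P|\psi\rangle
  = \Big[\, P H P \;+\; P H Q\,\frac{1}{\omega - Q H Q}\,Q H P \,\Big]\, P|\psi\rangle ,
  \qquad Q = 1 - P .
\end{equation}
```

    The higher Fock-space amplitudes are then recovered from the $q\bar q$ solution through the same resolvent, $Q|\psi\rangle = (\omega - QHQ)^{-1} QHP\,|\psi\rangle$, which is the self-consistency mentioned in the abstract.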

    Providing Information by Resource-Constrained Data Analysis

    The Collaborative Research Center SFB 876 (Providing Information by Resource-Constrained Data Analysis) brings together the research fields of data analysis (Data Mining, Knowledge Discovery in Databases, Machine Learning, Statistics) and embedded systems, and enhances their methods such that information from distributed, dynamic masses of data becomes available anytime and anywhere. The research center approaches these problems with new algorithms that respect the resource constraints of the different scenarios. This Technical Report presents the work of the members of the integrated graduate school.

    The Large Quasar Reference Frame (LQRF) - an optical representation of the ICRS

    The large number and all-sky distribution of quasars from different surveys, along with their presence in large, deep astrometric catalogs, enables the building of an optical materialization of the ICRS following its defining principles: namely, that it is kinematically non-rotating with respect to the ensemble of distant extragalactic objects, aligned with the mean equator and dynamical equinox of J2000, and realized by a list of adopted coordinates of extragalactic sources. Starting from the updated and presumably complete LQAC list of QSOs, the initial optical positions of those quasars are found in the USNO B1.0 and GSC2.3 catalogs, and in the SDSS DR5. The initial positions are next placed onto UCAC2-based reference frames, followed by an alignment with the ICRF, to which the most precise sources from the VLBA and VLA calibrator lists were added when reliable optical counterparts exist. Finally, the LQRF axes are inspected through spherical harmonics, examining the right ascension, declination, and magnitude terms. The LQRF contains J2000-referred equatorial coordinates for 100,165 quasars, well represented across the sky from -83.5 to +88.5 degrees in declination, with 10 arcmin being the average distance between adjacent elements. The global alignment with the ICRF is 1.5 mas, and the individual position accuracies are represented by a Poisson distribution that peaks at 139 mas in right ascension and 130 mas in declination. The frame is complemented by redshift and photometry information from the LQAC. The LQRF is designed to be an astrometric frame, but it is also the basis for the GAIA mission's initial quasar list, and it can be used as a test bench for studies of the quasars' space distribution and luminosity function.
    Comment: 23 pages, 23 figures, 6 tables. Accepted for publication by Astronomy & Astrophysics, on 25 May 200
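    The alignment step between two quasar position lists can be sketched, at lowest order, as a fit of three small rotation angles. The sketch below uses the classical small-angle model under one common sign convention; the function name is hypothetical, and this is only the rigid-rotation part, not the LQRF pipeline's full spherical-harmonic inspection.

```python
import numpy as np

def fit_frame_rotation(ra, dec, dra_cosdec, ddec):
    """Least-squares fit of three small rotation angles (w1, w2, w3), in the
    same angular units as the offsets (e.g. mas), using the standard model
        dra*cos(dec) =  w1*cos(ra)*sin(dec) + w2*sin(ra)*sin(dec) - w3*cos(dec)
        ddec         = -w1*sin(ra)          + w2*cos(ra)
    with ra, dec given in radians."""
    a_top = np.column_stack([np.cos(ra) * np.sin(dec),
                             np.sin(ra) * np.sin(dec),
                             -np.cos(dec)])
    a_bot = np.column_stack([-np.sin(ra),
                             np.cos(ra),
                             np.zeros_like(ra)])
    design = np.vstack([a_top, a_bot])
    rhs = np.concatenate([dra_cosdec, ddec])
    omega, *_ = np.linalg.lstsq(design, rhs, rcond=None)
    return omega  # (w1, w2, w3)
```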

    VNI-3.1: MC-simulation program to study high-energy particle collisions in QCD by space-time evolution of parton-cascades and parton-hadron conversion

    VNI is a general-purpose Monte-Carlo event generator which includes the simulation of lepton-lepton, lepton-hadron, lepton-nucleus, hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions. On the basis of a renormalization-group improved parton description and quantum-kinetic theory, it uses the real-time evolution of parton cascades in conjunction with a self-consistent hadronization scheme that is governed by the dynamics itself. The causal evolution from a specific initial state (determined by the colliding beam particles) is followed through the time development of the phase-space densities of partons, pre-hadronic parton clusters, and final-state hadrons in position space, momentum space, and color space. The parton evolution is described in terms of a space-time generalization of the familiar momentum-space description of multiple (semi)hard interactions in QCD, involving 2 -> 2 parton collisions, 2 -> 1 parton fusion processes, and 1 -> 2 radiation processes. The formation of color-singlet pre-hadronic clusters and their decays into hadrons, on the other hand, are treated using a spatial criterion motivated by confinement and a non-perturbative model for hadronization. This article gives a brief review of the physics underlying VNI, followed by a detailed description of the program itself. The program description emphasizes easy-to-use pragmatism: it explains how to use the program (including a simple example), annotates input and control parameters, and discusses the output data the program provides.
    Comment: revised version, to appear in Computer Physics Communications
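    The 1 -> 2 branching part of such a cascade can be caricatured by a toy generator. The sketch below is purely illustrative (a fixed splitting probability per time step and uniform energy sharing above a cutoff); it is in no way VNI's actual algorithm, and all parameter values are made up.

```python
import random

def toy_cascade(e0=100.0, e_min=1.0, p_split=0.3, n_steps=50, seed=1):
    """Toy 1 -> 2 branching cascade: each active parton may split once per
    time step, sharing its energy between two daughters; partons below
    e_min are frozen out (a crude stand-in for cluster formation)."""
    random.seed(seed)
    active, frozen = [e0], []
    for _ in range(n_steps):
        next_active = []
        for e in active:
            if e < e_min:
                frozen.append(e)                 # too soft to branch further
            elif random.random() < p_split:
                z = random.uniform(0.1, 0.9)     # energy-sharing fraction
                next_active.extend([z * e, (1.0 - z) * e])
            else:
                next_active.append(e)
        active = next_active
    return frozen + active

partons = toy_cascade()
print(len(partons), sum(partons))  # multiplicity and (conserved) total energy
```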

    Enhancing In-Memory Spatial Indexing with Learned Search

    Spatial data is ubiquitous. Massive amounts of data are generated every day from a plethora of sources such as billions of GPS-enabled devices (e.g., cell phones, cars, and sensors), consumer-based applications (e.g., Uber and Strava), and social media platforms (e.g., location-tagged posts on Facebook, Twitter, and Instagram). This exponential growth in spatial data has led the research community to build systems and applications for efficient spatial data processing. In this study, we apply a recently developed machine-learned search technique for single-dimensional sorted data to spatial indexing. Specifically, we partition spatial data using six traditional spatial partitioning techniques and employ machine-learned search within each partition to support point, range, distance, and spatial join queries. Adhering to the latest research trends, we tune the partitioning techniques to be instance-optimized. By tuning each partitioning technique for optimal performance, we demonstrate that: (i) grid-based index structures outperform tree-based index structures (from 1.23× to 2.47×), (ii) learning-enhanced variants of commonly used spatial index structures outperform their original counterparts (from 1.44× to 53.34× faster), (iii) machine-learned search within a partition is faster than binary search by 11.79% - 39.51% when filtering on one dimension, (iv) the benefit of machine-learned search diminishes in the presence of other compute-intensive operations (e.g., scan costs in higher-selectivity queries, Haversine distance computation, and point-in-polygon tests), and (v) index lookup is the bottleneck for tree-based structures, which could potentially be reduced by linearizing the indexed partitions.
    Additional Key Words and Phrases: spatial data, indexing, machine-learning, spatial queries, geospatial
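    The "machine-learned search within a partition" idea can be sketched as: fit a simple model from key to position over the partition's sorted keys, predict a position, and correct with a bounded local search. The sketch below is a generic learned-index illustration under assumptions of mine (a single linear model and a global error bound), not the paper's implementation.

```python
import bisect
import numpy as np

class LearnedPartitionSearch:
    """Approximate the rank of a key in a sorted array with a linear model,
    then finish with binary search inside the model's observed error bound."""

    def __init__(self, sorted_keys):
        self.keys = np.asarray(sorted_keys, dtype=float)
        positions = np.arange(len(self.keys))
        # Least-squares fit: position ~ slope * key + intercept
        self.slope, self.intercept = np.polyfit(self.keys, positions, 1)
        predicted = self.slope * self.keys + self.intercept
        self.max_err = int(np.ceil(np.max(np.abs(predicted - positions))))

    def lookup(self, key):
        guess = int(self.slope * key + self.intercept)
        lo = max(0, guess - self.max_err)
        hi = min(len(self.keys), guess + self.max_err + 1)
        # Bounded binary search around the model's prediction
        i = lo + bisect.bisect_left(self.keys[lo:hi].tolist(), key)
        return i if i < len(self.keys) and self.keys[i] == key else None

keys = np.sort(np.random.default_rng(0).uniform(0, 1e6, 10_000))
index = LearnedPartitionSearch(keys)
assert index.lookup(keys[1234]) == 1234
```

    The trade-off noted in finding (iv) shows up directly here: the model only replaces the search step, so once per-result work (distance computation, point-in-polygon tests, scans) dominates, the lookup speed-up matters less.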

    Finite element analysis of prestressed concrete slabs under impact loading

    Many structures use prestressed concrete slabs as roof or floor elements. Such elements can be impacted by precast walls during construction work, and safety on construction sites is of the utmost importance. For impact problems, the use of empirical formulae is often sufficient for a conservative structural dimensioning. However, if the task is to obtain an accurate computational response that matches experimentally observed results, then empirical formulae, and even most off-the-shelf material models readily available in commercial finite element software, fail to produce consistently satisfactory results. This study conducts a finite element analysis of collisions between prestressed concrete slabs and precast wall elements so that precautionary measures can be taken for such events. The thesis adopts the Enhanced Concrete Damage Plasticity model, implemented in Abaqus/Explicit for the three-dimensional finite element modeling, to accurately capture the dynamic response of concrete. The material model requires user-defined features, including user-defined subroutines. The study involved a sensitivity analysis with respect to internal concrete model parameters to obtain a set of values that can be used universally in any hard-missile impact simulation on concrete. The finite element models with the chosen set of parameters were validated against experimental data provided by VTT in Finland. The validation involved comparing the impactor's residual velocities and the ultimate capacities of the struck slab, which yielded good overall agreement. The developed models were used to investigate 15 concrete-to-concrete collision scenarios involving 200 and 300 mm prestressed concrete slabs hit by precast walls. The scenarios included impact velocities of 10.85 and 18.80 m/s, impactor masses of 3, 5, and 10 t, and two different strike angles. The finite element study showed that construction work under the slabs must be halted during the installation of precast wall elements in these conditions.
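    The sensitivity study and validation loop described above can be organized roughly as in the sketch below. This is only an organizational sketch: run_impact_case is a hypothetical placeholder for launching one simulation (e.g., an Abaqus/Explicit job with the user subroutines) and extracting the impactor's residual velocity, no actual Abaqus API is used or implied, and the parameter names and experimental values are illustrative placeholders, not VTT data.

```python
from itertools import product

# Hypothetical placeholder: run one impact simulation and return the
# impactor's residual velocity in m/s. Wrap the actual solver call here.
def run_impact_case(dilation_angle, damage_threshold, impact_velocity):
    raise NotImplementedError("wrap your solver here")

# Placeholder experimental residual velocities keyed by impact velocity (m/s).
experiments = {10.85: 4.2, 18.80: 9.1}

def sweep_parameters(dilation_angles, damage_thresholds):
    """Grid sweep over two illustrative model parameters; score each
    combination by mean absolute error against the experimental values."""
    results = []
    for phi, dmg in product(dilation_angles, damage_thresholds):
        errors = []
        for v0, v_exp in experiments.items():
            v_sim = run_impact_case(phi, dmg, v0)
            errors.append(abs(v_sim - v_exp))
        results.append((sum(errors) / len(errors), phi, dmg))
    return sorted(results)  # lowest-error parameter set first
```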