
    Finite-size scaling and deconfinement transition: the case of 4D SU(2) pure gauge theory

    A recently introduced method for determining the critical indices of the deconfinement transition in gauge theories, already tested for the case of 3D SU(3) pure gauge theory, is applied here to 4D SU(2) pure gauge theory. The method is inspired by universality and based on the finite-size scaling behavior of the expectation value of simple lattice operators, such as the plaquette. We obtain an accurate determination of the critical index ν, in agreement with the prediction of the Svetitsky-Yaffe conjecture.
    Comment: 11 pages, 3 eps figures

    A new veto for continuous gravitational wave searches

    We present a new veto procedure to distinguish between continuous gravitational wave (CW) signals and the detector artifacts that can mimic their behavior. The veto procedure exploits the fact that a long-lasting coherent disturbance is less likely than a real signal to exhibit a Doppler modulation of astrophysical origin. Therefore, in the presence of an outlier from a search, we perform a multi-step search around the frequency of the outlier with the Doppler modulation turned off (DM-off), and compare these results with the results from the original (DM-on) search. If the results from the DM-off search are more significant than those from the DM-on search, the outlier is most likely due to an artifact rather than a signal. We tune the veto procedure so that it has a very low false dismissal rate. With this veto, we are able to identify as coherent disturbances >99.9% of the 6349 candidates from the recent all-sky low-frequency Einstein@Home search on the data from the Advanced LIGO O1 observing run [1]. We present the details of each identified disturbance in the Appendix.
    Comment: 10 pages, 6 figures, 2 tables
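    The decision rule at the heart of the veto, comparing the significance of the DM-on and DM-off searches, can be sketched as follows. This is a minimal illustration of the logic only; the names, the margin parameter, and the scalar detection statistics are hypothetical stand-ins for the multi-step procedure the abstract describes.

    ```python
    # Illustrative sketch of the DM-off veto logic. Thresholds and
    # names are hypothetical; a real search computes detection
    # statistics from detector data over many follow-up stages.

    def dm_off_veto(stat_dm_on, stat_dm_off, margin=1.0):
        """Flag an outlier as a likely artifact if the search with the
        Doppler modulation turned off (DM-off) is at least as significant
        as the original (DM-on) search.

        stat_dm_on  -- detection statistic from the original search
        stat_dm_off -- detection statistic from the DM-off follow-up
        margin      -- safety factor, tunable to keep the false
                       dismissal rate low
        """
        return stat_dm_off >= margin * stat_dm_on

    # A coherent instrumental line: removing the Doppler modulation
    # makes the recovered candidate *more* significant -> vetoed.
    print(dm_off_veto(stat_dm_on=50.0, stat_dm_off=80.0))   # True

    # An astrophysical signal: the DM-off search loses significance.
    print(dm_off_veto(stat_dm_on=50.0, stat_dm_off=10.0))   # False
    ```

    The margin parameter is where the tuning for a very low false dismissal rate would enter: raising it makes the veto more conservative about dismissing candidates.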

    Dynamic versus Static Structure Functions and Novel Diffractive Effects in QCD

    Initial- and final-state rescattering, neglected in the parton model, have a profound effect in QCD hard-scattering reactions, predicting single-spin asymmetries, diffractive deep inelastic scattering, diffractive hard hadronic reactions, the breakdown of the Lam-Tung relation in Drell-Yan reactions, and nuclear shadowing and non-universal antishadowing--leading-twist physics not incorporated in the light-front wavefunctions of the target computed in isolation. I also discuss the use of diffraction to materialize the Fock states of a hadronic projectile and test QCD color transparency, as well as anomalous heavy quark effects. The presence of direct higher-twist processes where a proton is produced in the hard subprocess can explain the large proton-to-pion ratio seen in high-centrality heavy ion collisions. I emphasize the importance of distinguishing between static observables, such as the probability distributions computed from the square of the light-front wavefunctions, and dynamical observables, which include the effects of rescattering.
    Comment: 8 pages, 1 figure. Presented at Diffraction 2008: International Workshop on Diffraction in High Energy Physics, 9-14 Sep 2008, La Londe-les-Maures, France

    Random template banks and relaxed lattice coverings

    Template-based searches for gravitational waves are often limited by the computational cost associated with searching large parameter spaces. The study of efficient template banks, in the sense of using the smallest number of templates, is therefore of great practical interest. The "traditional" approach to template-bank construction requires every point in parameter space to be covered by at least one template, which rapidly becomes inefficient at higher dimensions. Here we study an alternative approach, where any point in parameter space is covered only with a given probability < 1. We find that by giving up complete coverage in this way, large reductions in the number of templates are possible, especially at higher dimensions. The prime examples studied here are "random template banks", in which templates are placed randomly with uniform probability over the parameter space. In addition to its obvious simplicity, this method turns out to be surprisingly efficient. We analyze the statistical properties of such random template banks, and compare their efficiency to traditional lattice coverings. We further study "relaxed" lattice coverings (using Z_n and A_n* lattices), which similarly cover any signal location only with probability < 1. The relaxed A_n* lattice is found to yield the most efficient template banks at low dimensions (n < 10), while random template banks increasingly outperform any other method at higher dimensions.
    Comment: 13 pages, 10 figures, submitted to PR
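    The core idea, templates placed uniformly at random with coverage guaranteed only with some probability < 1, is easy to demonstrate with a small Monte-Carlo experiment. This sketch uses a unit hypercube with a Euclidean coverage radius as a toy stand-in for the metric mismatch; all parameter values are illustrative, not from the paper.

    ```python
    import math
    import random

    def covering_probability(n_dim, n_templates, radius,
                             n_trials=2000, seed=0):
        """Monte-Carlo estimate of the fraction of random signal
        locations covered by at least one template of a random template
        bank (templates drawn uniformly over the unit hypercube).
        'radius' plays the role of the maximal allowed mismatch;
        this toy uses plain Euclidean distance.
        """
        rng = random.Random(seed)
        bank = [[rng.random() for _ in range(n_dim)]
                for _ in range(n_templates)]
        covered = 0
        for _ in range(n_trials):
            signal = [rng.random() for _ in range(n_dim)]
            if any(math.dist(signal, t) <= radius for t in bank):
                covered += 1
        return covered / n_trials

    # Giving up complete coverage: a modest number of random templates
    # already covers the vast majority of signal locations.
    print(covering_probability(n_dim=2, n_templates=200, radius=0.1))
    ```

    The estimated coverage fraction rises toward 1 as templates are added, but the last few percent are the expensive part, which is exactly the trade-off the abstract exploits.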

    Features of elastic scattering at small t at the LHC

    The problems linked with the extraction of the basic parameters of the hadron elastic scattering amplitude at the LHC are explored. It is shown that one should take into account the saturation regime, which will lead to new effects at the LHC.
    Comment: 3 pages, 6 figures, talk at the International Workshop on "Diffraction in High Energy Physics", La Londe-les-Maures, France (2008)

    A nature-inspired feature selection approach based on hypercomplex information

    Feature selection for a given model can be transformed into an optimization task. The essential idea behind it is to find the most suitable subset of features according to some criterion. Nature-inspired optimization can mitigate this problem by producing compelling yet straightforward solutions when dealing with complicated fitness functions. Additionally, new mathematical representations, such as quaternions and octonions, are being used to handle higher-dimensional spaces. In this context, we introduce a meta-heuristic optimization framework for hypercomplex-based feature selection, where hypercomplex numbers are mapped to real-valued solutions and then transferred onto a boolean hypercube by a sigmoid function. The intended hypercomplex feature selection is tested with several meta-heuristic algorithms and hypercomplex representations, achieving results comparable to some state-of-the-art approaches. The good results achieved by the proposed approach make it a promising tool for feature selection research.
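    The mapping pipeline the abstract describes, hypercomplex number to real value, then sigmoid, then boolean hypercube, can be sketched as below. The reduction from a quaternion to a single real value is an assumption for illustration (here, a scaled norm); the abstract does not specify it, and the actual framework may use a different reduction.

    ```python
    import math

    def sigmoid(x):
        """Squash a real value into (0, 1)."""
        return 1.0 / (1.0 + math.exp(-x))

    def quaternion_to_feature_mask(quaternions, threshold=0.5):
        """Map quaternion-valued solutions onto the boolean hypercube:
        each hypercomplex number is reduced to a real value (here its
        norm divided by the number of components -- one plausible
        choice, assumed for illustration), squashed by a sigmoid, and
        thresholded. True means the corresponding feature is selected.
        """
        mask = []
        for q in quaternions:                    # q = (a, b, c, d)
            real_value = math.sqrt(sum(c * c for c in q)) / len(q)
            mask.append(sigmoid(real_value) > threshold)
        return mask

    # One quaternion per candidate feature.
    print(quaternion_to_feature_mask([(0.9, 0.1, 0.4, 0.2),
                                      (0.0, 0.0, 0.0, 0.0)]))
    # -> [True, False]
    ```

    A meta-heuristic then searches over the quaternion components, while the fitness function evaluates the model on the decoded boolean mask.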

    Data analysis of gravitational-wave signals from spinning neutron stars. V. A narrow-band all-sky search

    We present theory and algorithms to perform an all-sky coherent search for periodic signals of gravitational waves in narrow-band data of a detector. Our search is based on a statistic, commonly called the F-statistic, derived from the maximum-likelihood principle in Paper I of this series. We briefly review the response of a ground-based detector to the gravitational-wave signal from a rotating neutron star and the derivation of the F-statistic. We present several algorithms to calculate this statistic efficiently. In particular, our algorithms are such that one can take advantage of the speed of the fast Fourier transform (FFT) in the calculation of the F-statistic. We construct a grid in the parameter space such that the nodes of the grid coincide with the Fourier frequencies. We present interpolation methods that approximately convert the two integrals in the F-statistic into Fourier transforms so that the FFT algorithm can be applied in their evaluation. We have implemented our methods and algorithms in computer codes, and we present results of the Monte Carlo simulations performed to test these codes.
    Comment: REVTeX, 20 pages, 8 figures
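    The key construction above, a search grid whose nodes coincide with the Fourier frequencies so the statistic can be evaluated at every node in a single transform, can be illustrated with a toy matched filter. A naive DFT stands in for the FFT, and the unnormalized |X_k|^2 is only a toy stand-in for the F-statistic; all names and normalizations here are illustrative.

    ```python
    import cmath
    import math

    def fourier_grid(n_samples, dt):
        """Frequency nodes f_k = k / (N * dt): the grid is chosen so its
        nodes coincide with the Fourier frequencies, which is what lets
        an FFT evaluate the statistic at all nodes at once."""
        return [k / (n_samples * dt) for k in range(n_samples)]

    def dft(x):
        """Naive O(N^2) DFT standing in for an FFT library call."""
        n = len(x)
        return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n)) for k in range(n)]

    # A pure tone placed exactly on grid node k = 3: the toy detection
    # statistic |X_k|^2 peaks at that node and vanishes elsewhere.
    n, dt = 32, 1.0
    x = [cmath.exp(2j * math.pi * 3 * j / n) for j in range(n)]
    stats = [abs(c) ** 2 for c in dft(x)]
    print(stats.index(max(stats)))   # -> 3
    ```

    Signals that fall between grid nodes lose some power to neighboring bins, which is where the interpolation methods mentioned in the abstract come in.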

    Handling dropout probability estimation in convolution neural networks using meta-heuristics

    Deep learning-based approaches have been paramount in recent years, mainly due to their outstanding results in several application domains, ranging from face and object recognition to handwritten digit identification. Convolutional Neural Networks (CNNs) have attracted considerable attention, since they model the intrinsic and complex working mechanisms of the brain. However, one main shortcoming of such models concerns their overfitting problem, which prevents the network from predicting unseen data effectively. In this paper, we address this problem by properly selecting a regularization parameter known as Dropout in the context of CNNs using meta-heuristic-driven techniques. As far as we know, this is the first attempt to tackle this issue using this methodology. Additionally, we also consider a default dropout parameter and a dropout-less CNN for comparison purposes. The results revealed that optimizing Dropout-based CNNs is worthwhile, mainly due to the ease of finding suitable dropout probability values, without needing to set new parameters empirically.
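    The optimization loop implied above has a simple shape: a meta-heuristic proposes dropout probabilities, and a fitness function scores each by training and validating the network. The sketch below uses random search as the simplest possible stand-in for the paper's meta-heuristics, and a toy objective in place of actual CNN training; both are assumptions for illustration.

    ```python
    import random

    def tune_dropout(score_fn, n_iters=20, seed=0):
        """Random search -- the simplest meta-heuristic -- over the
        dropout probability in [0, 1). 'score_fn' is a hypothetical
        stand-in for training a CNN with that dropout rate and
        returning its validation accuracy; fancier meta-heuristics
        change how candidates are proposed, not the loop's shape.
        """
        rng = random.Random(seed)
        best_p, best_score = None, float("-inf")
        for _ in range(n_iters):
            p = rng.random()          # propose a candidate dropout rate
            s = score_fn(p)           # evaluate it (train + validate)
            if s > best_score:
                best_p, best_score = p, s
        return best_p, best_score

    # Toy objective with a known optimum at p = 0.5 (pure illustration;
    # a real run would train a network here).
    toy_score = lambda p: -(p - 0.5) ** 2
    p, s = tune_dropout(toy_score)
    print(round(p, 2))
    ```

    In practice each score_fn evaluation is a full training run, so the meta-heuristic's job is to find a good dropout rate in as few evaluations as possible.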

    No-Fault Insurance Fraud: An Overview
