
    Ultraviolet Complete Electroweak Model Without a Higgs Particle

    An electroweak model with running coupling constants described by an energy-dependent entire function is ultraviolet complete and avoids unitarity violations for energies above 1 TeV. The action contains no physical scalar fields and no Higgs particle, and the physical electroweak model fields are local and satisfy microcausality. The W and Z masses are compatible with a symmetry breaking $SU(2)_L \times U(1)_Y \rightarrow U(1)_{\rm em}$, which retains a massless photon. The vertex couplings possess an energy scale $\Lambda_W > 1$ TeV, predicting scattering amplitudes that can be tested at the LHC.
    Comment: 19 pages, no figures, LaTeX file. Equation and text corrected. Reference added. Results remain the same. Final version published in European Physical Journal Plus, 126 (2011)
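
    A hedged illustration of the mechanism named above (the paper's specific entire function is not reproduced in this listing; the Gaussian below is merely one simple entire choice, free of poles and cuts, that realizes the stated behavior):

        \bar{g}(E) = g\, f\!\left(E^2/\Lambda_W^2\right), \qquad f \text{ entire}, \quad f(0) = 1, \qquad \text{e.g. } f(x) = e^{-x},

    so vertex couplings, and with them the scattering amplitudes, are damped for $E \gg \Lambda_W$, and tree-level unitarity can be maintained without Higgs exchange.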

    Constrained Gauge Fields from Spontaneous Lorentz Violation

    Spontaneous Lorentz violation realized through a nonlinear vector field constraint of the type $A_\mu A^\mu = M^2$ ($M$ is the proposed scale for Lorentz violation) is shown to generate massless vector Goldstone bosons, gauging the starting global internal symmetries in arbitrary relativistically invariant theories. The gauge invariance appears in essence as a necessary condition for these bosons not to be superfluously restricted in degrees of freedom, apart from the constraint due to which the true vacuum in a theory is chosen by the Lorentz violation. In the Abelian symmetry case the only possible theory proves to be QED with a massless vector Goldstone boson naturally associated with the photon, while the non-Abelian symmetry case results in a conventional Yang-Mills theory. These theories, both Abelian and non-Abelian, look essentially nonlinear and contain particular Lorentz (and $CPT$) violating couplings when expressed in terms of the pure Goldstone vector modes. However, they do not lead to physical Lorentz violation, owing to the simultaneously generated gauge invariance.
    Comment: 15 pages, minor corrections, version to be published in Nucl. Phys.
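
    As a sketch of how such a constraint leaves massless Goldstone modes (standard nonlinear-realization algebra; the timelike case $M^2 > 0$ is assumed here for illustration):

        A_\mu A^\mu = A_0^2 - \mathbf{A}^2 = M^2
        \;\Longrightarrow\; A_0 = \sqrt{M^2 + \mathbf{A}^2} = M + \frac{\mathbf{A}^2}{2M} + O(M^{-3}),

    so $A_0$ ceases to be an independent degree of freedom and the three spatial components survive as the massless vector Goldstone modes.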

    2d Stringy Black Holes and Varying Constants

    Motivated by the recent interest in models with varying constants, and in whether black hole physics can constrain such theories, two-dimensional charged stringy black holes are considered. We exploit the role of two-dimensional stringy black holes as toy models for exploring paradoxes which may lead to constraints on a theory. A two-dimensional charged stringy black hole is investigated in two different settings: first the black hole is treated as an isolated object, and second it is contained in a thermal environment. In both cases, it is shown that the temperature and the entropy of the two-dimensional charged stringy black hole decrease when its electric charge increases in time. By piecing together our results and previous ones, we conclude that in the context of black hole thermodynamics one cannot derive any model-independent constraints on the varying constants. Therefore, it seems that no varying-constant theory is disfavored by black hole thermodynamics.
    Comment: 12 pages, LaTeX, to appear in JHEP
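
    A hedged way to see the direction of the effect (a generic first-law argument, not the paper's explicit two-dimensional computation):

        dM = T\,dS + \Phi\,dQ \;\Longrightarrow\; dS = -\frac{\Phi}{T}\,dQ \quad (\text{at } dM = 0),

    so for positive temperature $T$ and electric potential $\Phi$, a charge growing in time at fixed mass drives the entropy down, consistent with the decreasing temperature and entropy reported above.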

    Spontaneous Lorentz Violation: Non-Abelian Gauge Fields as Pseudo-Goldstone Vector Bosons

    We argue that non-Abelian gauge fields can be treated as the pseudo-Goldstone vector bosons caused by spontaneous Lorentz invariance violation (SLIV). To this end, the SLIV which evolves in a general Yang-Mills type theory with the nonlinear vector field constraint $\mathrm{Tr}(\boldsymbol{A}_\mu \boldsymbol{A}^\mu) = \pm M^2$ ($M$ is a proposed SLIV scale) imposed is considered in detail. With an internal symmetry group $G$ having $D$ generators, not only the pure Lorentz symmetry $SO(1,3)$ but also the larger accidental symmetry $SO(D,3D)$ of the SLIV constraint itself appears to be spontaneously broken. As a result, while the pure Lorentz violation still generates only one genuine Goldstone vector boson, the accompanying pseudo-Goldstone vector bosons related to the $SO(D,3D)$ breaking also come into play in the final arrangement of the entire Goldstone vector field multiplet. Remarkably, they remain strictly massless, being protected by the gauge invariance of the Yang-Mills theory involved. We show that, although this theory contains a plethora of Lorentz and $CPT$ violating couplings, they do not lead to physical SLIV effects, which turn out to be strictly cancelled in all the lowest-order processes considered. However, physical Lorentz violation could appear if the internal gauge invariance were slightly broken at very small distances influenced by gravity. For an SLIV scale comparable with the Planck scale, the Lorentz violation could become directly observable at low energies.
    Comment: Invited talk given at the Caucasian-German School and Workshop in Hadron Physics (4-7 September 2006, Tbilisi, Georgia)
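
    The Goldstone counting behind this statement, sketched with plain group theory (assuming the vacuum value of the constraint singles out one direction among the $4D$ field components $A_\mu^a$):

        \dim SO(D,3D) - \dim SO(D,3D-1) = \frac{4D(4D-1)}{2} - \frac{(4D-1)(4D-2)}{2} = 4D - 1,

    i.e. $4D - 1$ broken generators and hence $4D - 1$ Goldstone modes: the components of the one genuine Goldstone vector boson together with the pseudo-Goldstone partners that fill out the vector field multiplet.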

    Gap-filling eddy covariance methane fluxes: Comparison of machine learning model predictions and uncertainties at FLUXNET-CH4 wetlands

    Time series of wetland methane fluxes measured by eddy covariance require gap-filling to estimate daily, seasonal, and annual emissions. Gap-filling methane fluxes is challenging because of high variability and complex responses to multiple drivers. To date, there is no widely established gap-filling standard for wetland methane fluxes, with regard both to the best model algorithms and to predictors. This study synthesizes results of different gap-filling methods systematically applied at 17 wetland sites spanning boreal to tropical regions, including all major wetland classes and two rice paddies. Procedures are proposed for: 1) creating realistic artificial gap scenarios, 2) training and evaluating gap-filling models without overstating performance, and 3) predicting half-hourly methane fluxes and annual emissions with realistic uncertainty estimates. Performance is compared between a conventional method (marginal distribution sampling) and four machine learning algorithms. The conventional method achieved median performance similar to that of the machine learning models, but was worse than the best machine learning models and relatively insensitive to predictor choices. Of the machine learning models, decision tree algorithms performed best in cross-validation experiments, even with a baseline predictor set, and artificial neural networks showed comparable performance when using all predictors. Soil temperature was frequently the most important predictor, whilst water table depth was important at sites with substantial water table fluctuations, highlighting the value of data on wetland soil conditions. Raw gap-filling uncertainties from the machine learning models were underestimated, and we propose a method to calibrate uncertainties to observations. The Python code for model development, evaluation, and uncertainty estimation is publicly available. This study outlines a modular and robust machine learning workflow, and makes recommendations for, and evaluates an improved baseline of, methane gap-filling models that can be implemented in multi-site syntheses or standardized products from regional and global flux networks (e.g., FLUXNET).
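
    A minimal sketch of the kind of workflow described above, under assumed, illustrative names (a file wetland_site.csv and columns FCH4 for the half-hourly methane flux, TS for soil temperature, and WTD for water table depth); the study's published code is the authoritative implementation:

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor

        # Illustrative only: realistic artificial-gap scenarios should mimic
        # observed gap-length distributions rather than a uniform random mask.
        df = pd.read_csv("wetland_site.csv")   # hypothetical input file
        predictors = ["TS", "WTD"]             # assumes complete predictor records

        observed = df["FCH4"].notna()
        rng = np.random.default_rng(0)
        artificial = observed & (rng.random(len(df)) < 0.10)
        train = observed & ~artificial

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(df.loc[train, predictors], df.loc[train, "FCH4"])

        # Score on artificial gaps the model never saw, then fill the real gaps.
        pred = model.predict(df.loc[artificial, predictors])
        rmse = float(np.sqrt(np.mean((pred - df.loc[artificial, "FCH4"]) ** 2)))
        print(f"artificial-gap RMSE: {rmse:.3f}")

        real_gaps = ~observed
        df.loc[real_gaps, "FCH4"] = model.predict(df.loc[real_gaps, predictors])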

    The Sudbury Neutrino Observatory

    The Sudbury Neutrino Observatory is a second-generation water Cherenkov detector designed to determine whether the currently observed solar neutrino deficit is a result of neutrino oscillations. The detector is unique in its use of D2O as a detection medium, permitting it to make a solar model-independent test of the neutrino oscillation hypothesis by comparison of the charged- and neutral-current interaction rates. In this paper the physical properties, construction, and preliminary operation of the Sudbury Neutrino Observatory are described. Data and predicted operating parameters are provided whenever possible.
    Comment: 58 pages, 12 figures, submitted to Nucl. Inst. Meth. Uses elsart and epsf style files. For additional information about SNO see http://www.sno.phy.queensu.ca . This version has some new references
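
    For context, the two deuteron channels behind the stated model-independent test (standard SNO reactions, recalled here from general knowledge of the experiment rather than from this abstract):

        \text{CC:}\ \ \nu_e + d \to p + p + e^-        (sensitive to \nu_e only)
        \text{NC:}\ \ \nu_x + d \to p + n + \nu_x      (equal for \nu_e, \nu_\mu, \nu_\tau)

    so the ratio $\Phi_{CC}/\Phi_{NC}$ measures $\Phi(\nu_e)/\Phi(\nu_e + \nu_\mu + \nu_\tau)$, and a value below unity indicates flavor conversion independently of the solar model.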

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
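
    A minimal sketch of the Numba pattern described above (the kernel, its toy response model, and the array names are illustrative, not the DUNE simulation code): a Python function is compiled to a CUDA kernel and launched with one thread per pixel:

        import numpy as np
        from numba import cuda

        @cuda.jit
        def induced_current(charge, distance, current):
            # One thread per pixel; a toy response stands in for the
            # real microphysical induction calculation.
            i = cuda.grid(1)
            if i < current.size:
                current[i] = charge[i] / (1.0 + distance[i] * distance[i])

        n_pixels = 1000
        charge = cuda.to_device(np.random.rand(n_pixels).astype(np.float32))
        distance = cuda.to_device(np.random.rand(n_pixels).astype(np.float32))
        current = cuda.device_array(n_pixels, dtype=np.float32)

        threads = 128
        blocks = (n_pixels + threads - 1) // threads
        induced_current[blocks, threads](charge, distance, current)
        print(current.copy_to_host()[:5])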

    Differentiating code from data in x86 binaries

    Robust, static disassembly is an important part of achieving high coverage for many binary code analyses, such as reverse engineering, malware analysis, reference monitor in-lining, and software fault isolation. However, one of the major difficulties current disassemblers face is differentiating code from data when they are interleaved. This paper presents a machine learning-based disassembly algorithm that segments an x86 binary into subsequences of bytes and then classifies each subsequence as code or data. The algorithm builds a language model from a set of pre-tagged binaries using a statistical data compression technique. It sequentially scans a new binary executable and sets a breaking point at each potential code-to-data or data-to-code transition. The classification of each segment as code or data is based on the minimum cross-entropy. Experimental results are presented to demonstrate the effectiveness of the algorithm.
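
    A toy sketch of the minimum cross-entropy decision at the end of that pipeline (an order-1 byte bigram model with add-one smoothing stands in for the paper's statistical data compression model; the training bytes below are illustrative):

        import math
        from collections import defaultdict

        def train_bigram(tagged_seqs):
            """Byte-bigram counts from pre-tagged training sequences."""
            counts = defaultdict(lambda: defaultdict(int))
            for seq in tagged_seqs:
                for a, b in zip(seq, seq[1:]):
                    counts[a][b] += 1
            return counts

        def cross_entropy(seq, counts):
            """Average bits per byte of seq under the model (add-one smoothing)."""
            bits = 0.0
            for a, b in zip(seq, seq[1:]):
                total = sum(counts[a].values()) + 256
                bits -= math.log2((counts[a][b] + 1) / total)
            return bits / max(len(seq) - 1, 1)

        def classify(segment, code_model, data_model):
            """Label a byte segment by whichever model compresses it better."""
            code_bits = cross_entropy(segment, code_model)
            data_bits = cross_entropy(segment, data_model)
            return "code" if code_bits <= data_bits else "data"

        code_model = train_bigram([b"\x55\x48\x89\xe5\x5d\xc3"])  # x86 prologue/ret
        data_model = train_bigram([b"hello world\x00\x00padding"])
        print(classify(b"\x55\x48\x89\xe5", code_model, data_model))  # -> "code"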

    Cache-, Hash- and Space-Efficient Bloom Filters

