
    Efficient Computation of Hash Functions

    The performance of hash-function computations can impose a significant workload on SSL/TLS authentication servers. In the WLCG this workload also appears in the computation of checksums for data transfers. It has been shown in the EGI grid infrastructure that checksum computation can double the I/O load for large file transfers, leading to an increase in re-transfers and timeout errors. Storage managers such as StoRM try to reduce that impact by computing the checksum during the transfer. That may not be feasible, however, when multiple transfer streams are combined with the use of hashes such as MD5 or SHA-2. We present two alternatives for reducing the hash-computation load. First, we introduce implementations of Fast SHA-256 and Fast SHA-512 that can reduce the cost of a hash computation from 15 to under 11 cycles per byte. Second, we introduce and evaluate parallel implementations of two novel tree hash functions: the NIST SHA-3 winner Keccak, and Skein. These functions were conceived to take advantage of parallel data transfers, and their deployment can significantly reduce the timeout and re-transfer errors mentioned above.
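    Cycles per byte is the standard metric for hash throughput. As a rough illustration of how such a figure can be estimated in software, the sketch below times SHA-256 over a buffer and converts wall-clock time to cycles at an assumed clock frequency; the `cpu_ghz` value is an assumption, and a real measurement would use hardware performance counters rather than wall time.

    ```python
    import hashlib
    import time

    def sha256_cycles_per_byte(data: bytes, cpu_ghz: float = 3.0, reps: int = 5) -> float:
        """Rough cycles/byte estimate for SHA-256 over `data`.

        cpu_ghz is an assumed clock frequency; best-of-`reps` timing reduces
        scheduling noise but only gives the right order of magnitude.
        """
        best = float("inf")
        for _ in range(reps):
            t0 = time.perf_counter()
            hashlib.sha256(data).digest()
            best = min(best, time.perf_counter() - t0)
        cycles = best * cpu_ghz * 1e9  # seconds -> cycles at the assumed clock
        return cycles / len(data)
    ```

    On a typical machine this lands in the tens of cycles per byte for pure-software SHA-256, which is the range the abstract's 15-to-11 improvement targets.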

    Non-parametric comparison of histogrammed two-dimensional data distributions using the Energy Test

    When monitoring complex experiments, comparison is often made between regularly acquired histograms of data and reference histograms that represent the ideal state of the equipment. With the larger HEP experiments now ramping up, there is a need to automate this task, since the volume of comparisons could overwhelm human operators. However, the two-dimensional histogram comparison tools available in ROOT have been noted in the past to exhibit shortcomings. We discuss a newer comparison test for two-dimensional histograms, based on the Energy Test of Aslan and Zech, which provides more conclusive discrimination between histograms of data coming from different distributions than the methods provided in a recent ROOT release.
    The Science and Technology Facilities Council, UK
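    To make the idea concrete, here is a minimal sketch of an energy-test statistic between two binned 2-D histograms, using the logarithmic distance weighting proposed by Aslan and Zech. The bin grid, the epsilon regularizer, and the use of unit-spaced bin centers are all assumptions of this sketch, not details from the paper; a real implementation would also calibrate the statistic's significance with permutation tests.

    ```python
    import math

    def energy_test(h1, h2, eps=1e-3):
        """Energy-test statistic between two 2-D histograms on the same grid.

        h1, h2: 2-D lists of bin counts (same shape, assumed here).
        Returns 0 for identical normalized histograms; larger values
        indicate greater dissimilarity.
        """
        n1 = sum(map(sum, h1))
        n2 = sum(map(sum, h2))
        bins = [(i, j) for i in range(len(h1)) for j in range(len(h1[0]))]

        def R(a, b):
            # Aslan-Zech logarithmic distance weighting, regularized at d = 0.
            d = math.hypot(a[0] - b[0], a[1] - b[1])
            return -math.log(d + eps)

        phi = 0.0
        for a in bins:
            for b in bins:
                da = h1[a[0]][a[1]] / n1 - h2[a[0]][a[1]] / n2
                db = h1[b[0]][b[1]] / n1 - h2[b[0]][b[1]] / n2
                phi += 0.5 * da * db * R(a, b)
        return phi
    ```

    The double loop makes this O(B²) in the number of bins B, which is one reason an efficient implementation matters for automated monitoring.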

    Efficient computation of hashes

    The sequential computation of hashes at the core of many distributed storage systems, found for example in grid services, can limit service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by sequential hash computation based on the Merkle–Damgård construction. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
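    The core idea of a tree mode is that leaves are independent, so they can be hashed in parallel, unlike the strictly sequential Merkle–Damgård chain. Below is a minimal two-level sketch using Python's SHA3-256 (a Keccak instance): hash each chunk independently, then hash the concatenated leaf digests. The 1 MiB chunk size is an arbitrary choice, and this is a generic tree construction, not the paper's prototype or an official Keccak tree mode.

    ```python
    import hashlib
    from concurrent.futures import ThreadPoolExecutor

    CHUNK = 1 << 20  # 1 MiB leaves; an arbitrary choice for this sketch

    def tree_hash(data: bytes) -> str:
        """Two-level tree hash: SHA3-256 over each chunk, then over the
        concatenated leaf digests.

        Leaves are independent, so threads can hash them concurrently
        (CPython's hashlib releases the GIL for large buffers).
        """
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
        with ThreadPoolExecutor() as pool:
            leaves = list(pool.map(lambda c: hashlib.sha3_256(c).digest(), chunks))
        return hashlib.sha3_256(b"".join(leaves)).hexdigest()
    ```

    Note the tree digest deliberately differs from the flat `sha3_256(data)` digest; a deployed scheme must fix the tree shape and domain-separate leaves from the root to avoid ambiguity attacks.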

    A well-separated pairs decomposition algorithm for k-d trees implemented on multi-core architectures

    Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
    Variations of k-d trees represent a fundamental data structure used in computational geometry, with numerous applications in science: for example, particle track fitting in the software of the LHC experiments, and simulations of N-body systems in the study of the dynamics of interacting galaxies, particle beam physics, and molecular dynamics in biochemistry. The many-body tree methods devised by Barnes and Hut in the 1980s and the Fast Multipole Method introduced in 1987 by Greengard and Rokhlin use variants of k-d trees to reduce the computation time upper bounds from O(n²) to O(n log n) and even O(n). We present an algorithm that uses the principle of well-separated pair decomposition to always produce compressed trees in O(n log n) work. We present and evaluate parallel implementations of the algorithm that can take advantage of multi-core architectures.
    The Science and Technology Facilities Council, UK
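    For readers unfamiliar with the underlying structure, here is a minimal median-split k-d tree over 2-D points. This is a textbook construction, not the paper's compressed-tree algorithm: sorting at every level makes it O(n log² n), and the O(n log n) bound the paper targets requires presorting or linear-time median selection. The dict-based node layout is an assumption of this sketch.

    ```python
    def build_kdtree(points, depth=0):
        """Build a 2-D k-d tree by splitting on the median point,
        alternating the split axis (x, y, x, ...) with depth."""
        if not points:
            return None
        axis = depth % 2
        pts = sorted(points, key=lambda p: p[axis])
        m = len(pts) // 2  # median index
        return {
            "point": pts[m],
            "axis": axis,
            "left": build_kdtree(pts[:m], depth + 1),
            "right": build_kdtree(pts[m + 1:], depth + 1),
        }
    ```

    Because the left and right subtrees are built independently, the two recursive calls are a natural fork point for the multi-core parallelization the abstract describes.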

    Parallel Monte Carlo Search for Hough Transform

    We investigate the problem of line detection in digital image processing: in particular, how state-of-the-art algorithms behave in the presence of noise, and whether CPU efficiency can be improved by combining Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images, extended in the 1970s to the detection of other shapes. What came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform converts the problem of line detection into one of finding the peak in a vote-counting process over cells that contain the possible points of candidate lines. The detection algorithm can be computationally expensive, both in the demands made upon the processor and on memory, and its detection effectiveness can be reduced in the presence of noise. Our first contribution is an evaluation of a variation of the Radon Transform as a way of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced, and an algorithm for Parallel Monte Carlo Search applied to line detection is also presented. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
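    The vote-counting step can be sketched compactly. In the standard (θ, ρ) parameterization, each point (x, y) votes for every line ρ = x·cos θ + y·sin θ passing through it, and the peak cell of the accumulator identifies the dominant line. The grid resolutions and ρ range below are arbitrary choices for this sketch, not values from the paper.

    ```python
    import math

    def hough_lines(points, n_theta=180, n_rho=100, rho_max=200.0):
        """Accumulate Hough votes over a (theta, rho) grid and return the
        accumulator plus the (theta_index, rho_index) of the peak cell."""
        acc = [[0] * n_rho for _ in range(n_theta)]
        for x, y in points:
            for ti in range(n_theta):
                theta = math.pi * ti / n_theta
                rho = x * math.cos(theta) + y * math.sin(theta)
                ri = int((rho + rho_max) / (2 * rho_max) * n_rho)
                if 0 <= ri < n_rho:
                    acc[ti][ri] += 1
        best = max(((ti, ri) for ti in range(n_theta) for ri in range(n_rho)),
                   key=lambda c: acc[c[0]][c[1]])
        return acc, best
    ```

    This brute-force version costs O(P·n_theta) votes plus O(n_theta·n_rho) for the peak search, which is exactly the budget that the hierarchical decomposition and Monte Carlo search in the paper aim to reduce.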

    Assessment of a novel, capsid-modified adenovirus with an improved vascular gene transfer profile

    <p>Background: Cardiovascular disorders, including coronary artery bypass graft failure and in-stent restenosis remain significant opportunities for the advancement of novel therapeutics that target neointimal hyperplasia, a characteristic of both pathologies. Gene therapy may provide a successful approach to improve the clinical outcome of these conditions, but would benefit from the development of more efficient vectors for vascular gene delivery. The aim of this study was to assess whether a novel genetically engineered Adenovirus could be utilised to produce enhanced levels of vascular gene expression.</p> <p>Methods: Vascular transduction capacity was assessed in primary human saphenous vein smooth muscle and endothelial cells using vectors expressing the LacZ reporter gene. The therapeutic capacity of the vectors was compared by measuring smooth muscle cell metabolic activity and migration following infection with vectors that over-express the candidate therapeutic gene tissue inhibitor of matrix metalloproteinase-3 (TIMP-3).</p> <p>Results: Compared to Adenovirus serotype 5 (Ad5), the novel vector Ad5T*F35++ demonstrated improved binding and transduction of human vascular cells. Ad5T*F35++ mediated expression of TIMP-3 reduced smooth muscle cell metabolic activity and migration in vitro. We also demonstrated that in human serum samples pre-existing neutralising antibodies to Ad5T*F35++ were less prevalent than Ad5 neutralising antibodies.</p> <p>Conclusions: We have developed a novel vector with improved vascular transduction and improved resistance to human serum neutralisation. This may provide a novel vector platform for human vascular gene transfer.</p>

    Evaluation of soybean cultivars under organic management for green manure and grain production

    The objective of this work was to evaluate the performance of six soybean cultivars under organic management for green manure and grain production. A randomized complete block experimental design was used, with four replicates per treatment (cultivar). At harvest, 81 days after seedling emergence, all tested cultivars (Celeste, Surubi, Campo Grande, Mandi, Lambari and Taquari) showed excellent nodulation, ranging from 545 to 760 mg/plant of dry nodule mass. The cultivars Celeste and Taquari, which produced 8.33 and 7.12 t ha⁻¹ of shoot dry biomass respectively, presented other advantageous agronomic traits, such as a short cycle, high accumulation of nutrients (N, P, K, Ca and Mg) in green tissues, and good seed yield. These traits indicate the potential of 'Celeste' and 'Taquari' for summer green manuring in organic farming systems. Five of the evaluated cultivars showed a tendency to lodging, though within acceptable levels. The cultivars Celeste, Surubi, Campo Grande, Mandi and Taquari exceeded the national average soybean yield, estimated at 2,398 kg ha⁻¹ over the last three seasons, by 23%, 32%, 33%, 44% and 70%, respectively.

    Measurement of the cosmic ray spectrum above 4×10^18 eV using inclined events detected with the Pierre Auger Observatory

    A measurement of the cosmic-ray spectrum for energies exceeding 4×10^18 eV is presented, based on the analysis of showers with zenith angles greater than 60° detected with the Pierre Auger Observatory between 1 January 2004 and 31 December 2013. The measured spectrum confirms a flux suppression at the highest energies. Above 5.3×10^18 eV, the "ankle", the flux can be described by a power law E^(−γ) with index γ = 2.70 ± 0.02 (stat) ± 0.1 (sys), followed by a smooth suppression region. For the energy E_s at which the spectral flux has fallen to one-half of its extrapolated value in the absence of suppression, we find E_s = (5.12 ± 0.25 (stat) +1.0/−1.2 (sys)) × 10^19 eV.
    Comment: Replaced with published version. Added journal reference and DOI.

    Energy Estimation of Cosmic Rays with the Engineering Radio Array of the Pierre Auger Observatory

    The Auger Engineering Radio Array (AERA) is part of the Pierre Auger Observatory and is used to detect the radio emission of cosmic-ray air showers. These observations are compared to the data of the surface detector stations of the Observatory, which provide well-calibrated information on the cosmic-ray energies and arrival directions. The response of the radio stations in the 30 to 80 MHz regime has been thoroughly calibrated to enable the reconstruction of the incoming electric field. For the latter, the energy deposit per area is determined from the radio pulses at each observer position and is interpolated using a two-dimensional function that takes into account signal asymmetries due to interference between the geomagnetic and charge-excess emission components. The spatial integral over the signal distribution gives a direct measurement of the energy transferred from the primary cosmic ray into radio emission in the AERA frequency range. We measure 15.8 MeV of radiation energy for a 1 EeV air shower arriving perpendicularly to the geomagnetic field. This radiation energy, corrected for geometrical effects, is used as a cosmic-ray energy estimator. Performing an absolute energy calibration against the surface-detector information, we observe that this radio-energy estimator scales quadratically with the cosmic-ray energy, as expected for coherent emission. We find an energy resolution of the radio reconstruction of 22% for the data set and 17% for a high-quality subset containing only events with at least five radio stations with signal.
    Comment: Replaced with published version. Added journal reference and DOI.
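    The quadratic scaling can be inverted to turn a measured radiation energy into a cosmic-ray energy estimate. Using the two numbers quoted in the abstract (15.8 MeV of radiation energy for a 1 EeV shower perpendicular to the geomagnetic field), a minimal sketch of that inversion looks as follows; the geometry corrections for the geomagnetic angle that the abstract mentions are omitted here.

    ```python
    import math

    # Calibration point quoted in the abstract: 15.8 MeV of radiation energy
    # corresponds to a 1 EeV shower perpendicular to the geomagnetic field.
    E_RAD_REF_MEV = 15.8
    E_CR_REF_EEV = 1.0

    def cosmic_ray_energy_eev(radiation_energy_mev: float) -> float:
        """Invert the quadratic scaling E_rad ∝ E_CR² to estimate the
        cosmic-ray energy (EeV) from the measured radiation energy (MeV).

        Sketch only: geometry corrections for the geomagnetic angle are
        omitted, so this applies to the perpendicular-incidence case.
        """
        return E_CR_REF_EEV * math.sqrt(radiation_energy_mev / E_RAD_REF_MEV)
    ```

    For example, four times the reference radiation energy corresponds to twice the cosmic-ray energy, which is just the coherent-emission scaling stated in the abstract.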