
    A Large Hadron Electron Collider at CERN

    This document provides a brief overview of the recently published report on the design of the Large Hadron Electron Collider (LHeC), covering its physics programme, accelerator physics, technology and main detector concepts. The LHeC exploits and develops challenging, though principally existing, accelerator and detector technologies. The summary is complemented by brief illustrations of some highlights of the physics programme, which relies on a vastly extended kinematic range, high luminosity and unprecedented precision in deep inelastic scattering. Illustrations are provided regarding high-precision QCD, new physics (Higgs, SUSY) and electron-ion physics. The LHeC is designed to run synchronously with the LHC in the 2020s and to achieve an integrated luminosity of O(100) fb^-1. It will become the cleanest high-resolution microscope ever built and will substantially extend, as well as complement, the investigation of TeV-scale physics opened up by the LHC.
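
    As a reminder of the kinematics referred to above (standard deep inelastic scattering definitions, not taken from the report itself), the event kinematics are described by

        Q^2 = -q^2 = -(k - k')^2, \quad x = \frac{Q^2}{2 P \cdot q}, \quad y = \frac{P \cdot q}{P \cdot k}, \quad Q^2 \simeq s x y,

    where k and k' are the incoming and scattered lepton four-momenta, P the proton four-momentum, and s the squared centre-of-mass energy; extending the accessible (x, Q^2) plane is what the "vastly extended kinematic range" refers to.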

    Jets in Hadron-Hadron Collisions

    In this article, we review some of the complexities of jet algorithms and of the resulting comparisons of data to theory. We review the extensive experience with jet measurements at the Tevatron, the extrapolation of this acquired wisdom to the LHC, and the differences between the Tevatron and LHC environments. We also describe a framework (SpartyJet) for the convenient comparison of results using different jet algorithms.
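
    To make the "complexities of jet algorithms" concrete, here is a minimal, illustrative Python sketch of the generalised-kt sequential recombination family (p = 1: kt; p = 0: Cambridge/Aachen; p = -1: anti-kt). This is not the SpartyJet API, just a naive O(N^3) reconstruction of the textbook algorithm; real analyses use FastJet, and the pt-weighted recombination below ignores phi wrap-around for brevity.

        import math

        def delta_r2(a, b):
            """Squared angular distance in (rapidity, phi) space."""
            dy = a[1] - b[1]
            dphi = abs(a[2] - b[2])
            if dphi > math.pi:
                dphi = 2.0 * math.pi - dphi
            return dy * dy + dphi * dphi

        def cluster(particles, R=0.4, p=-1):
            """Generalised-kt clustering; particles are (pt, y, phi) tuples."""
            objs = [list(x) for x in particles]
            jets = []
            while objs:
                # smallest beam distance d_iB = pt^(2p)
                imin, jmin, dmin = 0, None, objs[0][0] ** (2 * p)
                for i, (pt, _, _) in enumerate(objs):
                    if pt ** (2 * p) < dmin:
                        imin, jmin, dmin = i, None, pt ** (2 * p)
                # smallest pairwise distance d_ij
                for i in range(len(objs)):
                    for j in range(i + 1, len(objs)):
                        d = (min(objs[i][0] ** (2 * p), objs[j][0] ** (2 * p))
                             * delta_r2(objs[i], objs[j]) / (R * R))
                        if d < dmin:
                            imin, jmin, dmin = i, j, d
                if jmin is None:
                    jets.append(tuple(objs.pop(imin)))  # beam wins: final jet
                else:
                    a = objs.pop(jmin)  # j > i, so pop the larger index first
                    b = objs.pop(imin)
                    pt = a[0] + b[0]
                    objs.append([pt,
                                 (a[0] * a[1] + b[0] * b[1]) / pt,
                                 (a[0] * a[2] + b[0] * b[2]) / pt])
            return sorted(jets, reverse=True)  # hardest jets first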

    Film cooling from a row of holes supplemented with anti-vortex holes

    Film cooling is a technique employed to protect the external surface of gas turbine blades from the hot mainstream gas by ejecting internal coolant air through discrete holes or slots at several locations on the blade surface. Passing the coolant through conventional cylindrical holes generates a pair of vortices that lifts the coolant jet off the surface instead of letting it adhere. The present study investigates the cooling enhancement obtained by adding anti-vortex holes to the main cylindrical film cooling holes. Both the heat transfer coefficient and the film cooling effectiveness are determined experimentally downstream of the film cooling hole exits on a flat plate, in a single test, using the transient infrared thermography technique. Six cases with different geometries and orientations of the anti-vortex holes relative to the main film cooling holes are thoroughly investigated. The results suggest that the anti-vortex holes mitigate the effect of the vortex pair. Film cooling is best when the anti-vortex holes are close to the primary holes and branch from their base; when the anti-vortex holes are laid back in the upstream region, film cooling diminishes considerably. Although the heat transfer coefficient increases in the cases with high film cooling effectiveness, the overall heat flux ratio relative to standard cylindrical holes is much lower. Cases with anti-vortex holes placed near the main holes therefore show promising results.
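
    For context, the two quantities measured in such experiments are conventionally defined as follows (textbook definitions, not specific to this study): the adiabatic film cooling effectiveness and the net heat flux ratio,

        \eta = \frac{T_\infty - T_{aw}}{T_\infty - T_c}, \qquad \frac{q''}{q''_0} = \frac{h}{h_0}\left(1 - \frac{\eta}{\phi}\right),

    where T_\infty is the mainstream temperature, T_{aw} the adiabatic wall temperature, T_c the coolant temperature, h and h_0 the heat transfer coefficients with and without film injection, and \phi the overall cooling effectiveness. A net heat flux ratio below unity indicates a net cooling benefit, which is the sense in which "much lower" is used above.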

    Geometry of enstrophy and dissipation, grid resolution effects and proximity issues in turbulence

    We perform a multi-scale non-local geometrical analysis of the structures extracted from the instantaneous enstrophy and kinetic-energy dissipation-rate fields of a numerical database of incompressible homogeneous isotropic turbulence decaying in time, obtained by DNS in a periodic box. Three resolutions are considered: 256^3, 512^3 and 1024^3 grid points, with k_max η̄ approximately 1, 2 and 4, respectively, the same initial conditions and Re_λ ≈ 77. This allows a comparison of the geometry of the structures obtained at different resolutions. At the highest resolution, structures of enstrophy and dissipation evolve in a continuous distribution from blob-like and moderately stretched tube-like shapes at the large scales to highly stretched sheet-like structures at the small scales. The intermediate scales show a predominance of tube-like structures in both fields, much more pronounced for the enstrophy. At intermediate and small scales, the dissipation field tends towards structures of lower curvedness than those of the enstrophy. The 256^3 case (k_max η̄ ≈ 1) is unable to detect the predominance of highly stretched sheet-like structures at the smaller scales in either field. The same non-local methodology for the study of the geometry of structures, but without the multi-scale decomposition, is applied to two scalar fields used by existing local criteria for the eduction of tube- and sheet-like structures in turbulence, Q and [A_ij]_+, respectively, obtained from invariants of the velocity-gradient tensor and related quantities, in the 1024^3 case. This adds a non-local geometrical characterization and classification to those local criteria, assessing their validity in educing particular geometries. Finally, we introduce a new methodology for the study of proximity among structures of different fields, based on geometrical considerations and non-local analysis that account for the spatial extent of the structures, and we apply it to the four fields studied above. Tube-like structures of Q are predominantly surrounded by sheet-like structures of [A_ij]_+, which appear at closer distances. For the enstrophy, tube-like structures at an intermediate scale are primarily surrounded by enstrophy sheets of smaller scales and by dissipation structures at the same and smaller scales; a secondary contribution comes from enstrophy tubes at smaller scales, appearing at farther distances. Different configurations of composite structures are presented.
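
    For reference, the Q criterion mentioned above is the standard second invariant of the velocity-gradient tensor (a textbook definition, not specific to this paper):

        Q = \tfrac{1}{2}\left(\Omega_{ij}\Omega_{ij} - S_{ij}S_{ij}\right), \qquad S_{ij} = \tfrac{1}{2}\left(\partial_j u_i + \partial_i u_j\right), \quad \Omega_{ij} = \tfrac{1}{2}\left(\partial_j u_i - \partial_i u_j\right),

    so that regions with Q > 0 are rotation-dominated (tube-like vortical structures) and regions with Q < 0 are strain-dominated, which is why Q is used to educe tubes while a strain-based criterion such as [A_ij]_+ is used to educe sheets.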

    A Survey of Constrained Combinatorial Testing

    Combinatorial Testing (CT) is a potentially powerful testing technique, but its failure-revealing ability can be dramatically reduced if it fails to handle constraints in an adequate and efficient manner. To ensure the wider applicability of CT to constrained problem domains, large and diverse efforts have been invested in techniques and applications of constrained combinatorial testing. In this paper, we provide a comprehensive survey of the representations, influences, and techniques that pertain to constraints in CT, covering 129 papers published between 1987 and 2018. The survey not only categorises the various constraint handling techniques, but also reviews the comparatively less well-studied, yet potentially important, constraint identification and maintenance techniques. Since real-world programs are usually constrained, this survey should interest researchers and practitioners looking to use and study constrained combinatorial testing techniques.
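
    To illustrate what "handling constraints" means in CT, here is a minimal greedy Python sketch that builds a 2-wise (pairwise) test suite while rejecting configurations that violate user-supplied constraints. Real tools encode constraints for a SAT or CSP solver rather than enumerating the whole configuration space as done here, and all parameter names below are made up.

        from itertools import combinations, product

        def pairwise_suite(params, constraints):
            """Greedy 2-wise suite; params maps name -> values, constraints are
            callables taking a (possibly partial) config dict, False = invalid."""
            names = sorted(params)
            def valid(cfg):
                return all(c(cfg) for c in constraints)
            # target pairs, minus those forbidden outright by the constraints
            todo = set()
            for a, b in combinations(names, 2):
                for va, vb in product(params[a], params[b]):
                    if valid({a: va, b: vb}):
                        todo.add(((a, va), (b, vb)))
            suite = []
            while todo:
                best, best_cov = None, 0
                for values in product(*(params[n] for n in names)):
                    cfg = dict(zip(names, values))
                    if not valid(cfg):
                        continue  # constraint handling: skip invalid candidates
                    cov = sum(all(cfg[k] == v for k, v in pair) for pair in todo)
                    if cov > best_cov:
                        best, best_cov = cfg, cov
                if best is None:
                    break  # leftover pairs occur only in invalid configurations
                suite.append(best)
                todo -= {p for p in todo if all(best[k] == v for k, v in p)}
            return suite

        # toy example: "mongodb" excludes "hibernate" (names are illustrative)
        params = {"db": ["mysql", "mongodb"], "orm": ["hibernate", "none"],
                  "cache": ["on", "off"]}
        constraints = [lambda c: not (c.get("db") == "mongodb"
                                      and c.get("orm") == "hibernate")]
        for cfg in pairwise_suite(params, constraints):
            print(cfg)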

    Test them all, is it worth it? Assessing configuration sampling on the JHipster Web development stack

    Many approaches for testing configurable software systems start from the same assumption: it is impossible to test all configurations. This has motivated the definition of variability-aware abstractions and sampling techniques to cope with large configuration spaces. Yet there is no theoretical barrier that prevents exhaustively testing all configurations by simply enumerating them, provided the effort required remains acceptable. Moreover, we believe there is a lot to be learned by systematically and exhaustively testing a configurable system. In this case study, we report on the first ever endeavour to test all possible configurations of the industry-strength, open-source configurable software system JHipster, a popular code generator for web applications. We built a testing scaffold for the 26,000+ configurations of JHipster using a cluster of 80 machines over 4 nights, for a total of 4,376 hours (182 days) of CPU time. We find that 35.70% of configurations fail, and we identify the feature interactions that cause the errors. We show that sampling strategies (such as dissimilarity and 2-wise sampling): (1) are more effective at finding faults than the 12 default configurations used in the JHipster continuous integration; (2) can be too costly and exceed the available testing budget. We complement this quantitative analysis with a qualitative assessment by JHipster's lead developers.
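
    The dissimilarity strategy mentioned above can be sketched as a greedy farthest-point selection over configurations viewed as feature sets. This is an illustrative reconstruction under that assumption, not the exact algorithm used in the study, and the feature names are made up.

        def dissimilarity_sample(configs, budget):
            """Greedy farthest-point sampling of configurations, each given as a
            frozenset of enabled feature names; Jaccard distance as dissimilarity."""
            def dist(a, b):
                union = a | b
                return 0.0 if not union else 1.0 - len(a & b) / len(union)
            sample = [configs[0]]
            remaining = list(configs[1:])
            while remaining and len(sample) < budget:
                # pick the config whose nearest already-chosen config is farthest
                pick = max(remaining, key=lambda c: min(dist(c, s) for s in sample))
                sample.append(pick)
                remaining.remove(pick)
            return sample

        # toy usage with made-up feature names
        configs = [frozenset(s) for s in
                   [{"mysql", "gradle"}, {"mysql", "maven"},
                    {"mongodb", "gradle"}, {"mongodb", "maven", "oauth2"}]]
        print(dissimilarity_sample(configs, budget=2))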

    Using Software Testing Techniques to Infer Biological Models

    Years of research in software testing have given us novel ways to reason about and test the behavior of complex software systems containing hundreds of thousands of lines of code. Many of these techniques have been inspired by nature, such as genetic algorithms, swarm intelligence, and ant colony optimization. However, they use a unidirectional analogy: taking from nature without giving back. In this thesis we invert this view and ask whether we can use techniques from the testing and modeling of highly configurable software systems to aid the emerging field of systems biology, which aims to model and predict the behavior of biological organisms. As in configurable systems, the underlying source code (metabolic model) contains both common and variable code elements (reactions) that are executed only under particular configurations (environmental conditions), and these directly impact an organism's observable behavior. We propose the use of sampling, classification, and modeling techniques commonly used in software testing, combined into a process called BioSIMP, which can lead to simplified models and biological predictions. We perform two case studies, the first of which explores and evaluates different classification techniques to infer influential factors in microbial organisms. We then compare several sampling methods to limit the number of experiments required in the laboratory, and show that we can reduce testing by more than two-thirds without negatively impacting the quality of our models. Finally, we perform an end-to-end case study of BioSIMP using both laboratory and simulation data, and show that we can find influencing environmental factors in two microbial organisms, some of which were previously unknown to biologists. Our findings suggest that the configurable-software analogy holds, and that we can identify the variable and common regions of reactions that change with the environment. Advisor: Myra B. Cohen
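
    A minimal sketch of the classification step described above, assuming tabular data where rows are experiments, columns are environmental factors, and the label is an observed phenotype; scikit-learn is used for illustration, and all factor names and values are hypothetical.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        # hypothetical data: rows are experiments, columns are environmental
        # factors (names and values are made up for illustration)
        factors = ["glucose", "oxygen", "pH", "temperature"]
        X = np.array([[1, 1, 7.0, 30], [1, 0, 7.0, 30], [0, 1, 5.5, 37],
                      [0, 0, 5.5, 37], [1, 1, 5.5, 30], [0, 1, 7.0, 37]])
        y = np.array([1, 0, 1, 0, 1, 1])  # observed phenotype: 1 = growth

        clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        # the learned importances rank which factors influence the phenotype
        for name, imp in sorted(zip(factors, clf.feature_importances_),
                                key=lambda t: -t[1]):
            print(f"{name}: {imp:.2f}")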