
    Fatigue tests of axially loaded butt welds up to very high cycles

    Fatigue strength curves established from fatigue tests provide the basis for a fatigue assessment using the nominal stress approach. In the codes valid for steel structures, such as EC 3, the fatigue strength curves for constant amplitude loading have a knee point in the transition region. Beyond this knee point the fatigue strength curve is commonly assumed to be a horizontal asymptote. However, the behaviour of the fatigue strength curve in the range of very high cycles, and more importantly the existence of an endurance limit, are much discussed. For welded joints, experimental data beyond 10⁷ load cycles is limited by the available testing techniques. Testing techniques with high frequencies are necessary to obtain experimental data at very high cycles within a reasonable period of time. For this purpose a testing device developed by a third party, which operates at approximately 390 Hz using alternating-current magnets and resonance amplification, was investigated and advanced for long-term tests reaching 5·10⁸ load cycles. Fatigue tests on axially loaded butt welds under constant amplitude loading were conducted in three test series up to very high cycles, covering both the high and very high cycle regimes. The influence of test frequency and stress ratio is investigated.
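    A minimal sketch of the nominal stress approach with a knee point, as described above, is given below in Python. The detail category, slope and knee position are illustrative assumptions (not values from these tests), and the curve beyond the knee is treated as the horizontal asymptote mentioned in the abstract.

```python
def fatigue_life(delta_sigma, detail_category=90.0, m=3.0,
                 n_ref=2.0e6, n_knee=5.0e6):
    """Constant-amplitude S-N curve with a knee point (illustrative values).

    delta_sigma     applied nominal stress range [MPa]
    detail_category stress range at n_ref cycles [MPa]
    m               slope of the curve above the knee point
    n_ref           reference number of cycles for the detail category
    n_knee          assumed knee point; below the corresponding stress range
                    the curve is treated as a horizontal asymptote
    """
    # Stress range at the knee point from the single-slope part of the curve
    delta_sigma_knee = detail_category * (n_ref / n_knee) ** (1.0 / m)
    if delta_sigma <= delta_sigma_knee:
        return float("inf")  # horizontal asymptote: no failure predicted
    # Basquin-type relation: N = n_ref * (detail_category / delta_sigma) ** m
    return n_ref * (detail_category / delta_sigma) ** m


if __name__ == "__main__":
    for ds in (160.0, 100.0, 60.0):
        print(f"stress range {ds:5.1f} MPa -> N = {fatigue_life(ds):.3g}")
```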

    One-loop corrections to gaugino (co-)annihilation into quarks in the MSSM

    We present the full O(α_s) supersymmetric QCD corrections for gaugino annihilation and co-annihilation into light and heavy quarks in the Minimal Supersymmetric Standard Model (MSSM). We demonstrate that these channels are phenomenologically relevant within the so-called phenomenological MSSM. We discuss selected technical details such as the dipole subtraction method in the case of light quarks and the treatment of the bottom quark mass and Yukawa coupling. Numerical results for the (co-)annihilation cross sections and the predicted neutralino relic density are presented. We show that the impact of including the radiative corrections on the cosmologically preferred region of the parameter space is larger than the current experimental uncertainty from Planck data. Comment: 19 pages, 9 figures. Matches version published in Phys. Rev.
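    Because the predicted relic density scales roughly inversely with the thermally averaged (co-)annihilation cross section, the relevance of such corrections can be gauged with a toy estimate like the sketch below. The K-factors, the leading-order value of Ωh² and the quoted Planck uncertainty are hypothetical placeholders, not numbers from the paper.

```python
def shifted_relic_density(omega_h2_lo, k_factor):
    """Toy estimate: Omega h^2 scales roughly as 1 / <sigma v>, so a K-factor
    on the (co-)annihilation cross section rescales the relic density.
    All numbers used here are illustrative, not results from the paper."""
    return omega_h2_lo / k_factor


if __name__ == "__main__":
    omega_h2_lo = 0.120           # hypothetical leading-order prediction
    planck_uncertainty = 0.0012   # roughly percent-level experimental error
    for k in (1.05, 1.10, 1.20):  # hypothetical NLO K-factors
        shift = abs(shifted_relic_density(omega_h2_lo, k) - omega_h2_lo)
        print(f"K = {k:.2f}: shift in Omega h^2 = {shift:.4f} "
              f"({shift / planck_uncertainty:.1f}x the Planck uncertainty)")
```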

    Interplay of gaugino (co)annihilation processes in the context of a precise relic density calculation

    The latest Planck data allow one to determine the dark matter relic density with previously unparalleled precision. In order to achieve a comparable precision on the theory side, we have calculated the full O(α_s) corrections to the most relevant annihilation and coannihilation processes for relic density calculations within the Minimal Supersymmetric Standard Model (MSSM). The interplay of these processes is discussed. The impact of the radiative corrections on the resulting relic density is found to be larger than the experimental uncertainty of the Planck data. Comment: 7 pages, 2 figures, 18th International Conference From the Planck Scale to the Electroweak Scale, 25-29 May 2015, Ioannina, Greece
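    The interplay of coannihilation channels is commonly captured by the standard effective cross section, in which heavier coannihilation partners are Boltzmann-suppressed. The sketch below implements that textbook weighting; the states, degrees of freedom, mass splittings and cross-section values are arbitrary illustrations, not the MSSM spectrum studied in the paper.

```python
import math

def effective_cross_section(sigma, g, delta, x):
    """Textbook effective (co)annihilation cross section: each channel (i, j)
    is weighted by the equilibrium abundances of the participating states,
    with heavier states suppressed by (1 + delta_i)^(3/2) * exp(-x * delta_i).

    sigma : dict mapping (i, j) -> thermally averaged cross section (toy units)
    g     : internal degrees of freedom of each state
    delta : relative mass splittings (m_i - m_1) / m_1
    x     : m_1 / T
    """
    weights = [gi * (1.0 + di) ** 1.5 * math.exp(-x * di)
               for gi, di in zip(g, delta)]
    norm = sum(weights)
    r = [w / norm for w in weights]
    return sum(s * r[i] * r[j] for (i, j), s in sigma.items())


if __name__ == "__main__":
    # Toy two-state example: a neutralino-like LSP and a 5% heavier partner
    g = [2, 4]
    delta = [0.0, 0.05]
    sigma = {(0, 0): 1.0, (0, 1): 3.0, (1, 0): 3.0, (1, 1): 5.0}
    print(effective_cross_section(sigma, g, delta, x=25.0))
```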

    Precision predictions for supersymmetric dark matter

    The dark matter relic density has been measured by Planck and its predecessors with an accuracy of about 2%. We present theoretical calculations with the numerical program DM@NLO in next-to-leading order SUSY QCD and beyond, which make it possible to reach this precision for gaugino and squark (co-)annihilations, and use them to scan the phenomenological MSSM for viable regions, applying also low-energy, electroweak and hadron collider constraints. Comment: 6 pages, 1 table, 8 figures, proceedings of ICHEP 201

    3D cut-cell modelling for high-resolution atmospheric simulations

    Owing to the recent, rapid development of computer technology, the resolution of atmospheric numerical models has increased substantially. With the use of next-generation supercomputers, atmospheric simulations using horizontal grid intervals of O(100) m or less will gain popularity. At such high resolution more of the steep gradients in mountainous terrain will be resolved, which may result in large truncation errors in models using terrain-following coordinates. In this study, a new 3D Cartesian-coordinate non-hydrostatic atmospheric model is developed. A cut-cell representation of topography based on finite-volume discretization is combined with a cell-merging approach, in which small cut cells are merged with neighboring cells either vertically or horizontally. In addition, a block-structured mesh-refinement technique is introduced to achieve a variable resolution on the model grid, with the finest resolution occurring close to the terrain surface. The model successfully reproduces a flow over a 3D bell-shaped hill in good agreement with the flow predicted by linear theory. The ability of the model to simulate flows over steep terrain is demonstrated using a hemisphere-shaped hill, for which the maximum resolved slope angle is 71 degrees. The advantage of a locally refined grid around a 3D hill, with cut cells at the terrain surface, is also demonstrated using the hemisphere-shaped hill. The model reproduces smooth mountain waves propagating over varying grid resolution without introducing large errors associated with the change of mesh resolution. At the same time, the model shows good scalability on a locally refined grid with the use of OpenMP. Comment: 19 pages, 16 figures. Revised version, accepted for publication in QJRM
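    The cell-merging step described above can be sketched in a few lines: cut cells whose open volume fraction falls below a threshold are assigned to a neighboring target cell, preferring a vertical merge and falling back to a horizontal one. The threshold, 2D layout and neighbor preference below are assumptions for illustration, not the scheme of the paper.

```python
def merge_small_cut_cells(volume_fraction, threshold=0.5):
    """Assign each small cut cell (open volume fraction below `threshold`)
    to a merge target: prefer the cell above (vertical merging), otherwise
    use the right-hand neighbor (horizontal merging).

    volume_fraction : 2D list indexed as [k][i] (vertical index k, horizontal i)
    Returns a dict mapping a small cell (k, i) to the cell it merges with.
    """
    nk, ni = len(volume_fraction), len(volume_fraction[0])
    merges = {}
    for k in range(nk):
        for i in range(ni):
            vf = volume_fraction[k][i]
            if 0.0 < vf < threshold:                 # a small cut cell
                if k + 1 < nk and volume_fraction[k + 1][i] >= threshold:
                    merges[(k, i)] = (k + 1, i)      # merge vertically
                elif i + 1 < ni and volume_fraction[k][i + 1] >= threshold:
                    merges[(k, i)] = (k, i + 1)      # merge horizontally
    return merges


if __name__ == "__main__":
    vf = [[0.2, 1.0, 0.4],   # lowest model level, partially cut by terrain
          [1.0, 1.0, 1.0]]   # level above, fully open
    print(merge_small_cut_cells(vf))   # {(0, 0): (1, 0), (0, 2): (1, 2)}
```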

    Zum Ermüdungsverhalten von Stumpfnahtverbindungen bei sehr hohen Lastwechselzahlen (On the fatigue behaviour of butt-welded joints at very high numbers of load cycles)

    [no abstract]

    Degeneration Effects of Thin-Film Sensors after Critical Load Conditions of Machine Components

    In the context of intelligent components in industrial applications in the automotive, energy or construction sector, sensor monitoring is crucial for security issues and for avoiding long and costly downtimes. This article discusses component-inherent thin-film sensors for this purpose, which, in contrast to conventional sensor technology, can be applied inseparably onto the component’s surface via sputtering, so that a maximum of information about the component’s condition can be generated, especially regarding deformation. The article examines whether the sensors can continue to generate reliable measurement data even after critical component loads have been applied, which extends their field of use to the regime of plastic deformation. For this, knowledge of any change in the sensor properties is necessary for ongoing elastic strain measurements. These novel fundamentals are established for thin-film constantan strain gauges and platinum temperature sensors on steel substrates. In general, a decrease of the k-factor and an increase in the temperature coefficient of resistance with increasing plastic deformation were observed, until sensor failure occurred above 0.5% plastic deformation for constantan (1.3% for platinum). Knowing these values makes it possible to continue measuring elastic strains on a machine component after critical load conditions in terms of plastic deformation. Additionally, a method of sensor-data fusion for the unambiguous determination of plastic deformation and temperature change is presented.
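    The sensor-data fusion mentioned at the end can be illustrated as solving a small linear system: each thin-film sensor’s relative resistance change responds to both strain and temperature, but with different sensitivities, so combining the constantan gauge with the platinum sensor separates the two quantities. The sketch below is a generic two-sensor disentangling example; all sensitivities and readings are illustrative placeholders, not the calibration values or the exact fusion scheme of the article.

```python
def separate_strain_and_temperature(dr_gauge, dr_rtd,
                                    k_gauge=2.0, alpha_gauge=1.0e-5,
                                    k_rtd=0.0, alpha_rtd=3.9e-3):
    """Solve the 2x2 system
        dR/R (strain gauge)           = k_gauge * strain + alpha_gauge * dT
        dR/R (Pt temperature sensor)  = k_rtd   * strain + alpha_rtd   * dT
    for the strain and the temperature change dT.
    The sensitivities are illustrative; the platinum sensor is assumed to be
    strain-insensitive (k_rtd = 0) for simplicity.
    """
    det = k_gauge * alpha_rtd - alpha_gauge * k_rtd
    strain = (dr_gauge * alpha_rtd - alpha_gauge * dr_rtd) / det
    d_temp = (k_gauge * dr_rtd - dr_gauge * k_rtd) / det
    return strain, d_temp


if __name__ == "__main__":
    # Hypothetical readings: 0.1% resistance change on the gauge, 2% on the Pt sensor
    strain, d_temp = separate_strain_and_temperature(1.0e-3, 2.0e-2)
    print(f"strain = {strain:.2e}, temperature change = {d_temp:.1f} K")
```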

    Requirements and problems in parallel model development at DWD

    Nearly 30 years after introducing its first computer model for weather forecasting, the Deutscher Wetterdienst (DWD) is developing the 4th generation of its numerical weather prediction (NWP) system. It consists of a global grid point model (GME) based on a triangular grid and a non-hydrostatic Lokal Modell (LM). The operational demand for running this new system is immense and can only be met by parallel computers. From the experience gained in developing earlier NWP models, several new problems had to be taken into account during the design phase of the system. Most important were portability (including efficiency of the programs on several computer architectures) and ease of code maintenance. The organization and administration of the work done by developers from different teams and institutions is also more complex than it used to be. This paper describes the models and gives some performance results. The modular approach used for the design of the LM is explained and its effects on the development are discussed.