
    Measurements of mixed-mode crack surface displacements and comparison with theory

    Theoretical and experimental techniques are used to determine crack surface displacements under mixed-mode conditions. Crack surface displacements proved quite useful in mode I fracture analysis in that they are directly related to the strain energy release rate and stress intensity factor. It is felt that similar relationships can be developed for the mixed-mode case. A boundary-integral method was developed for application to two-dimensional fracture mechanics problems and applied to the mixed-mode problem. A laser interferometry technique for measuring crack surface displacements under mixed-mode conditions is presented. The experimental measurements are reported, and the results of the two approaches are compared and discussed.
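    As a minimal sketch of the mode I relationship the abstract refers to: in linear elastic fracture mechanics the near-tip crack opening displacement obeys delta(r) = (8 K_I / E') sqrt(r / 2 pi), which can be inverted to recover the stress intensity factor and strain energy release rate from a measured opening. The material constants and measured values below are hypothetical, and the mixed-mode generalization is precisely what the paper investigates.

```python
import numpy as np

# Illustrative material constants (typical aluminum alloy; not from the paper)
E = 72e9      # Young's modulus, Pa
nu = 0.33     # Poisson's ratio
E_prime = E / (1.0 - nu**2)   # plane strain; use E_prime = E for plane stress

def K_I_from_COD(delta, r, E_eff):
    """Invert the LEFM near-tip opening delta = (8 K_I / E') sqrt(r / 2 pi)
    to get the mode I stress intensity factor from a crack opening
    displacement delta measured at distance r behind the crack tip."""
    return delta * E_eff / 8.0 * np.sqrt(2.0 * np.pi / r)

# Example: a 1.2-micron opening measured 0.5 mm behind the tip (made-up numbers)
K_I = K_I_from_COD(delta=1.2e-6, r=0.5e-3, E_eff=E_prime)
G = K_I**2 / E_prime   # strain energy release rate, J/m^2
print(f"K_I = {K_I/1e6:.2f} MPa sqrt(m), G = {G:.1f} J/m^2")
```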

    Development and application of an interferometric system for measuring crack displacements

    The development of the first version of a minicomputer-controlled system that converts fringe pattern motion into a voltage output proportional to displacement is presented. Details of the instrument and the calibration tests are included.
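    For illustration, the core conversion such a fringe-tracking instrument automates can be sketched as follows: in an interferometric displacement gage, a motion of m fringe orders corresponds to a relative displacement delta = m lambda / (2 sin alpha). The wavelength, fringe half-angle, and voltage gain below are assumed typical values, not specifications from the paper.

```python
import numpy as np

LAMBDA = 632.8e-9          # He-Ne laser wavelength, m (a common choice; assumed)
ALPHA = np.radians(42.0)   # fringe observation half-angle (typical; assumed)

def displacement_from_fringes(fringe_orders):
    """Convert observed fringe motion (fringe orders, possibly fractional)
    to relative displacement: delta = m * lambda / (2 sin alpha)."""
    return np.asarray(fringe_orders, dtype=float) * LAMBDA / (2.0 * np.sin(ALPHA))

def voltage_output(fringe_orders, volts_per_meter=1e6):
    """Mimic the instrument's voltage output proportional to displacement
    (the gain here is arbitrary, for illustration only)."""
    return displacement_from_fringes(fringe_orders) * volts_per_meter

print(displacement_from_fringes([0.5, 1.0, 2.5]))  # displacements in meters
```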

    Short fatigue crack behavior in notched 2024-T3 aluminum specimens

    Single-edge, semi-circular notched specimens of Al 2024-T3, 2.3 mm thick, were cyclically loaded at R-ratios of 0.5, 0.0, -1.0, and -2.0. The notch roots were periodically inspected using a replica technique which duplicates the bore surface. The replicas were examined under an optical microscope to determine the initiation of very short cracks and to monitor the growth of short cracks ranging in length from a few tens of microns to the specimen thickness. In addition to short crack growth measurements, the crack opening displacement (COD) was measured for surface cracks as short as 0.035 mm and for through-thickness cracks using the Interferometric Strain/Displacement Gage (ISDG), a laser-based optical technique. The growth rates of short cracks were faster than the long crack growth rates for R-ratios of -1.0 and -2.0. No significant difference between short and long crack growth rates was observed for R = 0.0. Short cracks had slower growth rates than long cracks for R = 0.5. The crack opening stresses measured for short cracks were smaller than those predicted for large cracks, with little difference appearing for positive R-ratios and large differences noted for negative R-ratios.
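    A toy sketch of the crack-closure bookkeeping behind these opening-stress observations: growth is driven by the effective range Delta_K_eff = K_max - max(K_open, K_min), so a lower opening level for short cracks widens their driving range. All numbers below are invented for illustration; the paper's measured opening stresses, not these fixed fractions, determine the actual R-dependence.

```python
def delta_K_eff(K_max, R, K_open_frac):
    """Effective stress-intensity range under crack closure: the crack
    advances only while open, so the driving range is
    K_max - max(K_open, K_min) rather than the full K_max - K_min."""
    K_min = R * K_max
    K_open = K_open_frac * K_max
    return K_max - max(K_open, K_min)

K_max = 10.0  # MPa*sqrt(m), illustrative load level
for R in (0.5, 0.0, -1.0, -2.0):
    dk_long = delta_K_eff(K_max, R, K_open_frac=0.30)   # long crack: substantial closure
    dk_short = delta_K_eff(K_max, R, K_open_frac=0.05)  # short crack: little closure
    print(f"R={R:+.1f}: dK_eff short={dk_short:.1f}, long={dk_long:.1f}")
```

    Note that at R = 0.5 both opening levels lie below K_min, so closure drops out and the short- and long-crack ranges coincide, echoing the small differences reported at positive R-ratios.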

    Parity Conservation in Supersymmetric Vector-Like Theories

    We show that parity is conserved in vector-like supersymmetric theories, such as supersymmetric QCD with massive quarks and no cubic couplings among chiral multiplets, based on the fermionic path-integral methods originally developed by Vafa and Witten. We also examine the effect of supersymmetry breaking through gluino masses and find that parity conservation remains intact in this case as well. Our conclusion is valid when only bosonic parity-breaking observable terms are considered in the path integrals, as in the original Vafa-Witten formulation. Comment: 14 pages, LaTeX, no figures; replaced with corrections of the exponent in old eq. (2.8), misleading expressions in (3.19), comments on fermionic parity-breaking terms, and some references added.
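    Schematically, a Vafa-Witten-style argument of the kind invoked here rests on a positivity bound for the partition function in the presence of a bosonic parity-odd source. The following sketch uses our own notation (O a generic parity-odd bosonic observable, lambda its source), not the paper's.

```latex
% Positivity bound behind the parity-conservation argument (schematic;
% \slashed requires the slashed package). In a vector-like theory with
% massive fermions the Dirac determinant is positive, so the measure is
% a genuine positive measure:
\begin{align}
  Z(\lambda) &= \int \mathcal{D}\phi \,\det(\slashed{D} + m)\,
                e^{-S_B[\phi]}\, e^{i\lambda \int O}, \\
  |Z(\lambda)| &\le \int \mathcal{D}\phi \,\det(\slashed{D} + m)\,
                e^{-S_B[\phi]} = Z(0).
\end{align}
% The vacuum energy is therefore minimized at \lambda = 0, so the
% parity-odd order parameter \langle O \rangle vanishes and parity
% is unbroken.
```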

    Rigidly Supersymmetric Gauge Theories on Curved Superspace

    In this note we construct rigidly supersymmetric gauged sigma models and gauge theories on certain Einstein four-manifolds and discuss constraints on these theories. In work elsewhere, it was recently shown that on some nontrivial Einstein four-manifolds such as AdS_4, N=1 rigidly supersymmetric sigma models are constrained to have target spaces with exact Kähler forms. Similarly, in gauged sigma models and gauge theories, we find that supersymmetry imposes constraints on Fayet-Iliopoulos parameters, which have the effect of enforcing that Kähler forms on quotient spaces be exact. We also discuss general aspects of universality classes of gauged sigma models, as encoded by stacks, and the affine bundle structures implicit in these constructions. Comment: 23 pages; references added; more discussion added; v4: typos fixed.

    Combustion in thermonuclear supernova explosions

    Type Ia supernovae are associated with thermonuclear explosions of white dwarf stars. Combustion processes convert material in nuclear reactions and release the energy required to explode the stars. At the same time, they produce the radioactive species that power the radiation and give rise to the observables. The physical mechanism of the combustion processes, as reviewed here, is therefore the key to understanding these astrophysical events. Theory establishes two distinct modes of propagation for combustion fronts: subsonic deflagrations and supersonic detonations. Both are assumed to play an important role in thermonuclear supernovae. The physical nature and theoretical models of deflagrations and detonations are discussed together with their numerical implementations. A particular challenge arises from the wide range of spatial scales involved in these phenomena: neither the combustion waves nor their interaction with fluid flow and instabilities can be directly resolved in simulations. Substantial modeling effort is required to capture such effects consistently, and the corresponding techniques are discussed in detail. They form the basis of modern multidimensional hydrodynamical simulations of thermonuclear supernova explosions. The problem of deflagration-to-detonation transitions in thermonuclear supernova explosions is briefly mentioned. Comment: Author version of chapter for 'Handbook of Supernovae,' edited by A. Alsabti and P. Murdin, Springer. 24 pages, 4 figures.
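    The deflagration/detonation dichotomy comes down to the front speed relative to the sound speed of the unburnt fuel. A trivial sketch of that classification follows; the sound speed and front speeds are rough, assumed values for degenerate white-dwarf matter, not figures from the chapter.

```python
def combustion_mode(front_speed, sound_speed_unburnt):
    """Classify a combustion front by its Mach number in the unburnt fuel:
    subsonic fronts are deflagrations, supersonic fronts are detonations."""
    mach = front_speed / sound_speed_unburnt
    return "detonation (supersonic)" if mach > 1.0 else "deflagration (subsonic)"

c_s = 5.0e8  # cm/s, rough sound speed in degenerate C/O matter (assumed)
for v in (1.0e7, 5.0e7, 1.2e9):  # hypothetical front speeds, cm/s
    print(f"v = {v:.1e} cm/s -> {combustion_mode(v, c_s)}")
```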

    Weak Decays Beyond Leading Logarithms

    We review the present status of QCD corrections to weak decays beyond the leading logarithmic approximation, including particle-antiparticle mixing and rare and CP-violating decays. After presenting the basic formalism for these calculations, we discuss in detail the effective Hamiltonians for all decays for which the next-to-leading corrections are known. Subsequently, we present the phenomenological implications of these calculations. In particular, we update the values of various parameters and incorporate new information on m_t in view of the recent top quark discovery. One of the central issues in our review is the theoretical uncertainty related to renormalization-scale ambiguities, which is substantially reduced by including next-to-leading order corrections. The impact of this theoretical improvement on the determination of the Cabibbo-Kobayashi-Maskawa matrix is then illustrated in various cases. Comment: 229 pages, 32 PostScript figures (included); uses RevTeX, epsf.sty, rotate.sty, rmpbib.sty (included), times.sty (included; requires LaTeX 2e); complete PostScript version available at ftp://feynman.t30.physik.tu-muenchen.de/pub/preprints/tum-100-95.ps.gz or ftp://feynman.t30.physik.tu-muenchen.de/pub/preprints/tum-100-95.ps2.gz (scaled-down and rotated version to print two pages on one sheet of paper).
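    As a sketch of the leading-logarithmic baseline that the reviewed next-to-leading-order corrections improve upon, the following evolves a Wilson coefficient with the one-loop coupling and anomalous dimension and exhibits the renormalization-scale ambiguity by varying mu around m_b. Lambda_QCD, gamma_0, and the initial condition are placeholder values, not the paper's inputs.

```python
import numpy as np

def alpha_s_1loop(mu, Lambda_QCD=0.3, nf=5):
    """One-loop strong coupling alpha_s(mu) with beta_0 = 11 - 2 nf / 3."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * np.pi / (beta0 * 2.0 * np.log(mu / Lambda_QCD))

def C_LO(mu, mu_W=80.4, gamma0=4.0, nf=5):
    """Leading-order RG evolution from the weak scale down to mu:
    C(mu) = [alpha_s(mu_W) / alpha_s(mu)]^(gamma0 / (2 beta0)) * C(mu_W),
    with C(mu_W) = 1 for illustration."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return (alpha_s_1loop(mu_W) / alpha_s_1loop(mu)) ** (gamma0 / (2.0 * beta0))

# Scale ambiguity: vary mu around m_b by a factor of two and watch C shift.
for mu in (2.4, 4.8, 9.6):
    print(f"mu = {mu:4.1f} GeV: C_LO = {C_LO(mu):.3f}")
```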

    Estimating the NIH Efficient Frontier

    Background: The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to “…lengthen life, and reduce the burdens of illness and disability.” Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcomes, and offer insight into basic-science-related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions, one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Methods and Findings: Using data from 1965 to 2007, we provide estimates of the NIH “efficient frontier”, the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reductions in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that a 28% to 89% greater decrease in average years of life lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Conclusions: Our analysis is intended to serve as a proof of concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent, repeatable, and expressly designed to reduce the burden of disease. By approaching funding decisions in a more analytical fashion, it may be possible to improve their ultimate outcomes while reducing unintended consequences.
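    The underlying mean-variance machinery can be sketched in a few lines: minimize portfolio variance subject to a target expected return and full allocation, tracing the frontier as the target varies. The three-group returns and covariance below are invented placeholders, not the paper's seven-institute estimates from the YLL data.

```python
import numpy as np

mu = np.array([0.08, 0.12, 0.05])          # expected returns per group (hypothetical)
Sigma = np.array([[0.040, 0.006, 0.002],   # return covariance (hypothetical)
                  [0.006, 0.090, 0.004],
                  [0.002, 0.004, 0.020]])

def min_variance_weights(target_return):
    """Closed-form minimum-variance portfolio subject to w'mu = target_return
    and w'1 = 1 (short positions allowed for simplicity)."""
    inv = np.linalg.inv(Sigma)
    ones = np.ones(len(mu))
    A = ones @ inv @ ones
    B = ones @ inv @ mu
    C = mu @ inv @ mu
    D = A * C - B**2
    lam = (C - B * target_return) / D
    gam = (A * target_return - B) / D
    return inv @ (lam * ones + gam * mu)

# Trace a few points on the frontier by sweeping the target return.
for r in (0.06, 0.08, 0.10):
    w = min_variance_weights(r)
    vol = np.sqrt(w @ Sigma @ w)
    print(f"target={r:.2f}: weights={np.round(w, 2)}, volatility={vol:.3f}")
```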