
    General moments of the inverse real Wishart distribution and orthogonal Weingarten functions

    Let W be a random positive definite symmetric matrix distributed according to a real Wishart distribution and let W^{-1}=(W^{ij})_{i,j} be its inverse matrix. We compute the general moments \mathbb{E}[W^{k_1 k_2} W^{k_3 k_4} \cdots W^{k_{2n-1}k_{2n}}] explicitly. To do so, we employ the orthogonal Weingarten function, which was recently introduced in the study of Haar-distributed orthogonal matrices. As applications, we give formulas for moments of traces of a Wishart matrix and its inverse. Comment: 29 pages. The last version differs from the published version, but it includes Appendices
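The simplest (n = 1, identity scale matrix) case of these inverse moments reduces to the classical identity E[W^{-1}] = \Sigma^{-1}/(df - p - 1) for df > p + 1. As a minimal numerical sketch (plain NumPy, illustrative dimensions and sample size of our own choosing, not taken from the paper), this can be checked by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
p, df, n_samples = 3, 10, 20000  # matrix size, degrees of freedom, MC samples

# Draw Wishart_p(df, I) samples as X X^T with X a (p, df) standard normal matrix,
# and average the inverses.
total = np.zeros((p, p))
for _ in range(n_samples):
    x = rng.standard_normal((p, df))
    w = x @ x.T
    total += np.linalg.inv(w)
mc_mean = total / n_samples

# Known first-moment identity: E[W^{-1}] = Sigma^{-1} / (df - p - 1) for df > p + 1
exact = np.eye(p) / (df - p - 1)
print(np.abs(mc_mean - exact).max())  # small Monte Carlo error
```

The paper's contribution is the general case, products of several matrix entries of W^{-1}, which requires the orthogonal Weingarten calculus rather than this one-matrix identity.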

    Modeling Pressure-Ionization of Hydrogen in the Context of Astrophysics

    The recent development of techniques for laser-driven shock compression of hydrogen has opened the door to the experimental determination of its behavior under conditions characteristic of stellar and planetary interiors. The new data probe the equation of state (EOS) of dense hydrogen in the complex regime of pressure ionization. The structure and evolution of dense astrophysical bodies depend on whether the pressure ionization of hydrogen occurs continuously or through a "plasma phase transition" (PPT) between a molecular state and a plasma state. For the first time, the new experiments constrain predictions for the PPT. We show here that the EOS model developed by Saumon and Chabrier can successfully account for the data, and we propose an experiment that should provide a definitive test of the predicted PPT of hydrogen. The usefulness of the chemical picture for computing astrophysical EOS and in modeling pressure ionization is discussed. Comment: 16 pages + 4 figures, to appear in High Pressure Research

    Tear film thickness variations and the role of the tear meniscus

    A mathematical model is developed to investigate the two-dimensional variations in the thickness of tear fluid deposited on the eye surface during a blink. Such variations can become greatly enhanced as the tears evaporate during the interblink period. The four mechanisms considered are: i) the deposition of the tear film from the upper eyelid meniscus, ii) the flow of tear fluid from under the eyelid as it is retracted and from the lacrimal gland, iii) the flow of tear fluid around the eye within the meniscus and iv) the drainage of tear fluid into the canaliculi through the inferior and superior puncta. There are two main insights from the modelling. The first is that the amount of fluid within the tear meniscus is much greater than previously employed in models, and this significantly changes the predicted distribution of tears. The second is that the uniformity of the tear film for a single blink is: i) primarily dictated by the storage in the meniscus, ii) quite sensitive to the speed of the blink and the ratio of the viscosity to the surface tension, and iii) less sensitive to the precise puncta behaviour, the flow under the eyelids or the specific distribution of fluid along the meniscus at the start of the blink. The modelling briefly examines the flow into the puncta, which interact strongly with the meniscus and act to control the meniscus volume. In addition it considers flow from the lacrimal glands, which appears to continue even during the interblink period when the eyelids are stationary.

    Estimations for the Single Diffractive production of the Higgs boson at the Tevatron and the LHC

    The single diffractive production of the standard model Higgs boson is computed using the diffractive factorization formalism, taking into account a parametrization for the Pomeron structure function provided by the H1 Collaboration. We compute the cross sections at next-to-leading order accuracy for the gluon fusion process, which includes QCD and electroweak corrections. The gap survival probability is also introduced to account for the rescattering corrections due to spectator particles present in the interaction, and to this end we compare two different models for the survival factor. The diffractive ratios are predicted for proton-proton collisions at the Tevatron and the LHC for a Higgs boson mass of M_H = 120 GeV. Our results thus provide updated estimates for the diffractive ratios of the single diffractive production of the Higgs boson in the Tevatron and LHC kinematical regimes. Comment: 20 pages, 6 figures, 3 tables
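Schematically, a diffractive ratio of this kind is assembled by multiplying the bare single-diffractive cross section from diffractive factorization by a gap survival factor S^2 and dividing by the inclusive cross section. A minimal sketch of that bookkeeping follows; all cross-section values and S^2 values below are placeholder numbers for illustration, not the paper's results:

```python
def diffractive_ratio(sigma_sd_bare_pb, sigma_incl_pb, s2):
    """R_SD = S^2 * sigma_SD(bare) / sigma_incl.

    s2 is the gap survival probability, which suppresses the bare
    diffractive cross section due to rescattering of spectator particles.
    """
    return s2 * sigma_sd_bare_pb / sigma_incl_pb

# Two different survival-factor models applied to the same bare cross sections
# (hypothetical numbers) give different predicted ratios:
r_model_a = diffractive_ratio(sigma_sd_bare_pb=30.0, sigma_incl_pb=50e3, s2=0.05)
r_model_b = diffractive_ratio(sigma_sd_bare_pb=30.0, sigma_incl_pb=50e3, s2=0.10)
print(r_model_a, r_model_b)
```

The spread between the two model outputs is exactly the kind of survival-factor uncertainty the abstract describes comparing.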

    Hard-scattering factorization with heavy quarks: A general treatment

    A detailed proof of hard-scattering factorization is given with the inclusion of heavy quark masses. Although the proof is explicitly given for deep-inelastic scattering, the methods apply more generally. The power-suppressed corrections to the factorization formula are uniformly suppressed by a power of \Lambda/Q, independently of the size of the heavy quark masses, M, relative to Q. Comment: 52 pages. Version as published plus correction of misprint in Eq. (45)

    Chiral dynamics in form factors, spectral-function sum rules, meson-meson scattering and semilocal duality

    In this work, we perform the one-loop calculation of the scalar and pseudoscalar form factors in the framework of U(3) chiral perturbation theory with explicit tree-level exchanges of resonances. The meson-meson scattering calculation from Ref. [1] is extended as well. The spectral functions of the nonet scalar-scalar (SS) and pseudoscalar-pseudoscalar (PP) correlators are constructed by using the corresponding form factors. After fitting the unknown parameters to the scattering data, we discuss the resonance content of the resulting scattering amplitudes. We also study spectral-function sum rules in the SS-SS, PP-PP and SS-PP sectors as well as semilocal duality from scattering. The former relate the scalar and pseudoscalar spectra between themselves while the latter mainly connects the scalar spectrum with the vector one. Finally we investigate these items as a function of N_c for N_c > 3. All these results place strong constraints on the scalar dynamics and spectroscopy, which are discussed. They are successfully fulfilled by our meson-meson scattering amplitudes and spectral functions. Comment: 45 pages, 17 figures and 4 tables. To match the published version in PRD: a new paragraph is added in the Introduction and two new references are included

    Transverse momentum dependent parton distributions in a light-cone quark model

    The leading twist transverse momentum dependent parton distributions (TMDs) are studied in a light-cone description of the nucleon where the Fock expansion is truncated to consider only valence quarks. General analytic expressions are derived in terms of the six amplitudes needed to describe the three-quark sector of the nucleon light-cone wave function. Numerical calculations for the T-even TMDs are presented in a light-cone constituent quark model, and the role of the so-called pretzelosity in producing a nonspherical shape of the nucleon is investigated. Comment: references added and typos corrected; version to appear in Phys. Rev.

    Applying machine learning to the problem of choosing a heuristic to select the variable ordering for cylindrical algebraic decomposition

    Cylindrical algebraic decomposition (CAD) is a key tool in computational algebraic geometry, particularly for quantifier elimination over real-closed fields. When using CAD, there is often a choice for the ordering placed on the variables. This can be important, with some problems infeasible with one variable ordering but easy with another. Machine learning is the process of fitting a computer model to a complex function based on properties learned from measured data. In this paper we use machine learning (specifically a support vector machine) to select between heuristics for choosing a variable ordering, outperforming each of the separate heuristics. Comment: 16 pages
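The general pipeline, extract numeric features from each input problem, label it with whichever heuristic's ordering performed best, then train a support vector machine to pick a heuristic for unseen problems, can be sketched as follows. The feature choices and all data below are hypothetical stand-ins (the paper derives its own features from the input polynomials), and the code uses scikit-learn's SVC rather than the authors' specific setup:

```python
from sklearn.svm import SVC

# Hypothetical features per problem: (max degree of x1, max degree of x2,
# #monomials containing x1, #monomials containing x2).
X = [
    [3, 1, 5, 2], [4, 1, 6, 1], [2, 1, 4, 2],  # problems where heuristic 0 won
    [1, 3, 2, 5], [1, 4, 1, 6], [1, 2, 2, 4],  # problems where heuristic 1 won
]
# Label = index of the heuristic whose variable ordering solved the problem fastest.
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear").fit(X, y)

# For a new, x1-heavy problem, the trained classifier recommends a heuristic.
choice = clf.predict([[5, 1, 7, 2]])[0]
print(choice)
```

The benefit claimed in the abstract is exactly this: a learned selector can beat any single fixed heuristic by routing each problem to the heuristic most likely to suit it.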

    The Voluntary Adjustment of Railroad Obligations

    Automatic memory management techniques eliminate many programming errors that are hard both to find and to correct. However, these techniques are not yet used in embedded systems with hard real-time applications. The reason is that current methods for automatic memory management have a number of drawbacks. The two major ones are: (1) not being able to always guarantee short real-time deadlines and (2) using large amounts of extra memory. Memory is usually a scarce resource in embedded applications. In this paper we present a new technique, Real-Time Reference Counting (RTRC), that overcomes the current problems and makes automatic memory management attractive also for hard real-time applications. The main contribution of RTRC is that often all memory can be used to store live objects. This should be compared to a memory overhead of about 500% for garbage collectors based on copying techniques and about 50% for garbage collectors based on mark-and-sweep techniques.
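The mechanism RTRC builds on is ordinary reference counting: each object carries a count of incoming references and is reclaimed the instant the count reaches zero, so (unlike copying or mark-and-sweep collectors) no extra semispace or deferred collection cycle is needed. The sketch below shows only this plain mechanism, not the paper's RTRC algorithm with its real-time guarantees; the class and method names are illustrative:

```python
class RCObject:
    """Toy reference-counted object; 'freed' stands in for returning
    the object's memory to the allocator."""

    def __init__(self, payload):
        self.payload = payload
        self.refcount = 0
        self.freed = False

    def incref(self):
        self.refcount += 1

    def decref(self):
        self.refcount -= 1
        if self.refcount == 0:
            self.freed = True  # reclaimed immediately, no collection pause

obj = RCObject("frame")
obj.incref(); obj.incref()   # two owners hold references
obj.decref()                 # one owner releases -> object still live
obj.decref()                 # last owner releases -> reclaimed at once
print(obj.freed)
```

The immediacy of that final `decref` is why the live-object memory footprint can approach the total heap size: dead objects never linger waiting for a collector pass.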

    Blaming Bill Gates AGAIN! Misuse, overuse and misunderstanding of performance data in sport

    Recently in Sport, Education and Society, Williams and Manley (2014) argued against the heavy reliance on technology in professional Rugby Union and elite sport in general. In summary, technology is presented as an elitist, 'gold standard' villain that management and coaches use to exert control and by which players lose autonomy, identity, motivation, social interactions and expertise. In this article we suggest that the sociological interpretations and implications offered by Williams and Manley may be somewhat limited when viewed in isolation. First, we identify some core methodological issues in Williams and Manley's study and critically consider important arguments for utilising technology; notably, to inform coach decision making and generate player empowerment. Secondly, we present a different, yet perhaps equally concerning, practice-oriented interpretation of the same results but from alternative coaching and expertise literature. Accordingly, we suggest that Williams and Manley have perhaps raised their alarm prematurely, inappropriately and on somewhat shaky foundations. We also hope to stimulate others to consider contrary positions, or at least to think about this topic in greater detail. More specifically, we encourage coaches and academics to think carefully about what technology is employed, how and why, and then the means by which these decisions are discussed with and, preferably, sold to players. Certainly, technology can significantly enhance coach decision making and practice, while also helping players to optimise their focus, empowerment and independence in knowing how to achieve their personal and collective goals.