5,213,332 research outputs found

    Effective linear meson model

    The effective action of the linear meson model generates the mesonic n-point functions with all quantum effects included. Based on chiral symmetry and a systematic quark mass expansion, we derive relations between meson masses and decay constants. The model "predicts" values for f_eta and f_eta' which are compatible with observation. This involves a large momentum-dependent eta-eta' mixing angle which differs between the on-shell decays of the eta and the eta'. We also present relations for the masses of the 0^{++} octet. The parameters of the linear meson model are computed and related to cubic and quartic couplings among pseudoscalar and scalar mesons. We also discuss extensions for vector and axial-vector fields. In a good approximation, the exchange of these fields is responsible for the important nonminimal kinetic terms and the eta-eta' mixing encountered in the linear meson model.
    Comment: 79 pages, including 3 abstracts, 9 tables and 9 postscript figures, LaTeX, requires epsf.sty
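
    A common way to make two distinct on-shell mixing angles explicit is the two-angle parametrization of the eta and eta' decay constants (standard notation from the general literature, not necessarily this paper's own conventions):

        f_\eta^8    = f_8 \cos\theta_8,    \qquad f_\eta^0    = -f_0 \sin\theta_0,
        f_{\eta'}^8 = f_8 \sin\theta_8,    \qquad f_{\eta'}^0 =  f_0 \cos\theta_0 .

    A momentum-dependent mixing angle is then naturally different at the two on-shell points q^2 = m_eta^2 and q^2 = m_eta'^2, which is why a single constant angle cannot describe both decays.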

    Optimal linear Glauber model

    Unlike the full nonlinear Glauber model (NLGM), the linear Glauber model (LGM) is exactly solvable, although the detailed balance condition is not generally satisfied. This motivates us to address the issue of writing the transition rate w_j in the best possible linear form, such that the mean squared error in satisfying the detailed balance condition is minimized. The advantage of this work is that, by studying the LGM analytically, we are able to anticipate how the kinetic properties of an arbitrary Ising system depend on the temperature and the coupling constants. The analytical expressions for the optimal values of the parameters in the linear w_j are obtained using a simple Moore-Penrose pseudoinverse matrix. This approach is quite general, is in principle applicable to any system, and reproduces the exact results for the one-dimensional Ising system. In the continuum limit, we obtain a linear time-dependent Ginzburg-Landau (TDGL) equation from Glauber's microscopic model of non-conservative dynamics. We analyze the critical and dynamic properties of the model and show that most of the important results obtained in different studies can be reproduced by our new mathematical approach. We also show that the effect of a magnetic field can easily be studied within our approach; in particular, the inverse relaxation time changes quadratically with a (weak) magnetic field, and the fluctuation-dissipation theorem holds for our model.
    Comment: 25 pages; final version; appeared in Journal of Statistical Physics
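
    As a concrete illustration of the pseudoinverse step, the sketch below fits a linear transition rate to the exact Glauber rate of a one-dimensional Ising chain by least squares (the ansatz and variable names are illustrative, not taken from the paper):

        import numpy as np

        # Fit a linear rate w_j = a0 + a1*s_j*(s_left + s_right) to the exact
        # 1D Glauber rate w_j = (1/2)*(1 - s_j*tanh(beta*J*(s_left + s_right)))
        # in the least-squares sense via the Moore-Penrose pseudoinverse.
        beta, J = 1.0, 1.0
        rows, targets = [], []
        for s_left in (-1, 1):
            for s_j in (-1, 1):
                for s_right in (-1, 1):
                    rows.append([1.0, s_j * (s_left + s_right)])
                    targets.append(0.5 * (1.0 - s_j * np.tanh(beta * J * (s_left + s_right))))
        a0, a1 = np.linalg.pinv(np.array(rows)) @ np.array(targets)
        print(a0, a1)  # a0 = 1/2, a1 = -(1/4)*tanh(2*beta*J)

    Because the exact one-dimensional rate happens to be linear in s_j*(s_left + s_right), the fit is exact here, consistent with the statement that the approach reproduces the exact one-dimensional results; for more general models the same pseudoinverse step returns the best linear approximation instead.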

    Sparse Probit Linear Mixed Model

    Linear Mixed Models (LMMs) are important tools in statistical genetics. When used for feature selection, they make it possible to find a sparse set of genetic traits that best predict a continuous phenotype of interest, while simultaneously correcting for various confounding factors such as age, ethnicity and population structure. Formulated as models for linear regression, LMMs have been restricted to continuous phenotypes. We introduce the Sparse Probit Linear Mixed Model (Probit-LMM), which generalizes the LMM modeling paradigm to binary phenotypes. As a technical challenge, the model no longer possesses a closed-form likelihood function. In this paper, we present a scalable approximate inference algorithm that lets us fit the model to high-dimensional data sets. We show on three real-world examples from different domains that, in the setting of binary labels, our algorithm leads to better prediction accuracies and also selects features which show less correlation with the confounding factors.
    Comment: Published version, 21 pages, 6 figures
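
    The generative model behind the Probit-LMM, and why its likelihood has no closed form, can be sketched as follows (a minimal simulation with hypothetical names; plain Monte Carlo stands in for the paper's scalable approximation):

        import numpy as np
        from scipy.stats import norm
        from scipy.special import logsumexp

        rng = np.random.default_rng(0)
        n, d = 100, 20
        X = rng.standard_normal((n, d))                    # genetic features
        K = X @ X.T / d                                    # kinship-style similarity matrix
        w_true = np.zeros(d); w_true[:3] = 1.0             # sparse true effects
        u = rng.multivariate_normal(np.zeros(n), 0.5 * K)  # confounding random effect
        y = np.sign(X @ w_true + u + rng.standard_normal(n))  # binary phenotype in {-1, +1}

        def mc_log_likelihood(w, tau, n_samples=2000):
            # log p(y | X, w, tau): the integral over u ~ N(0, tau*K) has no
            # closed form under the probit link, so estimate it by Monte Carlo.
            L = np.linalg.cholesky(tau * K + 1e-8 * np.eye(n))
            us = (L @ rng.standard_normal((n, n_samples))).T
            logp = norm.logcdf(y * (X @ w + us)).sum(axis=1)  # log p(y | u) per sample
            return logsumexp(logp) - np.log(n_samples)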

    Model Checking Linear Logic Specifications

    The overall goal of this paper is to investigate the theoretical foundations of algorithmic verification techniques for first-order linear logic specifications. The fragment of linear logic we consider is based on the linear logic programming language LO, enriched with universally quantified goal formulas. Although LO was originally introduced as a theoretical foundation for extensions of logic programming languages, it can also be viewed as a very general language for specifying a wide range of infinite-state concurrent systems. Our approach is based on the relation between backward reachability and provability highlighted in our previous work on propositional LO programs. Following this line of research, we define here a general framework for the bottom-up evaluation of first-order linear logic specifications. The evaluation procedure is based on an effective fixpoint operator working on a symbolic representation of infinite collections of first-order linear logic formulas. The theory of well quasi-orderings can be used to provide sufficient conditions for the termination of the evaluation of nontrivial fragments of first-order linear logic.
    Comment: 53 pages, 12 figures. Under consideration for publication in Theory and Practice of Logic Programming
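
    The shape of such a bottom-up evaluation can be conveyed by a generic backward-reachability fixpoint (a schematic sketch; pre and the subsumption test stand in for the paper's symbolic operators on first-order LO formulas):

        def backward_fixpoint(goals, pre, subsumes):
            """Saturate a set of assertions under a predecessor operator.

            goals: seed elements; pre(x): iterable of predecessors of x;
            subsumes(a, b): True if a is at least as general as b. Over a
            well-quasi-ordered domain the basis of minimal elements
            eventually stabilizes, which yields termination.
            """
            frontier, basis = list(goals), []
            while frontier:
                x = frontier.pop()
                if any(subsumes(b, x) for b in basis):
                    continue                              # x adds nothing new
                basis = [b for b in basis if not subsumes(x, b)]
                basis.append(x)                           # keep only minimal elements
                frontier.extend(pre(x))
            return basis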

    A Linear/Producer/Consumer Model of Classical Linear Logic

    This paper defines a new proof- and category-theoretic framework for classical linear logic that separates reasoning into one linear regime and two persistent regimes corresponding to ! and ?. The resulting linear/producer/consumer (LPC) logic puts the three classes of propositions on the same semantic footing, following Benton's linear/non-linear formulation of intuitionistic linear logic. Semantically, LPC corresponds to a system of three categories connected by adjunctions reflecting the linear/producer/consumer structure. The paper's metatheoretic results include admissibility theorems for the cut and duality rules, and a translation of the LPC logic into category theory. The work also presents several concrete instances of the LPC model.
    Comment: In Proceedings LINEARITY 2014, arXiv:1502.0441
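
    For orientation, in Benton's linear/non-linear setting that the paper follows, a symmetric monoidal adjunction F \dashv G between a persistent (cartesian) category and the linear category induces the exponential as a comonad (this is the standard LNL picture, not the paper's exact definitions):

        ! \;\cong\; F \circ G : \mathcal{L} \to \mathcal{L} .

    Schematically, LPC uses two such adjunctions, one for the producer category and a dual one for the consumer category, so that ! and ? arise on the same footing, with the classical duality ?A \cong (!(A^\perp))^\perp recovered from the usual De Morgan definition.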

    Model-independent rate control for intra-coding based on piecewise linear approximations

    This paper proposes a rate control (RC) algorithm for intra-coded sequences (I-frames) within the context of block-based predictive transform coding that departs from using trained models to approximate the rate-distortion (R-D) characteristics of the video sequence. Our algorithm employs piecewise linear approximations of the R-D curve of a frame at the block level. Specifically, it uses information about the rate and distortion of already compressed blocks within the current frame to linearly approximate the slope of the R-D curve of each block. The proposed algorithm is implemented in the High Efficiency Video Coding (H.265/HEVC) standard and compared with its current RC algorithm, which is based on a trained model. Evaluations on a variety of intra-coded sequences show that the proposed RC algorithm not only attains the overall target bit rate more accurately than the RC algorithm used by H.265/HEVC, but is also capable of encoding each I-frame at a more constant bit rate according to the overall bit budget.
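
    The block-level idea can be sketched as follows (illustrative Python; the names and the two-point slope rule are assumptions, not the paper's exact algorithm):

        import math

        def rd_slope(coded):
            """Piecewise-linear estimate of -dD/dR from the last two coded blocks.
            coded: list of (rate, distortion) pairs in coding order (assumed)."""
            (r0, d0), (r1, d1) = coded[-2], coded[-1]
            return (d0 - d1) / max(r1 - r0, 1e-9)   # nonnegative for a convex R-D curve

        def block_qp(coded):
            """Interpret the local slope as lambda and map it to a QP using the
            lambda-QP relation commonly used in HEVC rate control; illustrative only."""
            lam = max(rd_slope(coded), 1e-3)
            return int(round(4.2005 * math.log(lam) + 13.7122))

    The local linear segment then steers the quantizer of each new block toward its share of the remaining frame budget, without any pretrained R-D model.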

    Jeffreys-prior penalty, finiteness and shrinkage in binomial-response generalized linear models

    Penalization of the likelihood by Jeffreys' invariant prior, or by a positive power thereof, is shown to produce finite-valued maximum penalized likelihood estimates in a broad class of binomial generalized linear models. The class of models includes logistic regression, where the Jeffreys-prior penalty is known additionally to reduce the asymptotic bias of the maximum likelihood estimator, and also models with other commonly used link functions, such as probit and log-log. Shrinkage towards equiprobability across observations, relative to the maximum likelihood estimator, is established theoretically and is studied through illustrative examples. Some implications of finiteness and shrinkage for inference are discussed, particularly when inference is based on Wald-type procedures. A widely applicable procedure is developed for the computation of maximum penalized likelihood estimates, using repeated maximum likelihood fits with iteratively adjusted binomial responses and totals. These theoretical results and methods underpin the increasingly widespread use of reduced-bias and similarly penalized binomial regression models in many applied fields.
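
    For the logistic-regression case, the adjusted-responses idea can be sketched as follows (an illustrative implementation of the Jeffreys-prior-penalized fit, not the authors' code): the penalized score equals the ordinary binomial score with responses y + h/2 and totals m + h, where h are the leverages.

        import numpy as np

        def jeffreys_logistic(X, y, m=None, max_iter=100, tol=1e-8):
            """Maximum penalized likelihood for logistic regression with the
            Jeffreys-prior penalty, via Newton steps on the adjusted score.
            X: (n, p) design; y: successes; m: binomial totals (default 1)."""
            n, p = X.shape
            m = np.ones(n) if m is None else np.asarray(m, float)
            beta = np.zeros(p)
            for _ in range(max_iter):
                pi = 1.0 / (1.0 + np.exp(-(X @ beta)))
                W = m * pi * (1.0 - pi)                           # binomial IRLS weights
                info_inv = np.linalg.inv((X * W[:, None]).T @ X)  # inverse Fisher information
                h = W * np.einsum('ij,jk,ik->i', X, info_inv, X)  # hat (leverage) values
                score = X.T @ (y + h / 2.0 - (m + h) * pi)        # adjusted-response score
                step = info_inv @ score
                beta += step
                if np.max(np.abs(step)) < tol:
                    break
            return beta

    Unlike the ordinary maximum likelihood fit, this iteration returns finite estimates even under complete separation, which is the finiteness property discussed above.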