
    Genetic analysis of immunological traits in tilapia

    The immunological response to handling stress of four tilapia species is evaluated. Polymorphism is examined in genes known to influence the immune response in fish.

    Optimal prefix codes for pairs of geometrically-distributed random variables

    Optimal prefix codes are studied for pairs of independent, integer-valued symbols emitted by a source with a geometric probability distribution of parameter $q$, $0<q<1$. By encoding pairs of symbols, it is possible to reduce the redundancy penalty of symbol-by-symbol encoding, while preserving the simplicity of the encoding and decoding procedures typical of Golomb codes and their variants. It is shown that optimal codes for these so-called two-dimensional geometric distributions are \emph{singular}, in the sense that a prefix code that is optimal for one value of the parameter $q$ cannot be optimal for any other value of $q$. This is in sharp contrast to the one-dimensional case, where codes are optimal for positive-length intervals of the parameter $q$. Thus, in the two-dimensional case, it is infeasible to give a compact characterization of optimal codes for all values of the parameter $q$, as was done in the one-dimensional case. Instead, optimal codes are characterized for a discrete sequence of values of $q$ that provide good coverage of the unit interval. Specifically, optimal prefix codes are described for $q=2^{-1/k}$ ($k\ge 1$), covering the range $q\ge 1/2$, and $q=2^{-k}$ ($k>1$), covering the range $q<1/2$. The described codes produce the expected reduction in redundancy with respect to the one-dimensional case, while maintaining low-complexity coding operations. Comment: To appear in IEEE Transactions on Information Theory.
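    The abstract takes symbol-by-symbol Golomb coding of a geometric source as its baseline. For a concrete reference point, here is a minimal Python sketch of that baseline (quotient in unary, remainder in truncated binary); the parameter heuristic and helper name are illustrative assumptions, not the paper's two-dimensional construction, which concerns codes for pairs of symbols.

    ```python
    import math

    def golomb_encode(n: int, m: int) -> str:
        """Encode a nonnegative integer n with a Golomb code of parameter m.

        Quotient is unary-coded, remainder is truncated-binary-coded. A bit
        string is returned for readability; a real coder would pack bits.
        """
        quo, rem = divmod(n, m)
        unary = "1" * quo + "0"                    # quotient in unary
        b = max(1, math.ceil(math.log2(m)))        # bits for the remainder
        cutoff = (1 << b) - m                      # truncated-binary threshold
        if rem < cutoff:
            binary = format(rem, f"0{b - 1}b") if b > 1 else ""
        else:
            binary = format(rem + cutoff, f"0{b}b")
        return unary + binary

    # Example: q = 2**-0.5; m below is a rough heuristic choice for
    # illustration only, not the optimality condition from the literature.
    q = 2 ** -0.5
    m = max(1, round(-1 / math.log2(q)))
    for n in range(6):
        print(n, golomb_encode(n, m))
    ```

    The snippet only illustrates the one-dimensional case; the paper's point is that the analogous optimal codes for pairs are singular in $q$ and must be described per value of $q$.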

    Separation of Scales and a Thermodynamic Description of Feature Learning in Some CNNs

    Full text link
    Deep neural networks (DNNs) are powerful tools for compressing and distilling information. Their scale and complexity, often involving billions of inter-dependent parameters, render direct microscopic analysis difficult. Under such circumstances, a common strategy is to identify slow variables that average the erratic behavior of the fast microscopic variables. Here, we identify a similar separation of scales occurring in fully trained finitely over-parameterized deep convolutional neural networks (CNNs) and fully connected networks (FCNs). Specifically, we show that DNN layers couple only through the second moment (kernels) of their activations and pre-activations. Moreover, the latter fluctuates in a nearly Gaussian manner. For infinite-width DNNs, these kernels are inert, while for finite ones they adapt to the data and yield a tractable data-aware Gaussian Process. The resulting thermodynamic theory of deep learning yields accurate predictions in various settings. In addition, it provides new ways of analyzing and understanding DNNs in general.
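    To make the "second moment (kernels)" concrete, here is a minimal NumPy sketch that computes empirical activation and pre-activation kernels; the toy two-layer ReLU network, widths, and random inputs are assumptions for illustration, not the paper's architecture or data. The abstract's claim is that at infinite width such kernels stay inert under training, whereas at finite width they adapt to the data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup (illustrative assumptions): a small random two-layer FCN
    # and a handful of inputs, just to show what the per-layer kernels are.
    n_samples, d_in, width = 8, 5, 512
    X = rng.normal(size=(n_samples, d_in))

    W1 = rng.normal(size=(d_in, width)) / np.sqrt(d_in)
    W2 = rng.normal(size=(width, width)) / np.sqrt(width)

    # Pre-activations and activations, layer by layer.
    H1 = X @ W1              # pre-activations of layer 1
    A1 = np.maximum(H1, 0)   # ReLU activations of layer 1
    H2 = A1 @ W2             # pre-activations of layer 2

    def empirical_kernel(acts: np.ndarray) -> np.ndarray:
        """Second moment across channels: K[i, j] = mean_c acts[i, c] * acts[j, c]."""
        return acts @ acts.T / acts.shape[1]

    K_post = empirical_kernel(A1)  # activation kernel feeding layer 2
    K_pre = empirical_kernel(H2)   # pre-activation kernel of layer 2

    print(K_post.shape, K_pre.shape)  # both (n_samples, n_samples)
    ```

    In this picture, each layer influences the next only through matrices like `K_post` and `K_pre`, which is the coarse-grained (slow-variable) description the abstract refers to.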