    Entropy and Entropy Production in Some Applications

    By using entropy and entropy production, we calculate the steady flux of some phenomena. The method we use is a competition method, $S_S/\tau + \sigma = \textit{maximum}$, where $S_S$ is system entropy, $\sigma$ is entropy production and $\tau$ is microscopic interaction time. System entropy is calculated from the equilibrium state by studying the flux fluctuations. The phenomena we study include ionic conduction, atomic diffusion, thermal conduction and viscosity of a dilute gas.
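    Read as a variational principle in the flux $J$ (notation assumed here for illustration, not taken from the paper), the competition method amounts to the stationarity condition
    \[
      \frac{\partial}{\partial J}\left[\frac{S_S(J)}{\tau} + \sigma(J)\right] = 0 ,
    \]
    i.e. the steady flux is the value of $J$ at which the system-entropy term $S_S/\tau$ and the entropy production $\sigma$ jointly reach their maximum.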

    Generalized maximum entropy (GME) estimator: formulation and a monte carlo study

    The origin of entropy dates back to the 19th century. In 1948, Shannon developed the entropy concept as a measure of uncertainty. A decade later, in 1957, Jaynes formulated Shannon's entropy as a method for estimation and inference, particularly for ill-posed problems, by proposing the so-called Maximum Entropy (ME) principle. More recently, Golan et al. (1996) developed the Generalized Maximum Entropy (GME) estimator and started a new discussion in econometrics. This paper is divided into two parts. The first part considers the formulation of this new technique (GME). The second part discusses, through Monte Carlo simulations, the estimation results of GME in the context of non-normal disturbances.
    Keywords: Entropy, Maximum Entropy, ME, Generalized Maximum Entropy, GME, Monte Carlo Experiment, Shannon's Entropy, Non-normal disturbances
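    As a rough sketch of the formulation (notation assumed here, not taken from the paper), GME treats the linear model $y = X\beta + e$ by writing each coefficient and each disturbance as an expectation over user-chosen support points, $\beta_k = \sum_m z_{km} p_{km}$ and $e_t = \sum_j v_{tj} w_{tj}$, and then solving
    \[
      \max_{p,w}\; -\sum_{k,m} p_{km}\ln p_{km} - \sum_{t,j} w_{tj}\ln w_{tj}
      \quad\text{s.t.}\quad
      y_t = \sum_k x_{tk}\sum_m z_{km}p_{km} + \sum_j v_{tj}w_{tj},\;\;
      \sum_m p_{km} = 1,\;\; \sum_j w_{tj} = 1 .
    \]
    The data enter only through the consistency constraints, which is why this type of estimator remains usable in ill-posed and non-normal settings.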

    Updating Probabilities

    We show that Skilling's method of induction leads to a unique general theory of inductive inference, the method of Maximum relative Entropy (ME). The main tool for updating probabilities is the logarithmic relative entropy; other entropies, such as those of Rényi or Tsallis, are ruled out. We also show that Bayes updating is a special case of ME updating and thus that the two are completely compatible.
    Comment: Presented at MaxEnt 2006, the 26th International Workshop on Bayesian Inference and Maximum Entropy Methods (July 8-13, 2006, Paris, France).
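    In the usual ME notation (assumed here for illustration), updating a prior $q(x)$ to a posterior $p(x)$ means selecting
    \[
      p^*(x) = \arg\max_{p}\; S[p,q], \qquad S[p,q] = -\int p(x)\,\ln\frac{p(x)}{q(x)}\,dx ,
    \]
    subject to normalization and to whatever constraints encode the new information; when the constraint simply fixes the observed value of a data variable, the maximizer reproduces the Bayes posterior, which is the sense in which Bayes updating is a special case of ME.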

    On the maximum entropy principle and the minimization of the Fisher information in Tsallis statistics

    We give a new proof of the theorems on the maximum entropy principle in Tsallis statistics. That is, we show that the $q$-canonical distribution attains the maximum value of the Tsallis entropy subject to the constraint on the $q$-expectation value, and that the $q$-Gaussian distribution attains the maximum value of the Tsallis entropy subject to the constraint on the $q$-variance, as applications of the nonnegativity of the Tsallis relative entropy and without using the method of Lagrange multipliers. In addition, we define a $q$-Fisher information and prove a $q$-Cramér-Rao inequality showing that the $q$-Gaussian distribution with special $q$-variances attains the minimum value of the $q$-Fisher information.
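    For reference, the discrete Tsallis entropy (a standard definition, not quoted from the paper) is
    \[
      S_q(p) = \frac{1 - \sum_i p_i^{\,q}}{q - 1},
      \qquad \lim_{q\to 1} S_q(p) = -\sum_i p_i \ln p_i ,
    \]
    and the constraints in such theorems are usually phrased through the escort ($q$-)expectation $\langle A\rangle_q = \sum_i p_i^{\,q} A_i \big/ \sum_i p_i^{\,q}$, so that the $q$-canonical and $q$-Gaussian maximizers generalize the canonical and Gaussian distributions recovered in the $q \to 1$ (Shannon) limit.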

    Discontinuities in the Maximum-Entropy Inference

    We revisit the maximum-entropy inference of the state of a finite-level quantum system under linear constraints. The constraints are specified by the expected values of a set of fixed observables. We point out the existence of discontinuities in this inference method. This is a purely quantum phenomenon, since the maximum-entropy inference is continuous for mutually commuting observables. The question arises why some sets of observables are distinguished by a discontinuity in an inference method which is still discussed as a universal inference method. In this paper we give an example of a discontinuity and explain a characterization of the discontinuities in terms of the openness of the (restricted) linear map that assigns expected values to states.
    Comment: 8 pages, 3 figures, 32nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Garching, Germany, 15-20 July 2012.
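    In symbols (standard notation, assumed here), the inference map in question is
    \[
      \rho^*(a) = \arg\max_{\rho}\;\bigl\{\, S(\rho) = -\mathrm{Tr}\,\rho\ln\rho \;:\; \mathrm{Tr}(\rho A_i) = a_i,\; i = 1,\dots,k \,\bigr\},
    \]
    and the discontinuities concern $a \mapsto \rho^*(a)$: the map is continuous when the observables $A_i$ mutually commute, but it can fail to be continuous for non-commuting $A_i$, which is the purely quantum effect described above.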