
    How to read probability distributions as statements about process

    Full text link
    Probability distributions can be read as simple expressions of information. Each continuous probability distribution describes how information changes with magnitude. Once one learns to read a probability distribution as a measurement scale of information, opportunities arise to understand the processes that generate the commonly observed patterns. Probability expressions may be parsed into four components: the dissipation of all information, except the preservation of average values, taken over the measurement scale that relates changes in observed values to changes in information, and the transformation from the underlying scale on which information dissipates to alternative scales on which probability pattern may be expressed. Information invariances set the commonly observed measurement scales and the relations between them. In particular, a measurement scale for information is defined by its invariance to specific transformations of underlying values into measurable outputs. Essentially all common distributions can be understood within this simple framework of information invariance and measurement scale.
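    The parsing described above can be summarised, as a sketch in assumed notation rather than the paper's exact formulas, by the familiar maximum-entropy form built on a measurement scale $T(y)$:

    ```latex
    % Dissipation of all information, subject to a conserved average of
    % the measurement scale T(y), yields the exponential form
    \[
      p(y)\,\mathrm{d}y \;\propto\; e^{-\lambda\, T(y)}\,\mathrm{d}y ,
    \]
    % where the choice of scale sets the observed pattern:
    % T(y) = y      gives the exponential distribution,
    % T(y) = \log y gives power laws,
    % T(y) = y^2    gives the Gaussian.
    ```

    Different invariances of $T$ then correspond to the different common distribution families.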

    Improved cuckoo search for loss allocation in transmission line/ Nur Atiqah Abdul Rahman

    Get PDF
    Electricity market reform often involves liberalisation, deregulation and privatisation. Privatisation has often resulted in competition between market participants to reduce cost and increase efficiency. Researchers have therefore become interested in allocating losses in the transmission line so that costs are shared fairly among market participants. This paper proposes a new technique, Improved Cuckoo Search (ICS), to allocate transmission loss in the transmission line. ICS improves on the earlier Cuckoo Search (CS) by using a Cauchy-distribution-based mutation instead of the Lévy flight as its search operator. Tested on the IEEE 30-bus system under normal operating conditions, the technique showed improvement in both computational time and accuracy. A comparison between Cuckoo Search (CS) and the Genetic Algorithm (GA) is also presented in this paper.
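    A minimal sketch of the core idea, not the authors' implementation: the classical cuckoo-search loop with the Lévy-flight step replaced by a Cauchy-distributed mutation. The objective (a sphere function), step scale, and abandonment scheme are all illustrative assumptions standing in for the actual loss-allocation objective.

    ```python
    import math
    import random

    random.seed(0)  # reproducibility of this sketch

    def cauchy_step(scale=1.0):
        # Standard Cauchy variate via inverse-transform sampling.
        return scale * math.tan(math.pi * (random.random() - 0.5))

    def improved_cuckoo_search(objective, dim, bounds, n_nests=15, pa=0.25, iters=200):
        lo, hi = bounds
        nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
        fitness = [objective(n) for n in nests]
        for _ in range(iters):
            for i in range(n_nests):
                # Cauchy mutation replaces the Levy flight of classical CS.
                trial = [min(hi, max(lo, x + 0.01 * cauchy_step())) for x in nests[i]]
                f = objective(trial)
                j = random.randrange(n_nests)  # compare against a random nest
                if f < fitness[j]:
                    nests[j], fitness[j] = trial, f
            # Abandon a fraction pa of the worst nests, as in classical CS.
            worst_first = sorted(range(n_nests), key=lambda i: fitness[i], reverse=True)
            for i in worst_first[: int(pa * n_nests)]:
                nests[i] = [random.uniform(lo, hi) for _ in range(dim)]
                fitness[i] = objective(nests[i])
        best = min(range(n_nests), key=lambda i: fitness[i])
        return nests[best], fitness[best]

    # Stand-in objective: a sphere function instead of actual transmission loss.
    sol, val = improved_cuckoo_search(lambda x: sum(v * v for v in x),
                                      dim=3, bounds=(-5, 5))
    ```

    The heavy tail of the Cauchy distribution keeps occasional long jumps (the role the Lévy flight plays in CS) while making typical steps cheap to sample.
    
    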

    Automatically designing more general mutation operators of evolutionary programming for groups of function classes using a hyper-heuristic

    Get PDF
    In this study we use Genetic Programming (GP) as an offline hyper-heuristic to evolve a mutation operator for Evolutionary Programming (EP). This is done using the Gaussian and uniform distributions as the terminal set and arithmetic operators as the function set. The mutation operators are automatically designed for a specific function class. The contribution of this paper is to show that GP can not only automatically design a mutation operator for EP on functions generated from a specific function class, but can also design more general mutation operators on functions generated from groups of function classes. In addition, the automatically designed mutation operators also perform well on new functions generated from a specific function class or a group of function classes.
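    A toy sketch of this setup, with all names and parameters assumed: candidate mutation operators are small expression trees over Gaussian/uniform terminals and arithmetic functions, scored offline by running a minimal EP loop on functions sampled from a function class. Random generation of candidates stands in here for the paper's actual GP evolution.

    ```python
    import random

    random.seed(1)  # reproducibility of this sketch

    # Terminal set: random variates; function set: arithmetic combinators.
    TERMINALS = [lambda: random.gauss(0, 1), lambda: random.uniform(-1, 1)]
    FUNCTIONS = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

    def random_operator(depth=2):
        # Build a small expression tree; the result is a zero-arg sampler.
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMINALS)
        f = random.choice(FUNCTIONS)
        left, right = random_operator(depth - 1), random_operator(depth - 1)
        return lambda: f(left(), right())

    def ep_run(mutate, objective, dim=5, pop=10, gens=30):
        # Minimal (mu + mu) EP loop driven by the candidate mutation operator.
        population = [[random.uniform(-3, 3) for _ in range(dim)] for _ in range(pop)]
        for _ in range(gens):
            children = [[x + mutate() for x in ind] for ind in population]
            population = sorted(population + children, key=objective)[:pop]
        return objective(population[0])

    # Function class: shifted sphere functions; fresh instances per evaluation.
    def sphere_instance():
        shift = random.uniform(-1, 1)
        return lambda x: sum((v - shift) ** 2 for v in x)

    # Offline hyper-heuristic: score each candidate operator on sampled
    # instances of the class and keep the best performer.
    candidates = [random_operator() for _ in range(10)]
    scores = [sum(ep_run(op, sphere_instance()) for _ in range(3)) for op in candidates]
    best_operator = candidates[scores.index(min(scores))]
    ```

    Training on instances drawn from *groups* of function classes, rather than a single class, is what yields the more general operators the paper reports.
    
    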

    Advancing evolution of artificial neural networks through behavioral adaptation

    Full text link
    This dissertation concerns learning in artificial neural networks and presents a new evolutionary algorithm (Network-Weight-based Evolutionary Algorithm, NWEA) that incorporates additional mechanisms from nature into computational evolution. NWEA is a learning strategy whose modification mechanism uses information about an individual's position in the search space, its fitness, and the ANN topology. The underlying idea of NWEA was to perform behavioral adaptation alongside structural adaptation and thereby establish the connection between individuals and their environment. The NWEA modification strategy exploits both genotype and phenotype information in the evolutionary process. Genotype information is represented by the output error of the network. Phenotype information is integrated in the network-weight component, which describes the structure of the network and depends on the total number of hidden layers and the average number of hidden neurons.
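    A hypothetical sketch of such a modification step. The way the output error and the topology-derived network-weight term are combined below is an illustrative assumption, not the dissertation's published formula.

    ```python
    import random

    def network_weight(hidden_layers, avg_hidden_neurons):
        # Phenotype descriptor: grows with depth and average layer width.
        # (Assumed form; the dissertation defines its own combination.)
        return hidden_layers * avg_hidden_neurons

    def nwea_mutate(weights, output_error, hidden_layers, avg_hidden_neurons):
        # Genotype signal (output error) scaled by the phenotype descriptor
        # sets the mutation strength for every connection weight.
        sigma = output_error / (1.0 + network_weight(hidden_layers, avg_hidden_neurons))
        return [w + random.gauss(0, sigma) for w in weights]

    mutated = nwea_mutate([0.5, -0.2, 0.1], output_error=0.8,
                          hidden_layers=2, avg_hidden_neurons=4)
    ```

    The point of the sketch is only the coupling: larger output error means larger steps, while a larger network damps them.
    
    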

    A Hamilton-Jacobi approach for front propagation in kinetic equations

    Get PDF
    In this paper we use the theory of viscosity solutions for Hamilton-Jacobi equations to study propagation phenomena in kinetic equations. We perform the hydrodynamic limit of some kinetic models thanks to an adapted WKB ansatz. Our models describe particles moving according to a velocity-jump process and proliferating through a reaction term of monostable type. The scattering operator is assumed to satisfy a maximum principle. When the velocity space is bounded, we show, under suitable hypotheses, that the phase converges towards the viscosity solution of a constrained Hamilton-Jacobi equation whose effective Hamiltonian is obtained by solving a suitable eigenvalue problem in the velocity space. In the case of unbounded velocities, the non-solvability of the spectral problem can lead to different behaviour; in particular, a front acceleration phenomenon can occur. Nevertheless, we expect that when the spectral problem is solvable, the convergence result can be extended.
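    Schematically, and in notation assumed here rather than taken from the paper, the WKB ansatz and the spectral problem defining the effective Hamiltonian take the form:

    ```latex
    % Hyperbolic rescaling with a WKB (Hopf-Cole) ansatz for the density:
    \[
      f^{\varepsilon}(t,x,v) \;=\; e^{-\varphi^{\varepsilon}(t,x,v)/\varepsilon},
      \qquad \varphi^{\varepsilon} \;\longrightarrow\; \varphi .
    \]
    % For each spatial frequency p one seeks an eigenvalue H(p) and a
    % positive eigenfunction Q_p(v) of the scattering-plus-growth operator
    % twisted by e^{v \cdot p}. When this eigenvalue problem is solvable
    % for every p, the limit phase \varphi is a viscosity solution of a
    % constrained Hamilton-Jacobi equation with Hamiltonian H.
    ```

    The failure of this eigenvalue problem for unbounded velocities is precisely what opens the door to front acceleration.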

    Mod-phi convergence I: Normality zones and precise deviations

    Full text link
    In this paper, we use the framework of mod-$\phi$ convergence to prove precise large or moderate deviations for quite general sequences of real-valued random variables $(X_n)_{n \in \mathbb{N}}$, which can be lattice or non-lattice distributed. We establish precise estimates of the fluctuations $P[X_n \in t_n B]$, instead of the usual estimates for the rate of exponential decay $\log P[X_n \in t_n B]$. Our approach provides a systematic way to characterise the normality zone, that is, the zone in which the Gaussian approximation for the tails is still valid. Besides, the residue function measures the extent to which this approximation fails to hold at the edge of the normality zone. The first sections of the article are devoted to a proof of these abstract results and comparisons with existing results. We then propose new examples covered by this theory, coming from various areas of mathematics: classical probability theory, number theory (statistics of additive arithmetic functions), combinatorics (statistics of random permutations), random matrix theory (characteristic polynomials of random matrices in compact Lie groups), graph theory (number of subgraphs in a random Erdős-Rényi graph), and non-commutative probability theory (asymptotics of random character values of symmetric groups). In particular, we complete our theory of precise deviations with a concrete method of cumulants and dependency graphs, which applies to many examples of sums of "weakly dependent" random variables. The large number as well as the variety of examples hint at a universality class for second-order fluctuations.
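    For orientation, the defining condition of mod-$\phi$ convergence can be sketched as follows; the notation is the standard one in this literature, and details such as the domain of convergence in $z$ are spelled out in the paper:

    ```latex
    % (X_n) converges mod-phi with speed t_n -> infinity and limiting
    % function psi when, locally uniformly in z on a strip containing 0,
    \[
      \mathbb{E}\!\left[ e^{z X_n} \right] \, e^{-t_n\, \eta(z)}
      \;\longrightarrow\; \psi(z),
    \]
    % where \eta is the cumulant generating function of an infinitely
    % divisible reference law \phi, and \psi is continuous with
    % \psi(0) = 1.
    ```

    The speed $t_n$ and the residue $\psi$ are exactly the data that govern the size of the normality zone and the precise deviation estimates.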

    The Epic Story of Maximum Likelihood

    Full text link
    At a superficial level, the idea of maximum likelihood must be prehistoric: early hunters and gatherers may not have used the words "method of maximum likelihood" to describe their choice of where and how to hunt and gather, but it is hard to believe they would have been surprised if their method had been described in those terms. It seems a simple, even unassailable idea: who would rise to argue in favor of a method of minimum likelihood, or even mediocre likelihood? And yet the mathematical history of the topic shows this "simple idea" is really anything but simple. Joseph Louis Lagrange, Daniel Bernoulli, Leonhard Euler, Pierre Simon Laplace and Carl Friedrich Gauss are only some of those who explored the topic, not always in ways we would sanction today. In this article, that history is reviewed from well before Fisher to the time of Lucien Le Cam's dissertation. In the process, Fisher's unpublished 1930 characterization of conditions for the consistency and efficiency of maximum likelihood estimates is presented, and the mathematical basis of his three proofs discussed. In particular, Fisher's derivation of the information inequality is seen to stem from his work on the analysis of variance, and his later approach via estimating functions was derived from Euler's Relation for homogeneous functions. The reaction to Fisher's work is reviewed, and some lessons drawn. (Published in Statistical Science, http://dx.doi.org/10.1214/07-STS249, by the Institute of Mathematical Statistics.)

    Free Probability Theory

    Get PDF
    The workshop brought together leading experts, as well as promising young researchers, in areas related to recent developments in free probability theory. Particular emphasis was placed on the relation of free probability to random matrix theory.