
    Universal Lyapunov functions for non-linear reaction networks

    In 1961, Rényi discovered a rich family of non-classical Lyapunov functions for the kinetics of Markov chains or, equivalently, for linear kinetic equations. This family was parameterized by convex functions on the positive semi-axis. After the works of Csiszár and Morimoto, these functions became widely known as f-divergences or Csiszár–Morimoto divergences. These Lyapunov functions are universal in the following sense: they depend only on the state of equilibrium, not on the kinetic parameters themselves. Despite many years of research, no such wide family of universal Lyapunov functions had been found for nonlinear reaction networks. For general non-linear networks with detailed or complex balance, the classical thermodynamic potentials remained the only universal Lyapunov functions. We constructed a rich family of new universal Lyapunov functions for any non-linear reaction network with detailed or complex balance. These functions are parameterized by compact subsets of the projective space. They are universal in the same sense: they depend only on the state of equilibrium and on the network structure, but not on the kinetic parameters themselves. The main elements and operations in the construction of the new Lyapunov functions are partial equilibria of reactions and convex envelopes of families of functions.
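
    For orientation, the Rényi construction referred to above can be written explicitly: for a Markov chain with a positive equilibrium distribution p* and any convex function f on the positive semi-axis, the Csiszár–Morimoto divergence

        H_f(p) = \sum_i p_i^{*} \, f\!\left(\frac{p_i}{p_i^{*}}\right)

    is non-increasing in time along solutions of the master equation and depends on the kinetic constants only through the equilibrium p*.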

    Basic model of purposeful kinesis

    The notions of taxis and kinesis are introduced and used to describe two types of behaviour of an organism in non-uniform conditions: (i) taxis is guided movement towards more favourable conditions; (ii) kinesis is a non-directional change in the motion in space in response to a change of conditions. Migration and dispersal of animals have evolved under the control of natural selection. In a simple formalisation, the strategy of dispersal should increase Darwinian fitness. We introduce new models of purposeful kinesis with a diffusion coefficient dependent on fitness. A local and instant evaluation of Darwinian fitness is used: the reproduction coefficient. The new models include one additional parameter, the intensity of kinesis, and may be considered minimal models of purposeful kinesis. The properties of the models are explored in a series of numerical experiments. It is demonstrated how kinesis can be beneficial for the assimilation of patches of food or of periodic fluctuations. Kinesis based on local and instant estimates of fitness is not always beneficial: for species with the Allee effect it can delay invasion and spreading. It is also proven that kinesis cannot modify the stability of homogeneous positive steady states.
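
    A minimal numerical sketch of this kind of model is given below. The logistic reproduction coefficient r(u) = a(1 - u/K) and the fitness-dependent diffusivity D(u) = D0*exp(-alpha*r(u)) are illustrative assumptions for this sketch, not the paper's exact formulation, and all parameter values are placeholders.

        import numpy as np

        # Toy 1D model of purposeful kinesis: the diffusion coefficient depends on the
        # locally and instantly estimated fitness (reproduction coefficient) r(u).
        # Illustrative assumptions: r(u) = a*(1 - u/K), D(u) = D0*exp(-alpha*r(u)).
        a, K, D0, alpha = 1.0, 1.0, 0.1, 2.0
        L, nx, dt, steps = 10.0, 200, 0.001, 5000
        dx = L / (nx - 1)
        x = np.linspace(0.0, L, nx)
        u = 0.1 + 0.5 * np.exp(-(x - L / 2) ** 2)      # initial patch of population

        def r(u):                                      # local reproduction coefficient
            return a * (1.0 - u / K)

        for _ in range(steps):
            D = D0 * np.exp(-alpha * r(u))             # kinesis: move less where fitness is high
            D_face = 0.5 * (D[:-1] + D[1:])            # diffusivity at cell interfaces
            flux = D_face * np.diff(u) / dx            # diffusive flux between neighbouring cells
            u_new = u.copy()
            u_new[1:-1] += dt * np.diff(flux) / dx     # divergence of the flux (interior cells)
            u_new += dt * r(u) * u                     # local reproduction
            u = u_new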

    Beyond Navier–Stokes equations: capillarity of ideal gas

    The system of Navier–Stokes–Fourier equations is one of the most celebrated systems of equations in modern science. It describes the dynamics of fluids in the limit where the gradients of density, velocity and temperature are sufficiently small, and it loses applicability when the flow becomes so far from equilibrium that changes of velocity, density or temperature over lengths comparable to the mean free path are non-negligible. The question is: how should such flows be modelled? This problem is still open, despite the fact that the first ‘final equations of motion’ modified for the analysis of thermal creep in rarefied gas were proposed by Maxwell in 1879. There are at least three possible answers: (i) use molecular dynamics with individual particles, (ii) use kinetic equations, such as Boltzmann’s equation, or (iii) find a new system of equations for the description of fluid dynamics with a better account of non-equilibrium effects. These three approaches work at different scales. We explore the third possibility using the recent findings on the capillarity of internal layers in ideal gases and on the saturation effect in dissipation (there is a limiting attenuation rate for very short waves in an ideal gas, and it cannot increase indefinitely). One candidate system is discussed in more detail: the Korteweg equations proposed in 1901. The main ideas and approaches are illustrated by a kinetic system for which the problem of reduction of kinetics to fluid dynamics is analytically solvable.
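
    For reference, in a Korteweg-type model the capillarity enters the momentum balance through an additional stress tensor. The commonly quoted constant-capillarity form is shown below only for orientation; the coefficients in Korteweg's 1901 system and in the ideal-gas setting discussed above differ.

        \mathbf{K} = \kappa \left[ \left( \rho\,\Delta\rho + \tfrac{1}{2}\,|\nabla\rho|^{2} \right) \mathbf{I} - \nabla\rho \otimes \nabla\rho \right],
        \qquad \operatorname{div}\mathbf{K} = \kappa\,\rho\,\nabla\Delta\rho ,

    so the capillary force density is proportional to the gradient of the Laplacian of the density.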

    Stability and stabilisation of the lattice Boltzmann method: Magic steps and salvation operations

    We revisit the classical stability versus accuracy dilemma for the lattice Boltzmann methods (LBM). Our goal is a stable method of second-order accuracy for fluid dynamics based on the lattice Bhatnagar–Gross–Krook (LBGK) method. The LBGK scheme can be recognised as a discrete dynamical system generated by free flight and entropic involution. In this framework the stability and accuracy analysis are more natural. We find the necessary and sufficient conditions for second-order accurate fluid dynamics modelling. In particular, it is proven that in order to guarantee second-order accuracy the distribution should belong to a distinguished surface, the invariant film (up to second order in the time step). This surface is the trajectory of the (quasi)equilibrium distribution surface under free flight. The main instability mechanisms are identified. The simplest recipes for stabilisation add no artificial dissipation (up to second order) and provide second-order accuracy of the method. Two other prescriptions add some artificial dissipation locally and prevent the system from loss of positivity and local blow-up. Demonstrations of the proposed stable LBGK schemes are provided by numerical simulation of a 1D shock tube and of an unsteady 2D flow around a square cylinder up to Reynolds number O(10000).
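
    The two elementary operations mentioned above, free flight followed by relaxation towards a local quasiequilibrium, can be illustrated on a one-dimensional three-velocity lattice as sketched below. The polynomial equilibrium and parameter values are generic textbook choices with periodic boundaries, not the entropic-involution construction analysed in the paper.

        import numpy as np

        # Toy D1Q3 LBGK step: streaming (free flight) + over-relaxation towards equilibrium.
        c = np.array([-1, 0, 1])                 # lattice velocities
        w = np.array([1/6, 2/3, 1/6])            # lattice weights (c_s^2 = 1/3)
        nx, beta = 400, 0.8                      # beta in (0, 1]; beta = 1 is full over-relaxation

        rho = np.ones(nx); rho[: nx // 2] = 1.5  # shock-tube-like initial density
        u = np.zeros(nx)

        def equilibrium(rho, u):
            # Standard second-order polynomial quasiequilibrium.
            return np.array([w[i] * rho * (1 + 3*c[i]*u + 4.5*(c[i]*u)**2 - 1.5*u**2)
                             for i in range(3)])

        f = equilibrium(rho, u)
        for _ in range(200):
            for i in range(3):
                f[i] = np.roll(f[i], c[i])       # free flight: shift each population by its velocity
            rho = f.sum(axis=0)
            u = (c[:, None] * f).sum(axis=0) / rho
            f = f + 2 * beta * (equilibrium(rho, u) - f)   # collision step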

    Robust principal graphs for data approximation

    Revealing hidden geometry and topology in noisy data sets is a challenging task. The elastic principal graph is a computationally efficient and flexible data approximator based on embedding a graph into the data space and minimizing an energy functional that penalizes the deviation of graph nodes both from the data points and from a pluri-harmonic configuration (a generalization of linearity). The structure of the principal graph is learned from data by application of a topological grammar which, in the simplest case, leads to the construction of principal curves or trees. In order to cope with noise and outliers more efficiently, here we suggest using a trimmed data approximation term to increase the robustness of the method. The suggested modification affects neither the computational efficiency nor the general convergence properties of the original elastic graph method, and the trimmed elastic energy functional remains a Lyapunov function for the optimization algorithm. On several examples of complex data distributions we demonstrate how robust principal graphs learn the global data structure, and we show the advantage of using the trimmed data approximation term for the construction of principal graphs and other popular data approximators.
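
    A minimal sketch of the trimmed data approximation term is given below; the fraction of points kept and the quadratic node-to-point penalty are illustrative assumptions rather than the exact functional of the elastic graph method.

        import numpy as np

        def trimmed_approximation_energy(X, nodes, keep=0.9):
            """Data approximation term that ignores the worst-fitted (1 - keep) fraction of points.

            X     : (n_points, dim) data matrix
            nodes : (n_nodes, dim) positions of graph nodes
            keep  : fraction of data points retained (trimming discards likely outliers)
            """
            # Squared distance from every data point to its nearest graph node.
            d2 = ((X[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2).min(axis=1)
            kept = np.sort(d2)[: int(keep * len(d2))]   # drop the worst-approximated points
            return kept.mean()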

    Nonequilibrium entropy limiters in lattice Boltzmann methods

    We construct a system of nonequilibrium entropy limiters for the lattice Boltzmann methods (LBM). These limiters erase spurious oscillations without blurring shocks and do not affect smooth solutions. In general, they do for LBM the same work that flux limiters do for finite-difference, finite-volume and finite-element methods, but for LBM the main idea behind the construction of nonequilibrium entropy limiter schemes is to transform a field of a single scalar quantity, the nonequilibrium entropy. There are two families of limiters: (i) those based on restriction of the nonequilibrium entropy (entropy “trimming”) and (ii) those based on filtering of the nonequilibrium entropy (entropy filtering). The physical properties of LBM provide some additional benefits: control of entropy production and accurate estimation of the introduced artificial dissipation are possible. The constructed limiters are tested on classical numerical examples: 1D athermal shock tubes with an initial density ratio of 1:2 and the 2D lid-driven cavity for Reynolds numbers between 2000 and 7500 on a coarse 100×100 grid. All limiter constructions are applicable both for entropic and for non-entropic equilibria.
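
    A minimal sketch of the first ("trimming") family of limiters follows, using a quadratic surrogate for the nonequilibrium entropy; the surrogate and the threshold rule are illustrative choices rather than the exact forms studied in the paper.

        import numpy as np

        def entropy_trimming(f, feq, threshold):
            """Pull populations towards equilibrium where the nonequilibrium entropy is too large.

            f, feq    : arrays of populations and local quasiequilibria, shape (n_vel, n_nodes)
            threshold : maximal admissible nonequilibrium entropy per lattice node
            """
            # Quadratic surrogate for the nonequilibrium entropy at each lattice node.
            dS = ((f - feq) ** 2 / feq).sum(axis=0)
            # Contraction factor: 1 where dS <= threshold, < 1 where the bound is violated.
            scale = np.minimum(1.0, np.sqrt(threshold / np.maximum(dS, 1e-30)))
            return feq + scale * (f - feq)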

    The unreasonable effectiveness of small neural ensembles in high-dimensional brain

    Complexity is an indisputable, well-known, and broadly accepted feature of the brain. Despite this apparently obvious and widespread consensus on brain complexity, sprouts of the single-neuron revolution emerged in neuroscience in the 1970s. They brought many unexpected discoveries, including grandmother or concept cells and sparse coding of information in the brain. In machine learning, the famous curse of dimensionality seemed for a long time to be an unsolvable problem. Nevertheless, the idea of the blessing of dimensionality is gradually becoming more and more popular. Ensembles of non-interacting or weakly interacting simple units prove to be an effective tool for solving essentially multidimensional and seemingly incomprehensible problems. This approach is especially useful for one-shot (non-iterative) correction of errors in large legacy artificial intelligence systems, when complete re-training is impossible or too expensive. These simplicity revolutions in the era of complexity have deep fundamental reasons grounded in the geometry of multidimensional data spaces. To explore and understand these reasons we revisit the background ideas of statistical physics, which in the course of the 20th century were developed into the concentration of measure theory. The Gibbs equivalence of ensembles, with further generalizations, shows that data in high-dimensional spaces are concentrated near shells of smaller dimension. New stochastic separation theorems reveal the fine structure of the data clouds. We review and analyse biological, physical, and mathematical problems at the core of the fundamental question: how can the high-dimensional brain organise reliable and fast learning in a high-dimensional world of data using simple tools? To meet this challenge, we outline and set up a framework based on the statistical physics of data. Two critical applications are reviewed to exemplify the approach: one-shot correction of errors in intellectual systems and the emergence of static and associative memories in ensembles of single neurons. Error correctors should be simple, should not damage the existing skills of the system, and should allow fast non-iterative learning and correction of new mistakes without destroying the previous fixes. All these demands can be satisfied by new tools based on the concentration of measure phenomena and stochastic separation theory. We show how a simple enough functional neuronal model is capable of explaining: (i) the extreme selectivity of single neurons to the information content of high-dimensional data, (ii) simultaneous separation of several uncorrelated informational items from a large set of stimuli, and (iii) dynamic learning of new items by associating them with already "known" ones. These results constitute a basis for the organisation of complex memories in ensembles of single neurons.
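
    The stochastic separation effect mentioned above can be illustrated by a small numerical experiment: in high dimension, a randomly chosen point is, with probability close to one, separated from a large random sample by a single linear functional. The Gaussian sampling distribution and the threshold value below are illustrative choices for this sketch.

        import numpy as np

        rng = np.random.default_rng(0)

        def separable_fraction(dim, n_points=2000, trials=50, margin=0.8):
            """Estimate how often a random point x is separated from a random cloud Y
            by the linear functional l(y) = <y, x>/<x, x> with threshold `margin`."""
            hits = 0
            for _ in range(trials):
                Y = rng.standard_normal((n_points, dim)) / np.sqrt(dim)
                x = rng.standard_normal(dim) / np.sqrt(dim)
                proj = Y @ x / (x @ x)
                hits += np.all(proj < margin)    # x itself gives l(x) = 1 > margin
            return hits / trials

        for dim in (10, 100, 1000):
            print(dim, separable_fraction(dim))  # separability probability grows with dimension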

    One-trial correction of legacy AI systems and stochastic separation theorems

    We consider the problem of efficient “on the fly” tuning of existing, or legacy, Artificial Intelligence (AI) systems. The legacy AI systems are allowed to be of arbitrary class, provided that the data they use for computing interim or final decision responses possess the underlying structure of a high-dimensional topological real vector space. The tuning method that we propose enables dealing with errors without the need to re-train the system. Instead of re-training, a simple cascade of perceptron nodes is added to the legacy system. The added cascade modulates the legacy AI system’s decisions. Applied repeatedly, the process results in a network of modulating rules “dressing up” and improving the performance of the existing AI system. The mathematical rationale behind the method is based on the fundamental property of measure concentration in high-dimensional spaces. The method is illustrated with an example of fine-tuning a deep convolutional network that has been pre-trained to detect pedestrians in images.
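
    The correction step itself can be sketched as a single perceptron-like node attached to a legacy classifier: the node fires on inputs whose feature representations resemble a recorded mistake and overrides the legacy decision there. The class names, the centred-discriminant construction and the threshold rule below are illustrative assumptions, not the exact scheme of the paper.

        import numpy as np

        class OneShotCorrector:
            """Perceptron-like node that overrides a legacy decision near a recorded mistake."""

            def __init__(self, legacy_predict, error_x, correct_label, background, margin=0.5):
                # legacy_predict : callable mapping a feature vector to a label
                # error_x        : feature vector on which the legacy system failed
                # correct_label  : label that should have been produced
                # background     : feature vectors on which the legacy system works well
                self.legacy_predict = legacy_predict
                self.correct_label = correct_label
                self.mean = background.mean(axis=0)
                w = error_x - self.mean
                self.w = w / np.linalg.norm(w)
                # Threshold placed between the mistake and the background responses.
                self.theta = (margin * (self.w @ (error_x - self.mean))
                              + (1 - margin) * np.max(background @ self.w - self.w @ self.mean))

            def predict(self, x):
                if self.w @ (x - self.mean) > self.theta:
                    return self.correct_label          # corrector node fires: override
                return self.legacy_predict(x)          # otherwise keep the legacy decision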

    Knowledge Transfer Between Artificial Intelligence Systems

    We consider the fundamental question: how could a legacy “student” Artificial Intelligence (AI) system learn from a legacy “teacher” AI system or a human expert without re-training and, most importantly, without requiring significant computational resources? Here “learning” is broadly understood as the ability of one system to mimic the responses of the other to an incoming stimulation and vice versa. We call such learning Artificial Intelligence knowledge transfer. We show that if the internal variables of the “student” AI system have the structure of an n-dimensional topological vector space and n is sufficiently high then, with probability close to one, the required knowledge transfer can be implemented by simple cascades of linear functionals. In particular, for n sufficiently large, with probability close to one, the “student” system can successfully and non-iteratively learn k ≪ n new examples from the “teacher” (or correct the same number of mistakes) at the cost of two additional inner products. The concept is illustrated with an example of knowledge transfer from one pre-trained convolutional neural network to another.
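
    The phrase "at the cost of two additional inner products" can be illustrated, in a generic way that is not necessarily the exact construction of the paper, by a cascade of two linear functionals in the student's feature space: inputs passing both gates receive the transferred response, all others are handled by the student as before. All names below are placeholders.

        import numpy as np

        def make_transfer_gate(w1, b1, w2, b2, teacher_label, student_predict):
            """Defer to the transferred ('teacher') response at the cost of two inner products."""
            def predict(x):
                if x @ w1 > b1 and x @ w2 > b2:      # cascade of two linear functionals
                    return teacher_label             # mimic the teacher on this region
                return student_predict(x)            # otherwise the student answers as before
            return predict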

    High-Dimensional Brain: A Tool for Encoding and Rapid Learning of Memories by Single Neurons

    Codifying memories is one of the fundamental problems of modern neuroscience. The functional mechanisms behind this phenomenon remain largely unknown. Experimental evidence suggests that some of the memory functions are performed by stratified brain structures such as the hippocampus. In this particular case, single neurons in the CA1 region receive a highly multidimensional input from the CA3 area, which is a hub for information processing. We thus assess the implications that this abundance of neuronal signalling routes converging onto single cells has for information processing. We show that single neurons can selectively detect and learn arbitrary information items, given that they operate in high dimensions. The argument is based on stochastic separation theorems and the concentration of measure phenomena. We demonstrate that a simple enough functional neuronal model is capable of explaining: (i) the extreme selectivity of single neurons to the information content, (ii) simultaneous separation of several uncorrelated stimuli or informational items from a large set, and (iii) dynamic learning of new items by associating them with already "known" ones. These results constitute a basis for the organization of complex memories in ensembles of single neurons. Moreover, they show that no a priori assumptions on the structural organization of neuronal ensembles are necessary for explaining basic concepts of static and dynamic memories.
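
    A toy version of such a functional neuronal model is sketched below: a threshold unit that is selective to one high-dimensional stimulus and that, through a simple Hebbian-type update, becomes responsive to a new item presented together with the "known" one. The update rule and all parameter values are illustrative placeholders.

        import numpy as np

        rng = np.random.default_rng(1)
        dim, n_stimuli = 1000, 500

        stimuli = rng.standard_normal((n_stimuli, dim)) / np.sqrt(dim)   # large set of items
        known = stimuli[0]

        w = known.copy()                       # synaptic weights tuned to the "known" item
        theta = 0.5 * (w @ known)              # firing threshold

        fires = stimuli @ w > theta
        print("fires on", fires.sum(), "of", n_stimuli, "stimuli")       # typically only item 0

        # Dynamic learning: a new item co-occurs with the known one; Hebbian-type update.
        new_item = rng.standard_normal(dim) / np.sqrt(dim)
        w = w + new_item                       # associate the new item with the known one
        print("responds to new item:", bool(new_item @ w > theta))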