    Preasymptotic multiscaling in the phase-ordering dynamics of the kinetic Ising model

    The evolution of the structure factor is studied during the phase-ordering dynamics of the kinetic Ising model with conserved order parameter. A preasymptotic multiscaling regime is found, as in the solution of the Cahn-Hilliard-Cook equation, revealing that the late stage of phase ordering is always approached through a crossover from multiscaling to standard scaling, independently of the nature of the microscopic dynamics. Comment: 11 pages, 3 figures, to be published in Europhys. Lett.

    Global adaptation in networks of selfish components: emergent associative memory at the system scale

    In some circumstances, complex adaptive systems composed of numerous self-interested agents can self-organise into structures that enhance global adaptation, efficiency, or function. However, the general conditions for such an outcome are poorly understood and present a fundamental open question for domains as varied as ecology, sociology, economics, organismic biology, and the design of technological infrastructure. In contrast, sufficient conditions for artificial neural networks to form structures that perform collective computational processes such as associative memory/recall, classification, generalisation, and optimisation are well understood. Such global functions within a single agent or organism are not wholly surprising, since the mechanisms (e.g. Hebbian learning) that create these neural organisations may be selected for this purpose; but agents in a multi-agent system have no obvious reason to adhere to such a structuring protocol, or to produce such global behaviours, when acting from individual self-interest. However, Hebbian learning is actually a very simple, fully distributed habituation or positive-feedback principle. Here we show that when self-interested agents can modify how they are affected by other agents (e.g. when they can influence which other agents they interact with), then, in adapting these inter-agent relationships to maximise their own utility, they necessarily alter them in a manner homologous with Hebbian learning. Multi-agent systems with adaptable relationships will thereby exhibit the same system-level behaviours as neural networks under Hebbian learning. For example, improved global efficiency in multi-agent systems can be explained by the inherent ability of associative memory to generalise by idealising stored patterns and/or creating new combinations of sub-patterns.
    Thus, distributed multi-agent systems can spontaneously exhibit adaptive global behaviours in the same sense, and by the same mechanism, as the organisational principles familiar in connectionist models of organismic learning.
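The utility-maximisation argument above can be sketched in a few lines. In this toy system (the agent count, linear utility, and step size are illustrative assumptions, not the paper's model), agent i's utility is u_i = s_i * sum_j w[i, j] * s_j, so the gradient of u_i with respect to w[i, j] is s_i * s_j: each agent's self-interested weight adjustment reduces exactly to a Hebbian update.

```python
import numpy as np

# Toy sketch (assumed parameters): binary agent states, linear utilities.
rng = np.random.default_rng(0)
n = 8                                # number of agents (assumed)
s = rng.choice([-1, 1], size=n)      # binary agent states
w = np.zeros((n, n))                 # inter-agent relationship weights
lr = 0.1                             # step size (assumed)

for _ in range(100):
    i = rng.integers(n)              # perturb one agent's state so that
    s[i] = -s[i]                     # several configurations get imprinted
    w += lr * np.outer(s, s)         # gradient ascent on u_i == Hebbian rule
    np.fill_diagonal(w, 0.0)         # no self-coupling

# The resulting weight matrix stores state correlations, exactly as a
# Hopfield-style associative memory would.
print(np.allclose(w, w.T))           # the learned couplings are symmetric
```

Because every agent's update is the same outer product, the couplings stay symmetric, which is the condition for the system to behave as an energy-minimising associative memory at the collective level.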

    Bio-linguistic transition and Baldwin effect in an evolutionary naming-game model

    We examine an evolutionary naming-game model in which communicating agents are equipped with an evolutionarily selected learning ability. Such a coupling of biological and linguistic ingredients results in an abrupt transition: upon a small change of a model control parameter, a poorly communicating group of linguistically unskilled agents transforms into an almost perfectly communicating group with large learning abilities. When learning ability is kept fixed, the transition appears to be continuous. Genetic imprinting of the learning abilities proceeds via the Baldwin effect: initially unskilled communicating agents learn a language, and that creates a niche in which there is evolutionary pressure for increased learning ability. Our model suggests that when linguistic (or cultural) processes became intensive enough, a transition took place in which both the linguistic performance and the biological endowment of our species experienced an abrupt change that perhaps triggered the rapid expansion of human civilization. Comment: 7 pages, minor changes, accepted in Int. J. Mod. Phys. C, proceedings of Max Born Symp., Wroclaw (Poland), Sept. 2007. Java applet is available at http://spin.amu.edu.pl/~lipowski/biolin.html or http://www.amu.edu.pl/~lipowski/biolin.htm
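A minimal naming-game dynamic of the kind the abstract builds on can be sketched as follows. The population size, word inventories, and the `p_learn` parameter standing in for the fixed learning ability are illustrative assumptions, not the authors' exact model: on a failed exchange the hearer memorises the unknown word only with probability `p_learn`, and on a success both agents collapse their inventories to the shared word.

```python
import random

# Minimal naming-game sketch (assumed parameters, single object to name).
random.seed(1)
N = 50                                 # population size (assumed)
p_learn = 1.0                          # fixed "learning ability" (assumed)
agents = [set() for _ in range(N)]     # each agent's word inventory
word_counter = 0                       # source of fresh invented words

def play_round():
    global word_counter
    speaker, hearer = random.sample(range(N), 2)
    if not agents[speaker]:            # speaker invents a word if needed
        word_counter += 1
        agents[speaker].add(word_counter)
    word = random.choice(sorted(agents[speaker]))
    if word in agents[hearer]:         # success: both collapse to the word
        agents[speaker] = {word}
        agents[hearer] = {word}
        return True
    if random.random() < p_learn:      # failure: hearer may memorise it
        agents[hearer].add(word)
    return False

successes = sum(play_round() for _ in range(20000))
print(successes)                       # late rounds are dominated by successes
```

With the learning ability fixed at a high value the population converges smoothly on a shared word; the paper's point is that once `p_learn` itself evolves, the onset of communication becomes abrupt.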

    Comparison of Fermi-LAT and CTA in the region between 10-100 GeV

    The past decade has seen a dramatic improvement in the quality of data available at both high (HE: 100 MeV to 100 GeV) and very high (VHE: 100 GeV to 100 TeV) gamma-ray energies. With three years of data from the Fermi Large Area Telescope (LAT) and deep pointed observations with arrays of Cherenkov telescopes, continuous spectral coverage from 100 MeV to ~10 TeV exists for the first time for the brightest gamma-ray sources. The Fermi-LAT mission is likely to continue for several years, resulting in significant improvements in high-energy sensitivity. On the same timescale, the Cherenkov Telescope Array (CTA) will be constructed, providing unprecedented VHE capabilities. The optimisation of CTA must take into account competition and complementarity with Fermi, in particular in the overlapping energy range of 10–100 GeV. Here we compare the performance of Fermi-LAT and the current baseline CTA design for steady and transient, point-like and extended sources. Comment: Accepted for publication in Astroparticle Physics

    Analysis of dropout learning regarded as ensemble learning

    Deep learning is the state of the art in fields such as visual object recognition and speech recognition. Such learning uses a large number of layers and a huge number of units and connections, so overfitting is a serious problem. To avoid this problem, dropout learning has been proposed. Dropout learning neglects some inputs and hidden units during the learning process with probability p, and the neglected inputs and hidden units are then combined with the learned network to express the final output. We find that the process of combining the neglected hidden units with the learned network can be regarded as ensemble learning, so we analyze dropout learning from this point of view. Comment: 9 pages, 8 figures, submitted to Conference
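The "combining" step described above can be illustrated numerically (layer sizes and the linear output are assumptions for the sketch, not the paper's setup): averaging a dropout layer's output over many random masks, i.e. the ensemble view, is well approximated by keeping every unit and scaling its contribution by the keep probability 1 - p.

```python
import numpy as np

# Sketch: ensemble average over dropout masks vs. weight-scaling rule.
rng = np.random.default_rng(0)
p = 0.5                                   # dropout probability
h = rng.normal(size=200)                  # hidden-unit activations (assumed)
w = rng.normal(size=200)                  # output weights (assumed)

# Monte-Carlo "ensemble": average the linear output over many dropout masks.
masks = rng.random((20000, 200)) > p      # True = unit kept
ensemble_out = (masks @ (h * w)).mean()

# Weight-scaling approximation used at test time.
scaled_out = (1 - p) * np.sum(h * w)

print(abs(ensemble_out - scaled_out))     # small: the two nearly agree
```

Each mask defines one "thinned" network of the exponentially many that dropout implicitly trains; the weight-scaling rule is the cheap stand-in for averaging that whole ensemble.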

    Recurrent Fully Convolutional Neural Networks for Multi-slice MRI Cardiac Segmentation

    In cardiac magnetic resonance imaging, fully automatic segmentation of the heart enables precise structural and functional measurements to be taken, e.g. from short-axis MR images of the left ventricle. In this work we propose a recurrent fully convolutional network (RFCN) that learns image representations from the full stack of 2D slices and can leverage inter-slice spatial dependencies through internal memory units. RFCN combines anatomical detection and segmentation into a single architecture that is trained end-to-end, thus significantly reducing computational time, simplifying the segmentation pipeline, and potentially enabling real-time applications. We report on an investigation of RFCN using two datasets, including the publicly available MICCAI 2009 Challenge dataset. Comparisons have been carried out between fully convolutional networks and deep restricted Boltzmann machines, including a recurrent version that leverages inter-slice spatial correlation. Our studies suggest that RFCN produces state-of-the-art results and can substantially improve the delineation of contours near the apex of the heart. Comment: MICCAI Workshop RAMBO 201