
    Chromospheric Activity of HAT-P-11: an Unusually Active Planet-Hosting K Star

    Kepler photometry of the hot Neptune host star HAT-P-11 suggests that its spot latitude distribution is comparable to the Sun's near solar maximum. We search for evidence of an activity cycle in the Ca II H & K chromospheric emission S-index with archival Keck/HIRES spectra and observations from the echelle spectrograph on the ARC 3.5 m Telescope at APO. The chromospheric emission of HAT-P-11 is consistent with a $\gtrsim 10$ year activity cycle, which plateaued near maximum during the Kepler mission. In the cycle that we observed, the star seemed to spend more time near active maximum than minimum. We compare the $\log R'_{HK}$ normalized chromospheric emission index of HAT-P-11 with that of other stars. HAT-P-11 has unusually strong chromospheric emission compared to planet-hosting stars of similar effective temperature and rotation period, perhaps due to tides raised by its planet. Comment: 16 pages, 8 figures; accepted to the Astrophysical Journal
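
    For reference, the S-index and $\log R'_{HK}$ quoted above follow the usual Mount Wilson-style definitions (standard calibration conventions, not reproduced from this paper):

```latex
% Mount Wilson S-index: core fluxes in the Ca II H and K lines relative to
% two nearby continuum bands (R, V); alpha is an instrumental calibration factor.
S = \alpha \,\frac{F_H + F_K}{F_R + F_V}

% Bolometrically normalized index with the photospheric contribution removed
% (Noyes et al. 1984 style calibration; C_cf depends on the B-V colour).
R_{HK}  = 1.34 \times 10^{-4}\, C_{cf}(B-V)\, S, \qquad
R'_{HK} = R_{HK} - R_{\mathrm{phot}}
```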

    Effects of Synaptic and Myelin Plasticity on Learning in a Network of Kuramoto Phase Oscillators

    Models of learning typically focus on synaptic plasticity. However, learning is the result of both synaptic and myelin plasticity. Specifically, synaptic changes often co-occur and interact with myelin changes, leading to complex dynamic interactions between these processes. Here, we investigate the implications of these interactions for the coupling behavior of a system of Kuramoto oscillators. To that end, we construct a fully connected, one-dimensional ring network of phase oscillators whose coupling strength (reflecting synaptic strength) as well as conduction velocity (reflecting myelination) are each regulated by a Hebbian learning rule. We evaluate the behavior of the system in terms of structural connectivity (pairwise connection strength and conduction velocity) and functional connectivity (local and global synchronization behavior). We find that for conditions in which a system limited to synaptic plasticity develops two distinct clusters both structurally and functionally, additional adaptive myelination allows for functional communication across these structural clusters. Hence, dynamic conduction velocity permits the functional integration of structurally segregated clusters. Our results confirm that network states following learning may be different when myelin plasticity is considered in addition to synaptic plasticity, pointing towards the relevance of integrating both factors in computational models of learning. Comment: 39 pages, 15 figures. This work has been submitted to Chaos: An Interdisciplinary Journal of Nonlinear Science.
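
    The model class described above can be sketched roughly as follows. This is a hedged illustration of a Kuramoto ring with adaptive coupling (synaptic plasticity) and adaptive conduction velocities acting through delays (myelin plasticity); the rule forms, parameter values, and learning rates are assumptions for illustration, not the paper's.

```python
import numpy as np

# Minimal sketch, not the authors' code: a fully connected ring of Kuramoto
# phase oscillators in which both the coupling strengths K[i, j] ("synapses")
# and the conduction velocities V[i, j] ("myelination", via delays) are adapted
# by Hebbian-style rules driven by phase coherence.

rng = np.random.default_rng(0)
N, dt, T = 20, 0.01, 2000
omega = rng.normal(1.0, 0.1, N)                     # natural frequencies

idx = np.arange(N)                                   # ring distances d_ij
dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                  N - np.abs(idx[:, None] - idx[None, :]))
dist = np.maximum(dist, 1)                           # avoid zero delay on the diagonal

K = rng.uniform(0.1, 0.5, (N, N))                    # coupling strengths
V = rng.uniform(1.0, 5.0, (N, N))                    # conduction velocities
eps_k, eps_v = 0.05, 0.05                            # plasticity rates (assumed)

max_delay = 200                                      # history length in steps
theta_hist = np.tile(rng.uniform(0, 2 * np.pi, N), (max_delay, 1))

for _ in range(T):
    theta = theta_hist[-1]
    delay = np.clip((dist / (V * dt)).astype(int), 1, max_delay - 1)
    theta_delayed = theta_hist[-1 - delay, idx[None, :]]      # theta_j(t - tau_ij)

    coupling = (K * np.sin(theta_delayed - theta[:, None])).sum(axis=1) / N
    theta_new = (theta + dt * (omega + coupling)) % (2 * np.pi)

    # Hebbian adaptation: links between units whose (delayed) phases are
    # coherent are strengthened and sped up; incoherent links decay / slow down.
    coherence = np.cos(theta_delayed - theta[:, None])
    K = np.clip(K + dt * eps_k * (coherence - K), 0.0, 1.0)
    V = np.clip(V + dt * eps_v * coherence, 0.5, 10.0)

    theta_hist = np.roll(theta_hist, -1, axis=0)
    theta_hist[-1] = theta_new

R = np.abs(np.exp(1j * theta_hist[-1]).mean())       # global order parameter
print(f"global synchrony after learning: R = {R:.3f}")
```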

    Supervised Learning in Multilayer Spiking Neural Networks

    This article introduces a supervised learning algorithm for multilayer spiking neural networks. The algorithm presented here overcomes some limitations of existing learning algorithms: it can be applied to neurons that fire multiple spikes, and it can in principle be applied to any linearisable neuron model. The algorithm is applied successfully to various benchmarks, such as the XOR problem and the Iris data set, as well as to complex classification problems. The simulations also show the flexibility of this supervised learning algorithm, which permits different encodings of the spike timing patterns, including precise spike train encoding. Comment: 38 pages, 4 figures.

    Neuronal assembly dynamics in supervised and unsupervised learning scenarios

    The dynamic formation of groups of neurons, or neuronal assemblies, is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction are still to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus on the temporal structure of neuronal signals. In this context, we investigate neuronal assembly dynamics in two complementary scenarios: the first, a supervised spike pattern classification task, in which noisy variations of a collection of spikes have to be correctly labeled; the second, an unsupervised, minimally cognitive evolutionary robotics task, in which an evolved agent has to cope with multiple, possibly conflicting, objectives. In both cases, the more traditional dynamical analysis of the system's variables is paired with information-theoretic techniques in order to get a broader picture of the ongoing interactions with and within the network. The neural network model is inspired by the Kuramoto model of coupled phase oscillators and allows one to fine-tune the network synchronization dynamics and assembly configuration. The experiments explore the computational power, redundancy, and generalization capability of neuronal circuits, demonstrating that performance depends nonlinearly on the number of assemblies and neurons in the network and showing that the framework can be exploited to generate minimally cognitive behaviors, with dynamic assembly formation accounting for varying degrees of stimulus modulation of the sensorimotor interactions.
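
    The local and global synchronization measures mentioned above are typically quantified with Kuramoto-style order parameters; a standard formulation (assumed here, not taken from the paper) is:

```latex
% Global synchrony: Kuramoto order parameter over all N phases at time t.
R(t)\, e^{i\psi(t)} = \frac{1}{N} \sum_{j=1}^{N} e^{i\theta_j(t)}

% Local / functional connectivity: pairwise phase-locking value over T samples.
\mathrm{PLV}_{ij} = \left| \frac{1}{T} \sum_{t=1}^{T}
    e^{i\left(\theta_i(t) - \theta_j(t)\right)} \right|
```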

    Handwritten digit recognition by bio-inspired hierarchical networks

    The human brain processes information, showing learning and prediction abilities, but the underlying neuronal mechanisms remain unknown. Recently, many studies have shown that neuronal networks are capable of both generalization and association of sensory inputs. In this paper, following a set of neurophysiological findings, we propose a learning framework with strong biological plausibility that mimics prominent functions of cortical circuitries. We developed the Inductive Conceptual Network (ICN), a hierarchical bio-inspired network able to learn invariant patterns by means of Variable-order Markov Models implemented in its nodes. The outputs of the top-most node of the ICN hierarchy, representing the highest input generalization, allow for automatic classification of inputs. We found that the ICN clustered MNIST images with an error of 5.73% and USPS images with an error of 12.56%.
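
    A rough, hypothetical sketch of the kind of variable-order Markov model an ICN node might implement is given below; the class name, the back-off scheme, and the toy string are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

# Hedged sketch: a count-based variable-order Markov model that predicts the
# next symbol from the longest context it has actually observed, backing off
# to shorter contexts when necessary.

class VOMM:
    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))  # context -> symbol -> count

    def train(self, sequence):
        for i, symbol in enumerate(sequence):
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                context = tuple(sequence[i - k:i])
                self.counts[context][symbol] += 1

    def predict(self, history):
        # back off from the longest matching context to the empty context
        for k in range(min(self.max_order, len(history)), -1, -1):
            context = tuple(history[len(history) - k:])
            if context in self.counts:
                dist = self.counts[context]
                total = sum(dist.values())
                return {s: c / total for s, c in dist.items()}
        return {}

model = VOMM(max_order=2)
model.train("abracadabra")
print(model.predict("ab"))   # {'r': 1.0} -- in the toy string, 'ab' is always followed by 'r'
```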

    A Bio-Logical Theory of Animal Learning

    This article provides the foundation for a new predictive theory of animal learning that is based upon a simple logical model. The knowledge of experimental subjects at a given time is described using logical equations. These logical equations are then used to predict a subject’s response when presented with a known or a previously unknown situation. This new theory successfully anticipates phenomena that existing theories predict, as well as phenomena that they cannot. It provides a theoretical account for phenomena that are beyond the domain of existing models, such as extinction and the detection of novelty, from which “external inhibition” can be explained. Examples of the methods applied to make predictions are given using previously published results. The present theory proposes a new way to envision the minimal functions of the nervous system, and provides possible new insights into the way that brains ultimately create and use knowledge about the world.

    Understanding person acquisition using an interactive activation and competition network

    Face perception is one of the most developed visual skills that humans display, and recent work has attempted to examine the mechanisms involved in face perception by noting how neural networks achieve the same performance. The purpose of the present paper is to extend this approach to look not just at human face recognition, but also at human face acquisition. Experiment 1 presents empirical data describing the acquisition over time of appropriate representations for newly encountered faces. These results are compared with those of Simulation 1, in which a modified IAC network capable of modelling the acquisition process is generated. Experiment 2 and Simulation 2 explore the mechanisms of learning further, and it is demonstrated that the acquisition of a set of associated new facts is easier than the acquisition of individual facts in isolation from one another. This is explained in terms of the advantage gained from additional inputs and the mutual reinforcement of developing links within an interactive neural network system.
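
    Since the paper builds on a modified IAC (interactive activation and competition) network, a minimal sketch of the standard McClelland-style IAC activation update may help orient the reader; the toy units, weights, and parameter values below are assumptions for illustration, not the network used in the paper.

```python
import numpy as np

# Hedged sketch of a generic IAC update step: active units send their output
# through signed weights, excitation pulls activation toward a ceiling,
# inhibition pushes it toward a floor, and decay pulls it back to rest.

def iac_step(a, W, ext, a_max=1.0, a_min=-0.2, rest=-0.1, decay=0.1, dt=0.1):
    """One synchronous IAC update: a = activations, W = signed weights,
    ext = external input (e.g. a presented face or name)."""
    net = W @ np.maximum(a, 0.0) + ext            # only active units send output
    grow = (a_max - a) * np.maximum(net, 0.0)     # excitation pulls toward max
    shrink = (a - a_min) * np.minimum(net, 0.0)   # inhibition pushes toward min
    return a + dt * (grow + shrink - decay * (a - rest))

# Toy example: two mutually inhibitory "person" units driven by one input.
W = np.array([[0.0, -1.0],
              [-1.0, 0.0]])                       # within-pool competition
a = np.full(2, -0.1)                              # start at resting activation
ext = np.array([0.3, 0.05])                       # unit 0 receives stronger input
for _ in range(200):
    a = iac_step(a, W, ext)
print(a)   # unit 0 wins the competition; unit 1 is suppressed toward its floor
```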

    Discovery of an Unusual Dwarf Galaxy in the Outskirts of the Milky Way

    In this Letter, we announce the discovery of a new dwarf galaxy, Leo T, in the Local Group. It was found as a stellar overdensity in the Sloan Digital Sky Survey Data Release 5 (SDSS DR5). The color-magnitude diagram of Leo T shows two well-defined features, which we interpret as a red giant branch and a sequence of young, massive stars. As judged from fits to the color-magnitude diagram, it lies at a distance of about 420 kpc and has an intermediate-age stellar population with a metallicity of [Fe/H] = -1.6, together with a young population of blue stars with an age of 200 Myr. There is a compact cloud of neutral hydrogen with a mass of roughly 10^5 solar masses and a radial velocity of 35 km/s coincident with the object, visible in the HIPASS channel maps. Leo T is the smallest, lowest-luminosity galaxy found to date with recent star formation. It appears to be a transition object similar to, but of much lower luminosity than, the Phoenix dwarf. Comment: ApJ (Letters), in press; the subject of an SDSS press release today.
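
    For reference, distance estimates from colour-magnitude diagram fits rest on the standard distance modulus relation (a general relation, not a result specific to this Letter); at roughly 420 kpc it evaluates to:

```latex
m - M = 5 \log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right)
      \approx 5 \log_{10}\!\left(\frac{4.2 \times 10^{5}\ \mathrm{pc}}{10\ \mathrm{pc}}\right)
      \approx 23.1\ \mathrm{mag}
```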

    Neuromorphic Detection of Vowel Representation Spaces

    In this paper, a layered architecture to spot and characterize vowel segments in running speech is presented. The detection process is based on neuromorphic principles, specifically the use of layers of Hebbian units to implement lateral inhibition, band probability estimation, and mutual exclusion. Results are presented showing how the association between the acoustic set of patterns and the phonological set of symbols may be created. Possible applications of this methodology are to be found in speech event spotting, in the study of pathological voice, and in speaker biometric characterization, among others.
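
    A hedged sketch of the kind of Hebbian layer with lateral inhibition that such an architecture relies on is shown below; the layer sizes, constants, and training signal are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Hedged sketch: a layer of Hebbian units with lateral inhibition approximated
# by subtractive competition against the strongest unit, plus a normalized
# output that can be read as per-class (e.g. per-vowel) probabilities.

rng = np.random.default_rng(1)
n_bands, n_vowels = 12, 5              # e.g. spectral bands -> vowel units
W = rng.uniform(0.0, 0.1, (n_vowels, n_bands))
eta, inhibition = 0.05, 0.8

def layer_response(x, W, inhibition):
    drive = np.maximum(W @ x, 0.0)                         # feed-forward Hebbian drive
    suppressed = drive - inhibition * (drive.max() - drive)
    a = np.maximum(suppressed, 0.0)                        # lateral inhibition / mutual exclusion
    total = a.sum()
    return a / total if total > 0 else a                   # probability-like output

for _ in range(500):                                       # unsupervised Hebbian adaptation
    x = rng.random(n_bands)                                # stand-in for a band-energy vector
    a = layer_response(x, W, inhibition)
    W += eta * np.outer(a, x)                              # Hebbian growth
    W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-12  # keep weights bounded

print(layer_response(rng.random(n_bands), W, inhibition))  # winner-take-most response
```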

    Derivation of Hebb's rule

    On the basis of the general form of the energy needed to adapt the connection strengths of a network in which learning takes place, a local learning rule is found for the changes of the weights. This biologically realizable learning rule turns out to comply with Hebb's neurophysiological postulate, but is not of the form of any of the learning rules proposed in the literature. It is shown that, if a finite set of the same patterns is presented over and over again to the network, the weights of the synapses converge to finite values. Furthermore, it is proved that the final values found in this biologically realizable limit are the same as those found via a mathematical approach to the problem of finding the weights of a partially connected neural network that can store a collection of patterns. The mathematical solution is obtained via a modified version of the so-called method of the pseudo-inverse, and has the inverse of a reduced correlation matrix, rather than the usual correlation matrix, as its basic ingredient. Thus, a biological network might realize the final results of the mathematician via the energetically economical rule for the adaptation of the synapses found in this article. Comment: 29 pages, LaTeX, 3 figures.
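
    For orientation, the textbook pseudo-inverse (projection) rule that serves as the reference point for such convergence results can be sketched as follows; note that this is the standard version built on the full correlation matrix, not the reduced-correlation, biologically realizable variant derived in the article.

```python
import numpy as np

# Hedged sketch: the standard pseudo-inverse (projection) rule for storing a
# set of patterns as fixed points of a recurrent network. Shown only as a
# reference point; it is an assumption for illustration, not the article's
# derivation.

rng = np.random.default_rng(2)
N, P = 50, 10
xi = rng.choice([-1.0, 1.0], size=(P, N))       # P binary patterns of N units

C = xi @ xi.T / N                               # pattern correlation matrix
W = xi.T @ np.linalg.pinv(C) @ xi / N           # projection (pseudo-inverse) weights

# Each stored pattern should be (close to) a fixed point of sign(W x).
recalled = np.sign(W @ xi.T).T
print("patterns stored as fixed points:", np.all(recalled == xi))
```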