    Combined local search strategy for learning in networks of binary synapses

    Learning in networks of binary synapses is known to be an NP-complete problem. We construct a combined stochastic local search strategy in the synaptic weight space to improve on the learning performance of a single random walker. Two correlated random walkers, guided by their Hamming distance and their associated energy costs (the number of unlearned patterns), are applied to learn the same large set of patterns. Each walker first learns a small part of the whole pattern set (partially different between the two walkers but containing the same number of patterns), and then both walkers explore their respective weight spaces cooperatively to find a solution that classifies the whole pattern set correctly. The desired solutions lie in the common part of the weight spaces explored by the two walkers. The efficiency of this combined strategy is supported by extensive numerical simulations, and the typical Hamming distance as well as the energy cost is estimated by an annealed computation. Comment: 7 pages, 4 figures; figures and references added.
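
    As a rough illustration of the two-walker idea, here is a minimal Python sketch. The acceptance heuristic, the use of the full pattern set for both walkers, and all names and parameters are illustrative assumptions, not the paper's exact dynamics.

    import numpy as np

    rng = np.random.default_rng(0)

    def energy(w, patterns, labels):
        # Energy cost: number of patterns misclassified by the binary-weight perceptron.
        return int(np.sum(np.where(patterns @ w > 0, 1, -1) != labels))

    def cooperative_search(patterns, labels, n, steps=20000):
        # Two correlated walkers over {-1, +1}^n. Each proposes single-weight flips,
        # keeping moves that lower its energy and, on ties, moves that shrink the
        # Hamming distance to the other walker (a toy stand-in for the paper's
        # Hamming-distance-guided cooperation).
        w1 = rng.choice([-1, 1], size=n)
        w2 = rng.choice([-1, 1], size=n)
        for _ in range(steps):
            for w, other in ((w1, w2), (w2, w1)):
                i = rng.integers(n)
                e_old = energy(w, patterns, labels)
                w[i] = -w[i]
                e_new = energy(w, patterns, labels)
                closer = w[i] == other[i]
                if e_new > e_old or (e_new == e_old and not closer):
                    w[i] = -w[i]  # reject the flip
            if energy(w1, patterns, labels) == 0:
                return w1  # a solution classifying the whole pattern set correctly
        return None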

    Binary Particle Swarm Optimization based Biclustering of Web usage Data

    Web mining is the nontrivial process of discovering valid, novel, and potentially useful knowledge from web data using data mining techniques. It can yield information that is useful for improving the services offered by web portals and by information access and retrieval tools. With the rapid development of biclustering, more researchers have applied the technique to different fields in recent years. When the biclustering approach is applied to web usage data, it automatically captures the hidden browsing patterns in the form of biclusters. In this work, a swarm intelligence technique is combined with the biclustering approach to propose an algorithm called Binary Particle Swarm Optimization (BPSO) based Biclustering for Web Usage Data. The main objective of this algorithm is to retrieve the globally optimal bicluster from the web usage data. These biclusters contain relationships between web users and web pages that are useful for e-commerce applications such as web advertising and marketing. Experiments are conducted on a real dataset to demonstrate the efficiency of the proposed algorithm.
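
    The core of binary PSO is the sigmoid bit-update rule of Kennedy and Eberhart; a minimal Python sketch follows, where each particle would encode a candidate bicluster as a binary membership vector over users and pages. The coefficient values are typical defaults, and the fitness function that would score a bicluster is left out; both are assumptions, not the paper's tuned settings.

    import numpy as np

    rng = np.random.default_rng(1)

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    def bpso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
        # One binary-PSO iteration: velocities update as in continuous PSO,
        # then every bit is resampled with probability sigmoid(velocity).
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = (rng.random(x.shape) < sigmoid(v)).astype(int)
        return x, v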

    Optimal learning rules for discrete synapses

    There is evidence that biological synapses have a limited number of discrete weight states. Memory storage with such synapses behaves quite differently from storage with unbounded, continuous weights, as old memories are automatically overwritten by new memories. Consequently, there has been substantial discussion about how this affects learning and storage capacity. In this paper, we calculate the storage capacity of discrete, bounded synapses in terms of Shannon information. We use this to optimize the learning rules and investigate how the maximum information capacity depends on the number of synapses, the number of synaptic states, and the coding sparseness. Below a certain critical number of synapses per neuron (comparable to numbers found in biology), we find that storage is similar to that with unbounded, continuous synapses. Hence, discrete synapses do not necessarily have a lower storage capacity.
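
    To see the overwriting effect concretely, here is a toy simulation of bounded two-state synapses. The transition probability p and the storage rule are illustrative assumptions, not the information-optimized rules the paper derives.

    import numpy as np

    rng = np.random.default_rng(2)

    def store(w, pattern, p=0.5, n_states=2):
        # Each synapse holds a bounded discrete state; a presented pattern pushes
        # each synapse toward its target state with probability p, so earlier
        # memories are gradually overwritten.
        step = np.where(pattern > 0, 1, -1)
        move = rng.random(w.shape) < p
        return np.clip(w + step * move, 0, n_states - 1)

    n = 100_000
    w = rng.integers(0, 2, size=n)
    first = rng.choice([-1, 1], size=n)
    w = store(w, first)
    for t in range(1, 6):
        w = store(w, rng.choice([-1, 1], size=n))
        signal = np.corrcoef(w, first > 0)[0, 1]
        print(f"after {t} later memories, correlation with the first: {signal:.3f}")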

    Achieving Maximum Distance Separable Private Information Retrieval Capacity With Linear Codes

    We propose three private information retrieval (PIR) protocols for distributed storage systems (DSSs) where data is stored using an arbitrary linear code. The first two protocols, named Protocol 1 and Protocol 2, achieve privacy for the scenario with noncolluding nodes. Protocol 1 requires a file size that is exponential in the number of files in the system, while Protocol 2 requires a file size that is independent of the number of files and is hence simpler. We prove that, for certain linear codes, Protocol 1 achieves the maximum distance separable (MDS) PIR capacity, i.e., the maximum PIR rate (the ratio of the amount of retrieved stored data per unit of downloaded data) for a DSS that uses an MDS code to store any given (finite or infinite) number of files, and Protocol 2 achieves the asymptotic MDS-PIR capacity (with an infinitely large number of files in the DSS). In particular, we provide a necessary and a sufficient condition for a code to achieve the MDS-PIR capacity with Protocols 1 and 2, and prove that cyclic codes, Reed-Muller (RM) codes, and a class of distance-optimal local reconstruction codes achieve both the finite MDS-PIR capacity (i.e., with any given number of files) and the asymptotic MDS-PIR capacity with Protocols 1 and 2, respectively. Furthermore, we present a third protocol, Protocol 3, for the scenario with multiple colluding nodes, which can be seen as an improvement of a protocol recently introduced by Freij-Hollanti et al. As in the noncolluding case, we provide a necessary and a sufficient condition to achieve the maximum possible PIR rate of Protocol 3. Moreover, we provide a particular class of codes that is suitable for this protocol and show that RM codes achieve the maximum possible PIR rate for the protocol. For all three protocols, we present an algorithm to optimize their PIR rates. Comment: This work is an extension of the work done in arXiv:1612.07084v2; the current version introduces further refinements to the manuscript and will appear in the IEEE Transactions on Information Theory.
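
    For reference, the PIR-rate targets above have a known closed form in the PIR literature (background, not a claim about this paper's derivation): for M files stored with an [n, k] MDS code of rate R = k/n, the MDS-PIR capacity is C_M = (1 - R) / (1 - R^M), which tends to 1 - R as M grows. A small Python helper makes the numbers concrete:

    from fractions import Fraction

    def mds_pir_capacity(k: int, n: int, M: int) -> Fraction:
        # Finite MDS-PIR capacity: C_M = (1 - R) / (1 - R^M) with R = k/n.
        R = Fraction(k, n)
        return (1 - R) / (1 - R**M)

    def asymptotic_mds_pir_capacity(k: int, n: int) -> Fraction:
        # Limit as the number of files grows: 1 - k/n.
        return 1 - Fraction(k, n)

    print(mds_pir_capacity(2, 4, 3))          # 4/7 for a rate-1/2 code and 3 files
    print(asymptotic_mds_pir_capacity(2, 4))  # 1/2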

    A three-threshold learning rule approaches the maximal capacity of recurrent neural networks

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network, whose prototype is the Hopfield model. The Hopfield model has a poor storage capacity compared with that achieved by perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns are presented online as strong afferent currents, producing a bimodal distribution of the neurons' synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local field with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. Between these two thresholds, potentiation or depression occurs when the local field is above or below an intermediate threshold, respectively. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix and found that both the fraction of zero-weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns. Comment: 24 pages, 10 figures, to be published in PLOS Computational Biology.
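
    The rule is simple enough to state in a few lines; here is a minimal Python sketch, where the step size, the weight bounds, and the exact threshold values are illustrative assumptions.

    import numpy as np

    def three_threshold_update(w, x, h, th_low, th_mid, th_high, dw=1):
        # Synapses with active inputs (x == 1) are modified according to the
        # local field h: no plasticity above th_high or below th_low;
        # otherwise potentiate if h > th_mid, depress if h < th_mid.
        if th_low < h < th_high:
            if h > th_mid:
                w = w + dw * x   # potentiation of active synapses
            elif h < th_mid:
                w = w - dw * x   # depression of active synapses
        return np.clip(w, 0, None)  # keep excitatory weights non-negative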

    Codeword stabilized quantum codes: algorithm and structure

    The codeword stabilized ("CWS") quantum codes formalism presents a unifying approach to both additive and nonadditive quantum error-correcting codes (arXiv:0708.1021). This formalism reduces the problem of constructing such quantum codes to finding a binary classical code that corrects an error pattern induced by a graph state. Finding such a classical code can be very difficult. Here, we consider an algorithm that maps the search for CWS codes to the problem of identifying maximum cliques in a graph. While solving this problem is in general very hard, we prove three structure theorems that reduce the search space, specifying certain admissible and optimal ((n,K,d)) additive codes. In particular, we find that no ((7,3,3)) CWS code exists, even though the linear programming bound does not rule one out. The complexity of the CWS search algorithm is compared with that of the contrasting method introduced by Aggarwal and Calderbank (arXiv:cs/0610159). Comment: 11 pages, 1 figure.
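
    Under this reduction, a CWS code with K codewords corresponds to a clique of size K in a derived graph. A brute-force maximum-clique sketch over an adjacency matrix follows (exponential in general, consistent with the hardness noted above; the construction of the graph itself is omitted):

    from itertools import combinations

    def is_clique(adj, nodes):
        # Every pair of chosen vertices must be adjacent.
        return all(adj[u][v] for u, v in combinations(nodes, 2))

    def max_clique(adj):
        # Exhaustive search from the largest size down; viable only for tiny graphs.
        n = len(adj)
        for size in range(n, 0, -1):
            for nodes in combinations(range(n), size):
                if is_clique(adj, nodes):
                    return nodes
        return ()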

    Neuron as a reward-modulated combinatorial switch and a model of learning behavior

    This paper proposes a neuronal circuitry layout and synaptic plasticity principles that allow the (pyramidal) neuron to act as a "combinatorial switch". Namely, the neuron learns to be more prone to generate spikes for those combinations of firing input neurons for which a previous spiking of the neuron was followed by a positive global reward signal. The reward signal may be mediated by certain modulatory hormones or neurotransmitters, e.g., dopamine. More generally, a trial-and-error learning paradigm is suggested in which a global reward signal triggers long-term enhancement or weakening of a neuron's spiking response to the preceding neuronal input firing pattern. Thus, rewards provide a feedback pathway that informs neurons whether their spiking was beneficial or detrimental for a particular input combination. The neuron's ability to discern specific combinations of firing input neurons is achieved through a random or predetermined spatial distribution of input synapses on dendrites, which creates synaptic clusters that represent various permutations of input neurons. The corresponding dendritic segments, or the enclosed individual spines, can become particularly excited, due to local sigmoidal thresholding involving voltage-gated channel conductances, when the segment's excitatory inputs are temporally coincident and inhibitory inputs are absent. Such nonlinear excitation corresponds to a particular firing combination of input neurons, and it is posited that the excitation strength encodes the combinatorial memory and is regulated by long-term plasticity mechanisms. It is also suggested that the spine calcium influx that may result from the spatiotemporal synaptic input coincidence may cause the spine head actin filaments to undergo mechanical (muscle-like) contraction, with the ensuing cytoskeletal deformation transmitted to the axon initial segment where it may... Comment: Version 5: added computer code in the ancillary files section.
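
    The trial-and-error paradigm has a natural toy form: after each trial, a global scalar reward gates a Hebbian-style change in the neuron's response to the input pattern that preceded its spike. A minimal Python sketch follows; the learning rate, spiking threshold, and reward interface are assumptions, not the paper's proposed circuitry.

    import numpy as np

    def trial(w, x, reward, lr=0.1, theta=0.5):
        # One trial: the neuron spikes if its drive exceeds a threshold; the
        # global reward then strengthens or weakens the response to the active
        # input combination (reward > 0 enhances, reward < 0 weakens).
        spiked = float(x @ w > theta)
        w = np.clip(w + lr * reward * spiked * x, 0.0, 1.0)
        return w, spiked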