
    Neural networks, error-correcting codes, and polynomials over the binary n-cube

    Several ways of relating the concept of error-correcting codes to the concept of neural networks are presented. Performing maximum-likelihood decoding in a linear block error-correcting code is shown to be equivalent to finding a global maximum of the energy function of a certain neural network. Given a linear block code, a neural network can be constructed in such a way that every codeword corresponds to a local maximum. The connection between maximization of polynomials over the n-cube and error-correcting codes is also investigated; the results suggest that decoding techniques can be a useful tool for solving such maximization problems. The results are generalized to both nonbinary and nonlinear codes.
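    To make the stated equivalence concrete: for a binary linear block code sent over a binary symmetric channel, maximum-likelihood decoding amounts to maximizing, over all codewords, the correlation between the received word and the candidate codeword in bipolar (+1/-1) form; this correlation plays the role of the network energy being maximized over the n-cube. A minimal Python sketch, using a toy (7,4) Hamming code chosen here purely for illustration (it is not taken from the paper):

    import itertools
    import numpy as np

    # Generator matrix of the (7,4) Hamming code (illustrative choice).
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def codewords(G):
        # Enumerate every codeword of the linear code generated by G over GF(2).
        for msg in itertools.product([0, 1], repeat=G.shape[0]):
            yield (np.array(msg) @ G) % 2

    def ml_decode(received, G):
        # ML decoding over a binary symmetric channel: choose the codeword whose
        # bipolar form maximizes the correlation ("energy") with the received
        # word; this is equivalent to minimizing the Hamming distance.
        y = 1 - 2 * np.asarray(received)        # bits 0/1 -> bipolar +1/-1
        return max(codewords(G), key=lambda c: (1 - 2 * c) @ y)

    print(ml_decode([1, 0, 0, 0, 1, 1, 1], G))  # corrects the single flipped bit

    Here the brute-force maximization stands in for the neural-network dynamics; the point of the paper is that the two optimization problems share the same maxima.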

    Self-identification

    Here, I first analyze gender identity qua gender self-ascription and offer a theory of the psychological states underpinning gender self-ascriptions, which I call a form of ‘self-identification’. I hold that gender self-identification consists of a gender self-concept, which itself consists of a belief or assumption in a context, and sometimes involves a gender role ideal, which consists of an individual’s expectations and standards for how to perform a gender role. Second, I defend my view against an objection raised against similar views, namely the claim that they cannot explain nonbinary gender identities, by showing how my account explains various nonbinary gender identities in psychological terms. Finally, I show how my view can begin stabilizing the construct of gender identity in neuroscience by helping researchers foster agreement on how to define gender identity and which methods are best suited to study it.

    Neural Distributed Autoassociative Memories: A Survey

    Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors), where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of a sublinear time search (in the number of stored items) for approximate nearest neighbors among vectors of high dimension. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey focuses mainly on the networks of Hopfield, Willshaw, and Potts, which have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. We discuss the relations of these techniques to similarity search, their advantages and drawbacks, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for very high dimensional vectors.
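    As a concrete reference point for the Hopfield case, the following minimal Python sketch stores bipolar patterns with a local Hebbian (outer-product) rule and retrieves them by iterative dynamics driven only by each neuron's weighted input. It is an illustrative toy, not code from the survey:

    import numpy as np

    def train_hopfield(patterns):
        # Hebbian (outer-product) learning on bipolar +1/-1 patterns;
        # the weight matrix is symmetric with a zeroed diagonal.
        n = patterns.shape[1]
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0.0)
        return W

    def retrieve(W, probe, steps=20):
        # Iterative synchronous retrieval: every neuron updates from the
        # weighted sum of the other neurons' states (locally available info).
        s = probe.copy()
        for _ in range(steps):
            s_new = np.sign(W @ s)
            s_new[s_new == 0] = 1               # break ties toward +1
            if np.array_equal(s_new, s):
                break                           # reached a fixed point
            s = s_new
        return s

    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(5, 100))   # 5 stored items, 100 neurons
    noisy = patterns[0].copy()
    noisy[:10] *= -1                                # corrupt 10 of 100 bits
    recovered = retrieve(train_hopfield(patterns), noisy)
    print(np.array_equal(recovered, patterns[0]))   # likely True at this low load

    Five patterns in a 100-neuron network is well below the classical Hopfield capacity (roughly 0.14n), so retrieval from moderate corruption is expected to succeed.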

    Correlating matched-filter model for analysis and optimisation of neural networks

    A new formalism is described for modelling neural networks, by means of which a clear physical understanding of the network behaviour can be gained. In essence, the neural net is represented by an equivalent network of matched filters which is then analysed by standard correlation techniques. The procedure is demonstrated on the synchronous Little-Hopfield network. It is shown how the ability of this network to discriminate between stored binary, bipolar codes is optimised if the stored codes are chosen to be orthogonal. However, such a choice will not often be possible, and so a new neural network architecture is proposed which enables the same discrimination to be obtained for arbitrary stored codes. The most efficient convergence of the synchronous Little-Hopfield net is obtained when the neurons are connected to themselves with a weight equal to the number of stored codes. The processing gain is presented for this case. The paper goes on to show how this modelling technique can be extended to analyse the behaviour of both hard and soft neural threshold responses, and a novel time-dependent threshold response is described.
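    The two reported design points, orthogonal stored codes for best discrimination and a self-connection weight equal to the number of stored codes for efficient convergence, can be illustrated with a small synchronous Little-Hopfield net. The Python sketch below is an illustrative reconstruction under those assumptions, not code from the paper:

    import numpy as np

    def little_hopfield_weights(codes):
        # Outer-product weights for the synchronous Little-Hopfield net.
        # For +1/-1 codes, keeping the diagonal leaves each neuron
        # self-connected with weight equal to the number of stored codes.
        return codes.T @ codes

    def synchronous_update(W, state, steps=10):
        # All neurons update simultaneously through a hard threshold at zero.
        s = state.copy()
        for _ in range(steps):
            s_new = np.where(W @ s >= 0, 1, -1)
            if np.array_equal(s_new, s):
                break                           # converged to a fixed point
            s = s_new
        return s

    # Orthogonal stored codes (rows of an 8x8 Hadamard matrix) discriminate best.
    codes = np.array([[ 1,  1,  1,  1,  1,  1,  1,  1],
                      [ 1, -1,  1, -1,  1, -1,  1, -1],
                      [ 1,  1, -1, -1,  1,  1, -1, -1],
                      [ 1, -1, -1,  1,  1, -1, -1,  1]])
    probe = codes[1].copy()
    probe[0] *= -1                              # one-bit corruption
    print(synchronous_update(little_hopfield_weights(codes), probe))

    With these orthogonal codes the corrupted probe is restored to the stored code in a single synchronous step, consistent with the matched-filter reading of the network: the update correlates the probe against each stored code and reinforces the best match.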
