199 research outputs found

    Large Developing Axonal Arbors Using a Distributed and Locally-Reprogrammable Address-Event Receiver


    Naturally Rehearsing Passwords

    We introduce quantitative usability and security models to guide the design of password management schemes --- systematic strategies to help users create and remember multiple passwords. In the same way that security proofs in cryptography are based on complexity-theoretic assumptions (e.g., hardness of factoring and discrete logarithm), we quantify usability by introducing usability assumptions. In particular, password management relies on assumptions about human memory, e.g., that a user who follows a particular rehearsal schedule will successfully maintain the corresponding memory. These assumptions are informed by research in cognitive science and validated through empirical studies. Given rehearsal requirements and a user's visitation schedule for each account, we use the total number of extra rehearsals that the user would have to do to remember all of his passwords as a measure of the usability of the password scheme. Our usability model leads us to a key observation: password reuse benefits users not only by reducing the number of passwords that the user has to memorize, but more importantly by increasing the natural rehearsal rate for each password. We also present a security model which accounts for the complexity of password management with multiple accounts and associated threats, including online, offline, and plaintext password leak attacks. Observing that current password management schemes are either insecure or unusable, we present Shared Cues --- a new scheme in which the underlying secret is strategically shared across accounts to ensure that most rehearsal requirements are satisfied naturally while simultaneously providing strong security. The construction uses the Chinese Remainder Theorem to achieve these competing goals.
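
    To make the usability measure concrete, here is a small hypothetical sketch (not code from the paper): assume each shared cue follows an expanding rehearsal schedule, and count the rehearsal intervals in which no account visit naturally rehearses the cue; each such interval costs one extra rehearsal. The doubling schedule, the horizon, and the function names are illustrative assumptions.

```python
# Hypothetical sketch of the "extra rehearsals" usability measure (illustrative
# assumptions: an expanding rehearsal schedule with doubling gaps, and a natural
# rehearsal whenever an account that uses the cue is visited).

def rehearsal_intervals(horizon_days, first_gap=1.0, growth=2.0):
    """Yield (start, end) rehearsal intervals whose lengths grow geometrically."""
    start, gap = 0.0, first_gap
    while start < horizon_days:
        end = min(start + gap, horizon_days)
        yield start, end
        start, gap = end, gap * growth

def extra_rehearsals(visit_times, horizon_days):
    """Count intervals that contain no natural rehearsal (no account visit)."""
    visits = sorted(visit_times)
    return sum(
        1
        for start, end in rehearsal_intervals(horizon_days)
        if not any(start <= t < end for t in visits)
    )

# Toy example: a cue shared by several accounts is rehearsed on every visit to
# any of them, so sharing raises the natural rehearsal rate and lowers the cost.
shared_cue_visits = [0.5, 1.5, 3.0, 6.5, 14.0, 30.0, 61.0, 120.0]
print(extra_rehearsals(shared_cue_visits, horizon_days=365))
```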

    Optimal learning rules for familiarity detection

    It has been suggested that the mammalian memory system has both familiarity and recollection components. Recently, a high-capacity network to store familiarity has been proposed. Here we derive analytically the optimal learning rule for such a familiarity memory using a signal-to-noise ratio analysis. We find that in the limit of large networks the covariance rule, known to be the optimal local, linear learning rule for pattern association, is also the optimal learning rule for familiarity discrimination. The capacity is independent of the sparseness of the patterns, as long as the patterns have a fixed number of bits set. The corresponding information capacity is 0.057 bits per synapse, less than typically found for associative networks.
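
    As a toy illustration of the covariance rule in a familiarity setting (my own sketch, not the paper's derivation), the code below stores sparse binary patterns with a fixed number of active bits, builds weights from mean-subtracted outer products, and scores a probe with a quadratic familiarity signal; stored patterns score higher than novel ones, and the gap is summarized by a signal-to-noise ratio. The network size, pattern count, and scoring details are illustrative assumptions.

```python
# Illustrative sketch: covariance-rule weights and a quadratic familiarity score.
import numpy as np

rng = np.random.default_rng(0)
N, K, P = 1000, 100, 2000           # units, active units per pattern, stored patterns
f = K / N                           # mean activity

def random_pattern():
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = 1.0
    return x

stored = np.array([random_pattern() for _ in range(P)])
Xc = stored - f                     # mean-subtracted patterns
W = Xc.T @ Xc                       # covariance rule, summed over all stored patterns
np.fill_diagonal(W, 0.0)            # no self-connections

def familiarity(x):
    """Quadratic familiarity score of a probe pattern."""
    xc = x - f
    return xc @ W @ xc

scores_old = np.array([familiarity(x) for x in stored[:200]])
scores_new = np.array([familiarity(random_pattern()) for _ in range(200)])

# Signal-to-noise ratio of old-versus-new discrimination
snr = (scores_old.mean() - scores_new.mean()) / scores_new.std()
print(f"SNR of familiarity discrimination: {snr:.1f}")
```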

    An associative memory of Hodgkin-Huxley neuron networks with Willshaw-type synaptic couplings

    An associative memory is studied in neural networks consisting of N (= 100) spiking Hodgkin-Huxley (HH) neurons with time-delayed couplings, which memorize P patterns in their synaptic weights. In addition to excitatory synapses whose strengths are modified according to the Willshaw-type learning rule with the 0/1 code for quiescent/active states, the network includes uniform inhibitory synapses which are introduced to reduce cross-talk noise. Our simulations of the HH neuron network in the noise-free state yield fairly good performance, with a storage capacity of $\alpha_c = P_{\rm max}/N \sim 0.4 - 2.4$ for the low neuron activity of $f \sim 0.04 - 0.10$. This storage capacity of our temporal-code network is comparable to that of the rate-code model with the Willshaw-type synapses. Our HH neuron network is found to be robust against the distribution of time delays in the couplings. The variability of the interspike interval (ISI) of output spike trains in the process of retrieving stored patterns is also discussed.
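
    The Willshaw-type rule referred to above is simple to state: with the 0/1 code, an excitatory weight is switched on whenever its pre- and post-synaptic units are co-active in any stored pattern. The sketch below is a simplified rate-based toy, not the paper's Hodgkin-Huxley spiking simulation; the pattern statistics, the uniform inhibition strength, and the single-step retrieval are illustrative assumptions.

```python
# Toy illustration of Willshaw-type (clipped Hebbian) couplings with 0/1 codes
# and uniform inhibition; a simplified rate-based stand-in for the spiking model.
import numpy as np

rng = np.random.default_rng(1)
N, P, K = 100, 20, 10                        # neurons, stored patterns, active units per pattern

patterns = np.zeros((P, N))
for mu in range(P):
    patterns[mu, rng.choice(N, K, replace=False)] = 1.0   # 0/1 code, K active bits

W = np.clip(patterns.T @ patterns, 0.0, 1.0)   # Willshaw rule: 1 if co-active in any pattern
np.fill_diagonal(W, 0.0)

def retrieve(cue, inhibition=0.5):
    """One synchronous update: excitation minus uniform inhibition, thresholded."""
    h = W @ cue - inhibition * cue.sum()
    return (h > 0).astype(float)

# Recall pattern 0 from a degraded cue with half of its active bits removed.
target = patterns[0]
cue = target.copy()
active = np.flatnonzero(cue)
cue[active[: K // 2]] = 0.0
recalled = retrieve(cue)
print("fraction of target recovered:", recalled @ target / K)
print("spurious active units:", int(recalled.sum() - recalled @ target))
```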

    Memory, modelling and Marr: a commentary on Marr (1971) 'Simple memory: a theory of archicortex'

    David Marr's theory of the archicortex, a brain structure now more commonly known as the hippocampus and hippocampal formation, is an epochal contribution to theoretical neuroscience. Addressing the problem of how information about 10 000 events could be stored in the archicortex during the day so that they can be retrieved using partial information and then transferred to the neocortex overnight, the paper presages a whole wealth of later empirical and theoretical work, proving impressively prescient. Despite this impending success, Marr later apparently grew dissatisfied with this style of modelling, but he went on to make seminal suggestions that continue to resonate loudly throughout the field of theoretical neuroscience. We describe Marr's theory of the archicortex and his theory of theories, setting them into their original and a contemporary context, and assessing their impact. This commentary was written to celebrate the 350th anniversary of the journal Philosophical Transactions of the Royal Society.
    • …