39 research outputs found

    Latent Attractor Selection in the Presence of Irrelevant

    No full text
    Latent attractor networks are recurrent neural networks with weak attractors that bias the network's response to external stimuli but never fully manifest themselves. Such networks have been used to model context-dependent place representations in the hippocampus [5], and to encode context-dependent stimuli in neural networks [3]. In the original latent attractor model, each attractor was triggered by a unique context pattern representing a stimulus that uniquely identified the context of the subsequent episode. This model was later extended to the case where contexts were triggered progressively by the sequential presentation of several stimulus patterns without regard to order. In this paper, we describe a network model that can select contexts even if the triggering stimulus patterns are interspersed among patterns irrelevant to context selection. This is closer to the way such a process would occur cognitively, where contexts are typically recognized based on a subset of sequentially perceived identifiers or cues among a larger set of perceived items.
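
    As a concrete illustration of the mechanism, the sketch below implements an assumed, deliberately simplified single-layer version (not the paper's actual model): attractors are stored with a Hopfield-style Hebbian rule at low gain so they pull on the state without dominating it, and cues for one context are interleaved with irrelevant random patterns. All names and parameter values are invented for illustration.

    # Minimal single-layer sketch of latent attractor bias (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 400, 4               # units, number of latent context attractors
    G = 0.4                     # weak attractor gain (< 1 keeps attractors latent)
    LEAK = 0.6                  # persistence of the internal state across inputs

    contexts = rng.choice([-1.0, 1.0], size=(K, N))
    W = (contexts.T @ contexts) / N            # Hebbian outer-product storage
    np.fill_diagonal(W, 0.0)

    def update(state, x):
        # Leaky recurrent update: old state + weak attractor pull + external cue.
        return np.tanh(LEAK * state + G * (W @ state) + 0.8 * x)

    # Cue stream: three noisy cues for context 0 interleaved with three
    # irrelevant random patterns.
    relevant = [np.sign(contexts[0] + 0.8 * rng.standard_normal(N)) for _ in range(3)]
    irrelevant = [rng.choice([-1.0, 1.0], size=N) for _ in range(3)]
    stream = [p for pair in zip(relevant, irrelevant) for p in pair]

    state = np.zeros(N)
    for x in stream:
        state = update(state, x)

    overlaps = contexts @ state / N
    print("overlap with each context:", np.round(overlaps, 2))

    Run as-is, the overlap with the cued context should end well above the others yet well below 1, i.e. the attractor biases the response without fully manifesting, despite the interspersed distractor patterns.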

    Network capacity for latent attractor computation

    No full text
    Attractor networks have been one of the most successful paradigms in neural computation, and have been used as models of computation in the nervous system. Many experimentally observed phenomena -- such as coherent population codes, contextual representations, and replay of learned neural activity patterns -- are explained well by attractor dynamics. Recently, we proposed a paradigm called "latent attractors" where attractors embedded in a recurrent network via Hebbian learning are used to channel network response to external input rather than becoming manifest themselves. This allows the network to generate context-sensitive internal codes in complex situations. Latent attractors are particularly helpful in explaining computations within the hippocampus -- a brain region of fundamental significance for memory and spatial learning. The performance of latent attractor networks depends on the number of such attractors that a network can sustain. Following methods developed for associative memory networks, we present analytical and computational results on the capacity of latent attractor networks.
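
    The flavor of such a computational capacity estimate can be sketched with the standard numerical test used for Hebbian associative memories (an assumption about methodology, not the paper's exact latent-attractor analysis): store P patterns in a network of N units, cue each with a corrupted copy, and measure the fraction recalled as P grows past the classical ~0.138 N limit.

    # Hedged sketch of a numerical capacity test for Hebbian attractor storage.
    import numpy as np

    rng = np.random.default_rng(1)

    def recall_rate(N, P, flip=0.1, steps=20):
        patterns = rng.choice([-1, 1], size=(P, N))
        W = (patterns.T @ patterns) / N            # Hebbian storage
        np.fill_diagonal(W, 0.0)
        ok = 0
        for p in patterns:
            s = p * np.where(rng.random(N) < flip, -1, 1)   # corrupted cue
            for _ in range(steps):
                s = np.where(W @ s >= 0, 1, -1)             # synchronous settling
            ok += (s @ p) / N > 0.95                        # near-perfect recall?
        return ok / P

    N = 300
    for P in (10, 20, 40, 60):    # classical capacity is roughly 0.138 * N = 41
        print(f"P={P:3d}  recall rate = {recall_rate(N, P):.2f}")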

    A Linear Programming Approach for Synthesis of Mixed-Signal Interface Elements

    No full text

    Progressive Attractor Selection in Latent Attractor Networks

    No full text
    Latent attractor networks are recurrent neural networks with weak embedded attractors. The attractors bias the network's response to external inputs without becoming fully manifest themselves. Latent attractor networks have been used to model context-dependent spatial representations in the hippocampus [5], and to encode context-dependent stimuli in neural networks [3]. In the existing model, the biasing attractor was selected in response to a single initial triggering stimulus indicating the context. For example, the sign on a door may set the context for the representation of a room. However, in many realistic situations, context is set by a set of cues rather than a single cue, and these cues are typically seen sequentially, though not in any particular order. The problem addressed here is: how can a latent attractor network progressively select an attractor in response to a sequence of context patterns?
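
    One plausible reading of the progressive-selection problem, sketched below purely as an assumption about the mechanism, is order-free evidence accumulation: each context is defined by a set of cue patterns, incoming patterns are matched against every context's set regardless of order, and the network commits to a biasing attractor once one context's accumulated evidence clearly dominates.

    # Illustrative sketch: progressive, order-free selection among contexts.
    import numpy as np

    rng = np.random.default_rng(2)
    N, K, CUES = 200, 3, 4                       # units, contexts, cues per context
    cue_sets = rng.choice([-1, 1], size=(K, CUES, N))

    def select_context(cue_stream, threshold=2.5):
        evidence = np.zeros(K)
        for t, x in enumerate(cue_stream, 1):
            # Best match of this cue against each context's cue set (order-free).
            match = (cue_sets @ x / N).max(axis=1)
            evidence += np.maximum(match, 0.0)   # ignore anti-correlated inputs
            # Commit once the leader passes threshold and clearly dominates.
            if evidence.max() > threshold and evidence.max() > 2 * np.sort(evidence)[-2]:
                return np.argmax(evidence), t
        return np.argmax(evidence), len(cue_stream)

    # Cues for context 1 presented in shuffled order.
    stream = list(rng.permutation(cue_sets[1]))
    winner, when = select_context(stream)
    print(f"selected context {winner} after {when} cues")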

    Hardware-Software Co-Design of Resource Constrained Systems on Chip in a Deep Submicron Technology

    No full text
    This paper presents a hardware-software co-design methodology for resource-constrained SoCs fabricated in a deep submicron process. The novelty of the methodology lies in considering critical hardware and layout aspects during system-level design for latency optimization. The effect of interconnect parasitics and delays is considered in characterizing bus speeds and data communication times. The methodology permits coarse- and medium-grained resource sharing across tasks for execution speed-up through better usage of hardware. The hardware-software co-design methodology executes three consecutive steps: (1) It performs combined task partitioning to processor cores, operation binding to functional unit cores, and task and communication scheduling. It also identifies minimum speed constraints for each data communication. (2) The bus architecture is synthesized, and buses are routed. IP cores are placed using a hierarchical cluster-growth algorithm. Bus architecture synthesis identifies a set of possible building blocks (using the proposed PBS bitwise generation algorithm), and then assembles them using a simulated annealing algorithm. For early elimination of poor solutions, the paper suggests a special table structure and a select-eliminate method. Each bus architecture is routed, and after parasitic extraction, bus speeds are characterized. (3) For the best bus architecture, the methodology re-schedules tasks, operations, and communications to minimize system latency. At this step, bus speeds account for layout parasitics. The paper offers extensive experiments for the proposed co-design methodology, including a network processor and a JPEG SoC.
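
    As a loose illustration of the flavor of step (2), the following sketch runs a generic simulated annealing search over core-to-bus assignments under a toy latency cost model (intra-bus transfers cheaper than bridged ones). It is a hedged caricature: the paper's PBS bitwise building-block generation, table-based pruning, routing, and parasitic extraction are not reproduced, and all numbers are synthetic.

    # Generic simulated annealing over core-to-bus assignments (toy cost model).
    import math
    import random

    random.seed(3)
    CORES, BUSES = 8, 3
    # traffic[i][j]: words/s exchanged between cores i and j (synthetic data).
    traffic = [[0] * CORES for _ in range(CORES)]
    for i in range(CORES):
        for j in range(i + 1, CORES):
            traffic[i][j] = traffic[j][i] = random.randint(0, 100)

    def cost(assign):
        # Intra-bus transfers cost 1 cycle/word; bridged transfers cost 4.
        total = 0
        for i in range(CORES):
            for j in range(i + 1, CORES):
                total += traffic[i][j] * (1 if assign[i] == assign[j] else 4)
        return total

    assign = [random.randrange(BUSES) for _ in range(CORES)]
    best, best_cost = assign[:], cost(assign)
    T = 100.0
    while T > 0.1:
        cand = assign[:]
        cand[random.randrange(CORES)] = random.randrange(BUSES)  # move one core
        delta = cost(cand) - cost(assign)
        if delta < 0 or random.random() < math.exp(-delta / T):
            assign = cand
            if cost(assign) < best_cost:
                best, best_cost = assign[:], cost(assign)
        T *= 0.98                                                # cooling schedule
    print("best assignment:", best, "estimated latency cost:", best_cost)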

    Adaptive Dynamic Modularity in a Connectionist Model of Context-Dependent Idea Generation

    No full text
    Cognitive control, the ability to produce appropriate behavior in complex situations, is a fundamental aspect of intelligence. It is increasingly evident that this control arises from the interaction of dynamics in several brain regions, and depends significantly on processes of modulation and dynamical biasing. While most research has focused on explanations of behavioral responses seen in experiments and pathologies, it is reasonable to expect that internal functions such as planning and thinking would also use similar control mechanisms. In this paper, we present a connectionist model for an idea generation process that can rapidly retrieve old ideas in familiar contexts and search for novel ideas in unfamiliar ones. Based on a simple reinforcement signal, the system learns context-dependent biases that represent effective internal “response systems” for generating ideas from conceptual elements. A broad goal of the research is to show that preconfigured structural modularity, limited real-time selectivity, and adaptive modulation can interact to produce the flexible functionality necessary for cognition and intelligent behavior.
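
    The learning loop described here can be caricatured as a contextual bandit, sketched below as an assumption rather than the paper's connectionist architecture: each context keeps biases over candidate "response systems", a scalar reinforcement signal strengthens whichever system produced a useful idea, and softmax selection gives fast retrieval in familiar contexts while leaving unfamiliar contexts to search broadly.

    # Contextual-bandit caricature of reinforcement-learned context biases.
    import numpy as np

    rng = np.random.default_rng(4)
    CONTEXTS, SYSTEMS = 3, 5
    bias = np.zeros((CONTEXTS, SYSTEMS))          # learned context-dependent biases
    LR, TEMP = 0.2, 1.0

    def choose(ctx):
        p = np.exp(bias[ctx] / TEMP)
        p /= p.sum()
        return rng.choice(SYSTEMS, p=p)           # soft, bias-weighted selection

    # Synthetic world: in each context, one hidden system reliably yields good ideas.
    good = rng.integers(SYSTEMS, size=CONTEXTS)
    for _ in range(500):
        ctx = rng.integers(CONTEXTS)
        sys_ = choose(ctx)
        reward = 1.0 if sys_ == good[ctx] else 0.0
        bias[ctx, sys_] += LR * (reward - bias[ctx, sys_])   # reinforcement update

    # Familiar contexts now retrieve their effective system with high probability,
    # while a novel context (uniform biases) would still search broadly.
    print("learned preferred system per context:", bias.argmax(axis=1))
    print("true good system per context:        ", good)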