467 research outputs found

    Neural network mechanisms of working memory interference

    Our ability to memorize is at the core of our cognitive abilities. How could we effectively make decisions without considering memories of previous experiences? Broadly, our memories can be divided into two categories: long-term and short-term memories. Short-term memory is sometimes also called working memory, and throughout this thesis I will use both terms interchangeably. As the names suggest, long-term memory is the memory you use when you remember concepts for a long time, such as your name or age, while short-term memory is the system you engage while choosing between different wines at the liquor store. As your attention jumps from one bottle to another, you need to hold in memory the characteristics of previous ones to pick your favourite. By the time you pick your favourite bottle, you might remember the prices or grape types of the other bottles, but you are likely to forget all of those details an hour later at home, opening the wine in front of your guests. The overall goal of this thesis is to study the neural mechanisms that underlie working memory interference, as reflected in quantitative, systematic behavioral biases. Ultimately, the goal of each chapter, even when focused exclusively on behavioral experiments, is to nail down plausible neural mechanisms that can produce specific behavioral and neurophysiological findings. To this end, we use the bump-attractor model as our working hypothesis, with which we often contrast the synaptic working memory model. The work performed during this thesis is described here in 3 main chapters, encapsulating 5 broad goals: In Chapter 4.1, we aim to test behavioral predictions of a bump-attractor network when used to store multiple items (1). Moreover, we connected two such networks, aiming to model feature binding through selectivity synchronization (2). In Chapter 4.2, we aim to clarify the mechanisms of working memory interference from previous memories (3), the so-called serial biases.
These biases provide an excellent opportunity to contrast activity-based and activity-silent mechanisms, because both mechanisms have been proposed as the underlying cause of those biases. In Chapter 4.3, armed with the same techniques used to seek evidence for activity-silent mechanisms, we test a prediction of the bump-attractor model with short-term plasticity (4). Finally, in light of the results from aim 4 and simple computer simulations, we reinterpret previous studies claiming evidence for activity-silent mechanisms (5).
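As an illustration of the bump-attractor hypothesis referenced above, a minimal rate-based ring-attractor network can be sketched in a few lines. The network size, coupling strengths, transfer function, and cue below are illustrative assumptions, not the parameters used in the thesis:

```python
import numpy as np

# Minimal rate-based bump-attractor sketch. All parameters (N, J0, J1, tau)
# are illustrative assumptions, not those of the thesis.
N = 128
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred angles
J0, J1 = -2.0, 8.0                                     # uniform inhibition, tuned excitation
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

def f(x):
    # rectified, saturating transfer function keeps rates bounded
    return np.tanh(np.maximum(x, 0.0))

dt, tau = 1.0, 10.0                                    # ms
r = np.zeros(N)
cue = 2.0 * np.exp(-(theta - np.pi) ** 2 / 0.1)        # transient cue at angle pi

for step in range(1000):
    I_ext = cue if step < 200 else 0.0                 # cue removed after 200 ms
    r += dt / tau * (-r + f(W @ r + I_ext))

# A self-sustained activity bump remains near the cued angle after cue offset;
# its center can be read out with a population-vector decoder.
decoded = np.angle(np.sum(r * np.exp(1j * theta)))
```

The persistent bump after cue offset is the activity-based memory trace; in this framework, interference between memories arises when nearby bumps attract or repel each other.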

    A Data Requisition Treatment Instrument For Clinical Quantifiable Soft Tissue Manipulation

    Indiana University-Purdue University Indianapolis (IUPUI)
    Soft tissue manipulation is a widely used practice among manual therapists from a variety of healthcare disciplines to evaluate and treat neuromusculoskeletal impairments using mechanical stimulation, either by hand massage or with specially designed tools. The practice of a specific approach of targeted pressure application, using distinct rigid mechanical tools to break down adhesions and scar tissue and improve range of motion for affected joints, is called Instrument-Assisted Soft Tissue Manipulation (IASTM). The efficacy of IASTM has been demonstrated as a means to improve joint mobility, reduce pain, enhance flexibility and restore function. However, unlike techniques such as ultrasound, traction and electrical stimulation, the practice of IASTM involves no standard for objectively characterizing massage with physical parameters. Thus, most IASTM treatments remain subjective, relying on practitioner or patient feedback, which underscores the need to quantify therapeutic massage or IASTM treatment with adequate treatment parameters in order to document, analyze, compare and validate STM treatment as an established, state-of-the-art practice. This thesis focuses on the development and implementation of Quantifiable Soft Tissue Manipulation (QSTM™) technology through the design of an ergonomic, portable and miniaturized wired localized pressure applicator medical device (Q1) for characterizing soft tissue manipulation. The dose-load response, comprising forces in newtons, the pitch angle of the device, and the stroke frequency of massage within a stipulated treatment time, is captured in real time to characterize a QSTM session.
QSTM PC software (Q-WARE©), featuring a per-patient Treatment Record System for saving and retrieving treatment diagnostics and a real-time graphical monitoring system, was developed from scratch on the Windows platform to implement the technology. Quantitative analysis of STM treatment without visual monitoring demonstrated inter-rater and intra-rater inconsistencies in clinicians' STM force application, whereas consistency improved when treatment was applied with visual monitoring from the QSTM feedback system. The system also discriminated variability in the application of high, medium and low dose-loads and in stroke frequency during targeted treatment sessions.
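The treatment parameters named above (dose load, peak force, stroke frequency) lend themselves to simple signal-processing summaries. The sketch below derives them from a synthetic force trace; the signal shape, sampling rate, and threshold-crossing rule are assumptions for illustration, not Q-WARE's actual algorithms:

```python
import numpy as np

# Illustrative per-session metrics from a force-time signal.
# The synthetic trace and the crossing-count rule are assumptions.
fs = 100.0                                     # sample rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)                   # 10 s treatment window
force = 20 + 8 * np.sin(2 * np.pi * 1.5 * t)   # synthetic 1.5 Hz stroking, newtons

mean_force = force.mean()                      # dose load, N
peak_force = force.max()                       # peak applied force, N

# Stroke frequency: count upward crossings of the mean force level.
above = force > mean_force
crossings = np.count_nonzero(~above[:-1] & above[1:])
stroke_hz = crossings / (t[-1] - t[0])         # roughly 1.5 Hz for this trace
```

A real device would add filtering and pitch-angle fusion from an orientation sensor, but the same summary statistics per stipulated treatment window would apply.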

    Beta oscillations underlie top-down, feedback control while gamma oscillations reflect bottom-up, feedforward influences

    Prefrontal cortex (PFC) is critical to behavioral flexibility and, hence, the top-down control over bottom-up sensory information. The mechanisms underlying this capacity have been hypothesized to involve the propagation of alpha/beta (8-30 Hz) oscillations via feedback connections to sensory regions. In contrast, gamma (30-160 Hz) oscillations are thought to arise as a function of bottom-up, feedforward stimulation. To test the hypothesis that these oscillatory phenomena embody these functional roles, we assessed the performance of nine monkeys on tasks of learning, categorization, and working memory concurrent with recording of local field potentials (LFPs) from PFC. The first set of tasks consisted of two classes of learning: one explicit and another implicit. Explicit learning is a conscious process that demands top-down control, and in these tasks alpha/beta oscillations tracked learning. In contrast, implicit learning is an unconscious process that is automatic (i.e., bottom-up), and in this task alpha/beta oscillations did not track learning. We next looked at dot-pattern categorization. In this task, category exemplars were generated by jittering the dot locations of a prototype. By chance, some of these exemplars were similar to the prototype (low distortion), and others were not (high distortion). Behaviorally, the monkeys performed well on both distortion levels. However, alpha/beta band oscillations carried more category information at high distortions, while gamma-band category information was greatest at low distortions. Overall, the greater the need for top-down control (i.e., high distortion), the greater the beta, and the lesser the need (i.e., low distortion), the greater the gamma. Finally, laminar electrodes were used to record from animals trained on working memory tasks. Each laminar probe was lowered so that its set of contacts sampled all cortical layers.
During these tasks, gamma oscillations peaked in superficial layers, while alpha/beta peaked in deep layers. Moreover, these deep-layer alpha/beta oscillations entrained superficial alpha/beta, and modulated the amplitude of superficial-layer gamma oscillations. These laminar distinctions are consistent with anatomy: feedback neurons originate in deep layers and feedforward neurons in superficial layers. In summary, alpha/beta oscillations reflect top-down control and feedback connectivity, while gamma oscillations reflect bottom-up processes and feedforward connectivity.
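The band-specific effects described above are typically quantified as spectral power in the alpha/beta (8-30 Hz) and gamma (30-160 Hz) ranges. A minimal sketch of such a band-power computation on a synthetic LFP trace (the signal's composition is an assumption for illustration, not recorded data):

```python
import numpy as np

# Band power of a synthetic LFP, as one might quantify beta vs gamma
# dominance on a laminar contact. Signal composition is illustrative.
fs = 1000.0                                  # sample rate, Hz
t = np.arange(0, 2, 1 / fs)                  # 2 s trace
rng = np.random.default_rng(0)
# A "deep layer"-like trace: strong 20 Hz beta, weak 60 Hz gamma, plus noise.
lfp = (2.0 * np.sin(2 * np.pi * 20 * t)
       + 0.3 * np.sin(2 * np.pi * 60 * t)
       + 0.2 * rng.standard_normal(t.size))

def band_power(x, lo, hi):
    # integrate the power spectrum over [lo, hi) Hz
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return spec[(freqs >= lo) & (freqs < hi)].sum()

beta = band_power(lfp, 8, 30)    # alpha/beta band
gamma = band_power(lfp, 30, 160) # gamma band
# For this deep-layer-like trace, beta power dominates gamma power.
```

Comparing such band-power estimates across contacts along a laminar probe is one way the superficial-gamma/deep-beta profile described above can be made quantitative.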

    Design and Code Optimization for Systems with Next-generation Racetrack Memories

    With the rise of computationally expensive application domains such as machine learning, genomics, and fluid simulation, the quest for performance and energy-efficient computing has gained unprecedented momentum. The significant increase in computing and memory devices in modern systems has resulted in an unsustainable surge in energy consumption, a substantial portion of which is attributed to the memory system. The scaling of conventional memory technologies, and their suitability for next-generation systems, is also questionable. This has led to the emergence and rise of nonvolatile memory (NVM) technologies. Today, several NVM technologies at different development stages are competing for rapid access to the market. Racetrack memory (RTM) is one such nonvolatile memory technology that promises SRAM-comparable latency, reduced energy consumption, and unprecedented density compared to other technologies. However, RTM is sequential in nature, i.e., data in an RTM cell needs to be shifted to an access port before it can be accessed. These shift operations incur performance and energy penalties. An ideal RTM, requiring at most one shift per access, can easily outperform SRAM. However, in the worst-case shifting scenario, RTM can be an order of magnitude slower than SRAM. This thesis presents an overview of RTM device physics, its evolution, strengths and challenges, and its application in the memory subsystem. We develop tools that allow the programming and modeling of RTM-based systems. For shift minimization, we propose a set of techniques including optimal, near-optimal, and evolutionary algorithms for efficient scalar and instruction placement in RTMs. For array accesses, we explore schedule and layout transformations that eliminate the longer overhead shifts in RTMs.
We present an automatic compilation framework that analyzes static control-flow programs and transforms the loop traversal order and memory layout to maximize accesses to consecutive RTM locations and minimize shifts. We develop a simulation framework called RTSim that models various RTM parameters and enables accurate architecture-level simulation. Finally, to demonstrate RTM's potential in non-von-Neumann in-memory computing paradigms, we exploit its device attributes to implement logic and arithmetic operations. As a concrete use case, we implement an entire hyperdimensional computing framework in RTM to accelerate the language recognition problem. Our evaluation shows considerable performance and energy improvements compared to conventional von Neumann models and state-of-the-art accelerators.
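The cost that the placement techniques above try to minimize can be captured by a toy model: a single-port track where aligning the port with a cell costs one shift per cell moved. The layouts and access sequence below are hypothetical, and the model abstracts away the real architecture details, but it shows why placing frequently co-accessed variables in adjacent cells reduces shifts:

```python
# Toy shift-cost model for a single-port racetrack memory track.
# Layouts, access sequence, and unit-cost-per-cell shifting are assumptions.
def shift_cost(layout, accesses):
    """Total shifts: the port starts aligned with cell 0 and must be moved
    to each accessed variable's cell; each one-cell move is one shift."""
    pos = {var: i for i, var in enumerate(layout)}
    port = 0
    shifts = 0
    for var in accesses:
        shifts += abs(pos[var] - port)
        port = pos[var]
    return shifts

accesses = ["a", "c"] * 4 + ["b"]               # "a" and "c" are co-accessed often
naive = shift_cost(["a", "b", "c"], accesses)   # hot pair separated by "b": 15 shifts
placed = shift_cost(["b", "a", "c"], accesses)  # hot pair in adjacent cells: 10 shifts
```

Optimal and heuristic placement algorithms, as well as the loop and layout transformations for arrays, generalize this idea: reorder data (or accesses) so that consecutive accesses land on nearby cells.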