
    Motor Learning Mechanism on the Neuron Scale

    Based on existing data, we put forward a biological model of the motor system at the neuron scale and then indicate its implications for statistics and learning. Specifically, neuron firing frequency and synaptic strength are, in essence, probability estimates, and lateral inhibition also has statistical implications. From the standpoint of learning, dendritic competition through retrograde messengers is the foundation of the conditioned reflex and of grandmother-cell coding, which are the core mechanisms of motor learning and of sensorimotor integration, respectively. Finally, we compare the motor system with the sensory system. In short, we would like to bridge the gap between molecular evidence and computational models. Comment: 8 pages, 4 figures
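    As an illustration of the claim that firing frequency is essentially a probability estimate, here is a toy sketch (our own illustration under that reading, not the paper's model) in which a unit's rate is an incremental running average of a binary input and so converges to the probability of the feature it encodes:

```python
import random

def running_rate(events, alpha=0.05):
    # Toy unit: its "firing rate" r is an exponential moving average of a
    # binary input, so r converges toward the probability that the feature
    # it encodes is present. (Illustrative only, not the paper's model.)
    r = 0.0
    for e in events:            # e = 1 if the feature occurred, else 0
        r += alpha * (e - r)    # incremental update toward the mean
    return r

random.seed(0)
events = [1 if random.random() < 0.3 else 0 for _ in range(5000)]
print(round(running_rate(events), 2))  # close to 0.3, the event probability
```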

    Neural Mechanism of Language

    This paper builds on our previous work on neural coding: a self-organized model supported by existing evidence. We first briefly introduce this model and then use it to explain the neural mechanism of language and reasoning. Moreover, we find that the position of a cortical area determines its importance; specifically, language-relevant areas occupy the "capital" position of the cortical "kingdom", and they are therefore closely related to autonomous consciousness and working memory. In essence, language is a miniature of the real world. Briefly, this paper would like to bridge the gap between the molecular mechanisms of neurons and advanced functions such as language and reasoning. Comment: 6 pages, 3 figures

    A Quantitative Neural Coding Model of Sensory Memory

    The coding mechanism of sensory memory at the neuron scale is one of the most important questions in neuroscience. We have put forward a quantitative neural network model that is self-organized, self-similar, and self-adaptive, like an ecosystem evolving under Darwinian theory. According to this model, neural coding is a many-to-one mapping from objects to neurons, and the whole cerebrum is a real-time statistical Turing machine with powerful representing and learning ability. The model can reconcile several important disputes, such as temporal coding versus rate-based coding, grandmother cells versus population coding, and decay theory versus interference theory. It also provides explanations for key questions such as memory consolidation, episodic memory, consciousness, and sentiment. Its philosophical significance is indicated at the end. Comment: 9 pages, 3 figures

    A 64-point Fourier transform chip for high-speed wireless LAN application using OFDM

    In this article, we present a novel fixed-point 16-bit word-width 64-point FFT/IFFT processor developed primarily for application in the OFDM-based IEEE 802.11a Wireless LAN (WLAN) baseband processor. The 64-point FFT is realized by decomposing it into a 2-D structure of 8-point FFTs. This approach reduces the number of required complex multiplications compared with the conventional radix-2 64-point FFT algorithm. The complex multiplications are realized using shift-and-add operations, so the processor uses no 2-input digital multiplier, and it needs no RAM or ROM for internal storage of coefficients. The proposed 64-point FFT/IFFT processor has been fabricated and tested successfully using our in-house 0.25 µm BiCMOS technology. The core area of the chip is 6.8 mm². The average dynamic power consumption is 41 mW at a 20 MHz operating frequency and a 1.8 V supply voltage. The processor completes one parallel-to-parallel (i.e., all input data available in parallel and all output data generated in parallel) 64-point FFT computation in 23 cycles. These features show that although it has been developed primarily for the IEEE 802.11a standard, it can be used for any application that requires fast operation as well as low power consumption.
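    The 8x8 decomposition described above is the standard two-dimensional (row-column) Cooley-Tukey factorization. The NumPy sketch below shows that structure in floating point; the chip's fixed-point arithmetic and its shift-and-add replacement of the twiddle multiplications are not modeled:

```python
import numpy as np

def dft8(v):
    # Naive 8-point DFT; on the chip this butterfly is multiplier-free.
    n = np.arange(8)
    return np.exp(-2j * np.pi * np.outer(n, n) / 8) @ v

def fft64_2d(x):
    # 64-point FFT via an 8x8 row-column decomposition:
    # X[k1 + 8*k2] = DFT8_n2( W64^(n2*k1) * DFT8_n1( x[8*n1 + n2] ) )
    x = np.asarray(x, dtype=complex).reshape(8, 8)             # x[n1, n2]
    a = np.stack([dft8(x[:, n2]) for n2 in range(8)], axis=1)  # a[k1, n2]
    k1 = np.arange(8).reshape(8, 1)
    n2 = np.arange(8).reshape(1, 8)
    a = a * np.exp(-2j * np.pi * k1 * n2 / 64)                 # twiddle factors
    X = np.stack([dft8(a[r, :]) for r in range(8)], axis=0)    # X[k1, k2]
    return X.flatten(order='F')                                # k = k1 + 8*k2

x = np.random.randn(64) + 1j * np.random.randn(64)
assert np.allclose(fft64_2d(x), np.fft.fft(x))
```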

    Hardware-Efficient Structure of the Accelerating Module for Implementation of Convolutional Neural Network Basic Operation

    This paper presents a structural design of a hardware-efficient module for implementing the basic operation of a convolutional neural network (CNN) with reduced implementation complexity. For this purpose we utilize a modification of the Winograd minimal filtering method as well as computation-vectorization principles. The module calculates the inner products of two consecutive segments of the original data sequence, formed by a sliding window of length 3, with the elements of a filter impulse response. A fully parallel structure of the module for calculating these two inner products based on the naive method of calculation requires 6 binary multipliers and 4 binary adders. The Winograd minimal filtering method allows constructing a module structure that requires only 4 binary multipliers and 8 binary adders. Since a high-performance convolutional neural network can contain tens or even hundreds of such modules, this reduction can have a significant effect. Comment: 3 pages, 5 figures
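    The stated operation counts match the classic Winograd F(2,3) minimal filtering algorithm, which computes two outputs of a 3-tap filter with 4 multiplications and 8 additions (the divisions by 2 fold into precomputed filter constants). A sketch under that assumption:

```python
def winograd_f23(d, g):
    # Winograd F(2,3): two outputs of a 3-tap filter over a sliding window
    # of 4 samples, using 4 multiplications and 8 additions. The transformed
    # filter terms (with the /2) are constants a hardware module precomputes.
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * ((g0 + g1 + g2) / 2)
    m3 = (d2 - d1) * ((g0 - g1 + g2) / 2)
    m4 = (d1 - d3) * g2
    return (m1 + m2 + m3, m2 - m3 - m4)

def naive_f23(d, g):
    # Direct computation of the same two inner products:
    # 6 multiplications and 4 additions.
    y0 = d[0] * g[0] + d[1] * g[1] + d[2] * g[2]
    y1 = d[1] * g[0] + d[2] * g[1] + d[3] * g[2]
    return (y0, y1)

assert winograd_f23([1, 2, 3, 4], [5, 6, 7]) == naive_f23([1, 2, 3, 4], [5, 6, 7])
```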

    Selective data segment monitoring system

    A high-speed data monitoring apparatus is described for displaying the bit pattern of a selected portion of a block of transmitted data. It comprises a shift register that receives the transmitted data and temporarily holds the consecutive data bits, and a programmable sync detector that monitors the contents of the shift register and generates a sync signal when the register contains a predetermined sync code. A counter counts the data bits input to the shift register after the sync signal is generated and generates a count-complete signal when a selected number of data bits have been input to the register. A data storage device stores the contents of the shift register at the time the count-complete signal is generated.
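    The interplay of the register, sync detector, counter, and storage can be sketched in software. The function below is our illustrative simulation (the parameter names are ours, not the patent's): it shifts bits through a register, fires when the newest bits match the programmed sync code, counts a selected number of further bits, and then captures the register contents:

```python
from collections import deque

def monitor_segment(bitstream, sync_code, reg_len, count_after_sync):
    # Illustrative software model of the described apparatus.
    reg = deque(maxlen=reg_len)          # shift register
    countdown = None                     # armed once sync is detected
    for bit in bitstream:
        reg.append(bit)                  # clock one data bit in
        if countdown is None:
            # Programmable sync detector: newest bits vs. the sync code.
            if len(reg) >= len(sync_code) and list(reg)[-len(sync_code):] == list(sync_code):
                countdown = count_after_sync
        else:
            countdown -= 1               # counter: bits input after sync
            if countdown == 0:
                return list(reg)         # storage captures the register
    return None                          # sync code never observed

# Capture an 8-bit segment, 4 bit-times after the sync pattern 1011.
bits = [0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
print(monitor_segment(bits, sync_code=[1, 0, 1, 1], reg_len=8, count_after_sync=4))
```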