
    Brain rhythms: How control gets into working memory

    New research suggests that frontal midline theta EEG activity in humans controls activity in parietal cortex associated with memory maintenance. In turn, the speed of this frontal theta is modulated by the number of items to be handled, potentially indicating strong bidirectional communication within a fronto-parietal network.

    Quantized Indexing: Beyond Arithmetic Coding

    Quantized Indexing is a fast and space-efficient form of enumerative (combinatorial) coding, the strongest among asymptotically optimal universal entropy coding algorithms. The present advance in enumerative coding is similar to that made by arithmetic coding with respect to its unlimited-precision predecessor, Elias coding. The arithmetic precision, execution time, table sizes and coding delay are all reduced by a factor O(n) at a redundancy below 2*log(e)/2^g bits/symbol (for n input symbols and g-bit QI precision). Due to its tighter enumeration, QI output redundancy is below that of arithmetic coding (which can be derived as a lower-accuracy approximation of QI). The relative compression gain vanishes in the large-n and high-entropy limits and increases for shorter outputs and for less predictable data. QI is significantly faster than the fastest arithmetic coders, from a factor of 6 in the high entropy limit to over 100 in the low entropy limit (typically 10-20 times faster). These speedups are the result of using only 3 adds, 1 shift and 2 array lookups (all in 32-bit precision) per less probable symbol, and no coding operations for the most probable symbol. Further, the exact enumeration algorithm is sharpened and its lattice walks formulation is generalized. A new numeric type with broader applicability, the sliding window integer, is introduced. Comment: Submitted to DCC-2006
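
    To make the enumerative idea concrete, here is a minimal Python sketch of the classical unlimited-precision scheme that QI quantizes: Cover-style enumerative coding, which ranks a binary string lexicographically among all strings of the same length and weight. This is a generic illustration, not QI itself; the quantized binomial tables and sliding window integers are not reproduced.

        from math import comb

        def enum_encode(bits):
            # Lexicographic rank of `bits` among all binary strings with
            # the same length n and weight k (classical enumerative coding).
            n, k = len(bits), sum(bits)
            index, ones_left = 0, k
            for pos, b in enumerate(bits):
                if b:
                    # Every string with a 0 in this position instead ranks
                    # lower; it must place all ones_left ones further right.
                    index += comb(n - pos - 1, ones_left)
                    ones_left -= 1
            return n, k, index

        def enum_decode(n, k, index):
            # Invert enum_encode, rebuilding the string from (n, k, index).
            bits, ones_left = [], k
            for pos in range(n):
                below = comb(n - pos - 1, ones_left)  # ranks with a 0 here
                if index < below:
                    bits.append(0)
                else:
                    bits.append(1)
                    index -= below
                    ones_left -= 1
            return bits

        word = [0, 1, 1, 0, 1, 0, 0, 1]
        assert enum_decode(*enum_encode(word)) == word

    Per the abstract, QI's advance is to replace these exact binomial coefficients with g-bit quantized values, which is what buys the O(n) reductions in precision, time, and table size at a redundancy below 2*log(e)/2^g bits/symbol.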

    Decoding dynamic brain patterns from evoked responses: A tutorial on multivariate pattern analysis applied to time-series neuroimaging data

    Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analysing fMRI data. Although decoding methods have been extensively applied in Brain-Computer Interfaces (BCI), these methods have only recently been applied to time-series neuroimaging data such as MEG and EEG to address experimental questions in Cognitive Neuroscience. In a tutorial-style review, we describe a broad set of options to inform future time-series decoding studies from a Cognitive Neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to 'decode' different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both the preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data, including representational similarity analysis, temporal generalisation, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time-series decoding experiments. Comment: 64 pages, 15 figures
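
    As a concrete illustration of such a pipeline, the sketch below runs time-resolved decoding with scikit-learn on synthetic MEG-shaped data; the array shapes, the injected class effect, and all parameter choices are invented for the demonstration and are not taken from the tutorial.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import StratifiedKFold, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)

        # Synthetic MEG-like data: trials x sensors x time points.
        n_trials, n_sensors, n_times = 200, 64, 50
        X = rng.standard_normal((n_trials, n_sensors, n_times))
        y = rng.integers(0, 2, n_trials)      # two stimulus classes
        X[y == 1, :10, 20:35] += 0.5          # inject a class effect mid-epoch

        # Time-resolved decoding: cross-validate one classifier per time
        # point, yielding a decoding-accuracy time course.
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        accuracy = np.array([
            cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
            for t in range(n_times)
        ])
        print(f"peak accuracy {accuracy.max():.2f} at sample {accuracy.argmax()}")

    Swapping the per-time-point loop for a classifier trained at one time point and tested at all others yields the temporal generalisation analysis mentioned above.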

    An Implementation of Elias Coding for Input-Restricted Channels

    An implementation of Elias coding for input-restricted channels is presented and analyzed. This is a variable-to-fixed length coding method that uses finite-precision arithmetic and can work at rates arbitrarily close to channel capacity as the precision is increased. The method offers a favorable tradeoff between complexity and coding efficiency. For example, in experiments with the [2, 7] runlength-constrained channel, a coding efficiency of 0.9977 is observed, which is significantly better than what is achievable by other known methods of comparable complexity. © 1990 IEEE
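
    For context on the efficiency figure, the Shannon capacity of a runlength-constrained channel can be computed as the base-2 log of the largest eigenvalue of the constraint graph's adjacency matrix, and coding efficiency is achieved rate divided by capacity. The Python sketch below does this for the (2, 7) constraint; it is a generic textbook computation, not the paper's coding algorithm.

        import numpy as np

        def rll_capacity(d, k):
            # Shannon capacity (bits/symbol) of the (d, k) runlength
            # constraint.  State i = number of 0s since the last 1:
            # emitting a 0 is allowed while i < k, a 1 requires i >= d.
            A = np.zeros((k + 1, k + 1))
            for i in range(k + 1):
                if i < k:
                    A[i, i + 1] = 1   # emit a 0
                if i >= d:
                    A[i, 0] = 1       # emit a 1
            return float(np.log2(np.max(np.abs(np.linalg.eigvals(A)))))

        print(f"(2, 7) capacity ~ {rll_capacity(2, 7):.4f} bits/symbol")

    Against this capacity of roughly 0.517 bits/symbol, the reported efficiency of 0.9977 corresponds to an achieved rate only about 0.2% below the ceiling.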

    Instantly Decodable Network Coding: From Centralized to Device-to-Device Communications

    From its introduction to its quindecennial, network coding has built a strong reputation for enhancing packet recovery and achieving maximum information flow in both wired and wireless networks. Traditional studies focused on optimizing the throughput of the system by proposing elaborate schemes able to reach the network capacity. With the shift toward distributed computing on mobile devices, performance and complexity both become critical factors that affect the efficiency of a coding strategy. Instantly decodable network coding presents itself as a new paradigm in network coding that trades off these two aspects. This paper reviews instantly decodable network coding schemes by identifying, categorizing, and evaluating various algorithms proposed in the literature. The first part of the manuscript investigates conventional centralized systems, in which all decisions are carried out by a central unit, e.g., a base station. In particular, two successful approaches, known as strict and generalized instantly decodable network coding, are compared in terms of reliability, performance, complexity, and packet selection methodology. The second part considers the use of instantly decodable codes in a device-to-device communication network, in which devices speed up the recovery of missing packets by exchanging network-coded packets. Although performance improvements generally come at the cost of increased computational complexity, numerous schemes that are successful from both the performance and complexity viewpoints are identified.
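
    To make the strict variant concrete, the toy Python sketch below greedily assembles one XOR combination so that every served receiver is missing exactly one packet in it and can therefore decode instantly; the greedy heuristic and all names are invented for illustration and do not reproduce any particular scheme from the surveyed literature.

        def idnc_select(wants):
            # wants: receiver id -> set of packet ids it still misses
            # (all other packets are assumed already received).
            packets = set().union(*wants.values())
            combo, served = set(), set()
            while True:
                best_p, best_new = None, set()
                for p in packets - combo:
                    # Adding p must not hand an already-served receiver a
                    # second unknown, which would spoil its instant decode.
                    if any(p in wants[r] for r in served):
                        continue
                    new = {r for r in wants if r not in served
                           and p in wants[r] and not (combo & wants[r])}
                    if len(new) > len(best_new):
                        best_p, best_new = p, new
                if best_p is None:
                    return combo, served
                combo.add(best_p)
                served |= best_new

        # Three receivers, each with a different loss pattern.
        wants = {"A": {1}, "B": {2}, "C": {1, 3}}
        combo, served = idnc_select(wants)
        print("send XOR of", sorted(combo), "->", sorted(served), "decode")

    In the device-to-device setting of the paper's second part, the same kind of selection would run at whichever device transmits rather than at a base station.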

    A general construction of constrained parity-check codes for optical recording

    This paper proposes a general and systematic code design method to efficiently combine constrained codes with parity-check (PC) codes for optical recording. The proposed constrained PC code includes two component codes: the normal constrained (NC) code and the parity-related constrained (PRC) code. They are designed based on the same finite state machine (FSM). The rates of the designed codes are only a few tenths below the theoretical maximum. The PC constraint is defined by the generator matrix (or generator polynomial) of a linear binary PC code, which can detect any type of dominant error events or error event combinations of the system. Error propagation due to parity bits is avoided, since both component codes are protected by PCs. Two approaches are proposed to design the code in the non-return-to-zero-inverse (NRZI) format and the non-return-to-zero (NRZ) format, respectively. Designing the codes in NRZ format may reduce the number of parity bits required for error detection and simplify post-processing for error correction. Examples of several newly designed codes are illustrated. Simulation results with Blu-ray disc (BD) systems show that the new d = 1 constrained 4-bit PC code significantly outperforms the rate 2/3 code without parity, at both nominal density and high density.
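
    The short Python sketch below illustrates, in generic form, three of the ingredients mentioned: the NRZI-to-NRZ precoding relation, a d = 1 runlength check, and a parity check defined by a generator polynomial (here the trivial g(x) = 1 + x, a single even-parity bit). It is a hedged illustration of the concepts, not the paper's code construction.

        def nrzi_to_nrz(bits, start=0):
            # 1/(1 xor D) precoder: an NRZI 1 marks a transition in the
            # recorded NRZ level, an NRZI 0 marks no transition.
            level, out = start, []
            for b in bits:
                level ^= b
                out.append(level)
            return out

        def satisfies_d1(nrzi_bits):
            # d = 1 constraint in NRZI: at least one 0 between adjacent 1s.
            return all(not (a and b) for a, b in zip(nrzi_bits, nrzi_bits[1:]))

        def parity_ok(bits, g=(1, 1)):
            # Remainder of polynomial division by g(x) over GF(2);
            # g = (1, 1) encodes g(x) = 1 + x, i.e. a single even-parity bit.
            r = list(bits)
            for i in range(len(r) - len(g) + 1):
                if r[i]:
                    for j, gj in enumerate(g):
                        r[i + j] ^= gj
            return not any(r)

        word = [0, 1, 0, 0, 1, 0, 0, 0]   # toy NRZI codeword, even weight
        print(satisfies_d1(word), parity_ok(word), nrzi_to_nrz(word))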