
    Cyclic Low-Density MDS Array Codes

    We construct two infinite families of low-density MDS array codes which are also cyclic. One of these families includes the first such sub-family with redundancy parameter r > 2. The two constructions have different algebraic formulations, though they share the same indirect structure: first, MDS codes that are not cyclic are constructed, and then, by applying a certain mapping to their parity-check matrices, non-equivalent cyclic codes with the same distance and density properties are obtained. Using the same proof techniques, a third infinite family of quasi-cyclic codes can be constructed.
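    The construction itself is not reproduced in this abstract, but the MDS property it relies on has a standard matrix criterion: a code with an r x n parity-check matrix is MDS exactly when every r columns of that matrix are linearly independent. A minimal sketch below checks this for a small Vandermonde parity-check matrix over GF(7); the field size and dimensions are illustrative choices, not the paper's parameters.

        # Minimal sketch (not the paper's construction): verify the MDS
        # criterion "every r columns of H are independent" for a Vandermonde
        # parity-check matrix over GF(7).
        from itertools import combinations

        P = 7                      # prime field size, assumed for illustration
        N, R = 6, 2                # code length and redundancy r

        # Vandermonde parity-check matrix H[i][j] = alpha_j^i, alphas distinct.
        alphas = list(range(1, N + 1))
        H = [[pow(a, i, P) for a in alphas] for i in range(R)]

        def det2(m):
            """Determinant of a 2x2 matrix over GF(P)."""
            return (m[0][0] * m[1][1] - m[0][1] * m[1][0]) % P

        # MDS check for r = 2: every pair of columns must be invertible.
        mds = all(det2([[H[i][j] for j in cols] for i in range(R)]) != 0
                  for cols in combinations(range(N), R))
        print("MDS:", mds)         # True: minimum distance is r + 1 = 3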

    Harmonic analysis of neural networks

    Neural network models have attracted a lot of interest in recent years, mainly because they were perceived as a new idea for computing. These models can be described as a network in which every node computes a linear threshold function. One of the main difficulties in analyzing the properties of these networks is that they consist of nonlinear elements. I will present a novel approach, based on harmonic analysis of Boolean functions, to analyzing neural networks. In particular, I will show how this technique can be applied to answer the following two fundamental questions: (i) what is the computational power of a polynomial threshold element with respect to linear threshold elements? (ii) is it possible to get exponentially many spurious memories when we use the outer-product method for programming the Hopfield model?
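    As a taste of the harmonic-analysis toolkit referred to above, the sketch below computes Fourier (Walsh) coefficients of a majority function, the simplest linear threshold element, over the uniform distribution on {-1,+1}^n. The choice of function and of n = 5 is illustrative, not taken from the paper.

        # Fourier coefficients of a Boolean function f: {-1,+1}^n -> {-1,+1}.
        from itertools import product

        n = 5

        def majority(x):
            """A linear threshold element: the sign of the sum of the inputs."""
            return 1 if sum(x) > 0 else -1

        def fourier_coefficient(f, S):
            """hat{f}(S) = E_x[ f(x) * prod_{i in S} x_i ], x uniform on the cube."""
            total = 0
            for x in product((-1, 1), repeat=n):
                chi = 1
                for i in S:
                    chi *= x[i]
                total += f(x) * chi
            return total / 2 ** n

        # Majority is an odd function, so only odd-size sets carry Fourier weight.
        print(fourier_coefficient(majority, (0,)))     # singleton: 0.375 for n = 5
        print(fourier_coefficient(majority, (0, 1)))   # even-size sets vanish: 0.0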

    Some new EC/AUED codes

    A novel construction that differs from the traditional way of constructing systematic EC/AUED (error-correcting/all-unidirectional-error-detecting) codes is presented. The usual method is to take a systematic t-error-correcting code and then append a tail so that the code can detect more than t errors when they are unidirectional. In the authors' construction, the t-error-correcting code is modified in such a way that the weight distribution of the original code is reduced, so a smaller tail suffices. Frequently the resulting codes have less redundancy than the best available systematic t-EC/AUED codes.
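    For contrast, the traditional construction mentioned above can be sketched concretely: append a Berger-style tail (the binary count of 0's) to each codeword of a t-error-correcting code, which makes unidirectional errors detectable. The toy inner code and parameters below are assumptions for illustration, not the authors' construction.

        # Sketch of the *traditional* EC/AUED construction the abstract contrasts with.

        def berger_tail(word, tail_len):
            """Tail = the number of 0's in the word, in tail_len binary digits.
            Under unidirectional 1->0 errors the zero count of the received
            information can only rise while the value read back from the tail
            can only fall (and vice versa), so errors never cancel out."""
            zeros = word.count(0)
            return [(zeros >> i) & 1 for i in reversed(range(tail_len))]

        def encode(bit):
            word = [bit] * 3                    # 3-fold repetition: corrects t = 1 error
            return word + berger_tail(word, 2)  # 2 tail bits cover counts 0..3

        print(encode(0))   # [0, 0, 0, 1, 1]  (three zeros -> tail 11)
        print(encode(1))   # [1, 1, 1, 0, 0]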

    X-code: MDS array codes with optimal encoding

    We present a new class of MDS (maximum distance separable) array codes of size n×n (n a prime number) called X-code. The X-codes have minimum column distance 3; namely, they can correct either one column error or two column erasures. The key novelty of X-code is its simple geometrical construction, which achieves optimal encoding/update complexity: a change of any single information bit affects exactly two parity bits. The key idea in our constructions is that all parity symbols are placed in rows rather than columns.
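    A hedged sketch of the diagonal-parity idea: information bits fill the first n - 2 rows of an n x n array, and the last two rows hold XOR parities taken along diagonals of slopes +1 and -1. The exact index offsets used below are illustrative assumptions, not necessarily the paper's formulas.

        import random

        n = 5                          # n prime, as in the abstract
        info = [[random.randint(0, 1) for _ in range(n)] for _ in range(n - 2)]

        row1 = [0] * n                 # parities along slope +1 diagonals
        row2 = [0] * n                 # parities along slope -1 diagonals
        for i in range(n - 2):
            for j in range(n):
                row1[(j + i + 2) % n] ^= info[i][j]
                row2[(j - i - 2) % n] ^= info[i][j]

        # Each information bit lands in exactly one slope +1 parity and one
        # slope -1 parity, so flipping it changes exactly two parity bits:
        # the optimal encoding/update property claimed in the abstract.
        array = info + [row1, row2]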

    An on-line algorithm for checkpoint placement

    Checkpointing is a common technique for reducing the time to recover from faults in computer systems. By saving intermediate states of a program in reliable storage, checkpointing reduces the processing time lost to faults. The length of the intervals between checkpoints affects the execution time of the program: long intervals lead to long re-processing times after a fault, while too-frequent checkpointing incurs high checkpointing overhead. In this paper we present an on-line algorithm for the placement of checkpoints. The algorithm uses on-line knowledge of the current cost of a checkpoint when deciding whether or not to place one. We show how the execution time of a program using this algorithm can be analyzed. The total execution-time overhead under the proposed algorithm is smaller than the overhead when fixed intervals are used. Although the proposed algorithm uses only on-line knowledge about the cost of checkpointing, its behavior is close to that of the off-line optimal algorithm, which uses complete knowledge of the checkpointing cost.
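    A minimal sketch of an on-line placement rule in this spirit (not the paper's algorithm): re-evaluate the classical first-order approximation sqrt(2C/lambda) for the optimal checkpoint interval using the currently observed checkpoint cost C. The failure rate and costs below are assumed values.

        import math

        FAILURE_RATE = 1e-4          # failures per second, an assumed value

        def should_checkpoint(time_since_last, current_cost):
            """Place a checkpoint once the elapsed work exceeds the interval
            that would be optimal for the checkpoint cost observed right now."""
            return time_since_last >= math.sqrt(2 * current_cost / FAILURE_RATE)

        print(should_checkpoint(400.0, 5.0))   # threshold ~316 s exceeded -> True
        print(should_checkpoint(200.0, 5.0))   # False: keep computing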

    Rewriting Codes for Joint Information Storage in Flash Memories

    Memories whose storage cells transit irreversibly between states have been common since the beginning of data storage technology. In recent years, flash memories have become a very important family of such memories. A flash memory cell has q states (0, 1, ..., q-1) and can only transit from a lower state to a higher state before the expensive erasure operation takes place. We study rewriting codes that enable the data stored in a group of cells to be rewritten by only shifting cells to higher states. Since the state transitions considered are irreversible, the number of rewrites is bounded. Our objective is to maximize the number of times the data can be rewritten. We focus on the joint storage of data in flash memories and study two rewriting codes for two different scenarios. The first code, called a floating code, is for the joint storage of multiple variables, where every rewrite changes one variable. The second code, called a buffer code, is for remembering the most recent data in a data stream. Many of the codes presented here are either optimal or asymptotically optimal. We also present bounds on the performance of general codes. The results show that rewriting codes can integrate a flash memory's rewriting capabilities for different variables to a high degree.
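    A toy example of the rewriting idea (not one of the paper's codes): store a single binary variable in n q-ary cells as the parity of the cell levels, so each rewrite only raises one cell by one level, and the variable can be rewritten n(q - 1) times before an erasure is needed.

        n, q = 4, 8
        cells = [0] * n

        def read(cells):
            """The stored bit is the parity of the total cell charge."""
            return sum(cells) % 2

        def rewrite(cells, new_value):
            """Flip the stored bit by bumping the lowest cell with headroom."""
            if read(cells) == new_value:
                return                          # value unchanged; raise nothing
            i = min(range(n), key=lambda k: cells[k])
            if cells[i] == q - 1:
                raise RuntimeError("cells exhausted; block erasure required")
            cells[i] += 1

        for bit in (1, 0, 1, 1, 0):
            rewrite(cells, bit)
            assert read(cells) == bit
        print(cells, "->", read(cells))         # up to n*(q-1) = 28 rewrites total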

    Networks of Relations for Representation, Learning, and Generalization

    We propose representing knowledge as a network of relations. Each relation relates only a few continuous or discrete variables, so that any overall relationship among the many variables treated by the network winds up being distributed throughout the network. Each relation encodes which combinations of values correspond to past experience for the variables related by the relation. Variables may or may not correspond to understandable aspects of the situation being modeled by the network. A distributed calculational process can be used to access the information stored in such a network, allowing the network to function as an associative memory. This process in its simplest form is purely inhibitory, narrowing down the space of possibilities as much as possible given the data to be matched. In contrast with methods that always retrieve a best fit for all variables, this method can return values for inferred variables while leaving non-inferable variables in an unknown or partially known state. In contrast with belief propagation methods, this method can be proven to converge quickly and uniformly for any network topology, allowing networks to be as interconnected as the relationships warrant, with no independence assumptions required. The generalization properties of such a memory are aligned with the network's relational representation of how the various aspects of the modeled situation are related.
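    The purely inhibitory retrieval process described above can be sketched as generalized arc consistency: each relation lists its allowed value combinations, and propagation only removes domain values that no allowed tuple supports. The tiny network and variable names below are illustrative, not from the paper.

        # Inhibitory propagation over a network of relations (arc-consistency style).
        domains = {"a": {0, 1}, "b": {0, 1}, "c": {0, 1}}
        relations = [                            # each: (variables, allowed tuples)
            (("a", "b"), {(0, 0), (1, 1)}),      # relation: a == b
            (("b", "c"), {(0, 0), (1, 1)}),      # relation: b == c
        ]

        def propagate(domains, relations):
            changed = True
            while changed:                       # iterate to a fixed point
                changed = False
                for vars_, allowed in relations:
                    for pos, v in enumerate(vars_):
                        # Keep only values supported by some still-possible tuple.
                        supported = {t[pos] for t in allowed
                                     if all(t[i] in domains[vars_[i]]
                                            for i in range(len(vars_)))}
                        pruned = domains[v] & supported
                        if pruned != domains[v]:
                            domains[v] = pruned
                            changed = True
            return domains

        # Clamp the evidence c = 1; inhibition then forces a and b to {1}.
        # A variable whose domain kept both values would simply stay unknown.
        domains["c"] = {1}
        print(propagate(domains, relations))     # {'a': {1}, 'b': {1}, 'c': {1}}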

    Increasing the Information Density of Storage Systems Using the Precision-Resolution Paradigm

    Arguably, the most prominent constrained system in storage applications is the (d, k)-RLL (run-length-limited) system, where every binary sequence obeys the constraint that every two adjacent 1's are separated by at least d and at most k consecutive 0's; namely, runs of 0's are length-limited. The motivation for the RLL constraint arises mainly from the physical limitations of the read and write technologies in magnetic and optical storage systems. We revisit the rationale for the RLL system and reevaluate its relationship to the physical media. As a result, we introduce a new paradigm that better matches the physical constraints. We call the new paradigm the Precision-Resolution (PR) system, where the write operation is limited by precision and the read operation is limited by resolution. We compute the capacity of a general PR system and demonstrate that it provides a significant increase in information density compared to the traditional RLL system (for identical physical limitations). For example, the capacity of the (2, 10)-RLL system used in CD-ROMs and DVDs is approximately 0.5418, while our PR system provides a capacity of about 0.7725, a potential increase of about 40% in information density.
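    The RLL capacity quoted above is straightforward to verify: it is log2 of the largest eigenvalue of the adjacency matrix of the (d, k) constraint graph. The sketch below checks the (2, 10) figure; the PR-system capacity of 0.7725 requires the paper's more general setup and is not reproduced here.

        import numpy as np

        def rll_capacity(d, k):
            # States 0..k count the current run of 0's. From state s we may
            # write a 0 (move to s+1, while s < k) or, once s >= d, write a 1
            # (return to state 0).
            A = np.zeros((k + 1, k + 1))
            for s in range(k + 1):
                if s < k:
                    A[s, s + 1] = 1    # emit a 0
                if s >= d:
                    A[s, 0] = 1        # emit a 1
            return np.log2(max(abs(np.linalg.eigvals(A))))

        print(rll_capacity(2, 10))     # ~0.5418, matching the abstract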

    Analysis of checkpointing schemes for multiprocessor systems

    Parallel computing systems provide hardware redundancy that helps achieve low-cost fault tolerance by duplicating a task onto more than one processor and comparing the states of the processors at checkpoints. This paper suggests a novel technique, based on a Markov Reward Model (MRM), for analyzing the performance of checkpointing schemes with task duplication. We show how this technique can be used to derive the average execution time of a task and other important parameters related to the performance of checkpointing schemes. Our analytical results match well with the values we obtained using a simulation program. We compare the average task execution time and total work of four checkpointing schemes, and show that increasing the number of processors generally reduces the average execution time but increases the total work done by the processors. However, in cases where there are large differences between the times required for different operations, these results can change.
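    A hedged miniature of the MRM approach (not the paper's actual model): transient states describe a duplicated task between checkpoints, each step accrues one time unit of reward, and the expected time to commit a checkpoint solves the standard absorbing-chain system (I - Q)t = 1. All transition probabilities below are illustrative assumptions.

        import numpy as np

        # Transient states: 0 = both processors agree, 1 = mismatch, rollback.
        # The remaining probability mass in each row absorbs into "checkpoint
        # committed".
        Q = np.array([[0.90, 0.05],    # keep computing / fault detected
                      [0.50, 0.50]])   # rollback re-executes, may fault again
        t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
        print("expected steps to commit a checkpoint:", t[0])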