    Deep Learning for Reliable Storage

    With the exponential growth of cloud-based storage systems, storing data reliably has become critical. Traditional methods for error correction rely on duplicating data or introducing artificial redundancy. Here, we instead leverage the natural redundancy already present in the data using deep-learning-based techniques. Deep learning is a subset of machine learning that has produced excellent results on a variety of tasks. We describe deep neural network (DNN) models for learning decompression of texts compressed by Huffman coding: first on noiseless texts, and then on noisy texts. Next, we outline a model for bit erasure correction, presenting a DNN-based model for bit erasure correction in uncompressed, ASCII-encoded texts. Finally, we describe a model that performs bit erasure correction on Huffman-compressed texts. Such an end-to-end system can be useful when the codebook or encoding algorithm is unavailable and decoding or error correction must be performed without it.
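
    To make the setting concrete, the sketch below builds a Huffman code over a toy corpus, encodes text, and passes the bitstream through a binary erasure channel, producing the kind of (erased bits, original text) training pairs such a DNN decoder would learn from. This is a minimal sketch of the assumed setup, not code from the paper; the corpus, erasure probability, and all names are illustrative.

```python
# Illustrative setup only: Huffman encoding plus a binary erasure channel.
# A DNN decoder of the kind described above would be trained to invert this
# mapping using the natural redundancy of the text.
import heapq
import random
from collections import Counter

def huffman_code(text):
    """Return {symbol: bitstring} built from the text's symbol frequencies."""
    heap = [(n, i, {c: ""}) for i, (c, n) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tick = len(heap)  # unique tiebreaker so dicts are never compared
    while len(heap) > 1:
        n0, _, c0 = heapq.heappop(heap)
        n1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c0.items()}
        merged.update({s: "1" + b for s, b in c1.items()})
        heapq.heappush(heap, (n0 + n1, tick, merged))
        tick += 1
    return heap[0][2]

def encode(text, code):
    return "".join(code[c] for c in text)

def erase(bits, p, rng):
    """Binary erasure channel: each bit becomes '?' with probability p."""
    return "".join("?" if rng.random() < p else b for b in bits)

corpus = "the quick brown fox jumps over the lazy dog " * 50
code = huffman_code(corpus)
clean = encode(corpus, code)
noisy = erase(clean, p=0.05, rng=random.Random(0))
# (noisy, corpus) is one training pair: the model learns to map windows of
# the erased bitstream back to the underlying characters.
print(noisy[:60], "->", corpus[:15])
```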

    When Machine Learning Meets Information Theory: Some Practical Applications to Data Storage

    Machine learning and information theory are closely interrelated areas. In this dissertation, we explore topics in their intersection, with some practical applications to data storage.

    First, we explore how machine learning techniques can be used to improve data reliability in non-volatile memories (NVMs). NVMs, such as flash memories, store large volumes of data. However, as devices scale down towards small feature sizes, they suffer from various kinds of noise and disturbance that significantly reduce their reliability. This dissertation explores machine learning techniques for designing decoders that exploit natural redundancy (NR) in data for error correction. By NR, we mean redundancy inherent in the data itself, not added artificially for error correction. This work studies two schemes for NR-based error-correcting decoders. In the first scheme, the NR-based decoding algorithm is aware of the data representation scheme (e.g., compression, mapping of symbols to bits, metadata, etc.) and uses that information for error correction. In the second scheme, the NR decoder is oblivious to the representation scheme and uses deep neural networks (DNNs) to recognize the file type and perform soft decoding on it based on NR. In both cases, these NR-based decoders can be combined with traditional error-correcting codes (ECCs) to substantially improve their performance.

    Second, we use concepts from ECCs to design robust DNNs in hardware. Non-volatile memory devices such as memristors and phase-change memories are used to store the weights of hardware-implemented DNNs. Errors and faults in these devices (e.g., random noise, stuck-at faults, cell-level drift, etc.) can degrade the performance of such DNNs. We use concepts from analog error-correcting codes to protect the weights of noisy neural networks and to design robust neural networks in hardware.

    To summarize, this dissertation explores two important directions in the intersection of information theory and machine learning: how machine learning techniques can improve the performance of ECCs, and conversely, how information-theoretic concepts can be used to design robust neural networks in hardware.
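
    As a toy illustration of the second direction, the sketch below protects stored weights with a simple repetition code and a median read-out. This is a hedged stand-in, not the dissertation's actual analog codes; the noise model (Gaussian drift plus stuck-at-zero faults) and all parameter values are illustrative assumptions.

```python
# Hedged sketch: repetition coding of weights in noisy analog cells, with a
# median read-out that rejects stuck-at outliers. Real analog ECCs are more
# sophisticated; this only shows why redundancy bounds weight errors.
import numpy as np

rng = np.random.default_rng(0)

def store(weights, copies=5):
    """Write each weight into `copies` cells (repetition encoding)."""
    return np.repeat(weights[:, None], copies, axis=1)

def corrupt(cells, sigma=0.05, p_stuck=0.02):
    """Assumed cell noise model: Gaussian drift plus stuck-at-0 faults."""
    noisy = cells + rng.normal(0.0, sigma, cells.shape)
    stuck = rng.random(cells.shape) < p_stuck
    noisy[stuck] = 0.0
    return noisy

def read(cells):
    """Median across copies is robust to a minority of faulty cells."""
    return np.median(cells, axis=1)

w = rng.normal(0.0, 1.0, 1000)  # stand-in for a layer's DNN weights
recovered = read(corrupt(store(w)))
unprotected = corrupt(w[:, None])[:, 0]
print("mean abs error, protected:  ", np.mean(np.abs(recovered - w)))
print("mean abs error, unprotected:", np.mean(np.abs(unprotected - w)))
```

    The design point this illustrates: a mean read-out would be skewed by stuck-at faults, while the median tolerates them as long as fewer than half of a weight's copies fail, at the cost of a `copies`-fold storage overhead.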