116 research outputs found
Spatio-Temporal Modeling for Flash Memory Channels Using Conditional Generative Nets
We propose a data-driven approach to modeling the spatio-temporal
characteristics of NAND flash memory read voltages using conditional generative
networks. The learned model reconstructs read voltages from an individual
memory cell based on the program levels of the cell and its surrounding cells,
as well as the specified program/erase (P/E) cycling time stamp. We evaluate
the model over a range of time stamps using the cell read voltage
distributions, the cell level error rates, and the relative frequency of errors
for patterns most susceptible to inter-cell interference (ICI) effects. We
conclude that the model accurately captures the spatial and temporal features
of the flash memory channel.
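The abstract's conditioning structure (a cell's own program level, its neighbours' levels, and the P/E-cycle time stamp) can be illustrated with a toy stand-in for the learned generator. Everything below is an assumption for illustration: the paper trains a conditional neural network on measured data, whereas this sketch uses invented linear ICI coupling and cycling-dependent noise coefficients.

```python
import random

# Toy stand-in for the learned conditional generator (illustrative only).
# Inputs mirror the conditioning variables in the abstract: the cell's
# program level, its neighbours' levels, and the P/E-cycle time stamp.
# All coefficients below are made-up assumptions, not fitted values.
LEVEL_VOLTAGE = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}  # nominal MLC targets (V)

def sample_read_voltage(level, neighbour_levels, pe_cycles, rng):
    v = LEVEL_VOLTAGE[level]
    # Spatial feature: inter-cell interference from aggressor neighbours
    # shifts the victim cell's read voltage upward.
    v += 0.02 * sum(LEVEL_VOLTAGE[n] for n in neighbour_levels)
    # Temporal feature: read-voltage noise widens as P/E cycling accumulates.
    sigma = 0.05 * (1 + pe_cycles / 3000)
    return v + rng.gauss(0.0, sigma)

rng = random.Random(0)
quiet = sample_read_voltage(2, [0, 0, 0, 0], 100, rng)   # low-ICI pattern
noisy = sample_read_voltage(2, [3, 3, 3, 3], 100, rng)   # ICI-susceptible pattern
```

Averaging many such samples per conditioning pattern reproduces the kind of pattern-dependent error-rate comparison the abstract evaluates.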
Stash in a Flash
Encryption is a useful tool to protect data confidentiality. Yet it is still challenging to hide the very presence of encrypted, secret data from a powerful adversary. This paper presents a new technique to hide data in flash by manipulating the voltage level of pseudo-randomly selected flash cells to encode two bits (rather than one) in the cell. In this model, we have one “public” bit interpreted using an SLC-style encoding, and extract a private bit using an MLC-style encoding. The locations of cells that encode hidden data are based on a secret key known only to the hiding user.
Intuitively, this technique requires that the voltage level in a cell encoding hidden data must be (1) statistically indistinguishable from that of a cell storing only public data, and (2) reliably readable by the hiding user. Our key insight is that the variation in the range of voltage levels in a typical flash device is wide enough both to obscure the presence of fine-grained changes to a small fraction of the cells and to support reliably re-reading hidden data. We demonstrate that our hidden data and underlying voltage manipulations go undetected by support-vector-machine-based supervised learning, which performs no better than random guessing. The error rates of our scheme are low enough that the data is recoverable months after being stored. Compared to prior work, our technique provides 24x and 50x higher encoding and decoding throughput and doubles the capacity, while being 37x more power efficient.
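The two-bit-per-cell idea described above can be sketched as follows: a coarse SLC-style read recovers the public bit, while a fine MLC-style sub-level inside that coarse range carries the private bit. The thresholds and voltage targets below are invented for illustration; the paper works with raw device voltages and key-selected cell locations.

```python
# Hypothetical sketch of hiding a private bit under a public SLC bit.
# Voltage targets and the threshold are assumptions, not device values.
SLC_THRESHOLD = 2.0  # coarse public read threshold (assumed)

def encode(public_bit, private_bit):
    # Coarse SLC target selects the half-range read by everyone...
    base = 0.5 if public_bit == 0 else 2.5
    # ...and a fine MLC-style offset within it hides the private bit.
    return base + (1.0 if private_bit else 0.0)

def decode_public(voltage):
    # Anyone reading the cell SLC-style sees only this bit.
    return 0 if voltage < SLC_THRESHOLD else 1

def decode_private(voltage):
    # Only the hiding user, who knows from the secret key that this
    # cell is selected, performs the fine-grained read.
    base = 0.5 if voltage < SLC_THRESHOLD else 2.5
    return 1 if voltage - base > 0.5 else 0
```

In the real scheme the fine offsets must sit inside the device's natural voltage spread, which is what makes the hidden bit statistically indistinguishable.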
LP-decodable multipermutation codes
In this paper, we introduce a new way of constructing and decoding
multipermutation codes. Multipermutations are permutations of a multiset that
may consist of duplicate entries. We first introduce a new class of matrices
called multipermutation matrices. We characterize the convex hull of
multipermutation matrices. Based on this characterization, we propose a new
class of codes that we term LP-decodable multipermutation codes. Then, we
derive two LP decoding algorithms. We first formulate an LP decoding problem
for memoryless channels. We then derive an LP algorithm that minimizes the
Chebyshev distance. Finally, we show a numerical example of our algorithm.
Comment: This work was supported by NSF and NSERC. To appear at the 2014 Allerton Conference.
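The multipermutation-matrix construction the abstract introduces can be sketched concretely: for a multipermutation over a multiset, entry (i, j) is 1 when position j holds symbol i, so row sums equal the symbol multiplicities and every column sums to 1. The function names and layout here are our own, not the paper's.

```python
# Build the 0/1 multipermutation matrix of a multipermutation:
# one row per distinct symbol, one column per position.
def multipermutation_matrix(seq, symbols):
    return [[1 if s == sym else 0 for s in seq] for sym in symbols]

# Validity check: row i sums to the multiplicity of symbol i,
# and each column sums to 1 (each position holds exactly one symbol).
def is_valid(matrix, multiplicities):
    rows_ok = all(sum(row) == m for row, m in zip(matrix, multiplicities))
    cols_ok = all(sum(col) == 1 for col in zip(*matrix))
    return rows_ok and cols_ok

# Multipermutation (1, 2, 1, 3) of the multiset {1, 1, 2, 3}:
# symbol 1 has multiplicity 2, symbols 2 and 3 have multiplicity 1.
M = multipermutation_matrix([1, 2, 1, 3], symbols=[1, 2, 3])
assert is_valid(M, [2, 1, 1])
```

The LP decoders in the paper then relax the convex hull of such matrices; this sketch only shows the combinatorial object being relaxed, not the LP itself.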