    Iterative Programming of Noisy Memory Cells

    In this paper, we study a model, first presented by Bunte and Lapidoth, that mimics the programming operation of memory cells. Under this paradigm, cells are programmed sequentially and individually. The programming process is modeled as transmission over a channel, and it is possible to read the cell state in order to determine whether programming succeeded and, in case of failure, to reprogram the cell. Reprogramming a cell can reduce the bit error rate; however, this comes at the price of increasing the overall programming time and thereby reducing the writing speed of the memory. An iterative programming scheme is an algorithm which specifies the number of attempts made to program each cell. Given the programming channel and constraints on the average and maximum number of attempts to program a cell, we study programming schemes which maximize the number of bits that can be reliably stored in the memory. We extend the results of Bunte and Lapidoth and study this problem when the programming channel is the BSC, the BEC, or the Z channel. For the BSC and the BEC, our analysis is also extended to the case where the error probabilities on consecutive writes are not necessarily the same. Lastly, we study a related model motivated by the synthesis process of DNA molecules.
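
    To make the model concrete, here is a minimal Python sketch of the read-verify-reprogram loop over a BSC(p), assuming perfect read-back after each attempt; the function names and parameters are illustrative, not taken from the paper.

        import random

        def program_cell(bit, p, max_attempts):
            # One attempt writes the bit through a BSC(p): the stored value
            # flips with probability p. A read-back after each attempt tells
            # us whether the cell holds the intended value.
            for attempt in range(1, max_attempts + 1):
                state = bit ^ (random.random() < p)
                if state == bit:
                    return state, attempt        # programming succeeded
            return state, max_attempts           # give up after the cap

        def simulate(n_cells=100_000, p=0.1, max_attempts=3):
            errors = attempts = 0
            for _ in range(n_cells):
                bit = random.randint(0, 1)
                state, used = program_cell(bit, p, max_attempts)
                errors += state != bit
                attempts += used
            # With a cap of t attempts, the residual error probability is
            # p**t, while the average number of attempts per cell equals
            # (1 - p**t) / (1 - p), illustrating the reliability/speed
            # trade-off studied in the paper.
            return errors / n_cells, attempts / n_cells

        print(simulate())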

    Rewritable storage channels with hidden state

    Many storage channels admit reading and rewriting of the content at a given cost. We consider rewritable channels with a hidden state, which models the unknown characteristics of the memory cell. In addition to mitigating the effect of the write noise, rewrites can help the write controller obtain a better estimate of the hidden state. The paper has two contributions. The first is a lower bound on the capacity of a general rewritable channel with hidden state. The lower bound is obtained using a coding scheme that combines Gelfand-Pinsker coding with superposition coding. The rewritable AWGN channel is discussed as an example. The second contribution is a simple coding scheme for a rewritable channel where the write noise and hidden state are both uniformly distributed. It is shown that this scheme is asymptotically optimal as the number of rewrites gets large.
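
    The interplay between rewriting and state estimation can be illustrated with a small Python sketch. This is our own toy construction under simple assumptions (each write of x is read back as y = x + s + z, with a fixed hidden offset s and fresh uniform write noise z), not the paper's scheme: every read-back residual reveals s + z, so averaging residuals over rewrites sharpens the estimate of s, and each new write pre-compensates for it.

        import random

        def rewrite_with_state_estimate(target, s, noise=0.05, max_rewrites=4):
            # Illustrative channel model: y = x + s + z, where s is a hidden
            # offset unknown to the writer and z ~ U(-noise, noise) is fresh
            # on each write. Each residual y - x equals s + z, so the running
            # average of residuals is an ever-better estimate of s.
            s_hat, residuals = 0.0, []
            for rewrite in range(1, max_rewrites + 1):
                x = target - s_hat                       # pre-compensate
                y = x + s + random.uniform(-noise, noise)
                residuals.append(y - x)                  # observe s + z
                s_hat = sum(residuals) / len(residuals)  # refine estimate
                if abs(y - target) < noise / 2:          # close enough: stop
                    return y, rewrite
            return y, max_rewrites

        y, used = rewrite_with_state_estimate(target=0.7, s=0.12)
        print(f"stored {y:.3f} after {used} rewrite(s)")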

    Multi-input distributed classifiers for synthetic genetic circuits

    For the practical construction of complex synthetic genetic networks able to perform elaborate functions, it is important to have a pool of relatively simple "bio-bricks" with different functionality which can be compounded together. To complement the engineering of very different existing synthetic genetic devices such as switches, oscillators, or logic gates, we propose and develop here a design for a synthetic multiple-input distributed classifier with learning ability. The proposed classifier is able to separate multi-input data which are inseparable for single-input classifiers. Additionally, the data classes can potentially occupy an area of any shape in the space of inputs. We study two approaches to classification, hard and soft, and confirm the genetic network schemes by analytical and numerical results.
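
    As a rough software analogue of the idea (our illustration in Python, not the paper's genetic-circuit model): each cell senses a single input with a band-like response, and the population average of many such one-input cells yields a soft score that can separate regions no single-input classifier can.

        import math

        def band_cell(x, lo, hi, k=20.0):
            # One 'cell' classifier: responds (close to 1) when its single
            # input lies inside the band [lo, hi], modeled as a product of
            # two sigmoids -- a common abstraction of a sensing circuit.
            rise = 1.0 / (1.0 + math.exp(-k * (x - lo)))
            fall = 1.0 / (1.0 + math.exp(k * (x - hi)))
            return rise * fall

        def distributed_classify(inputs, cells, threshold=0.75, soft=True):
            # Each cell in the pool is wired to one input; the population
            # average is the soft output, and comparing it against a
            # threshold gives the hard decision.
            acts = [band_cell(inputs[i], lo, hi) for (i, lo, hi) in cells]
            score = sum(acts) / len(acts)
            return score if soft else score >= threshold

        # A hand-made pool for 2-input data: class 'on' only when both
        # inputs are mid-range, which no single-input classifier separates.
        pool = [(0, 0.3, 0.7), (1, 0.3, 0.7)] * 10
        print(distributed_classify((0.5, 0.5), pool))  # high score (~0.96)
        print(distributed_classify((0.9, 0.5), pool))  # low score  (~0.49)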

    Rescuing the legacy project: a case study in digital preservation and technical obsolescence

    The ability to maintain continuous access to digital documents and artifacts is one of the most significant problems facing the archival, manuscript repository, and records management communities in the twenty-first century. This problem with access is particularly troublesome in the case of complex digital installations, which resist simple migration and emulation strategies. The Legacy Project, produced by the William Breman Jewish Heritage Museum in Atlanta, was created in the early 2000s as a means of telling the stories of Holocaust survivors who settled in metropolitan Atlanta. Legacy was an interactive multimedia kiosk that enabled museum visitors to read accounts, watch digital video, and examine photographs about these survivors. However, several years after Legacy was completed, it became inoperable due to technological obsolescence. Using Legacy as a case study, I examine how institutions can preserve access to complex digital artifacts and how they can rescue digital information that is in danger of being lost.

DNA-based data storage system

    Despite the many advances in traditional data recording techniques, the surge of Big Data platforms and energy conservation issues has imposed new challenges on the storage community in terms of identifying extremely high-volume, non-volatile and durable recording media. The potential for using macromolecules for ultra-dense storage was recognized as early as 1959, when Richard Feynman outlined his vision for nanotechnology in the lecture "There is plenty of room at the bottom". Among known macromolecules, DNA is unique insofar as it lends itself to implementations of non-volatile recording media of outstanding integrity and extremely high storage capacity. The basic system implementation steps for DNA-based data storage systems include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. In this work we advance the field of macromolecular data storage in three directions. First, we introduce the notion of weakly mutually uncorrelated (WMU) sequences. WMU sequences are characterized by the property that no sufficiently long suffix of one sequence is the prefix of the same or another sequence. In addition, WMU sequences used for primer design in DNA-based data storage systems are required to be at large mutual Hamming distance from each other, have balanced compositions of symbols, and avoid primer-dimer byproducts. We derive bounds on the size of WMU and various constrained WMU codes and present a number of constructions for balanced, error-correcting, primer-dimer free WMU codes using Dyck paths, prefix-synchronized and cyclic codes. Second, we describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on the newly developed WMU coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile medium suitable for both ultrahigh-density archival and rewritable storage applications. Third, we demonstrate for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. Every solution for DNA-based data storage systems so far has focused exclusively on Illumina sequencing devices, but such sequencers are expensive and designed for laboratory use only. Instead, we propose using a new technology: MinION, Oxford Nanopore's handheld sequencer. Nanopore sequencing is fast and cheap, but it results in reads with high error rates. To deal with this issue, we designed an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. As a proof of concept, we stored and sequenced around 3.6 kB of binary data, including two compressed images (a Citizen Kane poster and a smiley face emoji), using a portable data storage system, and obtained error-free read-outs.
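
    The WMU property lends itself to a direct check. Below is a minimal Python sketch based on the property as stated above; the threshold k (the "sufficiently long" cutoff) and the toy DNA codewords are our illustrative choices, not parameters from the work.

        from itertools import product

        def is_wmu(codewords, k):
            # WMU check: no suffix of length >= k of any codeword may equal
            # a prefix of the same or another codeword. Suffixes shorter
            # than k are allowed to collide, which is what distinguishes
            # WMU codes from fully mutually uncorrelated codes.
            n = len(codewords[0])
            for a, b in product(codewords, repeat=2):
                for ell in range(k, n):          # proper suffixes only
                    if a[-ell:] == b[:ell]:
                        return False
            return True

        # Toy codewords over the DNA alphabet: every prefix starts with A,
        # while no suffix of length >= 4 does, so the check passes.
        code = ["ACGTAC", "AGGTTC", "ATGCAC"]
        print(is_wmu(code, k=4))                 # True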

    Coding for Phase Change Memory Performance Optimization

    Over the past several decades, memory technologies have exploited the continual scaling of CMOS to drastically improve performance and cost. Unfortunately, charge-based memories become unreliable beyond 20 nm feature sizes. A promising alternative is Phase-Change Memory (PCM), which leverages scalable resistive thermal mechanisms. To realize PCM's potential, a number of challenges, including limited wear endurance and costly writes, need to be addressed. This thesis introduces novel methodologies for encoding data on PCM which exploit asymmetries in read/write performance to minimize the memory's wear and energy consumption. First, we map the problem to a distance-based graph clustering problem and prove it is NP-hard. Next, we propose two different approaches: an optimal solution based on Integer Linear Programming, and an approximately optimal solution based on Dynamic Programming. Our methods target both single-level and multi-level cell PCM and provide further optimizations for stochastically distributed data. We devise a low-overhead hardware architecture for the encoder. Evaluations demonstrate significant performance gains for our framework.
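
    The read/write asymmetry being exploited (reads are cheap, cell writes are costly and wear the device) shows up in a much simpler classic encoding, Flip-N-Write, sketched below in Python. This illustrates the underlying principle only, not the thesis's clustering-based encoder: read the current word first, and if more than half the bits would change, store the complement plus a one-bit flag so that fewer cells are actually written.

        def flip_n_write(current, new):
            # Count how many cells would change if 'new' were written as-is.
            flips = sum(c != d for c, d in zip(current, new))
            if flips > len(new) // 2:
                stored = [1 - b for b in new]    # store the complement
                flag = 1
            else:
                stored, flag = list(new), 0      # store as-is
            # A full scheme would also count the flag cell as a write.
            cells_written = sum(c != d for c, d in zip(current, stored))
            return stored, flag, cells_written

        def read_back(stored, flag):
            # Decoding just undoes the optional complement.
            return [1 - b for b in stored] if flag else list(stored)

        cur = [0, 0, 0, 0, 0, 0, 0, 0]
        new = [1, 1, 1, 1, 1, 1, 0, 1]           # 7 of 8 bits differ
        stored, flag, writes = flip_n_write(cur, new)
        assert read_back(stored, flag) == new
        print(f"wrote {writes} cell(s) instead of 7")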

    Megatextual Readings: Accessing an Archive of Korean/American Constructions

    This dissertation formulates an approach to reading Korean/American narratives through what I call a "megatext" in order to understand the uneven and dynamic production of Korean/Americanness. By advancing a "megatextual" approach to conceiving of identity and politics, I argue for a way of addressing the critical gap Asian Americanist practitioners continue to witness between activist demands for social justice and scholarly articulations of those demands. A megatextual approach seeks to be an alternative reading practice that bridges different realms of knowledge production. Megatexts argue for a practice of reading across an archive in which texts are actively cross-referencing each other. This approach is essential to the way we apprehend knowledge in the current economy. I define the overarching term "megatext" as a rewritable archive of information and meaning within which the processes of archiving and interpretation take place at the same time. I identify particular theoretical concepts leading into my formulation of megatexts and argue for the political significance of this approach in terms of Asian American studies and public intellectualism. Then, I define and apply the term "Korean/American" in order to refer to the broad body of work constituting here a "Korean/American megatext." The convergences among the various discourses referenced by megatexts demonstrate how they are useful for bridging different realms. Lastly, I identify the significant constructions of "Korea" in the media as impacting Korean/American ethnic identity formations in order to establish my focus on contemporary Korean/Americanness. I apply this focus and formulate megatexts for each chapter based on individual Korean/American authors and the texts and discourses they reference. Chapter one examines a megatext of Chang-rae Lee's novels, authorship, and popularity. Chapter two expands on the concept of authorship and discusses Don Lee and his collection, Yellow, as evidence of the commodification of author and text. Chapter three examines Korean/American women's bodies in Nora Okja Keller's novels as emblematic of the gendered, neocolonial U.S.-Korea relationship. This dissertation emphasizes the importance of reading the dynamic elements of narratives as a way of contending with the shifting and relational nature of the meanings that accrue to Korean/Americanness.

    Towards Endurable, Reliable and Secure Flash Memories: A Coding Theory Application

    Storage systems are experiencing a historical paradigm shift from hard disks to non-volatile memories due to advantages such as higher density, smaller size, and non-volatility. On the other hand, the Solid-State Drive (SSD) also poses critical challenges to application and system designers. The first challenge is endurance: flash memory can only experience a limited number of program/erase cycles, after which the cell quality degradation can no longer be accommodated by the memory system's fault tolerance capacity. The second challenge is reliability: flash cells are sensitive to various kinds of noise and disturbance, i.e., data may change unintentionally after experiencing noise or disturbs. The third challenge is security: it is impossible or costly to delete files from flash memory securely without leaking information to possible eavesdroppers. In this dissertation, we first study noise modeling and capacity analysis for NAND flash memories (the most popular flash memory on the market), which gives us insight into how flash memories work and into their unique noise. Second, based on the characteristics of content-replication codewords in flash memories, we propose a joint decoder to enhance flash memory reliability. Third, we explore data representation schemes in flash memories and optimal rewriting code constructions in order to solve the endurance problem. Fourth, in order to make our rewriting codes more practical, we study noisy write-efficient memories and Write-Once Memory (WOM) codes against inter-cell interference in NAND memories. Finally, motivated by the secure deletion problem in flash memories, we study coding schemes that address both the endurance and the security issues. This work presents a series of information theory and coding theory studies on these three critical issues, and shows how coding theory can be utilized to address these challenges.
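
    A concrete feel for rewriting codes comes from the classic Rivest-Shamir WOM code, which stores 2 bits twice in 3 write-once cells (bits may only go 0 -> 1). The Python sketch below is our illustration of that standard construction, not necessarily one of the code constructions from the dissertation.

        FIRST = {"00": "000", "01": "100", "10": "010", "11": "001"}

        def decode(cells):
            # Weight <= 1 means the cells still hold a first-generation
            # codeword; otherwise they hold the complement of one.
            pattern = cells if cells.count("1") <= 1 else \
                "".join("1" if c == "0" else "0" for c in cells)
            return {v: k for k, v in FIRST.items()}[pattern]

        def second_write(cells, data):
            # If the data is unchanged, leave the cells alone; otherwise
            # raise cells to the complement of the data's first-generation
            # pattern. Bits only ever go 0 -> 1, never back.
            if decode(cells) == data:
                return cells
            target = "".join("1" if c == "0" else "0" for c in FIRST[data])
            assert all(not (c == "1" and t == "0")
                       for c, t in zip(cells, target))   # write-once check
            return target

        cells = FIRST["10"]                  # first write stores '10'
        cells = second_write(cells, "01")    # rewrite to '01': 010 -> 011
        print(cells, decode(cells))          # prints: 011 01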