Coding for Memory with Stuck-at Defects
In this paper, we propose an encoding scheme for partitioned linear block
codes (PLBC), which mask stuck-at defects in memories. In addition, we
derive an upper bound on, and an estimate of, the probability that masking fails.
Numerical results show that PLBC can efficiently mask the defects with the
proposed encoding scheme, and that our upper bound is very tight.
Comment: 6 pages, 5 figures, IEEE International Conference on Communications
(ICC), Jun. 201
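The defect-masking idea behind such codes (going back to Kuznetsov and Tsybakov) can be sketched as follows. This is a toy illustration, not Heegard's PLBC construction; the matrix `G0`, the word length, and the function name are invented for the example:

```python
import numpy as np

def mask_stuck_at(codeword, defects, G0):
    """Choose d so that (codeword + d @ G0) % 2 agrees with every
    stuck-at cell. Solves the small GF(2) system by exhaustive
    search over d (fine for a tiny G0). `defects` maps cell
    index -> stuck value. Returns the masked word, or None if
    masking fails."""
    l, n = G0.shape
    for d_int in range(2 ** l):
        d = np.array([(d_int >> i) & 1 for i in range(l)])
        w = (codeword + d @ G0) % 2
        if all(w[pos] == val for pos, val in defects.items()):
            return w
    return None  # masking failure: defects outside G0's reach

# Toy example: n = 4 cells, l = 2 redundancy bits for masking.
G0 = np.array([[1, 0, 1, 0],
               [0, 1, 0, 1]])
word = np.array([1, 1, 0, 0])
# Cell 0 is stuck at 0 and cell 3 is stuck at 1.
masked = mask_stuck_at(word, {0: 0, 3: 1}, G0)  # -> [0, 0, 1, 1]
```

Any reader of the memory then recovers the message by projecting out the row space of `G0`; the paper's bound concerns how often no suitable `d` exists.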
Redundancy Allocation of Partitioned Linear Block Codes
Most memories suffer from both permanent defects and intermittent random
errors. The partitioned linear block codes (PLBC) were proposed by Heegard to
efficiently mask stuck-at defects and correct random errors. The PLBC have two
separate redundancy parts for defects and random errors. In this paper, we
investigate how to allocate redundancy between these two parts. The optimal
allocation is investigated by simulations, which show that PLBC can
significantly reduce the probability of decoding failure in a memory with
defects. In addition, we derive an upper bound on the probability of decoding
failure of PLBC and use it to estimate the optimal redundancy allocation. The
estimated allocation matches the optimal one well.
Comment: 5 pages, 2 figures, to appear in IEEE International Symposium on
Information Theory (ISIT), Jul. 201
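The allocation trade-off can be illustrated with a toy Monte Carlo sweep. The success conditions below are simplifying assumptions made for the illustration, not the exact PLBC masking and decoding conditions, and all parameter values are invented:

```python
import random

def failure_rate(n, r, r0, p_defect, p_error, trials=20000, seed=1):
    """Toy model: r0 of the r redundant bits mask defects and the
    remaining r - r0 correct random errors. We assume (a rough
    stand-in for the true conditions) that masking succeeds iff
    #defects <= r0 and correction succeeds iff
    #errors <= (r - r0) // 2."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        defects = sum(rng.random() < p_defect for _ in range(n))
        errors = sum(rng.random() < p_error for _ in range(n))
        if defects > r0 or errors > (r - r0) // 2:
            fails += 1
    return fails / trials

# Sweep the split r0 to locate the best allocation in this toy model.
rates = {r0: failure_rate(127, 10, r0, 0.005, 0.002) for r0 in range(11)}
best_r0 = min(rates, key=rates.get)
```

Putting all redundancy on one side fails whenever the other impairment occurs at all, so the minimum sits at an interior split; the paper's upper bound predicts that split analytically instead of by simulation.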
Coding scheme for 3D vertical flash memory
Recently introduced 3D vertical flash memory is expected to be a disruptive
technology since it overcomes scaling challenges of conventional 2D planar
flash memory by stacking up cells in the vertical direction. However, 3D
vertical flash memory suffers from a new problem known as fast detrapping,
which is a rapid charge loss problem. In this paper, we propose a scheme that
compensates for the effect of fast detrapping by intentional inter-cell
interference (ICI). In order to properly control the intentional ICI, our
scheme relies on a coding technique that incorporates side information about
fast detrapping during the encoding stage. This technique is closely connected
to the well-known problem of coding for a memory with defective cells.
Numerical results show that the proposed scheme can effectively address the
problem of fast detrapping.
Comment: 7 pages, 9 figures, accepted to ICC 2015. arXiv admin note: text
overlap with arXiv:1410.177
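A toy sketch of the compensation idea: if the encoder knows a victim cell's fast-detrapping voltage loss, it can pick the neighbor program level whose coupling best cancels that loss. The linear coupling model, the coupling ratio, and all names here are assumptions for illustration, not the paper's actual scheme:

```python
def compensate(loss, gamma, neighbor_levels):
    """Choose the neighbor program level whose inter-cell
    interference best offsets a known detrapping loss, under a
    toy linear model: shift_on_victim = gamma * v_neighbor."""
    return min(neighbor_levels, key=lambda v: abs(loss - gamma * v))

# Known loss of 0.3 V, coupling ratio 0.1: level 3 gives a 0.3 V boost.
level = compensate(0.3, 0.1, [0, 1, 2, 3, 4])  # -> 3
```

The coding problem arises because the neighbor's level also carries data, which is why the paper connects this to writing on a memory with (known) defective cells.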
Signal Processing for Caching Networks and Non-volatile Memories
The recent information explosion has created a pressing need for faster and more reliable data storage and transmission schemes. This thesis focuses on two systems: caching networks and non-volatile storage systems. It proposes network protocols to improve the efficiency of information delivery, as well as signal processing schemes to reduce errors at the physical layer.

This thesis first investigates caching and delivery strategies for content delivery networks. Caching is a useful technique for reducing the network burden by prefetching some contents during off-peak hours. Coded caching [1], proposed by Maddah-Ali and Niesen, is the foundation of our algorithms; it has been shown to reduce peak traffic rates by encoding transmissions so that different users can extract different information from the same packet. Content delivery networks store information distributed across multiple servers, so as to balance the load and avoid unrecoverable losses in case of node or disk failures. On one hand, distributed storage limits the capability of combining content from different servers into a single message, causing performance losses in coded caching schemes. On the other hand, the inherent redundancy in distributed storage systems can be used to improve the performance of those schemes through parallelism. This thesis proposes a scheme that combines distributed storage of the content across multiple servers with an efficient coded caching algorithm for delivery to the users. This scheme is shown to reduce the peak transmission rate below that of state-of-the-art algorithms.
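The coded caching gain described above shows up already in the classic two-user, two-file example: each user caches a different half of every file, and a single XOR-ed packet then serves both requests at once. File contents and names below are invented for the illustration:

```python
def xor(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two files, each split into two halves (toy 4-byte halves).
A1, A2 = b"AAAA", b"aaaa"
B1, B2 = b"BBBB", b"bbbb"

# Placement phase: user 1 caches {A1, B1}, user 2 caches {A2, B2}.
cache1 = {"A1": A1, "B1": B1}
cache2 = {"A2": A2, "B2": B2}

# Delivery phase: user 1 requests file A, user 2 requests file B.
# One coded packet serves both requests simultaneously:
packet = xor(A2, B1)

# User 1 XORs out its cached B1 to recover A2; user 2 XORs out A2.
got_A2 = xor(packet, cache1["B1"])
got_B1 = xor(packet, cache2["A2"])
```

Uncoded delivery would need two packets (A2 and B1); the coded packet halves the peak rate, and the thesis studies how to preserve this gain when the file halves live on different servers.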
Decoding of Low-Density Parity-Check Codes in the Presence of Logic-Gate Errors
Due to the huge increase in integration density, lower supply voltages, and variations in the manufacturing
process, complementary metal-oxide-semiconductor (CMOS) and emerging nanoelectronic devices
are inherently unreliable. Moreover, the demand for energy efficiency requires reducing
energy consumption by several orders of magnitude, which can be achieved only by aggressive
supply voltage scaling. Consequently, signal levels become much lower and closer to the noise
level, which reduces the components' noise immunity and leads to unreliable behavior. It is
widely accepted that future generations of circuits and systems must be designed to cope with
unreliable components...