Efficient Multi-Point Local Decoding of Reed-Muller Codes via Interleaved Codex
Reed-Muller codes are among the most important classes of locally correctable
codes. Currently, local decoding of Reed-Muller codes is based on decoding along
lines or quadratic curves to recover a single coordinate. To recover multiple
coordinates simultaneously, the naive way is to repeat the local decoding for a
single coordinate, which may be expensive, i.e., require higher query
complexity. In this paper, we focus on Reed-Muller codes in the usual parameter
regime, namely, where the total degree of the evaluation polynomials is
d = Θ(q) and q is the code alphabet size (in fact, d can be as big as q/4 in
our setting). By introducing a novel variation of codex, an interleaved codex
(the concept of codex has been used for arithmetic secret sharing
\cite{C11,CCX12}), we are able to locally recover an arbitrarily large number
k of coordinates of a Reed-Muller codeword simultaneously, with error
probability exp(-Ω(k)), at the cost of querying merely O(q^2 k) coordinates.
It turns out that our local decoding of Reed-Muller codes shows (perhaps
surprisingly) that accessing k locations is in fact cheaper than repeating the
procedure for accessing a single location k times. Precisely speaking, to
obtain the same success probability by repeating the local decoding algorithm
for a single coordinate, one has to query Ω(q k^2) coordinates. Thus, the query
complexity of our local decoding is smaller for k = Ω(q). If we impose the same
query complexity constraint on both algorithms, our local decoding algorithm
yields a smaller error probability when k = Ω(q√q). In addition, our local
decoding is efficient, i.e., the decoding complexity is poly(k, q). The
construction of an interleaved codex is based on the concatenation of a codex
with a multiplication-friendly pair, while the main tool for realizing a codex
is algebraic function fields (or, more precisely, algebraic geometry codes).
Our estimate of the error probability is based on the bound for t-wise linearly
independent variables given in \cite{BR94}.
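The classical line-decoding baseline that the abstract contrasts with can be sketched concretely. The following is a minimal illustration, not the paper's interleaved-codex decoder: it recovers one coordinate f(a) of a noiseless Reed-Muller codeword by querying d + 1 points on a random line through a and interpolating the degree-d restriction of f to that line. The field size, degree, and polynomial are arbitrary choices for the demonstration.

```python
# Minimal sketch of single-coordinate local decoding of a Reed-Muller
# codeword by decoding along a random line. Assumes a prime field F_q,
# total degree d < q, and a noiseless oracle; with errors one would
# sample several lines and take a majority vote over the results.
import random

q = 13          # prime field size (illustrative choice)
m = 2           # number of variables
d = 3           # total degree, d < q

def eval_poly(coeffs, point):
    """Evaluate a multivariate polynomial given as {exponent-tuple: coeff} mod q."""
    total = 0
    for exps, c in coeffs.items():
        term = c
        for x, e in zip(point, exps):
            term = term * pow(x, e, q) % q
        total = (total + term) % q
    return total

def lagrange_at_zero(ts, vals):
    """Interpolate the univariate polynomial through (t_i, v_i) and evaluate it at t = 0."""
    acc = 0
    for i, (ti, vi) in enumerate(zip(ts, vals)):
        num, den = 1, 1
        for j, tj in enumerate(ts):
            if j != i:
                num = num * (-tj) % q
                den = den * (ti - tj) % q
        # pow(den, q - 2, q) is the modular inverse of den in the prime field F_q
        acc = (acc + vi * num * pow(den, q - 2, q)) % q
    return acc

def local_decode(oracle, a):
    """Recover coordinate f(a) with d + 1 queries on a random line through a."""
    b = [random.randrange(1, q) for _ in range(m)]       # random direction
    ts = random.sample(range(1, q), d + 1)               # distinct nonzero parameters
    vals = [oracle([(ai + t * bi) % q for ai, bi in zip(a, b)]) for t in ts]
    return lagrange_at_zero(ts, vals)  # f restricted to the line has degree <= d

# Example: f(x, y) = 2x^3 + xy + 5 over F_13
f = {(3, 0): 2, (1, 1): 1, (0, 0): 5}
oracle = lambda p: eval_poly(f, p)
assert local_decode(oracle, [4, 7]) == oracle([4, 7])
```

Repeating this d + 1-query procedure independently for each of k target coordinates is the naive multi-point strategy whose Ω(q k^2) query cost the paper improves on.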
Variable Rate Transmission Over Noisy Channels
Hybrid automatic repeat request transmission (hybrid ARQ) schemes aim to provide
system reliability for transmissions over noisy channels while still maintaining a reasonably
high throughput efficiency by combining retransmissions of automatic repeat
requests with forward error correction (FEC) coding methods. In type-II hybrid ARQ
schemes, the additional parity information required by channel codes to achieve forward
error correction is provided only when errors have been detected. Hence, the
available bits are partitioned into segments, some of which are sent to the receiver immediately,
while others are held back and only transmitted upon the detection of errors. This
scheme raises two questions. Firstly, how should the available bits be ordered for optimal
partitioning into consecutive segments? Secondly, how large should the individual
segments be?
This thesis aims to provide an answer to both of these questions for the transmission
of convolutional and turbo codes over additive white Gaussian noise (AWGN),
inter-symbol interference (ISI) and Rayleigh channels. Firstly, the ordering of bits is
investigated by simulating the transmission of packets split into segments with a size of
1 bit and finding the critical number of bits, i.e. the number of bits where the output of
the decoder is error-free. This approach provides a maximum, practical performance
limit over a range of signal-to-noise levels. With these practical performance limits, the
attention is turned to the size of the individual segments, since packets of 1 bit cause
an intolerable overhead and delay. An adaptive hybrid ARQ system is investigated,
in which the transmitter uses the number of bits already sent, together with the
receiver's decoding results, to adjust the size of the initial packet and of
subsequent segments to the conditions of a stationary channel.
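The segment-by-segment exchange described above can be sketched with a toy model in which the decoder succeeds exactly once the received bit count reaches a channel-dependent critical number of bits, mirroring the thesis's performance-limit experiments. The function and parameters below are illustrative stand-ins, not material from the thesis itself.

```python
# Toy sketch of a type-II hybrid ARQ exchange: parity segments are held
# back and released one at a time until the decoder succeeds. The
# "decoder" is a stand-in that succeeds once the number of received bits
# reaches a critical number (a property of the channel realization).

def hybrid_arq(segments, critical_bits):
    """Send segments in order; return (total bits sent, number of transmissions)."""
    received = 0
    for round_no, seg in enumerate(segments, start=1):
        received += seg
        if received >= critical_bits:   # decoder output is error-free
            return received, round_no
    return received, len(segments)      # decoding failed even with all bits sent

# 1-bit segments give the practical performance limit but maximal delay:
bits, rounds = hybrid_arq([1] * 300, critical_bits=137)
assert (bits, rounds) == (137, 137)

# Coarser segments trade a few extra bits sent for far fewer retransmissions:
bits, rounds = hybrid_arq([100, 25, 25, 25, 25, 100], critical_bits=137)
assert (bits, rounds) == (150, 3)
```

The two assertions illustrate the trade-off the thesis investigates: fine-grained segments minimize the bits sent but maximize the number of transmission rounds, while larger segments reduce round trips at the cost of redundant bits.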
Trade-off analysis of modes of data handling for earth resources (ERS), volume 1
Data handling requirements are reviewed for earth observation missions along with likely technology advances. Parametric techniques for synthesizing potential systems are developed. Major tasks include: (1) review of the sensors under development and extensions of or improvements in these sensors; (2) development of mission models for missions spanning land, ocean, and atmosphere observations; (3) summary of data handling requirements including the frequency of coverage, timeliness of dissemination, and geographic relationships between points of collection and points of dissemination; (4) review of data routing to establish ways of getting data from the collection point to the user; (5) on-board data processing; (6) communications link; and (7) ground data processing. A detailed synthesis of three specific missions is included.
Purposive variation in recordkeeping in the academic molecular biology laboratory
This thesis presents an investigation into the role played by laboratory records in the disciplinary discourse of academic molecular biology laboratories.
The motivation behind this study stems from two areas of concern. Firstly, the laboratory record has received comparatively little attention as a linguistic genre in spite of its central role in the daily work of laboratory scientists. Secondly, laboratory records have become a focus for technologically driven change through the advent of computing systems that aim to support a transition away from the traditional paper-based approach towards electronic recordkeeping. Electronic recordkeeping raises the potential for increased sharing of laboratory records across laboratory communities. However, the uptake of electronic laboratory notebooks has been, and remains, markedly low in academic laboratories.
The investigation employs a multi-perspective research framework combining ethnography, genre analysis, and reading protocol analysis in order to evaluate both the organizational practices and linguistic practices at work in laboratory recordkeeping, and to examine these practices from the viewpoints of both producers and consumers of laboratory records. Particular emphasis is placed on assessing variation in the practices used by different scientists when keeping laboratory records, and on assessing the types of articulation work used to achieve mutual intelligibility across laboratory members.
The findings of this investigation indicate that the dominant viewpoint held by laboratory staff other than principal investigators conceptualized laboratory records as a personal resource rather than a community archive. Readers other than the original author relied almost exclusively on the recontextualization of selected information from laboratory records into ‘public genres’ such as laboratory talks, research articles, and progress reports as the preferred means of accessing the information held in the records. The consistent use of summarized forms of recording experimental data rendered most laboratory records both unreliable and of limited usability in the records-management sense, in that they did not form full and accurate descriptions that could support future organizational activities.
These findings offer a counterpoint to other studies, notably a number of studies undertaken as part of technology developments for electronic recordkeeping, that report sharing of laboratory records or assume a ‘cyberbolic’ view of laboratory records as a shared resource.
Ray Tracing Gems
This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware.
What you'll learn:
- The latest ray tracing techniques for developing real-time applications in multiple domains
- Guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR)
- How to implement high-performance graphics for interactive visualizations, games, simulations, and more
Who this book is for:
- Developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing
- Students looking to learn about best practices in these areas
- Enthusiasts who want to understand and experiment with their new GPU
Esprit '91. Proceedings of the annual Esprit conference. Brussels, 25-29 November 1991. EUR 13853 EN
Play Among Books
How does coding change the way we think about architecture? Miro Roman and his AI Alice_ch3n81 develop a playful scenario in which they propose coding as the new literacy of information. They convey knowledge in the form of a project model that links the fields of architecture and information through two interwoven narrative strands in an “infinite flow” of real books.