Alpha Entanglement Codes: Practical Erasure Codes to Archive Data in Unreliable Environments
Data centres that use consumer-grade disk drives and distributed
peer-to-peer systems are unreliable environments in which to archive data
without enough redundancy. Most redundancy schemes are not completely
effective at providing high availability, durability and integrity in the
long term. We propose alpha
entanglement codes, a mechanism that creates a virtual layer of highly
interconnected storage devices to propagate redundant information across a
large scale storage system. Our motivation is to design flexible and practical
erasure codes with high fault-tolerance to improve data durability and
availability even in catastrophic scenarios. By flexible and practical, we mean
code settings that can be adapted to future requirements and practical
implementations with reasonable trade-offs between security, resource usage and
performance. The codes have three parameters. Alpha increases storage overhead
linearly but increases the possible paths to recover data exponentially. Two
other parameters increase fault-tolerance even further without the need for
additional storage. As a result, an entangled storage system can provide high
availability, durability and offer additional integrity: it is more difficult
to modify data undetectably. We evaluate how several redundancy schemes perform
in unreliable environments and show that alpha entanglement codes are flexible
and practical codes. Remarkably, they excel at code locality and hence
reduce repair costs and become less dependent on storage locations with poor
availability. Our solution outperforms Reed-Solomon codes in many disaster
recovery scenarios.
Comment: The publication has 12 pages and 13 figures. This work was partially
supported by Swiss National Science Foundation SNSF Doc.Mobility 162014.
2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and
Networks (DSN)
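The chained-XOR intuition behind entanglement can be shown with a minimal sketch, assuming a single strand and illustrative names (Strand, entangle and recover_data are ours, not the authors' API; the paper's alpha parameter would interleave several such strands):

```python
# Minimal single-strand sketch of the entanglement step: each new data
# block is XORed with the tail parity of the strand, chaining blocks
# together so a lost block can be rebuilt from its neighbouring parities.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class Strand:
    def __init__(self, block_size: int):
        self.parities = [bytes(block_size)]  # p0: all-zero seed parity

    def entangle(self, data: bytes) -> bytes:
        """Append a data block; return the new tail parity p_i = d_i XOR p_{i-1}."""
        p = xor_blocks(data, self.parities[-1])
        self.parities.append(p)
        return p

    def recover_data(self, i: int) -> bytes:
        """Rebuild data block i from the two surrounding parities."""
        return xor_blocks(self.parities[i], self.parities[i - 1])

strand = Strand(block_size=4)
for d in [b"AAAA", b"BBBB", b"CCCC"]:
    strand.entangle(d)
assert strand.recover_data(2) == b"BBBB"  # lost block rebuilt from parities
```

With several interleaved strands, each block participates in multiple chains, which is what multiplies the available recovery paths as alpha grows.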
On the origin of the mitochondrial genetic code: Towards a unified mathematical framework for the management of genetic information
The origin of the genetic code represents one of the most challenging problems in molecular evolution. The genetic code is an important universal feature of extant organisms and indicates a common ancestry of the different forms of life on Earth. Known variants of the genetic code can be divided mainly into mitochondrial and nuclear classes. Here we provide new insight into the origin of the mitochondrial genetic code: we found that its degeneracy distribution can be explained by using a mathematical approach recently developed for the description of the Euplotes nuclear variant of the genetic code. The results point to a primeval mitochondrial genetic code composed of four-base codons, which we call tesserae, that, among other features, exhibit outstanding error-detection capabilities. The theoretical description also suggests a formulation of a plausible biological theory about the origin of protein coding. This theory is based on the symmetry properties of hypothetical primeval chemical adaptors between nucleic acids and amino acids (ancient tRNAs). Our paper provides a unified mathematical framework for different hypotheses on the origin of genetic coding. It also contributes to revisiting our present view of the evolutionary steps that led to extant genetic codes by giving a new first-principles perspective on the difficult problem of the origin of the genetic code and, consequently, on the origin of life on Earth.
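The error-detection power of four-base words can be illustrated with the Klein four-group structure such proposals build on; the base mapping, the is_tessera predicate and the specific acceptance rule below are our illustrative assumptions, not necessarily the paper's exact construction:

```python
# Hedged sketch: four-base "tessera"-like words with built-in
# single-error detection. Bases are mapped to the Klein four-group
# Z2 x Z2; a word b1 b2 b3 b4 is accepted only if the group element
# taking b1 -> b2 equals the one taking b3 -> b4.
from itertools import product

BASE = {"T": 0b00, "C": 0b01, "A": 0b10, "G": 0b11}  # bases as Z2 x Z2

def is_tessera(word: str) -> bool:
    b1, b2, b3, b4 = (BASE[x] for x in word)
    return (b1 ^ b2) == (b3 ^ b4)  # same Klein transformation in both halves

valid = ["".join(w) for w in product("TCAG", repeat=4) if is_tessera("".join(w))]
assert len(valid) == 64  # same cardinality as a standard 64-codon table

# Any single-base substitution changes exactly one of the two group
# elements, breaking the relation, so every such error is detectable:
for word in valid:
    for pos, base in product(range(4), "TCAG"):
        if base != word[pos]:
            mutated = word[:pos] + base + word[pos + 1:]
            assert not is_tessera(mutated)
```

Because the group acts regularly on the bases, a valid word can never be turned into another valid word by a single substitution, which is the detection property the abstract highlights.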
The Self-Organization of Meaning and the Reflexive Communication of Information
Following a suggestion of Warren Weaver, we extend the Shannon model of
communication piecemeal into a complex systems model in which communication is
differentiated both vertically and horizontally. This model enables us to
bridge the divide between Niklas Luhmann's theory of the self-organization of
meaning in communications and empirical research using information theory.
First, we distinguish between communication relations and correlations among
patterns of relations. The correlations span a vector space in which relations
are positioned and can be provided with meaning. Second, positions provide
reflexive perspectives. Whereas the different meanings are integrated locally,
each instantiation opens global perspectives--"horizons of meaning"--along
eigenvectors of the communication matrix. These next-order codifications of
meaning can be expected to generate redundancies when interacting in
instantiations. Increases in redundancy indicate new options and can be
measured as local reduction of prevailing uncertainty (in bits). The systemic
generation of new options can be considered a hallmark of the
knowledge-based economy.
Comment: accepted for publication in Social Science Information, March 21,
201
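As a toy illustration of the quantities involved (the matrix, its size and the helper names are our assumptions, not the authors' operationalization), one can extract eigenvectors of a small communication matrix and express redundancy as a reduction of uncertainty in bits:

```python
# Toy illustration: a symmetric "communication matrix" of relation
# counts among four communicators; its eigenvectors stand in for the
# latent codifications, and uncertainty/redundancy are measured in bits.
import numpy as np

comm = np.array([
    [0, 3, 1, 0],
    [3, 0, 2, 1],
    [1, 2, 0, 4],
    [0, 1, 4, 0],
], dtype=float)

# "Horizons of meaning": dominant eigenvector of the communication matrix.
eigvals, eigvecs = np.linalg.eigh(comm)
dominant = eigvecs[:, np.argmax(eigvals)]
print("dominant eigenvector (latent codification):", np.round(dominant, 2))

# Prevailing uncertainty: Shannon entropy of the relational distribution.
p = comm.sum(axis=1) / comm.sum()
H = -np.sum(p * np.log2(p))
H_max = np.log2(len(p))

# Redundancy read as local reduction of prevailing uncertainty, in bits.
print(f"H = {H:.3f} bits, H_max = {H_max:.3f} bits, reduction = {H_max - H:.3f} bits")
```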
Mitigating smart card fault injection with link-time code rewriting: a feasibility study
We present a feasibility study on protecting smart card software against fault-injection attacks by means of binary code rewriting. We implement a range of protection techniques in a link-time rewriter and evaluate and discuss the obtained coverage, the associated overhead and engineering effort, as well as the practical usability of the approach.
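A minimal sketch of the kind of pass such a rewriter might apply, assuming a toy textual instruction format (the mnemonics and the duplicate_branches helper are ours, not the paper's toolchain): duplicating conditional branches is one classic countermeasure against instruction-skip faults.

```python
# Toy link-time pass: emit every security-critical conditional branch
# twice, so that skipping one branch instruction via fault injection
# still leaves a second copy of the check in place.

def duplicate_branches(instructions: list[str]) -> list[str]:
    rewritten = []
    for ins in instructions:
        rewritten.append(ins)
        if ins.startswith(("beq", "bne")):  # toy mnemonics for cond. branches
            rewritten.append(ins)           # redundant second check
    return rewritten

pin_check = ["cmp r0, r1", "bne fail", "bl unlock"]
print(duplicate_branches(pin_check))
# ['cmp r0, r1', 'bne fail', 'bne fail', 'bl unlock']
```

The duplicated branch is harmless in normal execution (the flags are unchanged between the two copies) but forces an attacker to inject two precisely timed faults instead of one.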
Criticality Aware Soft Error Mitigation in the Configuration Memory of SRAM based FPGA
Efficient low-complexity error correcting code (ECC) is considered an
effective technique for mitigation of multi-bit upsets (MBU) in the
configuration memory (CM) of static random access memory (SRAM) based Field
Programmable Gate Array (FPGA) devices. Traditional multi-bit ECCs have large
overhead and complex decoding circuits to correct adjacent multi-bit errors.
In this work, we propose a simple multi-bit ECC which uses the Secure Hash
Algorithm for error detection and a parity-based two-dimensional erasure
product code for error correction. Present error mitigation techniques
perform error correction in the CM without considering the criticality or the
execution period of the tasks allocated to different portions of the CM. In
most cases, error correction is not done at the right instant, which
sometimes either suspends normal system operation or wastes hardware
resources on less critical tasks. In this paper, we advocate a dynamic
priority-based hardware scheduling algorithm which chooses the tasks for
error correction based on their area, execution period and criticality. The
proposed method has been validated in terms of overhead due to redundant
bits, error correction time and system reliability.
Comment: 6 pages, 8 figures, conference
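The two-dimensional parity idea behind an erasure product code can be sketched as follows (the matrix layout and helper names are our assumptions, not the paper's implementation): row and column parities locate a single flipped bit at the intersection of the failing row and the failing column.

```python
# Hedged sketch of two-dimensional parity correction: even parity is
# stored for every row and column of a bit matrix; a single-bit upset
# flips exactly one row parity and one column parity, whose
# intersection pinpoints the bit to flip back.
import numpy as np

def parities(bits: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Even parity of every row and every column."""
    return bits.sum(axis=1) % 2, bits.sum(axis=0) % 2

frame = np.array([[1, 0, 1, 1],
                  [0, 1, 1, 0],
                  [1, 1, 0, 0]])
row_p, col_p = parities(frame)        # stored alongside the data

corrupted = frame.copy()
corrupted[1, 2] ^= 1                  # simulate a single-bit upset

row_q, col_q = parities(corrupted)
bad_row = int(np.argmax(row_p != row_q))
bad_col = int(np.argmax(col_p != col_q))
corrupted[bad_row, bad_col] ^= 1      # flip back the located bit
assert (corrupted == frame).all()
print(f"corrected bit at ({bad_row}, {bad_col})")
```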