Improved construction of irregular progressive edge-growth Tanner graphs
The progressive edge-growth algorithm is a well-known procedure to construct
regular and irregular low-density parity-check codes. In this paper, we propose
a modification of the original algorithm that improves the performance of these
codes in the waterfall region when constructing codes that comply with both
check and symbol node degree distributions. The proposed algorithm is thus of
interest when a family of irregular codes with a complex check node degree
distribution is used.

Comment: 3 pages, 3 figures
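As a rough illustration of the baseline procedure that the paper modifies, the sketch below grows a Tanner graph edge by edge: each symbol node is connected to a check node outside its current neighborhood, preferring the check of lowest degree. The function names are hypothetical and the tie-breaking rules are simplified with respect to the original PEG algorithm (which selects among the checks at maximum depth in the expanded subgraph).

```python
def peg_construct(n_sym, n_chk, sym_degrees):
    """Simplified progressive edge-growth (PEG) sketch, not the paper's
    improved variant: add edges one at a time, each time picking a check
    node not yet reachable from the symbol (avoiding short cycles) and,
    among candidates, the one of lowest current degree."""
    chk_nbrs = [set() for _ in range(n_chk)]  # check node -> symbol neighbours
    sym_nbrs = [set() for _ in range(n_sym)]  # symbol node -> check neighbours

    def reachable_checks(s):
        """Check nodes reachable from symbol s in the graph built so far."""
        seen_sym, seen_chk, frontier = {s}, set(), {s}
        while frontier:
            nxt = set()
            for u in frontier:
                for c in sym_nbrs[u]:
                    if c not in seen_chk:
                        seen_chk.add(c)
                        nxt |= chk_nbrs[c] - seen_sym
            seen_sym |= nxt
            frontier = nxt
        return seen_chk

    for s in range(n_sym):
        for _ in range(sym_degrees[s]):
            covered = reachable_checks(s)
            # prefer a check outside the neighbourhood tree of s;
            # otherwise fall back to any check not already connected to s
            cands = ([c for c in range(n_chk) if c not in covered]
                     or [c for c in range(n_chk) if c not in sym_nbrs[s]])
            best = min(cands, key=lambda c: len(chk_nbrs[c]))
            chk_nbrs[best].add(s)
            sym_nbrs[s].add(best)
    return sym_nbrs
```

The improvement proposed in the paper concerns how such a construction behaves when both node degree distributions are imposed simultaneously; the sketch only fixes the symbol-side degrees.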
Blind Reconciliation
Information reconciliation is a crucial procedure in the classical
post-processing of quantum key distribution (QKD). Poor reconciliation
efficiency, revealing more information than strictly needed, may compromise the
maximum attainable distance, while poor performance of the algorithm limits the
practical throughput in a QKD device. Historically, reconciliation has been
mainly done using procedures with close to minimal information disclosure but
heavy interactivity, like Cascade, or using less efficient but also less
interactive procedures (just one message is exchanged), like those based on
low-density parity-check (LDPC) codes. The price to pay in the LDPC case is
that good efficiency is only attained for very long codes and in a very narrow
range centered around the quantum bit error rate (QBER) that the code was
designed to reconcile, thus forcing the use of several codes if a broad range of
QBER needs to be catered for. Real world implementations of these methods are
thus very demanding, either on computational or communication resources or
both, to the extent that the latest generation of GHz-clocked QKD systems is
finding a bottleneck in the classical part. In order to produce compact, high
performance and reliable QKD systems it would be highly desirable to remove
these problems. Here we analyse the use of short-length LDPC codes in the
information reconciliation context using a low interactivity, blind, protocol
that avoids an a priori error rate estimation. We demonstrate that LDPC codes
of length 2x10^3 bits are suitable for blind reconciliation. Such codes are of
high interest in practice, since they can be used in hardware implementations
with very high throughput.

Comment: 22 pages, 8 figures
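The core primitive underlying LDPC reconciliation is syndrome decoding: Alice discloses only the syndrome of her string, and Bob corrects his string to the matching coset member. The toy sketch below illustrates this with a tiny hand-written parity-check matrix and brute-force minimum-weight decoding; both stand in for a real short-length LDPC code and its belief-propagation decoder, and the blind protocol's incremental disclosure of punctured bits is omitted.

```python
from itertools import product

# Tiny illustrative parity-check matrix (3 checks, 6 symbols); a real
# deployment would use a structured LDPC matrix of ~2x10^3 columns.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def syndrome(bits):
    """s = H.x over GF(2)."""
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def reconcile(alice_bits, bob_bits):
    """Alice discloses only s = H.x; Bob searches for the lowest-weight
    error pattern e with H.(y xor e) = s and corrects his string to x.
    Brute force is only viable for this toy length."""
    s = syndrome(alice_bits)
    best = None
    for e in product([0, 1], repeat=len(bob_bits)):
        candidate = [b ^ f for b, f in zip(bob_bits, e)]
        if syndrome(candidate) == s and (best is None or sum(e) < sum(best)):
            best = e
    return [b ^ f for b, f in zip(bob_bits, best)]
```

With one discrepancy between the strings, Bob recovers Alice's string exactly while Alice has revealed only the three syndrome bits.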
Untainted Puncturing for Irregular Low-Density Parity-Check Codes
Puncturing is a well-known coding technique widely used for constructing
rate-compatible codes. In this paper, we consider the problem of puncturing
low-density parity-check codes and propose a new algorithm for intentional
puncturing. The algorithm is based on the puncturing of untainted symbols,
i.e., symbols with no punctured symbols within their neighborhood. It is shown
that the algorithm proposed here performs better than previous proposals for a
range of coding rates and small proportions of punctured symbols.

Comment: 4 pages, 3 figures
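A minimal sketch of the untainted-symbol criterion described above, on an explicit Tanner-graph adjacency. The interface and the tie-break by lowest symbol degree are illustrative assumptions; the fallback when no untainted symbol remains is a stand-in, not the paper's strategy.

```python
def untainted_puncture(sym_nbrs, chk_nbrs, target):
    """Puncture up to `target` symbols, each time choosing a symbol whose
    neighborhood (all symbols sharing a check with it) contains no
    already punctured symbol."""
    punctured = set()

    def untainted(s):
        return all(v not in punctured
                   for c in sym_nbrs[s] for v in chk_nbrs[c])

    while len(punctured) < target:
        cands = [s for s in range(len(sym_nbrs))
                 if s not in punctured and untainted(s)]
        if not cands:
            # no untainted symbol left; a real implementation would fall
            # back to a secondary scoring criterion here (assumption)
            break
        # illustrative tie-break: puncture the symbol of lowest degree
        punctured.add(min(cands, key=lambda s: len(sym_nbrs[s])))
    return punctured
```

Because every punctured symbol keeps its neighborhood free of other punctured symbols, each one retains at least one check whose remaining neighbors are all transmitted, which helps the iterative decoder recover it.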
Rate Compatible Protocol for Information Reconciliation: An application to QKD
Information Reconciliation is a mechanism that makes it possible to weed out
the discrepancies between two correlated variables. It is an essential component in
every key agreement protocol where the key has to be transmitted through a
noisy channel. The typical case is in the satellite scenario described by
Maurer in the early 1990s. Recently, the need has arisen in connection with Quantum
Key Distribution (QKD) protocols, where it is very important not to reveal
unnecessary information in order to maximize the shared key length. In this
paper we present an information reconciliation protocol based on a rate
compatible construction of Low Density Parity Check codes. Our protocol
improves the efficiency of the reconciliation for the whole range of error
rates in the discrete variable QKD context. Its adaptability, together with its
low interactivity, makes it especially well suited for QKD reconciliation.
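The efficiency figure used throughout this literature compares the information disclosed by a code of rate R with the Shannon limit given by the binary entropy of the QBER; f = 1 is optimal and larger values mean more leakage. A direct transcription:

```python
from math import log2

def binary_entropy(p):
    """Binary Shannon entropy h(p) in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def efficiency(rate, qber):
    """Reconciliation efficiency f = (1 - R) / h(QBER): the ratio of the
    information disclosed by a rate-R syndrome to the minimum required."""
    return (1 - rate) / binary_entropy(qber)
```

Rate-compatible constructions adapt R (e.g. by puncturing and shortening) so that f stays close to 1 across the whole QBER range, instead of only near the single error rate a fixed code was designed for.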
Demystifying the Information Reconciliation Protocol Cascade
Cascade is an information reconciliation protocol proposed in the context of
secret key agreement in quantum cryptography. This protocol removes the
discrepancies between two partially correlated sequences held by distant
parties connected through a public noiseless channel. It is highly
interactive, thus requiring a large number of channel communications between
the parties to proceed and, although its efficiency is not optimal, it has
become the de-facto standard for practical implementations of information
reconciliation in quantum key distribution. The aim of this work is to analyze
the performance of Cascade, to discuss its strengths, weaknesses and
optimization possibilities, comparing with some of the modified versions that
have been proposed in the literature. When all design trade-offs are
considered, a new view emerges that allows us to put forward a number of
guidelines and to propose near-optimal parameters for the practical
implementation of Cascade, improving performance significantly in comparison
with all previous proposals.

Comment: 30 pages, 13 figures, 3 tables
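The interactivity discussed above comes from Cascade's core primitive, BINARY: a dichotomic parity search run on every block whose parities disagree, costing one exchanged parity per halving step. The sketch below shows a single pass; full Cascade iterates over several passes with increasing block sizes and backtracks through earlier blocks when a new error is found, which is omitted here.

```python
import random

def parity(bits, idx):
    """Parity of the bits at the given positions."""
    return sum(bits[i] for i in idx) % 2

def binary_search_error(alice, bob, block):
    """BINARY primitive: locate and flip one error inside a block whose
    parities disagree, one disclosed half-block parity per step."""
    while len(block) > 1:
        half = block[:len(block) // 2]
        if parity(alice, half) != parity(bob, half):
            block = half          # error is in the first half
        else:
            block = block[len(block) // 2:]  # error is in the second half
    bob[block[0]] ^= 1
    return block[0]

def cascade_pass(alice, bob, block_size, rng):
    """One Cascade pass: shuffle positions, split into blocks, and run
    BINARY on every block whose parities disagree."""
    order = list(range(len(alice)))
    rng.shuffle(order)
    for i in range(0, len(order), block_size):
        block = order[i:i + block_size]
        if parity(alice, block) != parity(bob, block):
            binary_search_error(alice, bob, block)
```

A single pass corrects exactly one error in every block with an odd number of errors; blocks with an even number of errors pass unnoticed, which is why subsequent shuffled passes and backtracking are essential.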
Entanglement Distribution in Optical Networks
The ability to generate entangled photon-pairs over a broad wavelength range
opens the door to the simultaneous distribution of entanglement to multiple
users in a network by using centralized sources and flexible
wavelength-division multiplexing schemes. Here we show the design of a
metropolitan optical network consisting of tree-type access networks whereby
entangled photon-pairs are distributed to any pair of users, independent of
their location. The network is constructed employing commercial off-the-shelf
components and uses the existing infrastructure, which allows for moderate
deployment costs. We further develop a channel plan and a network-architecture
design to provide a direct optical path between any pair of users, thus
allowing classical and one-way quantum communication as well as entanglement
distribution. This allows the simultaneous operation of multiple quantum
information technologies. Finally, we present a more flexible backbone
architecture that pushes away the load limitations of the original network
design by extending its reach, number of users and capabilities.

Comment: 26 pages, 12 figures
Power efficient job scheduling by predicting the impact of processor manufacturing variability
Modern CPUs suffer from performance and power consumption variability due to the manufacturing process. As a result, systems that do not account for this manufacturing-induced variability suffer performance degradation and wasted power. To avoid this negative impact, users and system administrators must actively counteract any manufacturing variability.
In this work we show that parallel systems benefit from taking the consequences of manufacturing variability into account when making scheduling decisions at the job scheduler level. We also show that it is possible to predict the impact of this variability on specific applications by using variability-aware power prediction models. Based on these power models, we propose two job scheduling policies that consider the effects of manufacturing variability for each application and ensure that power consumption stays under a system-wide power budget. We evaluate our policies under different power budgets and traffic scenarios, consisting of both single- and multi-node parallel applications, utilizing up to 4096 cores in total. We demonstrate that they decrease job turnaround time by up to 31% compared to contemporary scheduling policies used on production clusters, while saving up to 5.5% energy.
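A minimal sketch of the budget-constrained launch decision such a policy has to make. The interface is hypothetical: the per-job watt figures stand in for the paper's variability-aware power predictions, and the actual proposed policies are more elaborate than this first-come-first-served filter.

```python
def fcfs_power_budget(queue, budget_watts):
    """Launch queued jobs in arrival order, starting a job only if its
    predicted power draw fits under the remaining system-wide budget;
    jobs that do not fit are deferred until power frees up."""
    running, deferred = [], []
    remaining = budget_watts
    for name, predicted_watts in queue:  # predicted per-job power (model output)
        if predicted_watts <= remaining:
            running.append(name)
            remaining -= predicted_watts
        else:
            deferred.append(name)
    return running, deferred
```

The point of the variability-aware prediction is that `predicted_watts` differs between physically identical nodes, so the same job mix fits under the budget differently depending on which chips it lands on.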
Quantum Metropolitan Optical Network based on Wavelength Division Multiplexing
Quantum Key Distribution (QKD) is maturing quickly. However, the current
approaches to its application in optical networks make it an expensive
technology. QKD networks deployed to date are designed as a collection of
point-to-point, dedicated QKD links where non-neighboring nodes communicate
using the trusted repeater paradigm. We propose a novel optical network model
in which QKD systems share the communication infrastructure by wavelength
multiplexing their quantum and classical signals. Routing is done using
optical components within a metropolitan area, allowing for a dynamic
any-to-any communication scheme. Moreover, it resembles a commercial telecom
network, takes advantage of existing infrastructure and utilizes commercial
components, allowing for an easy, cost-effective and reliable deployment.

Comment: 23 pages, 8 figures
Runtime-guided mitigation of manufacturing variability in power-constrained multi-socket NUMA nodes
This work has been supported by the Spanish Government (Severo Ochoa grants SEV2015-0493, SEV-2011-00067), by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P), by the Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272), by the RoMoL ERC Advanced Grant (GA 321253) and by the European HiPEAC Network of Excellence. M. Moretó has been partially supported by the Ministry of Economy and Competitiveness under Juan de la Cierva postdoctoral fellowship number JCI-2012-15047. M. Casas is supported by the Secretary for Universities and Research of the Ministry of Economy and Knowledge of the Government of Catalonia and the Cofund programme of the Marie Curie Actions of the 7th R&D Framework Programme of the European Union (Contract 2013 BP B 00243). This work was also partially performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-CONF-689878). Finally, the authors are grateful to the reviewers for their valuable comments, to the RoMoL team, to Xavier Teruel and Kallia Chronaki from the Programming Models group of BSC, and to the Computation Department of LLNL for their technical support and useful feedback.