Envisioning the Future of Cyber Security in Post-Quantum Era: A Survey on PQ Standardization, Applications, Challenges and Opportunities
The rise of quantum computers exposes vulnerabilities in current public key
cryptographic protocols, necessitating the development of secure post-quantum
(PQ) schemes. Hence, we conduct a comprehensive study of various PQ approaches,
covering constructional design and structural vulnerabilities, and offering
security assessments, implementation evaluations, and a particular focus on
side-channel attacks. We analyze global standardization processes, evaluate
their metrics in relation to real-world applications, and primarily focus on
standardized PQ schemes, selected additional signature competition candidates,
and PQ-secure cutting-edge schemes beyond standardization. Finally, we present
visions and potential future directions for a seamless transition to the PQ
era.
Providing Private and Fast Data Access for Cloud Systems
Cloud storage and computing systems have become the backbone of many applications such as streaming (Netflix, YouTube), storage (Dropbox, Google Drive), and computing (Amazon Elastic Computing, Microsoft Azure). To address the ever-growing storage and computing requirements of these applications, cloud services are typically implemented over a large-scale distributed data storage system. Cloud systems are expected to provide the following two pivotal services for the users: 1) private content access and 2) fast content access. The goal of this thesis is to understand and address some of the challenges that need to be overcome to provide these two services.
The first part of this thesis focuses on private data access in distributed systems. In particular, we contribute to the areas of Private Information Retrieval (PIR) and Private Computation (PC). In the PIR problem, there is a user who wishes to privately retrieve a subset of files belonging to a database stored on a single or multiple remote server(s). In the PC problem, the user wants to privately compute functions of a subset of files in the database. The PIR and PC problems seek the most efficient solutions with the minimum download cost that enable the user to retrieve or compute what it wants privately.
We establish fundamental bounds on the minimum download cost required for guaranteeing the privacy requirement in some practical and realistic settings of the PIR and PC problems and develop novel and efficient privacy-preserving algorithms for these settings. In particular, we study the single-server and multi-server settings of PIR in which the user initially has a random linear combination of a subset of files in the database as side information, referred to as PIR with coded side information. We also study the multi-server setting of the PC in which the user wants to privately compute multiple linear combinations of a subset of files in the database, referred to as Private Linear Transformation.
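As a concrete illustration of the download-cost flavour of these problems, the classical two-server PIR scheme below retrieves one bit privately: each server sees only a uniformly random subset of indices, yet the XOR of the two answers recovers the desired bit. This is a generic textbook sketch under the non-colluding two-server assumption, not one of the coded-side-information schemes studied in the thesis; the function names are illustrative.

```python
import secrets

def xor_subset(db, subset):
    """Server side: return the XOR of the database bits indexed by the query."""
    acc = 0
    for j in subset:
        acc ^= db[j]
    return acc

def pir_retrieve(db, i):
    """Two-server linear PIR for one bit.

    Each server receives a uniformly random subset, so neither learns
    which index i the user wants; the two subsets differ only in {i},
    hence XORing the answers cancels everything except db[i].
    """
    n = len(db)
    s1 = {j for j in range(n) if secrets.randbits(1)}  # uniform random subset
    s2 = s1 ^ {i}                                      # symmetric difference with {i}
    a1 = xor_subset(db, s1)  # 1-bit answer from server 1
    a2 = xor_subset(db, s2)  # 1-bit answer from server 2
    return a1 ^ a2           # equals db[i]
```

The download cost here is two bits per retrieved bit; the PIR literature (including the settings above) asks how much lower this cost can be driven, e.g. by exploiting side information.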
The second part of this thesis focuses on fast content access in distributed systems. In particular, we study the use of erasure coding to handle data access requests in distributed storage and computing systems. Service rate region is an important performance metric for coded distributed systems, which expresses the set of all data access request rates that can be simultaneously served by the system. In this context, two classes of problems arise: 1) characterizing the service rate region of a given storage scheme and finding the optimal request allocation, and 2) designing the underlying erasure code to handle a given desired service rate region.
As contributions along the first class of problems, we characterize the service rate region of systems with some common coding schemes such as Simplex codes and Reed-Muller codes by introducing two novel techniques: 1) fractional matching and vertex cover on graph representations of codes, and 2) geometric representations of codes. Moreover, along the second class of problems, code design, we establish some lower bounds on the minimum storage required to handle a desired service rate region for a coded distributed system, and in some regimes we design efficient storage schemes that provide the desired service rate region while minimizing the storage requirements.
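A minimal sketch of what a service rate region is, assuming a toy system of three unit-capacity servers storing a, b, and a+b (illustrative only, not one of the Simplex or Reed-Muller systems above): file a is recoverable from server {1} alone or from {2,3} jointly, and b from {2} or {1,3}. A demand vector lies in the region iff the request rates can be split across these recovery sets without exceeding any server's capacity.

```python
from fractions import Fraction

def servable(lam_a, lam_b, denom=100):
    """Grid-search feasibility check for the toy (a, b, a+b) system.

    x2 = rate of a served by the coded pair {2,3}; x1 by server 1 alone.
    y2 = rate of b served by the pair {1,3};      y1 by server 2 alone.
    Each server has unit capacity. Exact rational arithmetic avoids
    floating-point edge cases on the region's boundary.
    """
    lam_a, lam_b = Fraction(lam_a), Fraction(lam_b)
    for i in range(denom + 1):
        x2 = min(Fraction(i, denom), lam_a)
        x1 = lam_a - x2
        for j in range(denom + 1):
            y2 = min(Fraction(j, denom), lam_b)
            y1 = lam_b - y2
            # server loads: 1 carries x1+y2, 2 carries x2+y1, 3 carries x2+y2
            if x1 + y2 <= 1 and x2 + y1 <= 1 and x2 + y2 <= 1:
                return True
    return False
```

For instance (2, 0) is servable (server 1 plus the coded pair both serve a), while (2, 1) is not: characterizing exactly which demand vectors are feasible is the first class of problems above.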
Theoretical analysis of decoding failure rate of non-binary QC-MDPC codes
In this paper, we study the decoding failure rate (DFR) of non-binary QC-MDPC codes using theoretical tools, extending the results of previous binary QC-MDPC code studies. The theoretical estimates of the DFR are particularly significant for cryptographic applications of QC-MDPC codes. Specifically, in the binary case, it is established that exploiting decoding failures makes it possible to recover the secret key of a QC-MDPC cryptosystem. This implies that to attain the desired security level against adversaries in the CCA2 model, the decoding failure rate must be strictly upper-bounded to be negligibly small. In this paper, we observe that this attack extends to the non-binary case as well, which underscores the importance of DFR estimation. Consequently, we study the guaranteed error-correction capability of non-binary QC-MDPC codes under the one-step majority logic (OSML) decoder and provide a theoretical analysis of the 1-iteration parallel symbol flipping decoder and its combination with the OSML decoder. Utilizing these results, we estimate the potential public-key sizes for QC-MDPC cryptosystems over non-binary fields for various security levels. We find that there is no advantage in reducing key sizes when compared to the binary case.
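For intuition, the binary analogue of the parallel symbol flipping step can be sketched as Gallager-style bit flipping: in one parallel pass, flip every bit that participates in more than a threshold number of unsatisfied parity checks. This is a generic one-iteration sketch, not the paper's non-binary decoder, and the tiny cycle-code parity-check matrix in the test is illustrative only.

```python
import numpy as np

def bit_flip_iteration(H, y, threshold):
    """One parallel bit-flipping iteration on a binary code.

    H: parity-check matrix (0/1 numpy array), y: received word (0/1 vector).
    Computes the syndrome, counts unsatisfied checks per bit, and flips
    every bit whose count exceeds `threshold`, all in one parallel step.
    """
    syndrome = (H @ y) % 2           # 1 marks an unsatisfied check
    upc = H.T @ syndrome             # unsatisfied-check count per bit
    flips = (upc > threshold).astype(int)
    return (y + flips) % 2, syndrome
```

The DFR question above is precisely how often iterations of such a decoder fail to reach a codeword; key-recovery attacks exploit the fact that failures correlate with the secret sparse structure of H.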
The XP Stabilizer Formalism
Quantum computers are expected to have advantages over classical computers in solving a range
of high impact problems, but they are highly susceptible to errors due to environmental noise.
The Pauli Stabiliser formalism generalises classical error-correction methods and makes use of
quantum error correction codes to protect quantum information. In this thesis, we introduce
the XP stabiliser formalism, which is a generalisation of the Pauli stabiliser formalism with a
number of useful applications.
Quantum algorithms are typically written in terms of quantum circuits which involve a
series of unitary gates followed by measurements which form the output of the computation.
To implement quantum algorithms reliably, we need to perform unitary gates fault-tolerantly so that errors do not propagate in an uncontrolled way.
Transversal logical operators are one way of applying unitary gates fault-tolerantly on Pauli
stabiliser codes. Identifying transversal logical operators for a given Pauli stabiliser code is
challenging, and existing methods have exponential complexity in one or more of the parameters
of the code. Making use of the XP formalism, we present efficient algorithms which identify
all transversal logical operators that are diagonal in the computational basis for any Pauli
stabiliser code. We also show how to construct codes with a transversal implementation of any
desired diagonal logical operator.
The Pauli stabiliser formalism can also be used to efficiently represent certain quantum
states, but many states of interest lie outside the formalism. In the XP formalism, a wider
range of states can be represented than in the Pauli stabiliser formalism, including hypergraph
states which have interesting non-local properties. The braiding of non-Abelian anyons is a
proposed pathway to universal fault-tolerant quantum computation. Certain XP stabiliser
codes are known to harbour non-Abelian anyons, and can be studied within the new formalism.
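As background to the stabiliser machinery, a Pauli operator on n qubits can be written in symplectic form (x|z), where x and z are binary vectors marking X-type and Z-type action per qubit, and two Paulis commute exactly when their symplectic product vanishes mod 2. The sketch below covers only the plain Pauli case; the XP formalism generalises this picture with an integer precision parameter, which is not modelled here.

```python
import numpy as np

def paulis_commute(p1, p2):
    """Commutation test for Paulis in symplectic form.

    A Pauli (up to phase) is encoded as a length-2n binary vector (x|z),
    meaning X^x Z^z applied qubit-wise. Two Paulis commute iff
    x1.z2 + z1.x2 = 0 (mod 2).
    """
    n = len(p1) // 2
    x1, z1 = np.array(p1[:n]), np.array(p1[n:])
    x2, z2 = np.array(p2[:n]), np.array(p2[n:])
    return (x1 @ z2 + z1 @ x2) % 2 == 0
```

A Pauli stabiliser group is a set of mutually commuting Paulis under exactly this test; the XP generalisation relaxes it, which is what admits the wider state families (e.g. hypergraph states) mentioned above.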
Efficient Algorithms for Constructing Minimum-Weight Codewords in Some Extended Binary BCH Codes
We present algorithms for specifying the support of minimum-weight
words of extended binary BCH codes of length and designed distance
for some values of , where may
grow to infinity. The support is specified as the sum of two sets: a set of
elements, and a subspace of dimension , specified by
a basis.
In some detail, for designed distance , we have a deterministic
algorithm for even , and a probabilistic algorithm with success
probability for odd . For designed distance ,
we have a probabilistic algorithm with success probability for even . Finally, for designed distance , we have a deterministic algorithm for divisible by . We also
present a construction via Gold functions when .
Our construction builds on results of Kasami and Lin (IEEE T-IT, 1972), who
proved that for extended binary BCH codes of designed distance , the
minimum distance equals the designed distance. Their proof makes use of a
non-constructive result of Berlekamp (Inform. Control, 1970), and a
constructive ``down-conversion theorem'' that converts some words in BCH codes
to lower-weight words in BCH codes of lower designed distance. Our main
contribution is in replacing the non-constructive argument of Berlekamp by a
low-complexity algorithm.
In one aspect, we extend the results of Grigorescu and Kaufman (IEEE T-IT,
2012), who presented explicit minimum-weight words for designed distance
(and hence also for designed distance , by a well-known
``up-conversion theorem''), as we cover more cases of the minimum distance.
However, the minimum-weight words we construct are not affine generators for
designed distance
MWS and FWS Codes for Coordinate-Wise Weight Functions
A combinatorial problem concerning the maximum size of the (Hamming) weight
set of an linear code was recently introduced. Codes attaining the
established upper bound are the Maximum Weight Spectrum (MWS) codes. Those
codes with the same weight set as are called Full
Weight Spectrum (FWS) codes. FWS codes are necessarily ``short'', whereas MWS
codes are necessarily ``long''. For fixed the values of for which
an -FWS code exists are completely determined, but the determination
of the minimum length of an -MWS code remains an open
problem. The current work broadens discussion first to general coordinate-wise
weight functions, and then specifically to the Lee weight and a Manhattan-like
weight. In the general case we provide bounds on for which an FWS code
exists, and bounds on for which an MWS code exists. When specializing to
the Lee or to the Manhattan setting we are able to completely determine the
parameters of FWS codes. As with the Hamming case, we are able to provide an
upper bound on (the minimum length of Lee MWS codes),
and pose the determination of as an open problem. On the
other hand, with respect to the Manhattan weight we completely determine the
parameters of MWS codes.
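For small parameters the weight set of a code can simply be enumerated. The sketch below computes the Hamming and Lee weight sets of a toy code over Z_5; the generator matrix is illustrative, not taken from the paper.

```python
from itertools import product

def lee_weight(c, q):
    """Lee weight: each symbol a contributes min(a, q - a)."""
    return sum(min(a, q - a) for a in c)

def hamming_weight(c, q):
    """Hamming weight: number of nonzero symbols (q unused, kept for symmetry)."""
    return sum(1 for a in c if a != 0)

def weight_set(G, q, weight):
    """Weights of all nonzero codewords of the Z_q code generated by rows of G."""
    k, n = len(G), len(G[0])
    ws = set()
    for m in product(range(q), repeat=k):
        c = [sum(m[i] * G[i][j] for i in range(k)) % q for j in range(n)]
        if any(c):
            ws.add(weight(c, q))
    return ws
```

For the toy generator [[1, 1, 2]] over Z_5, every nonzero codeword has Hamming weight 3 but the Lee weights split into {4, 5}: changing the coordinate-wise weight function changes the weight spectrum, which is exactly why the FWS/MWS questions must be re-examined per weight.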
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning
The paper introduces the application of information geometry to describe the
ground states of Ising models by utilizing parity-check matrices of cyclic and
quasi-cyclic codes on toric and spherical topologies. The approach establishes
a connection between machine learning and error-correcting coding. This
proposed approach has implications for the development of new embedding methods
based on trapping sets. Statistical physics and number geometry are applied to
optimize error-correcting codes, leading to these embedding and sparse
factorization methods. The paper establishes a direct connection between DNN
architecture and error-correcting coding by demonstrating how state-of-the-art
architectures (ChordMixer, Mega, Mega-chunk, CDIL, ...) from the long-range
arena can be equivalent to block and convolutional LDPC codes (Cage-graph,
Repeat Accumulate). QC codes correspond to certain types of chemical elements,
with the carbon element being represented by the mixed automorphism
Shu-Lin-Fossorier QC-LDPC code. The connections between Belief Propagation and
the Permanent, Bethe-Permanent, Nishimori Temperature, and Bethe-Hessian Matrix
are elaborated upon in detail. The Quantum Approximate Optimization Algorithm
(QAOA) used in the Sherrington-Kirkpatrick Ising model can be seen as analogous
to the back-propagation loss function landscape in training DNNs. This
similarity creates a comparable problem with TS pseudo-codewords, resembling the
belief propagation method. Additionally, the layer depth in QAOA correlates to
the number of decoding belief propagation iterations in the Wiberg decoding
tree. Overall, this work has the potential to advance multiple fields, from
Information Theory, DNN architecture design (sparse and structured prior graph
topology), efficient hardware design for Quantum and Classical DPU/TPU (graph,
quantized and shift-register architectures) to Materials Science and beyond.
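The codeword/ground-state correspondence invoked above can be sketched directly: build a multi-spin Ising energy whose interaction terms are the rows of a parity-check matrix, so that configurations satisfying every check are exactly the ground states. The tiny repetition-code H below is a schematic of the correspondence, not one of the paper's toric or spherical constructions.

```python
from itertools import product

def ising_energy(spins, H):
    """Multi-spin Ising energy built from a parity-check matrix H.

    Each row contributes -(product of the spins in its support), so a
    configuration satisfying every check (product +1) attains the minimum
    energy. Under spins = (-1)^c these are the codewords c with Hc = 0 mod 2.
    """
    E = 0
    for row in H:
        p = 1
        for j, h in enumerate(row):
            if h:
                p *= spins[j]
        E -= p
    return E

def ground_states(H, n):
    """Brute-force the minimum-energy spin configurations (small n only)."""
    energies = {s: ising_energy(s, H) for s in product((+1, -1), repeat=n)}
    emin = min(energies.values())
    return {s for s, e in energies.items() if e == emin}
```

For the repetition-code checks [[1,1,0],[0,1,1]] the ground states are the two aligned configurations, mirroring the codewords 000 and 111; trapping sets and pseudo-codewords then correspond to local energy minima of such landscapes.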
ProductAE: Toward Deep Learning Driven Error-Correction Codes of Large Dimensions
While decades of theoretical research have led to the invention of several
classes of error-correction codes, the design of such codes is an extremely
challenging task, mostly driven by human ingenuity. Recent studies demonstrate
that such designs can be effectively automated and accelerated via tools from
machine learning (ML), thus enabling ML-driven classes of error-correction
codes with promising performance gains compared to classical designs. A
fundamental challenge, however, is that it is prohibitively complex, if not
impossible, to design and train fully ML-driven encoder and decoder pairs for
large code dimensions. In this paper, we propose Product Autoencoder
(ProductAE) -- a computationally-efficient family of deep learning driven
(encoder, decoder) pairs -- aimed at enabling the training of relatively large
codes (both encoder and decoder) with a manageable training complexity. We
build upon ideas from classical product codes and propose constructing large
neural codes using smaller code components. ProductAE boils down the complex
problem of training the encoder and decoder for a large code dimension and
blocklength to less-complex sub-problems of training encoders and decoders
for smaller dimensions and blocklengths. Our training results show successful
training of ProductAEs of dimensions as large as bits with meaningful
performance gains compared to state-of-the-art classical and neural designs.
Moreover, we demonstrate excellent robustness and adaptivity of ProductAEs to
channel models different from the ones used for training.
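The classical idea ProductAE builds on can be sketched with ordinary linear component codes: encode the rows of the message array with one component code, then the columns of the result with another, so every row and column of the final array is a component codeword. Single parity-check components are used here purely for brevity; ProductAE replaces them with trained neural encoders and decoders.

```python
import numpy as np

def spc_encode(rows):
    """Single parity-check component code: append one parity column."""
    parity = rows.sum(axis=1, keepdims=True) % 2
    return np.concatenate([rows, parity], axis=1)

def product_encode(msg):
    """Classical product-code encoding of a k1 x k2 binary message.

    Encode each row with the column-direction component code, then each
    column of the result with the row-direction component code. The final
    (k1+1) x (k2+1) array has even parity along every row and column.
    """
    rowwise = spc_encode(msg)        # k1 x (k2+1)
    return spc_encode(rowwise.T).T   # (k1+1) x (k2+1)
```

Decoding can likewise alternate between row and column component decoders; this row/column factorization is exactly what lets ProductAE train small component networks instead of one monolithic encoder for a large dimension.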