High rate locally-correctable and locally-testable codes with sub-polynomial query complexity
In this work, we construct the first locally-correctable codes (LCCs) and
locally-testable codes (LTCs) with constant rate, constant relative distance,
and sub-polynomial query complexity. Specifically, we show that there exist
binary LCCs and LTCs with block length $n$, constant rate (which can even be
taken arbitrarily close to 1), constant relative distance, and query complexity
$2^{\tilde{O}(\sqrt{\log n})}$. Previously such codes were known to exist
only with query complexity $n^{\epsilon}$ (for constant $\epsilon > 0$), and
there were several, quite different, constructions known.
Our codes are based on a general distance-amplification method of Alon and
Luby~\cite{AL96_codes}. We show that this method interacts well with local
correctors and testers, and obtain our main results by applying it to suitably
constructed LCCs and LTCs in the non-standard regime of \emph{sub-constant
relative distance}.
Along the way, we also construct LCCs and LTCs over large alphabets, with the
same query complexity $2^{\tilde{O}(\sqrt{\log n})}$, which additionally have
the property of approaching the Singleton bound: they have almost the
best-possible relationship between their rate and distance. This has the
surprising consequence that asking for a large-alphabet error-correcting code
to further be an LCC or LTC with $2^{\tilde{O}(\sqrt{\log n})}$ query
complexity does not require any sacrifice in terms of rate and distance! Such a
result was previously not known for any query complexity.
Our results on LCCs also immediately give locally-decodable codes (LDCs) with
the same parameters.
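The constant-rate constructions above are involved, but the bare idea of local correction can be illustrated with the classical 2-query Hadamard code, the textbook LCC (its rate is exponentially small, unlike the codes in this paper). The sketch below is my own toy illustration, not the paper's construction: to recover any position of a mildly corrupted codeword, XOR two random queries and take a majority vote.

```python
import random

def hadamard_codeword(m, k):
    """Hadamard codeword of a k-bit message m: the evaluation table of the
    linear function x -> <m, x> mod 2 over all x in {0,1}^k."""
    return [bin(m & x).count("1") % 2 for x in range(1 << k)]

def local_correct(word, x, trials=101):
    """2-query local corrector: for random r, word[r] ^ word[r ^ x] equals
    the true value C(x) whenever both queried positions are uncorrupted,
    since C(r) ^ C(r ^ x) = C(x) by linearity over GF(2). A majority vote
    over independent trials drives the failure probability down."""
    n = len(word)
    votes = sum(word[r] ^ word[r ^ x]
                for r in (random.randrange(n) for _ in range(trials)))
    return int(2 * votes > trials)

random.seed(0)
k = 4
clean = hadamard_codeword(0b1011, k)
corrupted = clean[:]
corrupted[5] ^= 1  # one corrupted position: error rate 1/16, well below 1/4
recovered = [local_correct(corrupted, x) for x in range(1 << k)]
```

Each trial is wrong with probability at most 2/16 (one of the two queries hits the corrupted position), so the majority over 101 trials recovers every position of the clean codeword with overwhelming probability.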
List Decoding Tensor Products and Interleaved Codes
We design the first efficient algorithms and prove new combinatorial bounds
for list decoding tensor products of codes and interleaved codes. We show that
for {\em every} code, the ratio of its list decoding radius to its minimum
distance stays unchanged under the tensor product operation (rather than
squaring, as one might expect). This gives the first efficient list decoders
and new combinatorial bounds for some natural codes including multivariate
polynomials where the degree in each variable is bounded. We show that for {\em
every} code, its list decoding radius remains unchanged under $m$-wise
interleaving for any integer $m$. This generalizes a recent result of Dinur et
al.~\cite{DGKS}, who proved such a result for interleaved Hadamard codes
(equivalently, linear transformations). Using the notion of generalized Hamming
weights, we give better list size bounds for {\em both} tensoring and
interleaving of binary linear codes. By analyzing the weight distribution of
these codes, we reduce the task of bounding the list size to bounding the
number of close-by low-rank codewords. For decoding linear transformations,
using rank-reduction together with other ideas, we obtain list size bounds that
are tight over small fields.
Comment: 32 pages
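For contrast with the result above: the classical minimum distance of a code does multiply under the tensor product, which is why an unchanged ratio of list-decoding radius to minimum distance is notable. A minimal numpy check with the [3,2,2] single-parity-check code (the helper and names here are my own illustration, not from the paper); the tensor code's generator matrix is the Kronecker product of the component generators.

```python
import itertools
import numpy as np

G = np.array([[1, 0, 1],
              [0, 1, 1]])   # [3,2,2] single-parity-check code
Gt = np.kron(G, G)          # generator of the tensor code C (x) C: [9,4,?]

def min_weight(gen):
    """Minimum Hamming weight over all nonzero codewords, i.e. the minimum
    distance of the linear code generated by `gen` (brute force over messages)."""
    k = gen.shape[0]
    return min(int(((np.array(m) @ gen) % 2).sum())
               for m in itertools.product([0, 1], repeat=k)
               if any(m))

# Distance multiplies under tensoring: d(C (x) C) = d(C)^2 = 4.
print(min_weight(G), min_weight(Gt))
```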
On 3-D inelastic analysis methods for hot section components (base program)
A 3-D Inelastic Analysis Method program is described. This program consists of a series of new computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of: (1) combustor liners, (2) turbine blades, and (3) turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. Three computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (Marc-Hot Section Technology), and BEST (Boundary Element Stress Technology), have been developed and are briefly described in this report.
Lagrange Coded Computing: Optimal Design for Resiliency, Security and Privacy
We consider a scenario involving computations over a massive dataset stored
distributedly across multiple workers, which is at the core of distributed
learning algorithms. We propose Lagrange Coded Computing (LCC), a new framework
to simultaneously provide (1) resiliency against stragglers that may prolong
computations; (2) security against Byzantine (or malicious) workers that
deliberately modify the computation for their benefit; and (3)
(information-theoretic) privacy of the dataset amidst possible collusion of
workers. LCC, which leverages the well-known Lagrange polynomial to create
computation redundancy in a novel coded form across workers, can be applied to
any computation scenario in which the function of interest is an arbitrary
multivariate polynomial of the input dataset, hence covering many computations
of interest in machine learning. LCC significantly generalizes prior works to
go beyond linear computations. It also enables secure and private computing in
distributed settings, improving the computation and communication efficiency of
the state-of-the-art. Furthermore, we prove the optimality of LCC by showing
that it achieves the optimal tradeoff between resiliency, security, and
privacy, i.e., in terms of tolerating the maximum number of stragglers and
adversaries, and providing data privacy against the maximum number of colluding
workers. Finally, we show via experiments on Amazon EC2 that LCC substantially
speeds up the conventional uncoded implementation of distributed least-squares
linear regression, and also outperforms the state-of-the-art straggler
mitigation strategies.
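The core encoding idea can be sketched in a few lines. The following is a minimal illustration under simplifying assumptions of my own (scalar data chunks, the degree-2 polynomial f(x) = x^2, a small prime field, illustrative parameter names): the K chunks are encoded with the Lagrange interpolating polynomial u, each worker applies f to its coded chunk, and because f(u(z)) has degree deg(f)·(K-1), any deg(f)·(K-1)+1 surviving workers suffice to recover every f(X_i).

```python
P = 65537                       # prime modulus: arithmetic in GF(P)

def lagrange_eval(xs, ys, z):
    """Evaluate at z the unique polynomial through the points (xs[i], ys[i]),
    all arithmetic mod P (modular inverses via Fermat's little theorem)."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num = den = 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (z - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

f = lambda x: x * x % P         # any low-degree multivariate polynomial works

X = [17, 42, 99]                # K = 3 data chunks (scalars for simplicity)
alphas = [1, 2, 3]              # encoding points: u(alphas[i]) = X[i]
betas = [11, 12, 13, 14, 15, 16, 17]            # N = 7 workers

coded = [lagrange_eval(alphas, X, b) for b in betas]   # coded worker inputs
results = [f(c) for c in coded]                        # each worker's output

# f(u(z)) has degree deg(f)*(K-1) = 4, so any 5 of the 7 results determine it.
# Here workers 1 and 4 straggle and are simply ignored.
alive = [0, 2, 3, 5, 6]
recovered = [lagrange_eval([betas[j] for j in alive],
                           [results[j] for j in alive], a)
             for a in alphas]
```

With two of seven workers ignored, `recovered` still equals [f(X_i) for each chunk], matching the resiliency claim for this parameter choice (N = 7 tolerates N - deg(f)·(K-1) - 1 = 2 stragglers).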
Assessment of a common nonlinear eddy-viscosity turbulence model in capturing laminarization in mixed convection flows
Laminarization is an important topic in heat transfer and turbulence modeling.
Recent studies have demonstrated that several well-known turbulence models
failed to provide accurate predictions when applied to mixed convection flows
with significant re-laminarization effects. One of those models, a well-validated
cubic nonlinear eddy-viscosity model, was observed to miss this feature entirely.
This paper studies the reasons behind this failure by providing a detailed
comparison with the baseline Launder–Sharma model. The difference is attributed
to the method of near-wall damping. A range of tests has been conducted, and
two noteworthy findings are reported for the case of flow re-laminarization.
Topological quantum memory
We analyze surface codes, the topological quantum error-correcting codes
introduced by Kitaev. In these codes, qubits are arranged in a two-dimensional
array on a surface of nontrivial topology, and encoded quantum operations are
associated with nontrivial homology cycles of the surface. We formulate
protocols for error recovery, and study the efficacy of these protocols. An
order-disorder phase transition occurs in this system at a nonzero critical
value of the error rate; if the error rate is below the critical value (the
accuracy threshold), encoded information can be protected arbitrarily well in
the limit of a large code block. This phase transition can be accurately
modeled by a three-dimensional Z_2 lattice gauge theory with quenched disorder.
We estimate the accuracy threshold, assuming that all quantum gates are local,
that qubits can be measured rapidly, and that polynomial-size classical
computations can be executed instantaneously. We also devise a robust recovery
procedure that does not require measurement or fast classical processing;
however for this procedure the quantum gates are local only if the qubits are
arranged in four or more spatial dimensions. We discuss procedures for
encoding, measurement, and performing fault-tolerant universal quantum
computation with surface codes, and argue that these codes provide a promising
framework for quantum computing architectures.
Comment: 39 pages, 21 figures, REVTeX
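The error model behind this phase transition has a simple combinatorial core, sketched below under toy conventions of my own (not the paper's notation): bit-flip errors live on the edges of an L-by-L torus, and each vertex check reports the parity of flipped edges incident to that vertex. An error chain is therefore flagged only at its two endpoints, and a closed loop, including a non-contractible loop around the torus, produces no syndrome at all, which is exactly the loop structure that maps onto a Z_2 lattice gauge theory.

```python
import numpy as np

L = 4  # linear size of the torus

def syndrome(h, v):
    """Vertex checks for bit-flip errors on a toric code.

    h[r, c] = 1 if the horizontal edge from vertex (r, c) to (r, c+1) is
    flipped; v[r, c] likewise for the vertical edge from (r, c) to (r+1, c).
    The check at vertex (r, c) is the parity of its four incident edges:
    h[r, c], h[r, c-1], v[r, c], v[r-1, c] (indices wrap around the torus)."""
    return (h + np.roll(h, 1, axis=1) + v + np.roll(v, 1, axis=0)) % 2

# A single flipped edge is flagged by exactly its two endpoint checks.
h = np.zeros((L, L), dtype=int)
v = np.zeros((L, L), dtype=int)
h[1, 2] = 1
print(syndrome(h, v).sum())      # two defects

# A closed loop around the torus (a full row of horizontal edges) touches
# every vertex on that row twice: even parity everywhere, no syndrome.
h_loop = np.zeros((L, L), dtype=int)
h_loop[0, :] = 1
print(syndrome(h_loop, v).sum())  # undetectable, i.e. a logical operator
```

Error recovery then amounts to pairing up the observed defects so that the inferred error plus the true error forms only contractible loops; failure corresponds to closing a non-contractible loop, which is what becomes exponentially unlikely below the accuracy threshold.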
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 has gathered about
70 international participants and has featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1