Applications of Derandomization Theory in Coding
Randomized techniques play a fundamental role in theoretical computer science
and discrete mathematics, in particular for the design of efficient algorithms
and construction of combinatorial objects. The basic goal in derandomization
theory is to eliminate or reduce the need for randomness in such randomized
constructions. In this thesis, we explore some applications of the fundamental
notions in derandomization theory to problems outside the core of theoretical
computer science, and in particular, certain problems related to coding theory.
First, we consider the wiretap channel problem, which involves a communication
system in which an intruder can eavesdrop on a limited portion of the
transmissions, and we construct efficient and information-theoretically optimal
communication protocols for this model. Then we consider the combinatorial
group testing problem. In this classical problem, one aims to determine a set
of defective items within a large population by asking a number of queries,
where each query reveals whether a defective item is present within a specified
group of items. We use randomness condensers to explicitly construct optimal,
or nearly optimal, group testing schemes for a setting where the query outcomes
can be highly unreliable, as well as for the threshold model, where a query
returns a positive outcome if the number of defectives passes a certain
threshold. Finally, we design ensembles of error-correcting codes that achieve
the information-theoretic capacity of a large class of communication channels,
and then use the obtained ensembles to construct explicit capacity-achieving
codes.
[This is a shortened version of the actual abstract in the thesis.] Comment: EPFL PhD Thesis.
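The nonadaptive group testing setting described above can be illustrated with a minimal sketch. The pooling design and the simple "COMP" decoder below (an item is declared non-defective as soon as it appears in a negative pool) are standard textbook choices for illustration, not the thesis's condenser-based constructions; all names and parameters here are hypothetical.

```python
import random

def comp_decode(tests, pools, n):
    """COMP decoder: any item appearing in a negative pool is non-defective;
    every remaining item is declared defective."""
    defective = set(range(n))
    for outcome, pool in zip(tests, pools):
        if not outcome:          # a negative test clears every item in its pool
            defective -= pool
    return defective

# Toy noiseless instance: n items, d defectives, m random pools.
random.seed(0)
n, d, m = 40, 2, 30
truth = set(random.sample(range(n), d))
pools = [{i for i in range(n) if random.random() < 0.1} for _ in range(m)]
tests = [bool(pool & truth) for pool in pools]   # query = "any defective in pool?"

estimate = comp_decode(tests, pools, n)
# In the noiseless model COMP never misses a defective, so truth <= estimate.
assert truth <= estimate
```

In the noisy and threshold variants studied in the thesis, the outcome rule and decoder change, but the pooling-matrix viewpoint stays the same.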
Uncertainty relations for multiple measurements with applications
Uncertainty relations express the fundamental incompatibility of certain
observables in quantum mechanics. Far from just being puzzling constraints on
our ability to know the state of a quantum system, uncertainty relations are at
the heart of why some classically impossible cryptographic primitives become
possible when quantum communication is allowed. This thesis is concerned with
strong notions of uncertainty relations and their applications in quantum
information theory.
One operational manifestation of such uncertainty relations is a purely
quantum effect referred to as information locking. A locking scheme can be
viewed as a cryptographic protocol in which a uniformly random n-bit message is
encoded in a quantum system using a classical key of size much smaller than n.
Without the key, no measurement of this quantum state can extract more than a
negligible amount of information about the message, in which case the message
is said to be "locked". Furthermore, knowing the key, it is possible to
recover, that is "unlock", the message. We give new efficient constructions of
bases satisfying strong uncertainty relations leading to the first explicit
construction of an information locking scheme. We also give several other
applications of our uncertainty relations both to cryptographic and
communication tasks.
In addition, we define objects called QC-extractors, that can be seen as
strong uncertainty relations that hold against quantum adversaries. We provide
several constructions of QC-extractors, and use them to prove the security of
cryptographic protocols for two-party computations based on the sole assumption
that the parties' storage device is limited in transmitting quantum
information. In doing so, we resolve a central question in the so-called
noisy-storage model by relating security to the quantum capacity of storage
devices. Comment: PhD Thesis, McGill University, School of Computer Science, 158 pages.
Contains arXiv:1010.3007 and arXiv:1111.2026 with some small additions.
Restricted isometry constants in compressed sensing
Compressed Sensing (CS) is a framework in which we measure data through a non-adaptive linear
mapping with far fewer measurements than the ambient dimension of the data. This is made
possible by exploiting the inherent structure (simplicity) in the data being measured.
The central issues in this framework are the design and analysis of the measurement operator
(matrix) and recovery algorithms. Restricted isometry constants (RIC) of the measurement
matrix are the most widely used tools for the analysis of CS recovery algorithms. The
subscripts 1 and 2 below distinguish the two RIC variants developed in the CS literature;
they refer to the ℓ1-norm and ℓ2-norm respectively.
The RIC2 of a matrix A measures how close to an isometry the action of A is on vectors with
few nonzero entries, measured in the ℓ2-norm. This, and related quantities, provide a mechanism
by which standard eigen-analysis can be applied to topics relying on sparsity. Specifically,
the upper and lower RIC2 of a matrix A of size n × N are the maximum deviations from unity
of the largest and smallest squared singular values, respectively, over all (N choose k)
submatrices formed by taking k columns from A. Calculation of the RIC2 is intractable for
most matrices due to its combinatorial nature; however, many random matrices typically have
bounded RIC2 in some range of problem sizes (k, n,N). We provide the best known bound
on the RIC2 for Gaussian matrices, which is also the smallest known bound on the RIC2 for
any large rectangular matrix. Our results build on the prior bounds of Blanchard, Cartis,
and Tanner in "Compressed Sensing: How sharp is the Restricted Isometry Property?", with
improvements achieved by grouping submatrices that share a substantial number of columns.
RIC2 bounds have been presented for a variety of random matrices, matrix dimensions and
sparsity ranges. We provide explicit formulae for RIC2 bounds of n × N Gaussian matrices
with sparsity k, in three settings: a) n/N fixed and k/n approaching zero, b) k/n fixed and
n/N approaching zero, and c) n/N approaching zero with k/n decaying inverse logarithmically
in N/n; in these three settings the RICs a) decay to zero, b) become unbounded (or approach
inherent bounds), and c) approach a non-zero constant. Implications of these results for RIC2
based analysis of CS algorithms are presented.
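The combinatorial definition of the RIC2 can be made concrete by brute force on a toy matrix: enumerate every k-column submatrix, take its extreme squared singular values, and record the worst deviations from one. This sketch is purely illustrative of why exact computation is intractable beyond tiny sizes (the loop runs over all (N choose k) subsets); the function name and parameters are hypothetical, not from the thesis.

```python
from itertools import combinations

import numpy as np

def ric2(A, k):
    """Exact upper/lower RIC2 of A at sparsity k, by brute force over all
    (N choose k) column submatrices (feasible only for toy sizes)."""
    N = A.shape[1]
    upper = lower = 0.0
    for cols in combinations(range(N), k):
        s = np.linalg.svd(A[:, cols], compute_uv=False)  # singular values, descending
        upper = max(upper, s[0] ** 2 - 1.0)   # largest squared singular value vs. 1
        lower = max(lower, 1.0 - s[-1] ** 2)  # smallest squared singular value vs. 1
    return upper, lower

rng = np.random.default_rng(0)
n, N, k = 8, 12, 3
A = rng.standard_normal((n, N)) / np.sqrt(n)  # variance-1/n Gaussian entries
U, L = ric2(A, k)
# Smaller U and L mean A acts closer to an isometry on all k-sparse vectors.
print(U, L)
```

The bounds in the thesis replace this enumeration with probabilistic estimates that hold for Gaussian matrices across ranges of (k, n, N).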
The RIC2 of sparse mean-zero random matrices can be bounded using concentration
bounds for Gaussian matrices. However, this RIC2 approach does not capture the benefits of
the sparse matrices, and in so doing gives pessimistic bounds. RIC1 is a variant of RIC2 in
which the nearness to an isometry is measured in the ℓ1-norm, which both better captures
the structure of sparse matrices and allows for the analysis of non-mean-zero matrices.
We consider a probabilistic construction of sparse random matrices in which each column has
a fixed number of non-zeros whose row indices are drawn uniformly at random. These matrices
are in one-to-one correspondence with the adjacency matrices of fixed-left-degree expander
graphs. We present formulae for the expected cardinality of the set of neighbours for these
graphs, and a tail bound on the probability that this cardinality falls below its
expected value. From this bound we deduce a similar bound for the expansion of the graph,
which is of interest in many applications. These bounds are derived through a more detailed
analysis of collisions in unions of sets using a dyadic splitting technique. They allow
for quantitative sampling theorems on the existence of the expander graphs and sparse random
matrices we consider, as well as quantitative CS sampling theorems when using sparse non-mean-zero
measurement matrices.
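The probabilistic construction just described is simple to state in code: each of the N columns receives exactly d ones at d distinct, uniformly random row positions, which is precisely the adjacency matrix of a left-d-regular bipartite graph. The sketch below, with hypothetical names and parameters, shows the construction and the neighbourhood cardinality whose tail behaviour the thesis bounds; it does not reproduce the dyadic splitting analysis itself.

```python
import random

def sparse_left_regular_matrix(n, N, d, seed=0):
    """n x N 0/1 matrix with exactly d ones per column at uniformly random
    distinct rows: the adjacency matrix of a left-d-regular bipartite graph."""
    rng = random.Random(seed)
    A = [[0] * N for _ in range(n)]
    for j in range(N):
        for i in rng.sample(range(n), d):  # d distinct row indices
            A[i][j] = 1
    return A

def neighbourhood_size(A, cols):
    """Cardinality of the set of neighbours (rows touched) of a set of columns."""
    return sum(1 for row in A if any(row[j] for j in cols))

A = sparse_left_regular_matrix(n=20, N=50, d=3)
S = [0, 1, 2, 3]
# For |S| columns of degree d, the neighbourhood size lies between d and d*|S|;
# expansion asks how close it stays to the upper end for all small S.
print(neighbourhood_size(A, S))
```

When the neighbourhood of every small column set is close to its maximum possible size d·|S|, the graph is a good expander, which is what makes these matrices useful as CS measurement operators.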
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Advances in Computer Science and Engineering
The book Advances in Computer Science and Engineering constitutes a revised selection of 23 chapters written by scientists and researchers from all over the world. The chapters cover topics in the scientific fields of Applied Computing Techniques, Innovations in Mechanical Engineering, Electrical Engineering and Applications, and Advances in Applied Modeling.
ECOS 2012
The 8-volume set contains the Proceedings of the 25th ECOS 2012 International Conference, Perugia, Italy, June 26th to June 29th, 2012. ECOS is an acronym for Efficiency, Cost, Optimization and Simulation (of energy conversion systems and processes), summarizing the topics covered in ECOS: Thermodynamics, Heat and Mass Transfer, Exergy and Second Law Analysis, Process Integration and Heat Exchanger Networks, Fluid Dynamics and Power Plant Components, Fuel Cells, Simulation of Energy Conversion Systems, Renewable Energies, Thermo-Economic Analysis and Optimisation, Combustion, Chemical Reactors, Carbon Capture and Sequestration, Building/Urban/Complex Energy Systems, Water Desalination and Use of Water Resources, Energy Systems - Environmental and Sustainability Issues, System Operation/Control/Diagnosis and Prognosis, and Industrial Ecology.
The Fifth Annual Thermal and Fluids Analysis Workshop
The Fifth Annual Thermal and Fluids Analysis Workshop was held at the Ohio Aerospace Institute, Brook Park, Ohio, 16-20 Aug. 1993, cosponsored by NASA Lewis Research Center and the Ohio Aerospace Institute. The workshop consisted of classes, vendor demonstrations, and paper sessions. The classes and vendor demonstrations provided participants with information on widely used tools for thermal and fluid analysis. The paper sessions provided a forum for the exchange of information and ideas among thermal and fluids analysts. Paper topics included advances in and uses of established thermal and fluids computer codes (such as SINDA and TRASYS) as well as unique modeling techniques and applications.