35,386 research outputs found
Some Applications of Coding Theory in Computational Complexity
Error-correcting codes and related combinatorial constructs play an important
role in several recent (and old) results in computational complexity theory. In
this paper we survey results on locally-testable and locally-decodable
error-correcting codes, and their applications to complexity theory and to
cryptography.
Locally decodable codes are error-correcting codes with sub-linear time
error-correcting algorithms. They are related to private information retrieval
(a type of cryptographic protocol), and they are used in average-case
complexity and to construct "hard-core predicates" for one-way permutations.
Locally testable codes are error-correcting codes with sub-linear time
error-detection algorithms, and they are the combinatorial core of
probabilistically checkable proofs.
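As a concrete illustration of local decodability, consider the classical Hadamard code, whose decoder can recover any single message bit from just two codeword queries. The sketch below is standard textbook material rather than a construction specific to this survey: it repeats the two-query probe and takes a majority vote, which tolerates a small fraction of corrupted codeword positions.

```python
import random

random.seed(0)  # fixed seed so the demonstration is reproducible

def hadamard_encode(msg_bits):
    """Encode n message bits a as all 2^n parities <a, x> mod 2 (Hadamard code)."""
    n = len(msg_bits)
    code = []
    for x in range(2 ** n):
        bit = 0
        for i in range(n):
            if (x >> i) & 1:
                bit ^= msg_bits[i]
        code.append(bit)
    return code

def locally_decode_bit(code, n, i, trials=41):
    """Recover message bit i with two queries per trial:
    code[x] XOR code[x ^ e_i] equals a_i whenever both positions are clean.
    A majority vote over independent random x tolerates a small corruption rate."""
    votes = 0
    for _ in range(trials):
        x = random.randrange(2 ** n)
        votes += code[x] ^ code[x ^ (1 << i)]
    return int(votes * 2 > trials)

msg = [1, 0, 1, 1, 0]
code = hadamard_encode(msg)          # 32 codeword bits for a 5-bit message
for pos in (3, 11, 26):              # corrupt ~9% of the positions
    code[pos] ^= 1
decoded = [locally_decode_bit(code, len(msg), i) for i in range(len(msg))]
print(decoded)
```

Note the sub-linear (indeed constant) query complexity per bit: the decoder never reads more than a tiny portion of the (exponentially long) codeword.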
Decentralized provenance-aware publishing with nanopublications
Publication and archival of scientific results is still commonly considered the responsibility of classical publishing companies. Classical forms of publishing, however, which center around printed narrative articles, no longer seem well suited to the digital age. In particular, there currently exist no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. In this article, we propose to design scientific data publishing as a web-based bottom-up process, without top-down control by central authorities such as publishing companies. Based on a novel combination of existing concepts and technologies, we present a server network to decentrally store and archive data in the form of nanopublications, an RDF-based format for representing scientific data. We show how this approach allows researchers to publish, retrieve, verify, and recombine datasets of nanopublications in a reliable and trustworthy manner, and we argue that this architecture could be used as a low-level data publication layer to serve the Semantic Web in general. Our evaluation of the current network shows that this system is efficient and reliable.
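The verifiability claim rests on content-addressed identifiers: a nanopublication's graphs are hashed in a canonical form, so any retrieved copy can be checked against its identifier (the idea behind trusty URIs). The sketch below is a loose illustration of that idea only; the dictionary layout, URIs, and hashing scheme are simplified stand-ins, not the actual nanopublication format or algorithm.

```python
import hashlib
import json

# A nanopublication bundles a small assertion with its provenance and
# publication metadata as three named graphs. All names here are illustrative.
nanopub = {
    "assertion": [
        ("ex:gene-X", "ex:is-associated-with", "ex:disease-Y"),
    ],
    "provenance": [
        ("ex:assertion", "prov:wasDerivedFrom", "ex:study-42"),
    ],
    "pubinfo": [
        ("ex:nanopub", "dcterms:creator", "ex:some-researcher"),
    ],
}

def content_hash(np):
    """Hash a canonical serialization of the triples, so anyone holding a copy
    can recompute the fingerprint and verify it against the identifier."""
    canonical = json.dumps(np, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

h = content_hash(nanopub)
print(h[:16])  # a stable fingerprint: any tampering changes it
```

Because the hash depends only on the canonical content, any server in the network can serve a copy and the client can verify it locally, which is what removes the need for a trusted central authority.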
Phase retrieval from very few measurements
In many applications, signals are measured according to a linear process, but
the phases of these measurements are often unreliable or not available. To
reconstruct the signal, one must perform a process known as phase retrieval.
This paper focuses on completely determining signals with as few intensity
measurements as possible, and on efficient phase retrieval algorithms from such
measurements. For the case of complex M-dimensional signals, we construct a
measurement ensemble of size 4M-4 which yields injective intensity
measurements; this is conjectured to be the smallest such ensemble. For the
case of real signals, we devise a theory of "almost" injective intensity
measurements, and we characterize such ensembles. Later, we show that phase
retrieval from M+1 almost injective intensity measurements is NP-hard,
indicating that computationally efficient phase retrieval must come at the
price of measurement redundancy.
Comment: 18 pages, 1 figure
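A small numerical sketch makes the loss of phase concrete: for real signals, x and -x produce identical intensity measurements, so injectivity can only ever hold up to a global sign (for real M-dimensional signals, 2M-1 generic measurement vectors are known to suffice for injectivity up to sign). The random ensemble below is an illustrative choice, not the paper's 4M-4 construction.

```python
import random

def intensity_measurements(x, ensemble):
    """Phaseless measurements: only |<a, x>| is observed, not its sign/phase."""
    return [abs(sum(ai * xi for ai, xi in zip(a, x))) for a in ensemble]

random.seed(1)
M = 4
x = [1.0, -2.0, 0.5, 3.0]
neg_x = [-v for v in x]

# A generic real measurement ensemble of 2M-1 random vectors (illustrative).
ensemble = [[random.gauss(0, 1) for _ in range(M)] for _ in range(2 * M - 1)]

m1 = intensity_measurements(x, ensemble)
m2 = intensity_measurements(neg_x, ensemble)
print(all(a == b for a, b in zip(m1, m2)))  # True: x and -x are indistinguishable
```

This is exactly why the paper's injectivity statements are phrased up to global phase, and why the real and complex cases need measurement ensembles of different sizes.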
Efficient Approximation Algorithms for Multi-Antennae Largest Weight Data Retrieval
In a mobile network, wireless data broadcast over channels (frequencies)
is a powerful means for distributed dissemination of data to clients who access
the channels through the multiple antennae equipped on their mobile devices.
The multi-antennae largest weight data retrieval (ALWDR) problem is to compute
a schedule for downloading a subset of data items that has a maximum total
weight using the available antennae in a given time interval. In this paper,
we propose an approximation algorithm for the ALWDR problem that achieves the
same ratio as the known result \cite{lu2014data} with a significantly improved
time complexity. To our knowledge, our algorithm represents the first
approximation solution of this ratio to ALWDR for the general case of an
arbitrary number of antennae. To achieve this, we first give an algorithm of
the same ratio for the separated variant of ALWDR, under the assumption that
every data item appears at most once in each segment, for any input on the
given channels and time slots. Then, we show that we can retain the same ratio
without this assumption at the cost of increased time complexity. This result
immediately yields an approximation solution of the same ratio and time
complexity for ALWDR, a significant improvement on the known time complexity
of approximating the problem.
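To make the problem statement concrete, the toy sketch below brute-forces a tiny instance: a client with k antennae tunes to k channels in each time slot and wants to maximize the total weight of the *distinct* items it downloads. This exhaustive baseline is exponential in the number of slots and is purely illustrative; it is not the paper's approximation algorithm, and the schedule and weights are made up.

```python
from itertools import combinations, product

# Toy broadcast schedule: schedule[c][t] is the item broadcast on channel c
# at time slot t. Item names and weights are illustrative.
schedule = [
    ["a", "b", "c"],   # channel 0
    ["d", "a", "e"],   # channel 1
    ["b", "e", "f"],   # channel 2
]
weight = {"a": 5, "b": 3, "c": 4, "d": 2, "e": 6, "f": 1}

def best_retrieval(schedule, weight, k):
    """Exhaustively try every choice of k channels per time slot and return
    the largest total weight of distinct items retrieved. Exponential in the
    number of slots -- a toy baseline only, not an approximation algorithm."""
    channels, slots = len(schedule), len(schedule[0])
    per_slot_choices = list(combinations(range(channels), k))
    best = 0
    for choice in product(per_slot_choices, repeat=slots):
        items = {schedule[c][t] for t, chans in enumerate(choice) for c in chans}
        best = max(best, sum(weight[i] for i in items))
    return best

print(best_retrieval(schedule, weight, 2))  # two antennae can collect all items
```

The duplicate item "a" appearing in two slots is what makes the problem combinatorial: downloading it twice adds no weight, so the scheduler must coordinate choices across slots.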
Web Service Retrieval by Structured Models
Much of the information available on the World Wide Web cannot effectively be found with the help of search engines because the information is dynamically generated upon a user's request. This applies to online decision support services as well as Deep Web information. We present in this paper a retrieval system that uses a variant of structured modeling to describe such information services, and similarity of models for retrieval. The computational complexity of the similarity problem is discussed, and graph algorithms for retrieval on repositories of service descriptions are introduced. We show how bounds for combinatorial optimization problems can provide filter algorithms in a retrieval context. We report on an evaluation of the retrieval system in a classroom experiment and give computational results on a benchmark library.
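The filtering idea mentioned above, using cheap bounds to prune a repository before expensive matching, can be sketched generically: the multiset overlap of node labels upper-bounds any structure-aware similarity, so services falling below a threshold can be discarded without running full graph matching. The services, labels, and threshold below are hypothetical, and this is a generic stand-in rather than the paper's actual similarity measure.

```python
from collections import Counter

# Toy service-model descriptions as multisets of node labels (illustrative).
repository = {
    "loan-calculator": ["input", "rate", "term", "formula", "output"],
    "route-planner":   ["input", "graph", "shortest-path", "output"],
    "risk-assessment": ["input", "rate", "formula", "score", "output"],
}

def overlap_upper_bound(labels_a, labels_b):
    """Multiset label overlap: any structure-aware matching can align at most
    this many nodes, so it upper-bounds the true model similarity cheaply."""
    shared = Counter(labels_a) & Counter(labels_b)
    return sum(shared.values())

def filter_candidates(query_labels, repository, threshold):
    """Keep only services whose cheap upper bound clears the threshold; only
    these survivors need the expensive graph-matching step."""
    return [name for name, labels in repository.items()
            if overlap_upper_bound(query_labels, labels) >= threshold]

query = ["input", "rate", "formula", "output"]
print(filter_candidates(query, repository, threshold=4))
```

Because the bound never underestimates the true similarity, the filter is safe: it can only rule out services that full matching would also reject.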