
    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally testable and locally decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correction algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.
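    To make the notion concrete, here is a minimal Python sketch of the classic 2-query local decoder for the Hadamard code, the textbook example of a locally decodable code. The survey covers the general theory; this particular construction is chosen here purely for illustration.

```python
import random

def hadamard_encode(x_bits):
    """Encode a k-bit message as all 2^k parities <x, a> mod 2 (Hadamard code)."""
    k = len(x_bits)
    return [sum(x_bits[j] for j in range(k) if (a >> j) & 1) % 2
            for a in range(2 ** k)]

def local_decode_bit(word, k, i, trials=25):
    """Recover message bit i from a (possibly corrupted) Hadamard codeword.

    Each trial makes only 2 queries: word[a] XOR word[a ^ e_i] equals x_i
    whenever both queried positions are uncorrupted."""
    votes = 0
    for _ in range(trials):
        a = random.randrange(2 ** k)          # uniformly random query point
        votes += word[a] ^ word[a ^ (1 << i)]
    return 1 if 2 * votes > trials else 0

# Demo: encode 3 bits, corrupt one of the 8 codeword positions, decode locally.
msg = [1, 0, 1]
word = hadamard_encode(msg)
word[5] ^= 1
print([local_decode_bit(word, 3, i) for i in range(3)])  # -> [1, 0, 1]
```

    With fewer than a 1/4 fraction of corrupted positions, each trial is correct with probability above 1/2, and the majority vote drives the error probability down exponentially in the number of trials.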

    Decentralized provenance-aware publishing with nanopublications

    Publication and archival of scientific results is still commonly considered the responsibility of classical publishing companies. Classical forms of publishing, however, which center around printed narrative articles, no longer seem well-suited in the digital age. In particular, there currently exist no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. In this article, we propose to design scientific data publishing as a web-based bottom-up process, without top-down control by central authorities such as publishing companies. Based on a novel combination of existing concepts and technologies, we present a server network to decentrally store and archive data in the form of nanopublications, an RDF-based format for representing scientific data. We show how this approach allows researchers to publish, retrieve, verify, and recombine datasets of nanopublications in a reliable and trustworthy manner, and we argue that this architecture could serve as a low-level data publication layer for the Semantic Web in general. Our evaluation of the current network shows that the system is efficient and reliable.
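    For illustration, the sketch below builds the named-graph skeleton of a nanopublication with rdflib: a head graph linking an assertion graph (the claim), a provenance graph, and a publication-info graph. All URIs and the example triple are made up for the demonstration; only the np: schema terms follow the nanopublication guidelines.

```python
from rdflib import Dataset, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

NP = Namespace("http://www.nanopub.org/nschema#")
PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/np1#")   # made-up base URI for the demo

ds = Dataset()

# Head graph: declares the nanopublication and links its three parts.
head = ds.graph(EX.Head)
head.add((EX.pub, RDF.type, NP.Nanopublication))
head.add((EX.pub, NP.hasAssertion, EX.assertion))
head.add((EX.pub, NP.hasProvenance, EX.provenance))
head.add((EX.pub, NP.hasPublicationInfo, EX.pubinfo))

# Assertion graph: the scientific claim itself (an invented example triple).
ds.graph(EX.assertion).add((EX.geneA, EX.isRelatedTo, EX.diseaseB))

# Provenance graph: where the assertion came from.
ds.graph(EX.provenance).add(
    (EX.assertion, PROV.wasDerivedFrom, URIRef("http://example.org/study42")))

# Publication-info graph: metadata about the nanopublication itself.
ds.graph(EX.pubinfo).add(
    (EX.pub, PROV.generatedAtTime,
     Literal("2024-01-01T00:00:00Z", datatype=XSD.dateTime)))

print(ds.serialize(format="trig"))
```

    Serializing as TriG keeps the four graphs separate, which is what lets a server network verify and recombine the assertion, its provenance, and its publication metadata independently.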

    Phase retrieval from very few measurements

    In many applications, signals are measured according to a linear process, but the phases of these measurements are often unreliable or unavailable. To reconstruct the signal, one must perform a process known as phase retrieval. This paper focuses on completely determining signals from as few intensity measurements as possible, and on efficient phase retrieval algorithms for such measurements. For the case of complex M-dimensional signals, we construct a measurement ensemble of size 4M-4 which yields injective intensity measurements; this is conjectured to be the smallest such ensemble. For the case of real signals, we devise a theory of "almost" injective intensity measurements, and we characterize such ensembles. Finally, we show that phase retrieval from M+1 almost injective intensity measurements is NP-hard, indicating that computationally efficient phase retrieval must come at the price of measurement redundancy.
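    As a concrete illustration of the measurement model y_i = |<phi_i, x>|^2, here is a small numpy sketch of a generic alternating-projection (Gerchberg-Saxton style) phase retrieval heuristic run on 4M-4 random complex measurements. This is a standard heuristic, not the paper's construction: random vectors are not the injective ensemble built in the paper, and the method carries no convergence guarantee.

```python
import numpy as np

def phase_retrieval_gs(Phi, y, iters=500, seed=0):
    """Alternating-projection phase retrieval (Gerchberg-Saxton style).

    Phi: (N, M) complex measurement matrix, rows are the vectors phi_i.
    y:   (N,) observed intensities |<phi_i, x>|^2.
    Returns an estimate of x, determined at best up to a global phase."""
    rng = np.random.default_rng(seed)
    mags = np.sqrt(y)
    M = Phi.shape[1]
    x = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # random start
    Pinv = np.linalg.pinv(Phi)
    for _ in range(iters):
        z = Phi @ x                           # current complex measurements
        z = mags * np.exp(1j * np.angle(z))   # impose the observed magnitudes
        x = Pinv @ z                          # least-squares consistent signal
    return x

# Demo with 4M-4 random measurements of a random complex signal.
M = 5
rng = np.random.default_rng(1)
Phi = rng.standard_normal((4 * M - 4, M)) + 1j * rng.standard_normal((4 * M - 4, M))
x_true = rng.standard_normal(M) + 1j * rng.standard_normal(M)
x_hat = phase_retrieval_gs(Phi, np.abs(Phi @ x_true) ** 2)
phase = np.vdot(x_hat, x_true)                 # align the global phase
phase /= abs(phase)
print(np.linalg.norm(x_true - phase * x_hat))  # small if the run converged
```

    The global-phase alignment in the demo reflects the fundamental ambiguity: intensity measurements can never distinguish x from e^{i*theta} x, so injectivity is always meant modulo a unimodular factor.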

    Efficient Approximation Algorithms for Multi-Antennae Largest Weight Data Retrieval

    In a mobile network, wireless data broadcast over m channels (frequencies) is a powerful means for the distributed dissemination of data to clients, who access the channels through multiple antennae equipped on their mobile devices. The δ-antennae largest weight data retrieval (δALWDR) problem is to compute a schedule for downloading a subset of data items with maximum total weight using δ antennae in a given time interval. In this paper, we propose a ratio 1 - 1/e - ε approximation algorithm for the δALWDR problem that has the same ratio as the known result but a significantly improved time complexity of O(2^{1/ε} (1/ε) m^7 T^{3.5} L), down from O(ε^{3.5} m^{3.5/ε} T^{3.5} L) when δ = 1 [lu2014data]. To our knowledge, our algorithm is the first ratio 1 - 1/e - ε approximation solution to δALWDR for the general case of arbitrary δ. To achieve this, we first give a ratio 1 - 1/e algorithm for the γ-separated δALWDR (δAγLWDR) with runtime O(m^7 T^{3.5} L), under the assumption that every data item appears at most once in each segment of δAγLWDR, for any input of maximum length L on m channels in T time slots. Then, we show that the same ratio can be retained for δAγLWDR without this assumption, at the cost of an increased time complexity of O(2^γ m^7 T^{3.5} L). This result immediately yields an approximation solution of the same ratio and time complexity for δALWDR, a significant improvement on the known time complexity of the ratio 1 - 1/e - ε approximation to the problem.
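    For intuition about the problem structure (not the paper's algorithm), the sketch below states δALWDR as data: items broadcast at (channel, time-slot) positions, δ antennae tuned per slot, and total retrieved weight to be maximized. The greedy baseline carries no 1 - 1/e - ε guarantee; the tiny example in the comments shows it being beaten by the optimum, which is exactly the gap a real approximation algorithm must close.

```python
def greedy_retrieval(schedule, weights, num_slots, delta):
    """Greedy baseline for delta-antennae largest weight data retrieval.

    schedule[(channel, slot)] = data item broadcast there.
    At each time slot, tune the delta antennae to the channels whose items
    add the most not-yet-retrieved weight. No approximation guarantee; it
    only illustrates the problem structure."""
    retrieved, total = set(), 0.0
    for t in range(num_slots):
        gains = sorted(
            ((weights[item], ch, item)
             for (ch, slot), item in schedule.items()
             if slot == t and item not in retrieved),
            reverse=True)
        for w, ch, item in gains[:delta]:
            if item not in retrieved:   # the same item may air on two channels
                retrieved.add(item)
                total += w
    return retrieved, total

# One antenna, two channels, two slots; item "a" airs twice.
sched = {(0, 0): "a", (1, 0): "b", (0, 1): "c", (1, 1): "a"}
w = {"a": 3.0, "b": 2.0, "c": 1.0}
print(greedy_retrieval(sched, w, num_slots=2, delta=1))
# Greedy retrieves {"a", "c"} for weight 4.0, but the optimum is 5.0
# ("b" at slot 0, then "a" at slot 1): myopic slot-by-slot choices ignore
# the fact that "a" airs again later.
```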

    Web Service Retrieval by Structured Models

    Much of the information available on the World Wide Web cannot effectively be found with the help of search engines, because the information is dynamically generated on a user's request. This applies to online decision support services as well as Deep Web information. In this paper we present a retrieval system that uses a variant of structured modeling to describe such information services, and similarity of models for retrieval. We discuss the computational complexity of the similarity problem and introduce graph algorithms for retrieval on repositories of service descriptions. We show how bounds for combinatorial optimization problems can provide filter algorithms in a retrieval context. We report on an evaluation of the retrieval system in a classroom experiment and give computational results on a benchmark library.
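    The filter idea can be sketched generically: compute a cheap lower bound on the model distance first, and run the expensive exact comparison only on repository entries that survive the filter. The bound below (from node-label multisets, a standard lower bound on graph edit distance) and all names in the demo are illustrative assumptions, not the paper's specific construction.

```python
from collections import Counter

def label_lower_bound(labels1, labels2):
    """Cheap lower bound on graph edit distance from node-label multisets.

    Nodes with matching labels can potentially be mapped for free; every
    remaining node needs at least one edit (relabel, insert, or delete)."""
    common = sum((Counter(labels1) & Counter(labels2)).values())
    return max(len(labels1), len(labels2)) - common

def filter_then_match(query_labels, repository, threshold, exact_distance):
    """Filter-and-refine retrieval over a repository of model descriptions.

    The cheap bound discards models that cannot be within the threshold;
    the expensive exact_distance runs only on the survivors."""
    survivors = [m for m, labels in repository.items()
                 if label_lower_bound(query_labels, labels) <= threshold]
    return [m for m in survivors
            if exact_distance(query_labels, repository[m]) <= threshold]

# Toy repository of service models, each reduced to its node-label multiset.
repo = {"svc1": ["input", "solver", "report"],
        "svc2": ["input", "solver", "chart", "report"],
        "svc3": ["login", "cart", "payment"]}
print(filter_then_match(["input", "solver", "report"], repo, 1,
                        lambda a, b: label_lower_bound(a, b)))  # placeholder metric
# -> ['svc1', 'svc2']; 'svc3' is pruned without any expensive comparison.
```

    Because the bound never exceeds the true distance, the filter is lossless: it can only remove models that the exact comparison would have rejected anyway.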