Global Cardinality Constraints Make Approximating Some Max-2-CSPs Harder
Assuming the Unique Games Conjecture, we show that existing approximation algorithms for some Boolean Max-2-CSPs with cardinality constraints are optimal. In particular, we prove that Max-Cut with cardinality constraints is UG-hard to approximate within ≈0.858, and that Max-2-Sat with cardinality constraints is UG-hard to approximate within ≈0.929. In both cases, the previous best hardness results were the same as the hardness of the corresponding unconstrained Max-2-CSP (≈0.878 for Max-Cut, and ≈0.940 for Max-2-Sat).
The hardness for Max-2-Sat applies to monotone Max-2-Sat instances, meaning that we also obtain tight inapproximability for the Max-k-Vertex-Cover problem.
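To make the cardinality-constrained variant concrete, here is a brute-force sketch in Python (the function name and the example graph are illustrative, not from the paper): it compares the unconstrained Max-Cut value of a small star graph against the best cut in which one side has a prescribed size.

```python
from itertools import combinations

def max_cut(n, edges, side_size=None):
    """Brute-force Max-Cut on n vertices; if side_size is given,
    only cuts placing exactly side_size vertices on one side count."""
    best = 0
    for k in range(n + 1):
        if side_size is not None and k != side_size:
            continue
        for side in combinations(range(n), k):
            s = set(side)
            cut = sum(1 for u, v in edges if (u in s) != (v in s))
            best = max(best, cut)
    return best

# A star K_{1,4}: the unconstrained Max-Cut isolates the center and
# cuts all 4 edges, but a cut with exactly 2 of the 5 vertices on one
# side can cut at most 3 of them.
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]
print(max_cut(5, edges))               # unconstrained -> 4
print(max_cut(5, edges, side_size=2))  # cardinality-constrained -> 3
```

The gap between the two values on even this tiny instance hints at why the constrained problem behaves differently from the unconstrained one.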
On-Line File Caching
In the on-line file-caching problem, the input is a sequence of
requests for files, given on-line (one at a time). Each file has a non-negative
size and a non-negative retrieval cost. The problem is to decide which files to
keep in a fixed-size cache so as to minimize the sum of the retrieval costs for
files that are not in the cache when requested. The problem arises in web
caching by browsers and by proxies. This paper describes a natural
generalization of LRU called Landlord and gives an analysis showing that it has
an optimal performance guarantee (among deterministic on-line algorithms).
The paper also gives an analysis of the algorithm in a so-called ``loosely''
competitive model, showing that on a ``typical'' cache size, either the
performance guarantee is O(1) or the total retrieval cost is insignificant. (ACM-SIAM Symposium on Discrete Algorithms, 1998.)
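A minimal Python sketch of the Landlord idea, assuming the common "credit" formulation: each cached file holds credit, and when space must be freed every cached file is charged rent proportional to its size until some file runs out of credit and is evicted. The on-hit credit refresh shown is one standard choice (it generalizes LRU); identifiers are illustrative.

```python
def landlord(requests, capacity):
    """Landlord file caching. Each request is (name, size, retrieval_cost).
    Returns the total retrieval cost paid on misses."""
    credit = {}       # cached file -> remaining credit
    meta = {}         # cached file -> (size, cost)
    used = 0
    total_cost = 0
    for name, size, cost in requests:
        if name in credit:
            credit[name] = cost        # refresh credit on a hit
            continue
        total_cost += cost             # miss: pay to retrieve the file
        if size > capacity:
            continue                   # file cannot fit in the cache at all
        while used + size > capacity:
            # charge every cached file rent proportional to its size,
            # just enough to drive the tightest file's credit to zero
            delta = min(credit[f] / meta[f][0] for f in credit)
            for f in list(credit):
                credit[f] -= delta * meta[f][0]
                if credit[f] <= 1e-12:  # evict files with no credit left
                    used -= meta[f][0]
                    del credit[f], meta[f]
        credit[name] = cost
        meta[name] = (size, cost)
        used += size
    return total_cost

# Unit sizes and costs, cache capacity 2: the sequence a, b, a, c, a
# incurs four misses (a, b, c, and a again after the eviction for c).
print(landlord([("a", 1, 1), ("b", 1, 1), ("a", 1, 1),
                ("c", 1, 1), ("a", 1, 1)], 2))  # -> 4
```

With arbitrary sizes and costs the same loop applies unchanged; the rent step is what yields the optimal deterministic competitive ratio the abstract refers to.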
Corporate influence and the academic computer science discipline. [4: CMU]
Prosopographical work on the four major centers for computer research in the United States has now been conducted, raising significant questions about the independence of so-called computer science.
Annual Report 2001
This 2001 Annual Report records the achievements, outreach activities, and student honors work of Eastern Illinois University's Lumpkin College of Business and Applied Sciences. It also includes reports from the School of Business, the School of Family and Consumer Science, the School of Technology, and the Department of Military Science.
How to Validate a Verification?
This paper introduces \textsl{signature validation}, a primitive allowing any \underline{t}hird party (\underline{T}héodore) to verify that a \underline{v}erifier (\underline{V}adim) computationally verified a signature on a message issued by a \underline{s}igner (\underline{S}arah).
A naive solution consists in having Sarah send the message together with her signature on it, and having Vadim confirm reception with a signature of his own on what he received.
Unfortunately, this only attests \textsl{proper reception} by Vadim, i.e. that Vadim \textsl{could have checked} the signature, and not that Vadim \textsl{actually verified} it. By ``actually verifying'' we mean providing a proof or a convincing argument that a program running on Vadim's machine checked the correctness of the signature.
This paper proposes several solutions for doing so, thereby providing a useful building block in numerous commercial and legal interactions for proving informed consent.
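As a toy illustration of the naive protocol (not the paper's construction), the sketch below uses HMAC from Python's standard library as a stand-in for a real public-key signature scheme; the keys, names, and message are all made up.

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> bytes:
    # HMAC stands in here for a real public-key signature scheme.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, message), sig)

# Naive protocol from the abstract: Sarah signs the message, and
# Vadim acknowledges receipt by signing what he received.
sarah_key, vadim_key = b"sarah-secret", b"vadim-secret"
message = b"transfer 100 coins"
sigma = sign(sarah_key, message)             # Sarah's signature
receipt = sign(vadim_key, message + sigma)   # Vadim's acknowledgement

# The receipt proves Vadim *received* (message, sigma), but nothing
# forces him to have run verify() first -- exactly the gap between
# "could have checked" and "actually verified" that the paper targets.
assert verify(vadim_key, message + sigma, receipt)
print(verify(sarah_key, message, sigma))  # True
```

In the sketch, Vadim's `receipt` is computed without any call to `verify`, which is precisely why reception alone proves nothing about verification.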
Communication with Partial Noiseless Feedback
We introduce the notion of one-way communication schemes with partial noiseless feedback. In this setting, Alice wishes to communicate a message to Bob by using a communication scheme that involves sending a sequence of bits over a channel while receiving feedback bits from Bob for a delta fraction of the transmissions. An adversary is allowed to corrupt up to a constant fraction of Alice's transmissions, while the feedback is always uncorrupted. Motivated by questions related to coding for interactive communication, we seek to determine the maximum error rate, as a function of 0 <= delta <= 1, such that Alice can send a message to Bob via some protocol with a delta fraction of noiseless feedback. The case delta = 1 corresponds to full feedback, in which the result of Berlekamp implies that the maximum tolerable error rate is 1/3, while the case delta = 0 corresponds to no feedback, in which the maximum tolerable error rate is 1/4, achievable by use of a binary error-correcting code.
In this work, we show that for any delta in (0,1] and gamma in [0, 1/3), there exists a randomized communication scheme with noiseless delta-feedback, such that the probability of miscommunication is low, as long as no more than a gamma fraction of the rounds are corrupted. Moreover, we show that for any delta in (0, 1] and gamma < f(delta), there exists a deterministic communication scheme with noiseless delta-feedback that always decodes correctly as long as no more than a gamma fraction of rounds are corrupted. Here f is a monotonically increasing, piecewise linear, continuous function with f(0) = 1/4 and f(1) = 1/3. Also, the rate of communication in both cases is constant (dependent on delta and gamma but independent of the input length).
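The delta = 0 endpoint can be illustrated with a toy Python sketch of minimum-distance decoding, the mechanism behind the 1/4 figure: a binary code of minimum distance d uniquely corrects fewer than d/2 errors, so relative distance 1/2 corresponds to an error fraction below 1/4. The code and identifiers below are illustrative, not from the paper.

```python
from itertools import combinations

def hamming(x: str, y: str) -> int:
    return sum(a != b for a, b in zip(x, y))

def decode(code, received):
    # Nearest-codeword (minimum-distance) decoding.
    return min(code, key=lambda c: hamming(c, received))

# A toy code of length 8 with minimum distance 4 (relative distance 1/2):
# unique decoding succeeds whenever fewer than d/2 = 2 positions are
# corrupted, i.e. below an error fraction of 1/4 of the transmissions.
code = ["00000000", "00001111", "11110000", "11111111"]

sent = "00001111"
for (i,) in combinations(range(8), 1):   # try every single-bit corruption
    received = "".join(str(1 - int(b)) if j == i else b
                       for j, b in enumerate(sent))
    assert decode(code, received) == sent
print("all 1-bit corruptions decoded correctly")
```

With 2 corruptions out of 8 the decoder can already be forced into a tie between two codewords, which is why feedback is needed to push the tolerable error rate above 1/4.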
Annual Report Of Research and Creative Productions by Faculty and Staff from January to December, 2004.
On Regularity Lemma and Barriers in Streaming and Dynamic Matching
We present a new approach for finding matchings in dense graphs by building
on Szemer\'edi's celebrated Regularity Lemma. This allows us to obtain
non-trivial albeit slight improvements over longstanding bounds for matchings
in streaming and dynamic graphs. In particular, we establish the following
results for n-vertex graphs:
* A deterministic single-pass streaming algorithm that finds
a -approximate matching in bits of space. This constitutes
the first single-pass algorithm for this problem in sublinear space that
improves over the 1/2-approximation of the greedy algorithm.
* A randomized fully dynamic algorithm that with high probability maintains a
-approximate matching in worst-case update time per edge
insertion or deletion. The algorithm works even against an adaptive adversary.
This is the first update-time dynamic algorithm with approximation
guarantee arbitrarily close to one.
Given the use of the regularity lemma, the improvement obtained by our algorithms
over trivial bounds is only by some factor.
Nevertheless, in each case, they show that the ``right'' answer to the problem
is not what is dictated by the previous bounds.
Finally, in the streaming model, we also present a randomized
-approximation algorithm whose space can be upper bounded by the
density of certain Ruzsa-Szemer\'edi (RS) graphs. While RS graphs by now have
been used extensively to prove streaming lower bounds, ours is the first to use
them as an upper-bound tool for designing improved streaming algorithms.
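For context on the greedy baseline mentioned above, here is a one-pass greedy matching sketch in Python (illustrative, not the paper's algorithm): it keeps an edge exactly when both endpoints are still unmatched, which yields a maximal matching and hence at least half the size of a maximum matching.

```python
def greedy_streaming_matching(edges):
    """Single-pass greedy matching over a stream of edges.
    Keeps an edge iff both endpoints are currently unmatched; the
    result is maximal, hence a 1/2-approximate matching."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# A path a-b-c-d: if edge (b, c) arrives first, greedy keeps only that
# one edge, while the maximum matching {(a, b), (c, d)} has two edges,
# so the 1/2 factor is tight for greedy.
print(greedy_streaming_matching([("b", "c"), ("a", "b"), ("c", "d")]))
# -> [('b', 'c')]
```

Greedy needs only O(n) space for n vertices, which is why beating its approximation ratio in sublinear space is the benchmark the abstract describes.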