    Round Elimination in Exact Communication Complexity

    We study two basic graph parameters, the chromatic number and the orthogonal rank, in the context of classical and quantum exact communication complexity. In particular, we consider two types of communication problems that we call promise equality and list problems. For both of these, it was already known that the one-round classical and one-round quantum complexities are characterized by the chromatic number and orthogonal rank of a certain graph, respectively. In a promise equality problem, Alice and Bob must decide if their inputs are equal or not. We prove that classical protocols for such problems can always be reduced to one-round protocols with no extra communication. In contrast, we give an explicit instance of a promise problem that exhibits an exponential gap between the one- and two-round exact quantum communication complexities. Whereas the chromatic number thus captures the complete complexity of promise equality problems, the hierarchy of "quantum chromatic numbers" (starting with the orthogonal rank) that gives the quantum communication complexity for every fixed number of communication rounds turns out to enjoy a much richer structure. In a list problem, Bob gets a subset of some finite universe, Alice gets an element from Bob's subset, and their goal is for Bob to learn which element Alice was given. The best general lower bound (due to Orlitsky) and upper bound (due to Naor, Orlitsky, and Shor) on the classical communication complexity of such problems differ only by a constant factor. We exhibit an example showing that, somewhat surprisingly, the four-round protocol used in the bound of Naor et al. can in fact be optimal. Finally, we pose a conjecture on the orthogonal rank of a certain graph whose truth would imply an intriguing impossibility of round elimination in quantum protocols for list problems, something that works trivially in the classical case.
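
    To make the one-round characterization concrete, here is a hedged toy sketch rather than the paper's construction: treat the input pairs that the promise allows to be unequal as the edges of a graph, properly color that graph, and let Alice send only her input's color, which Bob compares with the color of his own input. The number of distinct messages then equals the number of colors used, which is where the chromatic number enters; the particular promise and the helper names below are illustrative assumptions.

        from itertools import combinations

        def greedy_coloring(vertices, edges):
            """Assign each vertex the smallest color not used by an already-colored neighbor."""
            adj = {v: set() for v in vertices}
            for u, v in edges:
                adj[u].add(v)
                adj[v].add(u)
            color = {}
            for v in vertices:
                used = {color[u] for u in adj[v] if u in color}
                c = 0
                while c in used:
                    c += 1
                color[v] = c
            return color

        def one_round_promise_equality(inputs, allowed_unequal_pairs):
            """One-round classical protocol: Alice sends the color of her input under a
            proper coloring of the graph of allowed unequal pairs; Bob answers 'equal'
            iff that color matches his own input's color."""
            edges = [(x, y) for x, y in allowed_unequal_pairs if x != y]
            color = greedy_coloring(sorted(inputs), edges)

            def alice_message(x):
                return color[x]          # the only communication: one color

            def bob_decision(msg, y):
                return msg == color[y]   # True means "inputs are equal"

            return alice_message, bob_decision

        # Toy universe: 3-bit strings; the (arbitrary, illustrative) promise allows
        # unequal inputs only if they differ in exactly two positions.
        inputs = [format(i, "03b") for i in range(8)]
        pairs = [(x, y) for x, y in combinations(inputs, 2)
                 if sum(a != b for a, b in zip(x, y)) == 2]
        alice, bob = one_round_promise_equality(inputs, pairs)
        assert bob(alice("101"), "101")        # equal inputs are accepted
        assert not bob(alice("110"), "101")    # an allowed unequal pair is rejected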

    List Defective Colorings: Distributed Algorithms and Applications

    The distributed coloring problem is at the core of the area of distributed graph algorithms and it is a problem that has seen tremendous progress over the last few years. Much of the remarkable recent progress on deterministic distributed coloring algorithms is based on two main tools: a) defective colorings, in which every node of a given color can have a limited number of neighbors of the same color, and b) list coloring, a natural generalization of the standard coloring problem that naturally appears when colorings are computed in different stages and one has to extend a previously computed partial coloring to a full coloring. In this paper, we introduce \emph{list defective colorings}, which can be seen as a generalization of these two coloring variants. Essentially, in a list defective coloring instance, each node $v$ is given a list of colors $x_{v,1},\dots,x_{v,p}$ together with a list of defects $d_{v,1},\dots,d_{v,p}$ such that if $v$ is colored with color $x_{v,i}$, it is allowed to have at most $d_{v,i}$ neighbors with color $x_{v,i}$. We highlight the important role of list defective colorings by showing that faster list defective coloring algorithms would directly lead to faster deterministic $(\Delta+1)$-coloring algorithms in the LOCAL model. Further, we extend a recent distributed list coloring algorithm by Maus and Tonoyan [DISC '20]. Slightly simplified, we show that if for each node $v$ it holds that $\sum_{i=1}^p (d_{v,i}+1)^2 > \mathrm{deg}_G^2(v)\cdot\mathrm{polylog}\,\Delta$, then this list defective coloring instance can be solved in a communication-efficient way in only $O(\log\Delta)$ communication rounds. This leads to the first deterministic $(\Delta+1)$-coloring algorithm in the standard CONGEST model with a time complexity of $O(\sqrt{\Delta}\cdot\mathrm{polylog}\,\Delta+\log^* n)$, matching the best time complexity in the LOCAL model up to a $\mathrm{polylog}\,\Delta$ factor.
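
    To illustrate the object being computed (a hedged, purely sequential sketch, not the paper's distributed algorithm; the class and function names are made up for this example): every node carries its own color list and defect list, and a coloring is valid if each node v that received color x_{v,i} has at most d_{v,i} neighbors sharing that color.

        class ListDefectiveInstance:
            """graph: dict node -> set of neighbors; lists: dict node -> [(color, defect), ...]."""
            def __init__(self, graph, lists):
                self.graph = graph
                self.lists = lists

            def is_valid(self, coloring):
                """Check the list defective coloring condition for every node."""
                for v, c in coloring.items():
                    defects = dict(self.lists[v])
                    if c not in defects:
                        return False                      # v must use a color from its own list
                    same = sum(1 for u in self.graph[v] if coloring.get(u) == c)
                    if same > defects[c]:
                        return False                      # too many neighbors share v's color
                return True

            def greedy(self, order=None):
                """Sequential greedy: give each node the first list color whose defect budget
                tolerates the already-colored neighbors.  It only respects budgets of nodes
                colored earlier, so on some instances the result can violate an earlier
                node's budget; is_valid() checks the final coloring."""
                coloring = {}
                for v in (order or self.graph):
                    for c, d in self.lists[v]:
                        same = sum(1 for u in self.graph[v] if coloring.get(u) == c)
                        if same <= d:
                            coloring[v] = c
                            break
                    else:
                        raise ValueError(f"greedy got stuck at node {v}")
                return coloring

        # Tiny example: a triangle where every node has the same two colors with defect 1,
        # so each node tolerates one same-colored neighbor.
        graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
        lists = {v: [("red", 1), ("blue", 1)] for v in graph}
        inst = ListDefectiveInstance(graph, lists)
        col = inst.greedy()
        assert inst.is_valid(col)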

    New Generic Constructions of Error-Correcting PIR and Efficient Instantiations

    A $b$-error-correcting $m$-server Private Information Retrieval (PIR) protocol enables a client to privately retrieve a data item of a database from $m$ servers even in the presence of $b$ malicious servers. List-decodable PIR is a generalization of error-correcting PIR to achieve a smaller number of servers at the cost of giving up unique decoding. Previous constructions of error-correcting and list-decodable PIR have exponential computational complexity in $m$ or cannot achieve sub-polynomial communication complexity $n^{o(1)}$, where $n$ is the database size. Recently, Zhang, Wang and Wang (ASIACCS 2022) presented a non-explicit construction of error-correcting PIR with $n^{o(1)}$ communication and polynomial computational overhead in $m$. However, their protocol requires the number of servers to be larger than the minimum one $m=2b+1$ and they left it as an open problem to reduce it. As for list-decodable PIR, there is no construction with $n^{o(1)}$ communication. In this paper, we propose new generic constructions of error-correcting and list-decodable PIR from any one-round regular PIR. Our constructions increase computational complexity only by a polynomial factor in $m$ while the previous generic constructions incur $\binom{m}{b}$ multiplicative overheads. Instantiated with the best-known protocols, our construction provides for the first time an explicit error-correcting PIR protocol with $n^{o(1)}$ communication, which reduces the number of servers of the protocol by Zhang, Wang and Wang (ASIACCS 2022). For sufficiently large $b$, we also show the existence of $b$-error-correcting PIR with $n^{o(1)}$ communication achieving the minimum number of servers, by allowing for two rounds of interaction. Furthermore, we show an extension to list-decodable PIR and obtain for the first time a protocol with $n^{o(1)}$ communication. Other instantiations improve the communication complexity of the state-of-the-art $t$-private protocols in which $t$ servers may collude. Along the way, we formalize the notion of \textit{locally surjective map families}, which generalize perfect hash families and may be of independent interest.
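
    For orientation, here is a hedged, textbook-style toy that is far simpler than the paper's $n^{o(1)}$-communication construction: a 1-private, b-error-correcting PIR in which the client hides the unit vector of its index behind a random mask, each server returns one point of a degree-1 polynomial whose slope is the requested record, and the client decodes that line even when up to b of the m >= 2b+2 answers are corrupted. All names and parameters below are illustrative.

        import random

        P = 2**31 - 1  # a prime field large enough for the toy database entries

        def query(n, index, m):
            """Client: mask the unit vector e_index with a random r; server j gets r + j*e_index.
            Each individual query vector is uniformly random, so a single server learns nothing."""
            r = [random.randrange(P) for _ in range(n)]
            queries = []
            for j in range(1, m + 1):
                q = list(r)
                q[index] = (q[index] + j) % P
                queries.append(q)
            return queries

        def answer(db, q):
            """Server: inner product of its query vector with the database (mod P)."""
            return sum(qi * xi for qi, xi in zip(q, db)) % P

        def reconstruct(answers, b):
            """Client: honest answers lie on a line f(j) = c + x*j with x = db[index].
            Find a line agreeing with at least m - b points (unique when m >= 2b + 2)."""
            m = len(answers)
            points = list(enumerate(answers, start=1))
            for (j1, a1) in points:
                for (j2, a2) in points:
                    if j1 == j2:
                        continue
                    x = ((a2 - a1) * pow(j2 - j1, -1, P)) % P
                    c = (a1 - x * j1) % P
                    agree = sum(1 for j, a in points if (c + x * j) % P == a)
                    if agree >= m - b:
                        return x
            raise ValueError("too many corrupted answers")

        # Demo: m = 6 servers tolerate b = 2 malicious answers.
        db = [5, 17, 42, 99, 7]
        m, b, index = 6, 2, 2
        qs = query(len(db), index, m)
        ans = [answer(db, q) for q in qs]
        ans[0] = (ans[0] + 123) % P   # two servers answer incorrectly
        ans[3] = (ans[3] + 456) % P
        assert reconstruct(ans, b) == db[index]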

    Secure Merge in Linear Time and O(log log N) Rounds

    Secure merge considers the problem of combining two sorted lists (which are either held separately by two parties, or held by two parties in some privacy-preserving manner, e.g. via secret-sharing), and outputting a single merged (sorted) list in a privacy-preserving manner (typically the final list is encrypted or secret-shared amongst the original two parties). Just as algorithms for \textit{insecure} merge are faster than comparison-based sorting ($\Theta(n)$ versus $\Theta(n \log n)$ for lists of size $n$), we explore protocols for performing a \textit{secure} merge that are more performant than simply invoking a secure sort protocol. Namely, we construct a semi-honest protocol that requires $O(n)$ communication and computation and $O(\log \log n)$ rounds of communication. This matches the metrics of the insecure merge for communication and computation, although it does not match the $O(1)$ round-complexity of insecure merge. Our protocol relies only on black-box use of basic secure primitives, like secure comparison and shuffle. Our protocol improves on previous work of [FNO22], which gave an $O(n)$ communication and $O(n)$ round complexity protocol, and other ``naive'' approaches, such as the shuffle-sort paradigm, which has $O(n \log n)$ communication and $O(\log n)$ round complexity. It is also more efficient for most practical applications than either a garbled circuit or fully-homomorphic encryption (FHE) approach, which each require $O(n \log n)$ communication or computation and have $O(1)$ round complexity. There are several applications that stand to benefit from our result, including secure sort (in cases where two or more parties have access to their own list of data, secure sort reduces to secure merge since the parties can first sort their own data locally), which in turn has implications for more efficient private set intersection (PSI) protocols; as well as secure mutable database storage and search, whereby secure merge can be used to insert new rows into an existing database. In building our secure merge protocol, we develop several subprotocols that may be of independent interest. For example, we develop a protocol for secure asymmetric merge (where one list is much larger than the other), which matches theoretic lower-bounds for all three metrics (assuming the ratio of list sizes is small enough).
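
    To make the insecure baseline concrete, here is a hedged plaintext sketch (nothing here is private or round-efficient): each party sorts its own list locally, which is why secure sort reduces to secure merge, and the two-pointer merge then performs $O(n)$ comparisons; secure_less_than is an illustrative stand-in for the black-box secure comparison primitive the protocol actually uses.

        def secure_less_than(a, b):
            """Stand-in for a black-box secure comparison; a real protocol would run this
            on secret-shared or encrypted values without revealing a or b."""
            return a < b

        def merge(left, right):
            """Classic two-pointer merge: O(n) comparisons, which the secure protocol matches
            in communication and computation (though not in its O(1) round complexity)."""
            out, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                if secure_less_than(left[i], right[j]):
                    out.append(left[i]); i += 1
                else:
                    out.append(right[j]); j += 1
            out.extend(left[i:])
            out.extend(right[j:])
            return out

        # Secure sort reduces to secure merge: each party sorts locally, then the lists are merged.
        alice_list = sorted([42, 7, 19, 3])
        bob_list = sorted([25, 8, 16])
        assert merge(alice_list, bob_list) == sorted(alice_list + bob_list)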

    On the Complexity of Recovering Incidence Matrices

    The incidence matrix of a graph is a fundamental object naturally appearing in many applications, involving graphs such as social networks, communication networks, or transportation networks. Often, the data collected about the incidence relations can have some slight noise. In this paper, we initiate the study of the computational complexity of recovering incidence matrices of graphs from a noisy binary matrix: given a binary matrix M that can be written as the superposition of two binary matrices L and S, where S is the incidence matrix of a graph from a specified graph class and L is a matrix of (i) small rank or (ii) small (Hamming) weight, identify all those graphs whose incidence matrices form part of such a superposition. Here, L represents the noise in the input matrix M. Another motivation for this problem comes from the Matroid Minors project of Geelen, Gerards and Whittle, where perturbed graphic and co-graphic matroids play a prominent role. There, it is expected that a perturbed binary matroid (or its dual) is presented as L+S, where L is a low-rank matrix and S is the incidence matrix of a graph. Here, we address the complexity of constructing such a decomposition. When L is of small rank, we show that the problem is NP-complete, but it can be decided in time (mn)^O(r), where m and n are the dimensions of M and r is an upper bound on the rank of L. When L is of small weight, the problem is solvable in polynomial time (mn)^O(1). Furthermore, in many applications it is desirable to have the list of all possible solutions for further analysis. We show that our algorithms naturally extend to enumeration algorithms for the above two problems with delay (mn)^O(r) and (mn)^O(1), respectively, between consecutive outputs.
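
    As a small illustration of the small-weight variant (a brute-force sketch, not the paper's polynomial-time algorithm; reading "superposition" as an entrywise XOR and restricting to simple graphs are assumptions made only for this toy): flip at most w entries of M and keep every choice for which the result is an incidence matrix, i.e., every column has exactly two ones.

        from itertools import combinations, product

        def is_incidence_matrix(M):
            """Incidence matrix of a simple undirected graph: every column (edge) has exactly
            two ones, and no two columns are identical (no parallel edges)."""
            if not M or not M[0]:
                return False
            cols = list(zip(*M))
            return (all(sum(col) == 2 for col in cols)
                    and len(set(cols)) == len(cols))

        def recover_small_weight(M, w):
            """Try every noise matrix L of Hamming weight at most w (interpreting M = L XOR S)
            and return all decompositions where S = M XOR L is an incidence matrix.
            Exponential in w -- a brute-force illustration only."""
            m, n = len(M), len(M[0])
            cells = list(product(range(m), range(n)))
            solutions = []
            for k in range(w + 1):
                for flips in combinations(cells, k):
                    S = [row[:] for row in M]
                    for i, j in flips:
                        S[i][j] ^= 1
                    if is_incidence_matrix(S):
                        solutions.append((set(flips), S))
            return solutions

        # A path on 3 vertices (edges 1-2 and 2-3) with one flipped noise bit at (0, 1).
        S_true = [[1, 0],
                  [1, 1],
                  [0, 1]]
        M = [row[:] for row in S_true]
        M[0][1] ^= 1
        found = recover_small_weight(M, w=1)
        assert any(flips == {(0, 1)} for flips, _ in found)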

    A short list of Equalities induces large sign-rank

    We exhibit a natural function F_n on n variables that can be computed by just a linear-size decision list of "Equalities," but whose sign-rank is 2^{Ω(n^{1/4})}. This yields the following two new unconditional complexity class separations. 1. Boolean circuit complexity. The function F_n can be computed by linear-size depth-two threshold formulas when the weights of the threshold gates are unrestricted (THR ∘ THR), but any THR ∘ MAJ circuit (the weights of the bottom threshold gates are polynomially bounded in n) computing F_n requires size 2^{Ω(n^{1/4})}. This provides the first separation between the Boolean circuit complexity classes THR ∘ MAJ and THR ∘ THR. While Amano and Maruoka [Proceedings of the 30th International Symposium on Mathematical Foundations of Computer Science, 2005, pp. 107-118] and Hansen and Podolskii [Proceedings of the 25th Annual IEEE Conference on Computational Complexity, 2010, pp. 270-279] emphasized that superpolynomial separations between the two classes remained a basic open problem, our separation is in fact exponential. In contrast, Goldmann, Håstad, and Razborov [Comput. Complexity, 2 (1992), pp. 277-300] showed more than twenty-five years ago that functions efficiently computable by MAJ ∘ THR circuits can also be efficiently computed by MAJ ∘ MAJ circuits. In view of this, it was not even clear if THR ∘ THR was significantly more powerful than THR ∘ MAJ until our work, and there was no candidate function identified for the potential separation. 2. Communication complexity. The function F_n (under the natural partition of the inputs) lies in the communication complexity class P^MA. Since F_n has large sign-rank, this implies P^MA ⊈ UPP, strongly resolving a recent open problem posed by Göös, Pitassi, and Watson [Comput. Complexity, 27 (2018), pp. 245-304]. In order to prove our main result, we view F_n as an XOR function and develop a technique to lower bound the sign-rank of such functions. This requires novel approximation-theoretic arguments against polynomials of unrestricted degree. Further, our work highlights for the first time the class "decision lists of exact thresholds" as a common frontier for making progress on longstanding open problems in threshold circuits and communication complexity.
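
    To fix intuition for the model (a generic hedged sketch; the paper's specific F_n is not reproduced here): a decision list of Equalities scans a fixed sequence of Equality queries on blocks of Alice's and Bob's inputs and outputs the label of the first one that fires. The ±1 outputs form the sign matrix M[x, y] = f(x, y) whose sign-rank the paper lower-bounds.

        def eq(x, y, block):
            """One 'Equality' query: do x and y agree on the given coordinate block?"""
            return all(x[i] == y[i] for i in block)

        def decision_list_of_equalities(rules, default):
            """rules: list of (block, label_if_equal); the list is scanned top to bottom and
            the first firing Equality decides the output, otherwise the default label is used."""
            def f(x, y):
                for block, label in rules:
                    if eq(x, y, block):
                        return label
                return default
            return f

        # Toy example on 6-bit inputs split into three 2-bit blocks.
        rules = [((0, 1), +1),   # if the first blocks of x and y are equal, output +1
                 ((2, 3), -1),   # else if the middle blocks are equal, output -1
                 ((4, 5), +1)]   # else if the last blocks are equal, output +1
        f = decision_list_of_equalities(rules, default=-1)
        assert f("010111", "011000") == +1   # first blocks "01" match
        assert f("001101", "011100") == -1   # middle blocks match, first blocks do not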

    Distributed Reconfiguration of Maximal Independent Sets

    In this paper, we investigate a distributed maximal independent set (MIS) reconfiguration problem, in which there are two maximal independent sets for which every node is given its membership status, and the nodes need to communicate with their neighbors in order to find a reconfiguration schedule that switches from the first MIS to the second. Such a schedule is a list of independent sets that is restricted by forbidding two neighbors to change their membership status at the same step. In addition, these independent sets should provide some covering guarantee. We show that obtaining an actual MIS (and even a 3-dominating set) in each intermediate step is impossible. However, we provide efficient solutions when the intermediate sets are only required to be independent and 4-dominating, which is almost always possible, as we fully characterize. Consequently, our goal is to pin down the tradeoff between the possible length of the schedule and the number of communication rounds. We prove that a constant length schedule can be found in O(MIS + R32) rounds, where MIS is the complexity of finding an MIS in a worst-case graph and R32 is the complexity of finding a (3,2)-ruling set. For bounded degree graphs, this is O(log^* n) rounds and we show that it is necessary. On the other extreme, we show that with a constant number of rounds we can find a linear length schedule.
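
    For concreteness, here is a hedged checker for the schedule object described above (function names are illustrative, and "k-dominating" is taken here to mean that every node is within distance k of the set): each step must be an independent, 4-dominating set, and no two neighbors may change membership in the same step.

        from collections import deque

        def within_distance(graph, sources, k):
            """Breadth-first search: return the set of nodes within distance k of `sources`."""
            dist = {s: 0 for s in sources}
            queue = deque(sources)
            while queue:
                v = queue.popleft()
                if dist[v] == k:
                    continue
                for u in graph[v]:
                    if u not in dist:
                        dist[u] = dist[v] + 1
                        queue.append(u)
            return set(dist)

        def is_independent(graph, s):
            """No two members of s are adjacent."""
            return not any(u in s for v in s for u in graph[v])

        def valid_schedule(graph, schedule, k=4):
            """Check a reconfiguration schedule: every step is an independent, k-dominating
            set, and no two neighbors change their membership status in the same step."""
            nodes = set(graph)
            for s in schedule:
                if not is_independent(graph, s):
                    return False
                if within_distance(graph, s, k) != nodes:
                    return False
            for a, b in zip(schedule, schedule[1:]):
                changed = a ^ b                      # nodes whose membership flipped this step
                if any(u in changed for v in changed for u in graph[v]):
                    return False
            return True

        # Path 1-2-3-4-5: switch from the MIS {1, 3, 5} to the MIS {2, 4}.
        graph = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
        schedule = [{1, 3, 5}, {1, 5}, {1}, {4}, {2, 4}]
        assert valid_schedule(graph, schedule)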

    An Iterative Soft Decision Based LR-Aided MIMO Detector

    The demand for wireless, high-rate communication systems is growing steadily, and multiple-input multiple-output (MIMO) is one of the feasible solutions to accommodate this demand owing to its spatial multiplexing and diversity gain. However, with a high number of antennas, the computational and hardware complexity of MIMO increases exponentially. This growing complexity is a paramount problem in MIMO detection systems, directly leading to large power consumption. Hence, the major focus of this dissertation is the algorithmic and hardware development of reduced-complexity MIMO decoders in both the real and the complex domain, offering power efficiency and high throughput. Both hard- and soft-decision MIMO detectors are considered. The use of a lattice reduction (LR) algorithm to reduce noise propagation and of on-demand child expansion to reduce the number of node calculations are two of the key features of the architecture developed in this dissertation. The real-domain iterative soft MIMO decoding algorithm, simulated for 4 × 4 MIMO with different modulation schemes, achieves a 1.1 to 2.7 dB improvement over the List Sphere Decoder (LSD) and more than an 8x reduction in the list size K as well as in the complexity of the detector. Next, the iterative real-domain K-Best decoder is extended to the complex domain with a new detection scheme. It attains a 6.9 to 8.0 dB improvement over the real-domain K-Best decoder and 1.4 to 2.5 dB better performance than a conventional complex decoder for 8 × 8 MIMO with 64-QAM modulation. Besides K, a new adjustable parameter, Rlimit, has been introduced to add re-configurability, trading off complexity against performance. After that, a novel low-power hardware architecture of the complex decoder is developed for 8 × 8 MIMO with 64-QAM modulation. A total word length of only 16 bits has been adopted, limiting the bit error rate (BER) degradation to 0.3 dB with K and Rlimit equal to 4. The proposed VLSI architecture is modeled in Verilog HDL using Xilinx tools and synthesized using Synopsys Design Vision in 45 nm CMOS technology. According to the synthesis results, it achieves a throughput of 1090.8 Mbps with a power consumption of 580 mW and a latency of 0.33 us. The maximum frequency of the proposed design is 181.8 MHz. All of the proposed decoders mentioned above are bounded by a fixed K. Hence, an adaptive real-domain K-Best decoder is further developed to achieve similar performance with a smaller K, thereby reducing the computational complexity of the decoder. It does not require an accurate SNR measurement to perform the initial estimation of the list size K. Instead, the difference between the first two minimal distances is considered, which inherently reduces complexity. In summary, a novel iterative K-Best detector for both the real and the complex domain, together with an efficient VLSI design, is proposed in this dissertation. Results from extensive simulation and HDL modeling, with analysis using Synopsys tools, are also presented to justify and validate the proposed work.
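
    As a reference point for the algorithmic core (a hedged, plain real-domain K-Best sketch in Python with numpy; it omits the lattice reduction, on-demand child expansion, soft outputs, and the adaptive K described above): QR-decompose the channel, walk the detection tree from the last antenna to the first, and keep only the K lowest-cost partial candidates at each level.

        import numpy as np

        def k_best_detect(H, y, constellation, K):
            """Hard-output real-domain K-Best detection for y = H x + noise.
            Returns the candidate symbol vector with the smallest accumulated metric."""
            Q, R = np.linalg.qr(H)
            z = Q.T @ y
            n = H.shape[1]
            # Each survivor is (partial_cost, partial_symbols) covering levels i..n-1.
            survivors = [(0.0, [])]
            for i in range(n - 1, -1, -1):
                candidates = []
                for cost, partial in survivors:
                    for s in constellation:
                        x_tail = np.array([s] + partial)          # symbols for levels i..n-1
                        residual = z[i] - R[i, i:] @ x_tail
                        candidates.append((cost + residual**2, [s] + partial))
                candidates.sort(key=lambda c: c[0])
                survivors = candidates[:K]                         # keep the K best partial paths
            best_cost, best_x = survivors[0]
            return np.array(best_x)

        # 4 x 4 real-valued toy MIMO with a 4-PAM constellation (one dimension of 16-QAM);
        # the channel is kept well conditioned so the demo reliably recovers x.
        rng = np.random.default_rng(0)
        constellation = [-3.0, -1.0, 1.0, 3.0]
        H = np.eye(4) + 0.05 * rng.normal(size=(4, 4))
        x = rng.choice(constellation, size=4)
        y = H @ x + 0.01 * rng.normal(size=4)
        assert np.array_equal(k_best_detect(H, y, constellation, K=4), x)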