
    Improved Hardness of Approximating k-Clique under ETH

    In this paper, we prove that assuming the Exponential Time Hypothesis (ETH), there is no $f(k)\cdot n^{k^{o(1/\log\log k)}}$-time algorithm that can decide whether an $n$-vertex graph contains a clique of size $k$ or contains no clique of size $k/2$, and no FPT algorithm can decide whether an input graph has a clique of size $k$ or no clique of size $k/f(k)$, where $f(k)$ is some function in $k^{1-o(1)}$. Our results significantly improve the previous works [Lin21, LRSW22]. The crux of our proof is a framework to construct gap-producing reductions for the $k$-Clique problem. More precisely, we show that given an error-correcting code $C:\Sigma_1^k\to\Sigma_2^{k'}$ that is locally testable and smooth locally decodable in the parallel setting, one can construct a reduction which, on input a graph $G$, outputs a graph $G'$ in $(k')^{O(1)}\cdot n^{O(\log|\Sigma_2|/\log|\Sigma_1|)}$ time such that:
    • If $G$ has a clique of size $k$, then $G'$ has a clique of size $K$, where $K = (k')^{O(1)}$.
    • If $G$ has no clique of size $k$, then $G'$ has no clique of size $(1-\varepsilon)\cdot K$ for some constant $\varepsilon\in(0,1)$.
    We then construct such a code with $k' = k^{\Theta(\log\log k)}$ and $|\Sigma_2| = |\Sigma_1|^{k^{0.54}}$, establishing the hardness results above. Our code generalizes the derivative code [WY07] to the case of a super-constant order of derivatives. Comment: 48 pages.
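
    To make the parameters concrete, here is the arithmetic implied by the stated code parameters (a back-of-the-envelope reading, not a claim from the paper): since $|\Sigma_2| = |\Sigma_1|^{k^{0.54}}$, the exponent $\log|\Sigma_2|/\log|\Sigma_1|$ equals $k^{0.54}$, so the reduction runs in time
        \[
          (k')^{O(1)} \cdot n^{O(\log|\Sigma_2|/\log|\Sigma_1|)}
            \;=\; k^{O(\log\log k)} \cdot n^{O(k^{0.54})},
          \qquad
          K \;=\; (k')^{O(1)} \;=\; k^{O(\log\log k)},
        \]
    i.e., it maps an exact $k$-Clique instance to a constant-gap instance whose parameter $K = k^{O(\log\log k)}$ is only slightly super-polynomial in $k$.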

    Automated Tail Bound Analysis for Probabilistic Recurrence Relations

    Probabilistic recurrence relations (PRRs) are a standard formalism for describing the runtime of a randomized algorithm. Given a PRR and a time limit $\kappa$, we consider the classical concept of tail probability $\Pr[T \ge \kappa]$, i.e., the probability that the randomized runtime $T$ of the PRR exceeds the time limit $\kappa$. Our focus is the formal analysis of tail bounds, which aims at finding a tight asymptotic upper bound $u \geq \Pr[T \ge \kappa]$ in the time limit $\kappa$. To address this problem, the classical and most well-known approach is the cookbook method by Karp (JACM 1994), while other approaches are mostly limited to deriving tail bounds of specific PRRs via involved custom analysis. In this work, we propose a novel approach for deriving exponentially-decreasing tail bounds (a common type of tail bound) for PRRs whose preprocessing time and random passed sizes observe discrete or (piecewise) uniform distributions and whose recursive call is either a single procedure call or a divide-and-conquer. We first establish a theoretical approach via Markov's inequality, and then instantiate it with a template-based algorithmic approach via a refined treatment of exponentiation. Experimental evaluation shows that our algorithmic approach derives tail bounds that (i) are asymptotically tighter than Karp's method, (ii) match the best-known manually-derived asymptotic tail bound for QuickSelect, and (iii) are only slightly worse (by a $\log\log n$ factor) than the manually-proven optimal asymptotic tail bound for QuickSort. Moreover, our algorithmic approach handles all examples (including realistic PRRs such as QuickSort, QuickSelect, DiameterComputation, etc.) in less than 0.1 seconds, showing that our approach is efficient in practice. Comment: 46 pages, 15 figures.
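
    The Markov-inequality starting point mentioned in the abstract is, in its standard form, an exponential-moment bound; as a reminder of the shape of bound being synthesized (standard background, not the paper's actual derivation), for any constant $c > 1$:
        \[
          \Pr[T \ge \kappa] \;=\; \Pr\!\left[c^{T} \ge c^{\kappa}\right] \;\le\; \frac{\mathbb{E}\!\left[c^{T}\right]}{c^{\kappa}},
        \]
    so any finite bound on the moment $\mathbb{E}[c^{T}]$ yields a tail bound decreasing exponentially in $\kappa$; per the abstract, the template-based algorithmic step automates the search for such bounds via a refined treatment of the exponentiation.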

    Parameterized Inapproximability Hypothesis under ETH

    The Parameterized Inapproximability Hypothesis (PIH) asserts that no fixed-parameter tractable (FPT) algorithm can distinguish a satisfiable CSP instance, parameterized by the number of variables, from one where every assignment fails to satisfy an $\varepsilon$ fraction of constraints, for some absolute constant $\varepsilon > 0$. PIH plays the role of the PCP theorem in parameterized complexity. However, PIH has only been established under Gap-ETH, a very strong assumption with an inherent gap. In this work, we prove PIH under the Exponential Time Hypothesis (ETH). This is the first proof of PIH from a gap-free assumption. Our proof is self-contained and elementary. We identify an ETH-hard CSP whose variables take vector values and whose constraints are either linear or of a special parallel structure. Both kinds of constraints can be checked with constant soundness via a "parallel PCP of proximity" based on the Walsh-Hadamard code.
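
    The "parallel PCP of proximity based on the Walsh-Hadamard code" builds on classical Hadamard-code machinery. The Python sketch below is not the paper's construction; it only illustrates the Walsh-Hadamard encoding over $\mathbb{F}_2$ and the Blum-Luby-Rubinfeld (BLR) linearity test that Hadamard-based proof systems typically start from.

        import random

        # Background illustration only: the Walsh-Hadamard code over F_2 and the
        # classical BLR linearity test (not the paper's parallel PCP of proximity).

        def hadamard_encode(x):
            """Encode x in F_2^n as the table of all inner products <x, y> mod 2."""
            n = len(x)
            table = {}
            for i in range(2 ** n):
                y = tuple((i >> j) & 1 for j in range(n))
                table[y] = sum(a * b for a, b in zip(x, y)) % 2
            return table

        def blr_linearity_test(f, n, trials=100):
            """Accept iff f(y) + f(z) = f(y XOR z) for `trials` random pairs (y, z)."""
            for _ in range(trials):
                y = tuple(random.randint(0, 1) for _ in range(n))
                z = tuple(random.randint(0, 1) for _ in range(n))
                yz = tuple((a + b) % 2 for a, b in zip(y, z))
                if (f[y] + f[z]) % 2 != f[yz]:
                    return False
            return True

        codeword = hadamard_encode((1, 0, 1, 1))
        print(blr_linearity_test(codeword, 4))  # True: every Hadamard codeword is linear

    A table that passes the test with high probability is close to a linear function, i.e., close to some Hadamard codeword; this local testability is the property that PCP-style checks exploit.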

    TreeGen: A Tree-Based Transformer Architecture for Code Generation

    A code generation system generates programming language code based on an input natural language description. State-of-the-art approaches rely on neural networks for code generation. However, these code generators suffer from two problems. One is the long dependency problem, where a code element often depends on another far-away code element. A variable reference, for example, depends on its definition, which may appear quite a few lines before. The other problem is structure modeling, as programs contain rich structural information. In this paper, we propose a novel tree-based neural architecture, TreeGen, for code generation. TreeGen uses the attention mechanism of Transformers to alleviate the long-dependency problem, and introduces a novel AST reader (encoder) to incorporate grammar rules and AST structures into the network. We evaluated TreeGen on a Python benchmark, HearthStone, and two semantic parsing benchmarks, ATIS and GEO. TreeGen outperformed the previous state-of-the-art approach by 4.5 percentage points on HearthStone, and achieved the best accuracy among neural network-based approaches on ATIS (89.1%) and GEO (89.6%). We also conducted an ablation test to better understand each component of our model.
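
    Tree-based generators such as TreeGen predict a program as a structured derivation rather than a flat token sequence. The sketch below is not TreeGen's representation or model; it merely illustrates, with Python's ast module, one common linearization of a program into a pre-order sequence of AST productions, the kind of grammar-rule sequence such decoders emit step by step (the "Parent -> field:ChildType" format is hypothetical).

        import ast

        # Illustration only: linearize a Python AST into a pre-order sequence of
        # "Parent -> field:ChildType" productions (a hypothetical rule format).

        def linearize(node):
            """Return the pre-order list of productions rooted at `node`."""
            rules = []
            for field, value in ast.iter_fields(node):
                children = value if isinstance(value, list) else [value]
                for child in children:
                    if isinstance(child, ast.AST):
                        rules.append(f"{type(node).__name__} -> {field}:{type(child).__name__}")
                        rules.extend(linearize(child))
            return rules

        tree = ast.parse("x = foo(1) + bar")
        for rule in linearize(tree):
            print(rule)

    Predicting rule sequences like this, rather than raw tokens, is what allows a decoder to incorporate grammar constraints and tree structure.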

    Microwave-based preparation and characterization of Fe-cored carbon nanocapsules with novel stability and super electromagnetic wave absorption performance

    Microwave-metal discharge was proposed as a facile methodology to prepare unique Fe-cored carbon nanocapsules (Fe@CNCs) with high purity, novel stability and extraordinary electromagnetic wave (EMW) absorption performance. The effects of microwave power, irradiation time and the cyclohexane/ferrocene ratio on the production of Fe@CNCs were examined, and the properties of the nanocapsules, such as their Fe content, phase, yield, degree of graphitization and associated microstructures, were investigated in detail. It was found that the prepared Fe@CNCs, which can easily be separated from the reaction system, displayed exceedingly high EMW absorption performance over the 2–18 GHz range. At minimal reflection loss (RL) values over −10 dB, the EMW absorption bandwidth can reach up to 13.8 GHz with an absorber thickness of 1.5–5 mm. In addition, notable thermo-oxidative stability and anti-corrosion properties were also obtained for the Fe@CNCs, as no signs of corrosion or oxidative degradation were observed in accelerated degradation tests in air and acid at temperatures up to 420 °C. The exceedingly high EMW absorption performance, coupled with the superior anti-degradation and anti-corrosion properties of the prepared nanocapsules, highlights the capability of microwave-metal discharge for synthesizing advanced metal-cored carbon nanocapsules with promising application potential in diverse fields such as microwave absorption, EM shielding and advanced separations.

    Correlation Analysis Between Required Surgical Indexes and Complications in Patients With Coronary Heart Disease

    A total of 215 patients with coronary heart disease (CHD) were analyzed with SPSS. Samples of different genders showed significant differences in the obtuse marginal branch of the left circumflex branch × 1, the diagonal branch D1 × 1, and the ms PV representation. Patients with left circumflex branch occlusion are more often male and tend to be younger. Age showed a positive correlation with left intima-media thickness (IMT) and right IMT, indicating that left and right IMT values increase with age. Samples of different CHD types showed significant differences in the obtuse marginal branch of the left circumflex branch × 1, the middle part of the RCA × 1, and the middle part of the left anterior descending branch × 1.5. For non-ST-segment elevation angina pectoris with acute total vascular occlusion, the left circumflex artery is the most common culprit vessel, followed by the right coronary artery and the anterior descending branch. Ultrasound of carotid IMT in patients with CHD can predict changes in left ventricular function, but no specific correlation between left and right common carotid IMT was found. Samples with or without a medical history of ASCVD showed significant differences in the number of diseased coronary vessel branches: patients with atherosclerotic cardiovascular disease (ASCVD) had more diseased branches than those without ASCVD. The occurrence of complications is significantly related to the distal left circumflex branch × 1, the middle segment of the left anterior descending branch × 1.5, and the distal left anterior descending branch × 1: for patients without complications, the values in the distal left circumflex branch × 1, the middle left anterior descending branch × 1.5, and the distal left anterior descending branch × 1 were higher than those for patients with complications. The VTE scores showed a positive correlation with the proximal part of the RCA × 1, the number of diseased coronary vessel branches, the posterior descending branch of the left circumflex branch × 1, the distal part of the left circumflex branch × 1, and the middle part of the left anterior descending branch × 1.5.

    On Lower Bounds of Approximating Parameterized k-Clique

    Given a simple graph G and an integer k, the goal of the k-Clique problem is to decide if G contains a complete subgraph of size k. We say an algorithm approximates k-Clique within a factor g(k) if it can find a clique of size at least k/g(k) when G is guaranteed to have a k-clique. Recently, it was shown that approximating k-Clique within a constant factor is W[1]-hard [Bingkai Lin, 2021]. We study the approximation of k-Clique under the Exponential Time Hypothesis (ETH). The reduction of [Bingkai Lin, 2021] already implies an $n^{\Omega(\sqrt[6]{\log k})}$-time lower bound under ETH. We improve this lower bound to $n^{\Omega(\log k)}$. Using the gap-amplification technique by expander graphs, we also prove that there is no $k^{o(1)}$-factor FPT-approximation algorithm for k-Clique under ETH. We also suggest a new way to prove the Parameterized Inapproximability Hypothesis (PIH) under ETH. We show that if there is no $n^{O(k/\log k)}$-time algorithm to approximate k-Clique within a constant factor, then PIH is true.
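
    For intuition about gap amplification (a generic product argument, not the paper's expander-based construction, which is designed to control the size blow-up): the clique number is multiplicative under the strong graph product, $\omega(G \boxtimes H) = \omega(G)\,\omega(H)$, so $t$-fold powering turns a $(1-\varepsilon)$ gap into a $(1-\varepsilon)^{t}$ gap at the cost of $n^{t}$ vertices:
        \[
          \omega(G) \ge k \;\Longrightarrow\; \omega\!\left(G^{\boxtimes t}\right) \ge k^{t},
          \qquad
          \omega(G) < (1-\varepsilon)k \;\Longrightarrow\; \omega\!\left(G^{\boxtimes t}\right) < (1-\varepsilon)^{t} k^{t}.
        \]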

    LNCS

    Probabilistic recurrence relations (PRRs) are a standard formalism for describing the runtime of a randomized algorithm. Given a PRR and a time limit κ, we consider the tail probability Pr[T ≥ κ], i.e., the probability that the randomized runtime T of the PRR exceeds κ. Our focus is the formal analysis of tail bounds, which aims at finding a tight asymptotic upper bound u ≥ Pr[T ≥ κ]. To address this problem, the classical and most well-known approach is the cookbook method by Karp (JACM 1994), while other approaches are mostly limited to deriving tail bounds of specific PRRs via involved custom analysis. In this work, we propose a novel approach for deriving the common exponentially-decreasing tail bounds for PRRs whose preprocessing time and random passed sizes observe discrete or (piecewise) uniform distributions and whose recursive call is either a single procedure call or a divide-and-conquer. We first establish a theoretical approach via Markov's inequality, and then instantiate it with a template-based algorithmic approach via a refined treatment of exponentiation. Experimental evaluation shows that our algorithmic approach derives tail bounds that (i) are asymptotically tighter than Karp's method, (ii) match the best-known manually-derived asymptotic tail bound for QuickSelect, and (iii) are only slightly worse (by a log log n factor) than the manually-proven optimal asymptotic tail bound for QuickSort. Moreover, our algorithmic approach handles all examples (including realistic PRRs such as QuickSort, QuickSelect, DiameterComputation, etc.) in less than 0.1 s, showing that our approach is efficient in practice.
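
    For intuition about the objects being bounded (a toy illustration, not the paper's method), the Python sketch below simulates a simplified QuickSelect-style PRR, T(n) = n + T(H(n)) with H(n) uniform on {0, ..., n-1}, and estimates the tail probability Pr[T ≥ κ] empirically; the rapid drop-off in κ is the exponentially decreasing behavior that the paper's templates certify symbolically.

        import random

        # Toy model: T(n) = n + T(H(n)), H(n) uniform on {0, ..., n-1}, T(0) = 0.
        # This is a simplified QuickSelect-style PRR, used here only to make the
        # notion of a tail probability Pr[T >= kappa] concrete.

        def simulate_prr(n):
            """Sample one run of the recurrence and return its total cost T."""
            total = 0
            while n > 0:
                total += n               # preprocessing cost of this call
                n = random.randrange(n)  # size passed to the single recursive call
            return total

        def empirical_tail(n, kappa, runs=20000):
            """Monte Carlo estimate of Pr[T >= kappa]."""
            return sum(simulate_prr(n) >= kappa for _ in range(runs)) / runs

        n = 1000
        for mult in (2, 4, 8):
            print(f"kappa = {mult}n: Pr[T >= kappa] ~ {empirical_tail(n, mult * n):.4f}")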