
    A general theory of DNA-mediated and other valence-limited interactions

    We present a general theory for predicting the interaction potentials between DNA-coated colloids, and more broadly, any particles that interact via valence-limited ligand-receptor binding. Our theory correctly incorporates the configurational and combinatorial entropic factors that play a key role in valence-limited interactions. By rigorously enforcing self-consistency, it achieves near-quantitative accuracy with respect to detailed Monte Carlo calculations. With suitable approximations and in particular geometries, our theory reduces to previous successful treatments, which are now united in a common and extensible framework. We expect our tools to be useful to other researchers investigating ligand-mediated interactions. A complete and well-documented Python implementation is freely available at http://github.com/patvarilly/DNACC. (Comment: 18 pages, 10 figures.)
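    To make the self-consistent approach concrete, here is a minimal Python sketch of the kind of mean-field calculation the abstract alludes to. It is not the DNACC package API; the fixed-point equation and the free-energy expression are one common mean-field formulation of valence-limited binding, and the toy dG matrix is an illustrative assumption.

```python
# Minimal sketch (NOT the DNACC API): a mean-field, self-consistent estimate of
# ligand-receptor binding in a valence-limited system. Binder i can form at most
# one bond; dG[i, j] is the binding free energy of pair (i, j) in units of kT.
# The toy dG matrix below is an illustrative assumption, not data from the paper.
import numpy as np

def binding_probabilities(dG, max_iter=2000, tol=1e-12):
    """Solve the self-consistent equations p_i = 1 / (1 + sum_j exp(-dG[i,j]) * p_j).

    p_i is the probability that binder i remains unbound; the coupling through p_j
    enforces that a partner already used by one bond is unavailable for another.
    """
    K = np.exp(-np.asarray(dG, dtype=float))  # bond strengths exp(-beta * dG)
    p = np.ones(K.shape[0])
    for _ in range(max_iter):
        p_new = 1.0 / (1.0 + K @ p)
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = 0.5 * (p + p_new)  # damped update for robust convergence
    return p

def attraction_free_energy(dG):
    """One common mean-field form of the binding free energy (in kT):
    F = sum_i [ ln p_i + (1 - p_i) / 2 ]."""
    p = binding_probabilities(dG)
    return float(np.sum(np.log(p) + 0.5 * (1.0 - p)))

if __name__ == "__main__":
    # Toy system: 4 binders, every pair can bind with dG = -2 kT (illustrative).
    dG = np.full((4, 4), -2.0)
    np.fill_diagonal(dG, np.inf)  # a binder cannot bind itself
    print("unbound probabilities:", binding_probabilities(dG))
    print("attraction free energy (kT):", attraction_free_energy(dG))
```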

    On the Pseudo-Deterministic Query Complexity of NP Search Problems

    We study pseudo-deterministic query complexity: randomized query algorithms that are required to output the same answer with high probability on all inputs. We prove Ω(√n) lower bounds on the pseudo-deterministic complexity of a large family of search problems based on unsatisfiable random CNF instances, and also for the promise problem FIND1 of finding a 1 in a vector in which at least half of the entries are ones. This gives an exponential separation between randomized query complexity and pseudo-deterministic complexity, which is tight in the quantum setting. As applications we partially solve a related combinatorial coloring problem, and we separate random tree-like Resolution from its pseudo-deterministic version. In contrast to our lower bound, we show, surprisingly, that in the zero-error, average-case setting, the three notions (deterministic, randomized, pseudo-deterministic) collapse.
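    For readers unfamiliar with the terminology, the following LaTeX snippet spells out the pseudo-deterministic requirement and the FIND1 promise problem referred to above; the notation is assumed for illustration rather than quoted from the paper.

```latex
% Illustrative formalization; notation is assumed, not quoted from the paper.
A randomized query algorithm $A$ for a search relation
$S \subseteq \{0,1\}^n \times \mathcal{O}$ is \emph{pseudo-deterministic} if there is a
canonical answer $s(x)$ for every valid input $x$ such that
\[
  \Pr_r\bigl[A(x; r) = s(x)\bigr] \;\ge\; 2/3 .
\]
For the promise problem $\mathrm{FIND1}$ (given $x \in \{0,1\}^n$ with at least $n/2$
ones, output an index $i$ with $x_i = 1$), sampling $O(1)$ random coordinates already
succeeds with high probability, so the ordinary randomized query complexity is $O(1)$,
whereas the lower bound above shows that any algorithm pinned to a canonical answer
needs $\Omega(\sqrt{n})$ queries.
```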

    Subspace-Invariant AC^0 Formulas

    We consider the action of a linear subspace U of {0,1}^n on the set of AC^0 formulas with inputs labeled by literals in the set {X_1, ¬X_1, …, X_n, ¬X_n}, where an element u ∈ U acts on formulas by transposing the i-th pair of literals for all i ∈ [n] such that u_i = 1. A formula is U-invariant if it is fixed by this action. For example, there is a well-known recursive construction of depth d+1 formulas of size O(n · 2^{d n^{1/d}}) computing the n-variable PARITY function; these formulas are easily seen to be P-invariant, where P is the subspace of even-weight elements of {0,1}^n. In this paper we establish a nearly matching 2^{d(n^{1/d}-1)} lower bound on the P-invariant depth d+1 formula size of PARITY. Quantitatively this improves the best known Ω(2^{(1/84) d(n^{1/d}-1)}) lower bound for unrestricted depth d+1 formulas, while avoiding the use of the switching lemma. More generally, for any linear subspaces U ⊂ V, we show that if a Boolean function is U-invariant and non-constant over V, then its U-invariant depth d+1 formula size is at least 2^{d(m^{1/d}-1)}, where m is the minimum Hamming weight of a vector in U^⊥ \ V^⊥.
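    A tiny worked example may help make the group action concrete. The LaTeX sketch below is illustrative only; in particular, it glosses over exactly how syntactic equality of formulas is formalized in the paper.

```latex
% Small worked example of the invariance notion defined above (illustrative only).
Take $n = 2$ and the even-weight subspace $P = \{00, 11\}$. The depth-2 formula
\[
  \mathrm{PARITY}(X_1, X_2) \;=\; (X_1 \wedge \overline{X}_2) \vee (\overline{X}_1 \wedge X_2)
\]
is $P$-invariant: the nontrivial element $u = 11$ transposes both literal pairs,
mapping the formula to $(\overline{X}_1 \wedge X_2) \vee (X_1 \wedge \overline{X}_2)$,
i.e., the same formula with the two OR branches swapped, so the formula (viewed up to
the order of a gate's inputs) is fixed by the action.
```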

    Criticality of Regular Formulas


    On Disperser/Lifting Properties of the Index and Inner-Product Functions

    Query-to-communication lifting theorems, which connect the query complexity of a Boolean function to the communication complexity of an associated "lifted" function obtained by composing the function with many copies of another function known as a gadget, have been instrumental in resolving many open questions in computational complexity. A number of important complexity questions could be resolved if we could make substantial improvements in the input size required for lifting with the Index function, which is a universal gadget for lifting, from its current near-linear size down to polylogarithmic in the number of inputs N of the original function or, ideally, constant. The near-linear size bound was recently shown by Lovett, Meka, Mertz, Pitassi and Zhang [Shachar Lovett et al., 2022], who used a recent breakthrough improvement on the Sunflower Lemma to show that a certain graph associated with an Index function of that size is a disperser. They also stated a conjecture about the Index function that is essential for further improvements in the size required for lifting with Index using current techniques. In this paper we prove the following:
    - The conjecture of Lovett et al. is false when the size of the Index gadget is less than logarithmic in N.
    - The same limitation applies to the Inner-Product function. More precisely, the Inner-Product function, which is known to satisfy the disperser property at size O(log N), also does not have this property when its size is less than log N.
    - Notwithstanding the above, we prove a lifting theorem that applies to Index gadgets of any size at least 4 and yields lower bounds for a restricted class of communication protocols in which one of the players is limited to sending parities of its inputs.
    - Using a modification of the same idea with improved lifting parameters, we derive a strong lifting theorem from decision tree size to parity decision tree size. We use this, in turn, to derive a general lifting theorem in proof complexity from tree-resolution size to tree-like Res(⊕) refutation size, which yields many new exponential lower bounds on such proofs.
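    For context, the following LaTeX snippet recalls the standard definition of the Index gadget and of the lifted function; the notation is assumed rather than taken from the paper.

```latex
% Standard definitions behind the discussion above (illustrative notation).
The Index gadget of size $m$ is
\[
  \mathrm{IND}_m : [m] \times \{0,1\}^m \to \{0,1\}, \qquad \mathrm{IND}_m(x, y) = y_x ,
\]
where Alice holds the pointer $x$ and Bob holds the table $y$. Given
$f : \{0,1\}^N \to \{0,1\}$, the lifted two-party function is
\[
  (f \circ \mathrm{IND}_m^N)\bigl((x_1,\dots,x_N), (y_1,\dots,y_N)\bigr)
  \;=\; f\bigl(\mathrm{IND}_m(x_1, y_1), \dots, \mathrm{IND}_m(x_N, y_N)\bigr).
\]
A lifting theorem states that the communication complexity of $f \circ \mathrm{IND}_m^N$
matches, up to a factor of roughly $\log m$, the query complexity of $f$; the gadget-size
question above asks how small $m$ can be taken as a function of $N$.
```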

    Flexible and Robust Counterfactual Explanations with Minimal Satisfiable Perturbations

    Counterfactual explanations (CFEs) exemplify how to minimally modify a feature vector so that an instance receives a different prediction. CFEs can enhance informational fairness and trustworthiness, and provide suggestions for users who receive adverse predictions. However, recent research has shown that multiple CFEs can be offered for the same instance, or for instances that differ only slightly. Multiple CFEs provide flexible choices and cover diverse desiderata for user selection. However, individual fairness and model reliability will be damaged if unstable CFEs with different costs are returned. Existing methods fail to simultaneously exploit this flexibility and address the concern of non-robustness. To address these issues, we propose a conceptually simple yet effective solution named Counterfactual Explanations with Minimal Satisfiable Perturbations (CEMSP). Specifically, CEMSP constrains the changed values of abnormal features with the help of their semantically meaningful normal ranges. For efficiency, we model the problem as a Boolean satisfiability problem so as to modify as few features as possible. Additionally, CEMSP is a general framework and can easily accommodate more practical requirements, e.g., causality and actionability. We conduct comprehensive experiments on both synthetic and real-world datasets to demonstrate that, compared to existing methods, our method provides more robust explanations while preserving flexibility. (Comment: Accepted by CIKM 2023.)
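    As a rough illustration of the minimal-perturbation idea, here is a brute-force Python sketch: it changes as few abnormal features as possible, clamping each into an assumed normal range, until the prediction flips. The paper encodes this search as a Boolean satisfiability problem; the toy model, data, and helper below are assumptions for demonstration, not the authors' CEMSP implementation.

```python
# Illustrative sketch only: brute-force stand-in for the SAT-based search described
# above. We change as few abnormal features as possible, each clamped into its
# (assumed) semantically meaningful normal range, until the prediction flips.
from itertools import combinations

def minimal_counterfactual(x, predict, normal_ranges, target=1):
    """Return (counterfactual, changed_indices) with as few changed features as
    possible, or (None, None) if no such modification flips the prediction."""
    abnormal = [i for i, (lo, hi) in enumerate(normal_ranges)
                if not (lo <= x[i] <= hi)]
    for k in range(len(abnormal) + 1):                      # try smaller changes first
        for subset in combinations(abnormal, k):
            candidate = list(x)
            for i in subset:
                lo, hi = normal_ranges[i]
                candidate[i] = min(max(candidate[i], lo), hi)  # clamp into normal range
            if predict(candidate) == target:
                return candidate, subset
    return None, None

if __name__ == "__main__":
    # Toy linear classifier, instance, and normal ranges (assumptions for the demo).
    predict = lambda v: int(0.6 * v[0] + 0.3 * v[1] + 0.1 * v[2] > 0.5)
    x = [0.1, 0.2, 0.9]                    # currently predicted 0 (adverse outcome)
    normal_ranges = [(0.5, 1.0), (0.4, 1.0), (0.0, 1.0)]
    cf, changed = minimal_counterfactual(x, predict, normal_ranges)
    print("counterfactual:", cf, "changed features:", changed)
```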

    Mechanical characterization of different epoxy resins enhanced with carbon nanofibers

    Epoxy reinforced with carbon nanofibers (CNFs) is an effective nano-enhanced material that can be prepared by an easy and low-cost method. The present paper compares the improvements, in terms of flexural and viscoelastic properties, of two epoxy resins reinforced with different weight percentages (wt.%) of CNFs. These epoxy resins have different viscosities, and CNF contents between 0 wt.% and 1 wt.% were tested to identify the content that maximises the mechanical properties. Subsequently, for the best configurations obtained, the sensitivity to strain rate and the viscoelastic behaviour (stress relaxation and creep) were analysed according to international standards. It was concluded that, for both resins, CNFs promote significant improvements in all the studied mechanical properties, even for different weight contents.

    Circuit Depth Reductions

    The best known size lower bounds against unrestricted circuits have remained around 3n for several decades. Moreover, the only known technique for proving lower bounds in this model, gate elimination, is inherently limited to proving lower bounds of less than 5n. In this work, we propose a non-gate-elimination approach for obtaining circuit lower bounds, via certain depth-three lower bounds. We prove that every (unbounded-depth) circuit of size s can be expressed as an OR of 2^{s/3.9} 16-CNFs. For DeMorgan formulas, the best known size lower bounds have been stuck at around n^{3-o(1)} for decades. Under a plausible hypothesis about probabilistic polynomials, we show that n^{4-ε}-size DeMorgan formulas have 2^{n^{1-Ω(ε)}}-size depth-3 circuits which are approximate sums of n^{1-Ω(ε)}-degree polynomials over F_2. While these structural results do not immediately lead to new lower bounds, they do suggest new avenues of attack on these longstanding lower bound problems. Our results complement the classical depth-3 reduction results of Valiant, which show that logarithmic-depth circuits of linear size can be computed by an OR of 2^{εn} n^δ-CNFs, and slightly stronger results for series-parallel circuits. It is known that no purely graph-theoretic reduction could yield interesting depth-3 circuits from circuits of super-logarithmic depth. We overcome this limitation (for small-size circuits) by taking into account both the graph-theoretic and functional properties of circuits and formulas. We show that improvements of the following pseudorandom constructions imply new circuit lower bounds: dispersers for varieties, correlation with constant-degree polynomials, matrix rigidity, and hardness for depth-3 circuits with constant bottom fan-in.
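    To spell out how such a depth reduction could be used, the following LaTeX note states the contrapositive; it is a paraphrase of the intended application, not a quotation from the paper.

```latex
% Contrapositive use of the depth reduction above (illustrative restatement).
The structural result is useful in contrapositive form: if an explicit function
$f : \{0,1\}^n \to \{0,1\}$ cannot be written as an OR of $2^{s/3.9}$ many $16$-CNFs,
then every (unbounded-depth) circuit computing $f$ has size greater than $s$. In
particular, a depth-3 lower bound of the form $2^{cn}$ against ORs of $16$-CNFs would
yield circuit size lower bounds of roughly $3.9\,c\,n$, which beats the long-standing
$\approx 3n$ barrier as soon as $c > 3/3.9 \approx 0.77$.
```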