118 research outputs found

    Redundancy and error resilience in Boolean Networks

    We consider the effect of noise in sparse Boolean networks with redundant functions. We show that they always exhibit a non-zero error level, and that the dynamics undergo a phase transition from non-ergodicity to ergodicity as a function of noise, after which the system is no longer capable of preserving a memory of its initial state. We obtain upper bounds on the critical value of noise for networks of different sparsity.
    Comment: 4 pages, 5 figures
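    The memory-loss criterion above can be probed numerically. The sketch below is a minimal illustration, not the paper's construction: the network size, connectivity k, and the damage-spreading protocol are all my assumptions. It drives two copies of the same random Boolean network, started from complementary states, with an identical noise realization, and reports their normalized Hamming distance; a distance that stays away from zero means the network still retains a memory of its initial state.

```python
import random

def random_boolean_network(n, k, seed=0):
    """Random Boolean network: each node reads k random inputs
    through a random truth table of size 2^k."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables, flips):
    """Synchronous update; flips[i] XORs noise onto node i's output."""
    new = []
    for i in range(len(state)):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | state[j]
        new.append(tables[i][idx] ^ flips[i])
    return new

def final_distance(n, k, noise, steps=50, seed=1):
    """Normalized Hamming distance between two trajectories that start
    from complementary states but see the same noise realization."""
    inputs, tables = random_boolean_network(n, k, seed)
    rng = random.Random(seed + 1)
    a = [rng.randint(0, 1) for _ in range(n)]
    b = [x ^ 1 for x in a]
    for _ in range(steps):
        flips = [int(rng.random() < noise) for _ in range(n)]
        a = step(a, inputs, tables, flips)
        b = step(b, inputs, tables, flips)
    return sum(x != y for x, y in zip(a, b)) / n
```

    Sweeping `noise` upward and averaging `final_distance` over many network realizations would trace the non-ergodic-to-ergodic transition the abstract refers to.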

    Parametrised Complexity of Model Checking and Satisfiability in Propositional Dependence Logic

    In this paper, we initiate a systematic study of parametrised complexity in the field of dependence logics, which has its origin in Väänänen's Dependence Logic from 2007. We study a propositional variant of this logic (PDL) and investigate a variety of parametrisations with respect to the central decision problems. The model checking problem (MC) of PDL is NP-complete. The subject of this research is to identify a list of parametrisations (formula size, treewidth, treedepth, team size, number of variables) under which MC becomes fixed-parameter tractable. Furthermore, we show that the number of disjunctions and the arity of dependence atoms (dep-arity), taken as parameters, both yield paraNP-completeness results. Then, we consider the satisfiability problem (SAT), which shows a different picture: under team size or dep-arity, SAT is paraNP-complete, whereas under all other mentioned parameters the problem is in FPT. Finally, we introduce a variant of the satisfiability problem, asking for teams of a given size, and show an almost complete picture for this problem.
    Comment: Update includes refined results
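    To make the team-semantics objects concrete, here is a small sketch (illustrative only; the variable names and teams are made up) of evaluating a single dependence atom =(xs, y) over a team, i.e. a set of assignments: the atom holds iff any two assignments agreeing on xs also agree on y. A single atom is easy to check, as below; the NP-completeness of MC arises for full PDL formulas, where disjunction splits the team.

```python
def satisfies_dep(team, xs, y):
    """Team semantics of the dependence atom =(xs, y): any two
    assignments in the team that agree on all variables in xs must
    also agree on y."""
    seen = {}
    for s in team:
        key = tuple(s[x] for x in xs)
        if key in seen and seen[key] != s[y]:
            return False
        seen[key] = s[y]
    return True

# A team is a set of assignments (here: dicts into {0, 1}).
good_team = [{'p': 0, 'q': 0}, {'p': 1, 'q': 1}]
bad_team = [{'p': 0, 'q': 0}, {'p': 0, 'q': 1}]
```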

    Making Classical Ground State Spin Computing Fault-Tolerant

    We examine a model of classical deterministic computing in which the ground state of the classical system is a spatial history of the computation. This model is relevant to quantum dot cellular automata as well as to recent universal adiabatic quantum computing constructions. In its most primitive form, systems constructed in this model cannot compute in an error-free manner when working at non-zero temperature. However, by exploiting a mapping between the partition function for this model and probabilistic classical circuits, we are able to show that it is possible to make this model effectively error-free. We achieve this by using techniques from fault-tolerant classical computing, and the result is that the system can compute effectively error-free if the temperature is below a critical temperature. We further link this model to computational complexity and show that a certain problem concerning finite temperature classical spin systems is complete for the complexity class Merlin-Arthur. This provides an interesting connection between the physical behavior of certain many-body spin systems and computational complexity.
    Comment: 24 pages, 1 figure
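    The "ground state as computation history" idea can be illustrated on a single gate. In this toy sketch (my own minimal example, not the paper's construction), a 3-spin configuration (a, b, c) is given zero energy exactly when c records NAND(a, b), so the zero-temperature ground states are precisely the valid computation histories; at any temperature T > 0 the faulty configurations acquire Boltzmann weight e^{-1/T}, which is the source of the errors the fault-tolerance machinery must suppress.

```python
from itertools import product

def nand(a, b):
    return 1 - (a & b)

def energy(cfg):
    """Penalty energy of a one-gate 'spatial history' (a, b, c):
    zero iff the output bit c is consistent with NAND(a, b)."""
    a, b, c = cfg
    return 0 if c == nand(a, b) else 1

# Ground states = all zero-energy configurations: exactly one valid
# history per input pair (a, b).
ground = [cfg for cfg in product((0, 1), repeat=3) if energy(cfg) == 0]
```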

    Random geometric complexes

    We study the expected topological properties of Čech and Vietoris-Rips complexes built on i.i.d. random points in R^d. We find higher dimensional analogues of known results for connectivity and component counts for random geometric graphs. However, higher homology H_k is not monotone when k > 0. In particular, for every k > 0 we exhibit two thresholds, one where homology passes from vanishing to nonvanishing, and another where it passes back to vanishing. We give asymptotic formulas for the expectation of the Betti numbers in the sparser regimes, and bounds in the denser regimes. The main technical contribution of the article is in the application of discrete Morse theory in geometric probability.
    Comment: 26 pages, 3 figures, final revisions, to appear in Discrete & Computational Geometry
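    As a concrete entry point: the 1-skeleton shared by the Čech and Vietoris-Rips complexes is just the random geometric graph, and the count of connected components is the Betti number b_0. The stdlib-only sketch below (the point count, radius, and dimension are arbitrary choices of mine) estimates the component count with union-find.

```python
import math
import random
from itertools import combinations

def random_geometric_components(n, r, d=2, seed=0):
    """Number of connected components of the random geometric graph on
    n i.i.d. uniform points in [0,1]^d with connection radius r -- the
    1-skeleton of the Cech / Vietoris-Rips complex at scale r."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(d)] for _ in range(n)]
    parent = list(range(n))
    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(n), 2):
        if math.dist(pts[i], pts[j]) <= r:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})
```

    Sweeping r from 0 upward shows b_0 falling from n to 1, the k = 0 analogue of the vanishing/nonvanishing thresholds discussed for higher H_k.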

    Algebraic Theory of Promise Constraint Satisfaction Problems, First Steps

    What makes a computational problem easy (e.g., in P, that is, solvable in polynomial time) or hard (e.g., NP-hard)? This fundamental question now has a satisfactory answer for a quite broad class of computational problems, the so-called fixed-template constraint satisfaction problems (CSPs) -- it has turned out that their complexity is captured by a certain specific form of symmetry. This paper explains an extension of this theory to a much broader class of computational problems, the promise CSPs, which includes relaxed versions of CSPs such as the problem of finding a 137-coloring of a 3-colorable graph.
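    For intuition, the promise relaxation can be stated operationally: the input graph is promised to be 3-colorable, and the algorithm only has to output a proper coloring with some larger number of colors (137 in the example above). The brute-force sketch below (exponential, for tiny instances only; the 5-cycle example is mine) makes the objects concrete.

```python
from itertools import product

def find_coloring(n, edges, k):
    """Brute-force search for a proper k-coloring of a graph on
    vertices 0..n-1 (exponential in n). In the promise problem, a
    3-coloring is guaranteed to exist and any proper coloring with
    the larger palette is an acceptable answer."""
    for col in product(range(k), repeat=n):
        if all(col[u] != col[v] for u, v in edges):
            return list(col)
    return None

# The 5-cycle: 3-colorable but not 2-colorable.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
```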

    Critical exponents for random knots

    The size of a zero-thickness (no excluded volume) polymer ring is shown to scale with chain length N in the same way as the size of an excluded-volume (self-avoiding) linear polymer, as N^ν, where ν ≈ 0.588. The consequences of this fact are examined, including the sizes of trivial and non-trivial knots.
    Comment: 4 pages, 0 figures
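    The size measurement itself is straightforward to sketch. The code below is an illustrative estimator, not the paper's method: closure is forced by recentring the steps (which is not a uniform sample over closed polygons), and no knot detection is performed. It estimates an effective exponent ν from the mean squared radius of gyration at two chain lengths; for this unconditioned phantom ring the estimate should sit near the ideal value ν = 1/2, whereas the result above concerns the knot-resolved ensemble, where ν ≈ 0.588 emerges.

```python
import math
import random

def closed_ring(n, rng):
    """Closed 3D polygon: n random unit steps, recentred so they sum
    to zero (a simple closure trick, adequate for a size estimate)."""
    steps = []
    for _ in range(n):
        z = rng.uniform(-1, 1)
        t = rng.uniform(0, 2 * math.pi)
        r = math.sqrt(1 - z * z)
        steps.append((r * math.cos(t), r * math.sin(t), z))
    mx = sum(s[0] for s in steps) / n
    my = sum(s[1] for s in steps) / n
    mz = sum(s[2] for s in steps) / n
    steps = [(x - mx, y - my, z - mz) for x, y, z in steps]
    pts, p = [], (0.0, 0.0, 0.0)
    for s in steps:
        p = (p[0] + s[0], p[1] + s[1], p[2] + s[2])
        pts.append(p)
    return pts

def mean_sq_gyration(n, samples=200, seed=0):
    """Monte Carlo estimate of <R_g^2> for rings of n segments."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        pts = closed_ring(n, rng)
        cx = sum(p[0] for p in pts) / n
        cy = sum(p[1] for p in pts) / n
        cz = sum(p[2] for p in pts) / n
        acc += sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2
                   + (p[2] - cz) ** 2 for p in pts) / n
    return acc / samples

# Effective exponent from two sizes: <R_g^2> ~ N^(2 nu).
nu = 0.5 * math.log(mean_sq_gyration(128) / mean_sq_gyration(32)) \
         / math.log(128 / 32)
```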

    Abundance of unknots in various models of polymer loops

    A veritable zoo of different knots is seen in the ensemble of looped polymer chains, whether created computationally or observed in vitro. At short loop lengths, the spectrum of knots is dominated by the trivial knot (unknot). The fractional abundance of this topological state in the ensemble of all conformations of a loop of N segments follows a decaying exponential form, ~ exp(-N/N_0), where N_0 marks the crossover from a mostly unknotted (i.e., topologically simple) to a mostly knotted (i.e., topologically complex) ensemble. In the present work we use computational simulation to look more closely at the variation of N_0 across a variety of polymer models. Among the models examined, N_0 is smallest (about 240) for the model with all segments of the same length, somewhat larger (305) for Gaussian-distributed segments, and can be very large (up to many thousands) when the segment length distribution has a fat power-law tail.
    Comment: 13 pages, 6 color figures
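    Given measured unknot fractions, the crossover length N_0 is obtained by a one-parameter fit of log P = -N/N_0. The sketch below covers the fitting step only; the data are synthetic, generated with the equilateral-chain value N_0 = 240 quoted above, purely to check that the estimator recovers it.

```python
import math

def fit_n0(lengths, unknot_fractions):
    """Least-squares fit of -log P = N / N_0 through the origin,
    returning the crossover length N_0 in P_unknot(N) ~ exp(-N/N_0)."""
    num = sum(n * (-math.log(p)) for n, p in zip(lengths, unknot_fractions))
    den = sum(n * n for n in lengths)
    return den / num          # fitted slope is 1/N_0

# Synthetic unknot fractions with N_0 = 240 (sanity check only).
ns = [50, 100, 200, 400]
ps = [math.exp(-n / 240) for n in ns]
```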

    Tightness of slip-linked polymer chains

    We study the interplay between entropy and topological constraints for a polymer chain in which sliding rings (slip-links) enforce pair contacts between monomers. These slip-links divide a closed ring polymer into a number of sub-loops which can exchange length between each other. In the ideal chain limit, we find the joint probability density function for the sizes of segments within such a slip-linked polymer chain (paraknot). A particular segment is tight (small in size) or loose (of the order of the overall size of the paraknot) depending on both the number of slip-links it incorporates and its competition with other segments. When self-avoiding interactions are included, scaling arguments can be used to predict the statistics of segment sizes for certain paraknot configurations.
    Comment: 10 pages, 6 figures, REVTeX
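    The length exchange between sub-loops can be mimicked with a toy Monte Carlo sketch. Everything here is an illustrative assumption, not the paper's calculation: the segments are weighted only by the ideal-chain loop-closure factor l^(-3/2), ignoring how many slip-links each segment incorporates.

```python
import random

def sample_segments(total, k, sweeps=2000, seed=0):
    """Metropolis sampling of k sub-loop lengths summing to `total`,
    weighted by the product of l_i^(-3/2) (ideal-chain loop-closure
    factor; an illustrative simplification). Proposes moving one unit
    of length between two randomly chosen segments."""
    rng = random.Random(seed)
    ls = [total // k] * k
    ls[0] += total - sum(ls)          # absorb rounding remainder
    w = lambda l: l ** -1.5
    for _ in range(sweeps):
        i, j = rng.randrange(k), rng.randrange(k)
        if i == j or ls[i] <= 1:
            continue
        ratio = (w(ls[i] - 1) * w(ls[j] + 1)) / (w(ls[i]) * w(ls[j]))
        if rng.random() < ratio:      # Metropolis acceptance
            ls[i] -= 1
            ls[j] += 1
    return ls
```

    Under this weight the chain tends toward one loose segment carrying most of the length while the others stay tight, mirroring the tight/loose competition described above.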

    Static-Memory-Hard Functions, and Modeling the Cost of Space vs. Time

    A series of recent research starting with Alwen and Serbinenko (STOC 2015) has deepened our understanding of the notion of memory-hardness in cryptography, a useful property of hash functions for deterring large-scale password-cracking attacks, and has shown memory-hardness to have intricate connections with the theory of graph pebbling. Definitions of memory-hardness are not yet unified in this somewhat nascent field, however, and the guarantees proven to date are with respect to a range of proposed definitions. In this paper, we observe two significant and practical considerations that are not analyzed by existing models of memory-hardness, and propose new models to capture them, accompanied by constructions based on new hard-to-pebble graphs. Our contribution is two-fold, as follows.
    First, existing measures of memory-hardness only account for dynamic memory usage (i.e., memory read/written at runtime), and do not consider static memory usage (e.g., memory on disk). Among other things, this means that memory requirements considered by prior models are inherently upper-bounded by a hash function's runtime; in contrast, counting static memory would potentially allow quantification of much larger memory requirements, decoupled from runtime. We propose a new definition of a static-memory-hard function (SHF) which takes static memory into account: we model static memory usage by oracle access to a large preprocessed string, which may be considered part of the hash function description. Static memory requirements are complementary to dynamic memory requirements: neither can replace the other, and to deter large-scale password-cracking attacks, a hash function will benefit from being both dynamic-memory-hard and static-memory-hard. We give two SHF constructions based on pebbling. To prove static-memory-hardness, we define a new pebble game ("black-magic pebble game") and new graph constructions with optimal complexity under our proposed measure. Moreover, we provide a prototype implementation of our first SHF construction (which is based on pebbling of a simple "cylinder" graph), providing an initial demonstration of practical feasibility for a limited range of parameter settings.
    Second, existing memory-hardness models implicitly assume that the costs of space and time are more or less on par: they consider only linear ratios between the costs of time and space. We propose a new model to capture nonlinear time-space trade-offs: e.g., how is the adversary impacted when space is quadratically more expensive than time? We prove that nonlinear trade-offs can in fact cause adversaries to employ different strategies from linear trade-offs.
    Finally, as an additional contribution of independent interest, we present an asymptotically tight graph construction that achieves the best possible space complexity up to log log n factors for an existing memory-hardness measure called cumulative complexity in the sequential pebbling model.
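    For readers new to pebbling, the cost measure can be shown on the simplest possible graph. The sketch below is a generic sequential black pebble game on a path, not the paper's black-magic game or its cylinder construction: it records the number of pebbles on the graph at each step and sums them, which is the cumulative complexity (CC) measure mentioned at the end.

```python
def pebble_line(n):
    """Sequential black-pebbling of the path v0 -> ... -> v(n-1).
    A pebble may be placed on a vertex whose predecessor is pebbled
    (sources are free); the predecessor's pebble is then removed.
    Cumulative complexity is the sum over steps of pebbles in use."""
    pebbles, usage = set(), []
    for v in range(n):
        pebbles.add(v)                 # place: predecessor is pebbled
        if v - 1 in pebbles:
            pebbles.remove(v - 1)      # free the predecessor's pebble
        usage.append(len(pebbles))
    return sum(usage)
```

    A path is trivially cheap to pebble (one pebble suffices, so CC is linear with constant 1); memory-hardness comes from DAGs engineered so that every pebbling strategy is forced to keep many pebbles alive at once.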

    Oblivious tight compaction in O(n) time with smaller constant

    Oblivious compaction is a crucial building block for hash-based oblivious RAM. Asharov et al. recently gave an O(n) algorithm for oblivious tight compaction. Their algorithm is deterministic and asymptotically optimal, but it is not practical to implement because the implied constant is ≫ 2^{38}. We give a new algorithm for oblivious tight compaction that runs in time < 16014.54 n. As part of our construction, we give a new result on the bootstrap percolation of random regular graphs.
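    For contrast with the linear-time algorithms above, tight compaction can always be done obliviously with a sorting network, at the cost of an extra log^2 n factor. The baseline sketch below (a standard bitonic network keyed on "unmarked"; power-of-two input length assumed for brevity) touches a fixed, data-independent sequence of index pairs, which is exactly what makes an algorithm oblivious.

```python
def compact_oblivious(items, marked):
    """Tight compaction via a bitonic sorting network on the key
    'unmarked' (0 = marked, 1 = unmarked): marked items move to the
    front. The compare-exchange schedule depends only on n, never on
    the data, so the memory access pattern is oblivious.
    O(n log^2 n) work, vs. the O(n) algorithms discussed above."""
    n = len(items)
    assert n > 0 and n & (n - 1) == 0, "power-of-two length for simplicity"
    data = [(0 if m else 1, x) for m, x in zip(marked, items)]
    k = 2
    while k <= n:
        j = k // 2
        while j >= 1:
            for i in range(n):
                l = i ^ j
                if l > i:
                    if ((i & k) == 0 and data[i][0] > data[l][0]) or \
                       ((i & k) != 0 and data[i][0] < data[l][0]):
                        data[i], data[l] = data[l], data[i]
            j //= 2
        k *= 2
    return [x for _, x in data]
```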