
    The complexity of Solitaire

    Klondike is the well-known 52-card Solitaire game available on almost every computer. The problem of determining whether an n-card Klondike initial configuration can lead to a win is shown to be NP-complete. The problem remains NP-complete when only three suits are allowed instead of the usual four. When only two suits of opposite color are available, the problem is shown to be NL-hard. When the only two suits have the same color, two restrictions are shown to be in AC0 and in NL, respectively. When a single suit is allowed, the complexity of the problem drops to AC0[3], that is, the problem is solvable by a family of constant-depth unbounded-fan-in {and, or, mod3}-circuits. Other cases are also studied: for example, the “no King” variant with an arbitrary number of suits of the same color and with an empty “pile” is NL-complete.
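    The decision problem here is: given an initial deal, can some sequence of moves win the game? As a hedged illustration only (this is not the paper's NP-completeness construction; it omits hidden cards, the talon, colors, and the King/empty-pile rule), the Python sketch below brute-forces that question for a drastically simplified single-suit variant.

```python
# Hypothetical toy model: cards 1..n dealt into piles (listed bottom-to-top).
# A top card may go to the foundation if it is the next card needed, or onto
# another pile whose top card is exactly one higher (single-suit descending rule).
from functools import lru_cache

def winnable(piles):
    """Return True if every card can be moved to the foundation."""
    piles = tuple(tuple(p) for p in piles)

    @lru_cache(maxsize=None)
    def search(state, foundation):
        if all(len(p) == 0 for p in state):
            return True                       # all cards played: a win
        for i, pile in enumerate(state):
            # Foundation move: the top card is the next one needed.
            if pile and pile[-1] == foundation + 1:
                new_state = list(state)
                new_state[i] = pile[:-1]
                if search(tuple(new_state), foundation + 1):
                    return True
            # Tableau move: place a top card on a pile topped by the next-higher card.
            for j, other in enumerate(state):
                if i != j and pile and other and other[-1] == pile[-1] + 1:
                    new_state = list(state)
                    new_state[i] = pile[:-1]
                    new_state[j] = other + (pile[-1],)
                    if search(tuple(new_state), foundation):
                        return True
        return False

    return search(piles, 0)

print(winnable([(3, 1), (4, 2)]))   # True: 1, 2, 3, 4 can be played in order
print(winnable([(1, 3), (2, 4)]))   # False: no sequence of moves wins this deal
```

    Even this toy version already requires search over move sequences; the paper's results pin down how hard the full problem is as the number and color of the suits vary.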

    Interval-type and affine arithmetic-type techniques for handling uncertainty in expert systems

    Expert knowledge consists of statements Sj (facts and rules). These facts and rules are often only true with some probability. For example, if we are interested in oil, we should look at seismic data. If in 90% of the cases the seismic data were indeed helpful in locating oil, then we can say that if we are interested in oil, then with probability 90% it is helpful to look at the seismic data. In more formal terms, the implication “if oil then seismic” holds with probability 90%. Another example: a bank A trusts a client B, so if we trust the bank A, we should trust B too; if statistically this trust was justified in 99% of the cases, we can conclude that the corresponding implication holds with probability 99%. If a query Q is deducible from the facts and rules, what is the resulting probability p(Q)? We can describe the truth of Q as a propositional formula F in terms of the Sj, i.e., as a combination of the statements Sj linked by operators like &, ∨, and ¬; computing p(Q) exactly is NP-hard, so heuristics are needed. Traditionally, expert systems use a technique similar to straightforward interval computations: we parse F and replace each computation step with the corresponding operation on probabilities. Problem: at each step, we ignore the dependence between the intermediate results Fj; hence the resulting intervals are too wide. Example: the estimate for P(A ∨ ¬A) is not 1. Solution: similar to affine arithmetic, besides P(Fj), we also compute P(Fj & Fi) (or P(Fj1 & ⋯ & Fjd)), and on each step use all combinations of such probabilities to get new estimates. Result: e.g., P(A ∨ ¬A) is now estimated as 1.
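    As a rough sketch of the “straightforward interval computations” approach described above (assuming Fréchet bounds for & and ∨ when nothing is known about dependence; this is not the paper's affine-arithmetic-style method), the following Python snippet reproduces the P(A ∨ ¬A) problem:

```python
# Naive propagation of probability bounds through a propositional formula,
# using the Fréchet inequalities (no independence assumed).
from dataclasses import dataclass

@dataclass(frozen=True)
class PInterval:
    lo: float
    hi: float

    def __invert__(self):                      # ¬F
        return PInterval(1 - self.hi, 1 - self.lo)

    def __and__(self, other):                  # F & G, Fréchet bounds
        return PInterval(max(0.0, self.lo + other.lo - 1),
                         min(self.hi, other.hi))

    def __or__(self, other):                   # F ∨ G, Fréchet bounds
        return PInterval(max(self.lo, other.lo),
                         min(1.0, self.hi + other.hi))

# "if oil then seismic" holds with probability 90%  ->  p(A) = 0.9 exactly.
A = PInterval(0.9, 0.9)
print(A | ~A)   # PInterval(lo=0.9, hi=1.0) -- not the exact value 1
```

    Because the dependence between A and ¬A is ignored, the estimate is the interval [0.9, 1] rather than the exact value 1; tracking joint probabilities such as P(Fj & Fi), as the abstract proposes, is what tightens the estimate.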

    SPARSE Reduces Conjunctively to TALLY


    Report on COMPLEXITY 1998


    Report on Complexity 1997


    Human Visual Perception and Kolmogorov Complexity: Revisited

    Experiments have shown [2] that we can only memorize images up to a certain complexity level, after which, instead of memorizing the image itself, we in effect memorize a probability distribution with respect to which this image is "random" (in the intuitive sense of this word), and next time, we reproduce a "random" sample from this distribution. This random sample may differ from the original image, but since it belongs to the same distribution, it hopefully reproduces the statistical characteristics of the original image correctly. The reason why a complex image cannot be accurately memorized is, probably, that our memory is limited. If storing the image itself exhausts this memory, we store its probability distribution instead. With this limitation in mind, we conclude that we cannot store arbitrary probability distributions either, only sufficiently simple ones. In this paper, we show that an arbitrary image is indeed either itself simple, or it can be generated by a simple probability distribution.
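    Kolmogorov complexity itself is uncomputable, so the snippet below uses compressed length as a crude, purely illustrative proxy (an assumption of this illustration, not anything from the paper). It mirrors the dichotomy stated above: an object is either simple itself, or it is a typical sample of a simple distribution.

```python
import random
import zlib

# A highly structured 4096-byte "image": vertical stripes of width 16.
structured = bytes(255 if (i // 16) % 2 else 0 for i in range(4096))

# A sample from a very simple distribution (i.i.d. uniform bytes): the sample
# itself is incompressible, but the distribution that generated it is short to describe.
random.seed(0)
sampled = bytes(random.getrandbits(8) for _ in range(4096))

print(len(zlib.compress(structured)))   # small: the image itself is simple
print(len(zlib.compress(sampled)))      # close to 4096: only the generating distribution is simple
```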

    Fast Quantum Algorithms for Handling Probabilistic and Interval Uncertainty

    In many real-life situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. To estimate y, we find some easier-to-measure quantities x1, ..., xn which are related to y by a known relation y = f(x1, ..., xn). Measurements are never 100% accurate; hence, the measured values x̃i are different from xi, and the resulting estimate ỹ = f(x̃1, ..., x̃n) is different from the desired value y = f(x1, ..., xn). How different can it be? The traditional engineering approach to error estimation in data processing assumes that we know the probabilities of different measurement errors Δxi = x̃i − xi. In many practical situations, we only know an upper bound Δi for this error; hence, after the measurement, the only information that we have about xi is that it belongs to the interval xi = [x̃i − Δi, x̃i + Δi]. In this case, it is important to find the range y of all possible values of y = f(x1, ..., xn) when xi ∈ xi. We start the paper with a brief overview of the computational complexity of the corresponding interval computation problems. Most of the related problems turn out to be, in general, at least NP-hard. In this paper, we show how the use of quantum computing can speed up some computations related to interval and probabilistic uncertainty. We end the paper with speculations on whether (and how) “hypothetic” physical devices can compute NP-hard problems faster than in exponential time. Most of the paper’s results were first presented at NAFIPS’2003 [30].
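    As a small illustrative sketch of the interval-computation setup (plain interval arithmetic in Python; the paper's quantum speedups are not modeled here), the snippet below propagates the intervals xi = [x̃i − Δi, x̃i + Δi] through a simple f and shows why a cheap enclosure can be wider than the exact range, whose computation is in general NP-hard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

def from_measurement(x_tilde, delta):
    """All actual values consistent with measured value x~ and error bound Delta."""
    return Interval(x_tilde - delta, x_tilde + delta)

# y = f(x1, x2) = x1 * x2 + x1, measured values 2.0 and 3.0, error bounds 0.1.
x1 = from_measurement(2.0, 0.1)   # [1.9, 2.1]
x2 = from_measurement(3.0, 0.1)   # [2.9, 3.1]
print(x1 * x2 + x1)               # roughly Interval(lo=7.41, hi=8.61): an enclosure of the range of y

# The enclosure can be wider than the true range when a variable repeats:
print(x1 - x1)                    # about [-0.2, 0.2], though the true range of x1 - x1 is {0}
```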

    How to measure loss of privacy

    To compare different schemes for preserving privacy, it is important to be able to gauge the loss of privacy. Since loss of privacy means that we gain new information about a person, it seems natural to measure the loss of privacy by the amount of information that we gained. However, this seemingly natural definition is not perfect: when we originally know that a person’s salary is between $10,000 and $20,000 and later learn that the salary is between $10,000 and $15,000, we gain exactly as much information (one bit) as when we learn that the salary is an even number; however, intuitively, in the first case we have a substantial privacy loss, while in the second case the privacy loss is minimal. In this paper, we propose a new definition of privacy loss that is in better agreement with our intuition. This new definition is based on estimating the worst-case financial losses caused by the loss of privacy.
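    As a hedged illustration of the salary example (the paper's worst-case-financial-loss definition is not reproduced here; the range-width measure at the end is only a hypothetical proxy in its spirit), the snippet below confirms that both disclosures carry about one bit of Shannon information under a uniform prior, even though they narrow the set of possible salaries very differently.

```python
import math

# Uniform prior on salaries between $10,000 and $20,000, at $1 granularity.
salaries = range(10_000, 20_001)

def bits_gained(prior_size, posterior_size):
    """Shannon information gained when a uniform prior is narrowed to a subset."""
    return math.log2(prior_size / posterior_size)

in_lower_half = [s for s in salaries if s <= 15_000]   # "between $10,000 and $15,000"
even_only     = [s for s in salaries if s % 2 == 0]    # "the salary is an even number"

print(round(bits_gained(len(salaries), len(in_lower_half)), 2))   # ~1.0 bit
print(round(bits_gained(len(salaries), len(even_only)), 2))       # ~1.0 bit

# A hypothetical range-width proxy in the spirit of "worst-case financial loss":
# how much tighter did the set of possible salaries become?
print(max(in_lower_half) - min(in_lower_half))   # 5000: the range shrank by half
print(max(even_only) - min(even_only))           # 10000: the range did not shrink at all
```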