    Pac-Learning Recursive Logic Programs: Efficient Algorithms

    We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional "base case" oracle is assumed. These results immediately imply the pac-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs. (Comment: see http://www.jair.org/ for any accompanying file.)
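
    For readers less familiar with the setting, a minimal sketch of the kind of target program the abstract describes: a two-clause, function-free recursive logic program (one non-recursive "base case" clause plus one recursive clause), here the classic ancestor/parent example evaluated bottom-up in Python. The example and names are illustrative assumptions, not the paper's learning algorithm or notation.

        # Illustrative only: a function-free recursive logic program of the shape
        # studied above (one non-recursive base clause plus one recursive clause),
        # evaluated bottom-up to a fixed point over ground facts.
        #
        #   ancestor(X, Y) :- parent(X, Y).                  % base clause
        #   ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).  % recursive clause

        def evaluate_ancestor(parent_facts):
            """Compute the ancestor/2 relation from a set of parent/2 facts."""
            ancestor = set(parent_facts)              # base clause
            changed = True
            while changed:                            # naive fixed-point iteration
                changed = False
                for (x, z) in parent_facts:
                    for (z2, y) in list(ancestor):
                        if z == z2 and (x, y) not in ancestor:
                            ancestor.add((x, y))      # recursive clause fires
                            changed = True
            return ancestor

        if __name__ == "__main__":
            facts = {("ann", "bob"), ("bob", "cal"), ("cal", "dee")}
            print(sorted(evaluate_ancestor(facts)))   # full transitive closure of parent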

    On the Cryptographic Hardness of Local Search

    We show new hardness results for the class of Polynomial Local Search problems (PLS):
    - Hardness of PLS based on a falsifiable assumption on bilinear groups introduced by Kalai, Paneth, and Yang (STOC 2019), and the Exponential Time Hypothesis for randomized algorithms. Previous standard-model constructions relied on non-falsifiable and non-standard assumptions.
    - Hardness of PLS relative to random oracles. The construction is essentially different from previous constructions, and in particular is unconditionally secure. The construction also demonstrates the hardness of parallelizing local search.
    The core observation behind the results is that the unique proofs property of incrementally-verifiable computations previously used to demonstrate hardness in PLS can be traded for a simple incremental completeness property.
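
    As a point of reference for what membership in PLS means, a minimal sketch (a standard textbook-style FLIP/Max-SAT local search instance, not a construction from the paper): a PLS problem supplies polynomial-time neighbourhood and cost functions, and any local optimum is an acceptable output. The clause set and starting point below are hypothetical.

        # Toy local search instance: maximise the number of satisfied clauses of a
        # tiny formula by flipping one bit at a time.  A local optimum (no single
        # flip improves the count) is exactly what a PLS algorithm must output.

        CLAUSES = [(1, 2), (-1, 3), (-2, -3), (2, 3)]   # literal +i / -i over 3 vars

        def satisfied(assignment):
            """Number of clauses satisfied by a 0/1 assignment (variables 1-indexed)."""
            def lit_true(lit):
                value = assignment[abs(lit) - 1]
                return value == 1 if lit > 0 else value == 0
            return sum(any(lit_true(l) for l in clause) for clause in CLAUSES)

        def neighbours(assignment):
            """All assignments reachable by flipping a single bit."""
            for i in range(len(assignment)):
                flipped = list(assignment)
                flipped[i] ^= 1
                yield tuple(flipped)

        def local_search(start):
            """Follow improving flips until none exists; return the local optimum."""
            current = start
            while True:
                best = max(neighbours(current), key=satisfied)
                if satisfied(best) <= satisfied(current):
                    return current                    # no improving neighbour
                current = best

        if __name__ == "__main__":
            optimum = local_search((0, 0, 0))
            print(optimum, satisfied(optimum))        # a local (here also global) optimum

    For the instances constructed in the results above, reaching such a local optimum is what is shown to be cryptographically hard (and, in the random-oracle construction, hard to parallelize).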

    Towards an interpreter for efficient encrypted computation

    Fully homomorphic encryption (FHE) techniques are capable of performing encrypted computation on Boolean circuits, i.e., the user specifies encrypted inputs to the program, and the server computes on the encrypted inputs. Applying these techniques to general programs with recursive procedures and data-dependent loops has not been a focus of attention. In this paper, we take a first step toward building an interpreter that, given programs with complex control flow, schedules efficient code suitable for the application of FHE schemes. We first describe how programs written in a small Turing-complete instruction set can be executed with encrypted data and point out inefficiencies in this methodology. We then provide examples of scheduling (a) the greatest common divisor (GCD) problem using Euclid's algorithm and (b) the 3-Satisfiability (3SAT) problem using a recursive backtracking algorithm into path-levelized FHE computations. We describe how path levelization reduces control flow ambiguity and improves encrypted computation efficiency. Using these techniques and data-dependent loops as a starting point, we then build support for hierarchical programs made up of phases, where each phase corresponds to a fixed-point computation that can be used to further improve the efficiency of encrypted computation. In our setting, the adversary learns an estimate of the number of steps required to complete the computation, which we show is the least amount of leakage possible.
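
    To make the control-flow issue concrete, a minimal sketch (plaintext Python, not the paper's interpreter or scheduler): Euclid's GCD rewritten in the data-oblivious style that encrypted evaluation requires, with a fixed iteration bound and every branch replaced by an arithmetic select. The bound parameter and the subtractive variant of Euclid's algorithm are illustrative choices.

        # Data-oblivious (branch-free) subtractive Euclid: the loop always runs a
        # fixed number of iterations, and comparisons/zero tests become 0/1 values
        # used in multiplexer-style updates -- the plaintext analogue of the gates
        # an FHE scheme would evaluate.

        def oblivious_gcd(a, b, bound=64):
            """gcd(a, b) with data-independent control flow.

            Assumes 0 <= a, b and bound >= a + b.
            """
            for _ in range(bound):
                lt = int(a < b)                    # comparison gate (encrypted under FHE)
                a, b = (lt * b + (1 - lt) * a,     # multiplexer: swap so that a >= b
                        lt * a + (1 - lt) * b)
                nz = int(b != 0)                   # zero-test gate (encrypted under FHE)
                a = nz * (a - b) + (1 - nz) * a    # subtract only while b is non-zero
            return a

        if __name__ == "__main__":
            print(oblivious_gcd(21, 14), oblivious_gcd(17, 5))   # 7 1

    The fixed iteration bound is also where the step-count leakage discussed above comes from: the server observes how many steps were scheduled, and nothing more.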

    Multi-Echelon Inventory Optimization and Demand-Side Management: Models and Algorithms

    Inventory management is a fundamental problem in supply chain management. It is widely used in practice, but it is also intrinsically hard to optimize, even for relatively simple inventory system structures. This challenge is heightened under the threat of supply disruptions: whenever a supply source is disrupted, the inventory system is paralyzed, and tremendous costs can occur as a consequence. Designing a reliable and robust inventory system that can withstand supply disruptions is vital for an inventory system's performance.
    First we consider a basic type of inventory network, an assembly system, which produces a single end product from one or several components. A property called long-run balance allows an assembly system to be reduced to a serial system when disruptions are not present. We show that a modified version is still true under disruption risk. Based on this property, we propose a method for reducing the system to a serial system with extra inventory at certain stages that face supply disruptions. We also propose a heuristic for solving the reduced system. A numerical study shows that this heuristic performs very well, yielding significant cost savings when compared with the best-known algorithm.
    Next we study another basic inventory network structure, a distribution system. We study continuous-review, multi-echelon distribution systems subject to supply disruptions, with Poisson customer demands under a first-come, first-served allocation policy. We develop a recursive optimization heuristic, which applies a bottom-up approach that sequentially approximates the base-stock levels of all the locations. Our numerical study shows that it performs very well.
    Finally we consider a problem related to smart grids, an area where supply and demand are still decisive factors. Instead of matching supply with demand, as in the first two parts of the dissertation, we now concentrate on the interaction between supply and demand. We consider an electricity service provider that wishes to set prices for a large customer (user or aggregator) with flexible loads so that the resulting load profile matches a predetermined profile as closely as possible. We model the deterministic-demand case as a bilevel problem in which the service provider sets price coefficients and the customer responds by shifting loads forward in time. We derive optimality conditions for the lower-level problem to obtain a single-level problem that can be solved efficiently. For the stochastic-demand case, we approximate the consumer's best-response function and use this approximation to calculate the service provider's optimal strategy. Our numerical study shows the tractability of the new models for both the deterministic and stochastic cases, and that our pricing scheme is very effective in helping the service provider shape consumer demand.
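
    As a rough illustration of the bottom-up idea in the distribution-system part (and only that: the network, cost values, and single-location Poisson newsvendor formula below are stand-ins, not the dissertation's heuristic, which also accounts for disruptions and echelon interactions), a sketch in Python:

        # Walk a hypothetical two-echelon distribution tree from the leaves upward
        # and set each location's base-stock level with a single-location Poisson
        # newsvendor fractile over its lead-time demand.

        import math

        def poisson_cdf(k, mean):
            """P(X <= k) for X ~ Poisson(mean)."""
            return sum(math.exp(-mean) * mean ** i / math.factorial(i)
                       for i in range(k + 1))

        def base_stock(mean_leadtime_demand, holding_cost, backorder_cost):
            """Smallest S with P(lead-time demand <= S) >= b / (b + h)."""
            critical_ratio = backorder_cost / (backorder_cost + holding_cost)
            s = 0
            while poisson_cdf(s, mean_leadtime_demand) < critical_ratio:
                s += 1
            return s

        # Warehouse "W" serves retailers "R1" and "R2" (all values hypothetical).
        CHILDREN = {"W": ["R1", "R2"], "R1": [], "R2": []}
        DEMAND_RATE = {"R1": 3.0, "R2": 5.0}       # demand per period at the leaves
        LEAD_TIME = {"W": 4, "R1": 2, "R2": 1}     # replenishment lead times

        def solve_bottom_up(node, h=1.0, b=9.0):
            """Children first, then the parent, whose rate is the sum of its children's."""
            levels = {}
            for child in CHILDREN[node]:
                levels.update(solve_bottom_up(child, h, b))
            rate = (DEMAND_RATE[node] if node in DEMAND_RATE
                    else sum(DEMAND_RATE[c] for c in CHILDREN[node]))
            levels[node] = base_stock(rate * LEAD_TIME[node], h, b)
            return levels

        if __name__ == "__main__":
            print(solve_bottom_up("W"))            # base-stock level per location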

    Multi Layer Peeling for Linear Arrangement and Hierarchical Clustering

    We present a new multi-layer peeling technique to cluster points in a metric space. A well-known non-parametric objective is to embed the metric space into a simpler structured metric space such as a line (i.e., Linear Arrangement) or a binary tree (i.e., Hierarchical Clustering). Points which are close in the metric space should be mapped to close points/leaves in the line/tree; similarly, points which are far in the metric space should be far in the line or on the tree. In particular we consider the Maximum Linear Arrangement problem [Refael Hassin and Shlomi Rubinstein, 2001] and the Maximum Hierarchical Clustering problem [Vincent Cohen-Addad et al., 2018] applied to metrics. We design approximation schemes (1-ε approximation for any constant ε > 0) for these objectives. In particular this shows that by considering metrics one may significantly improve former approximations (0.5 for Max Linear Arrangement and 0.74 for Max Hierarchical Clustering). Our main technique, which is called multi-layer peeling, consists of recursively peeling off points which are far from the "core" of the metric space. The recursion ends once the core becomes a sufficiently densely weighted metric space (i.e., the average distance is at least a constant times the diameter) or once it becomes negligible with respect to its inner contribution to the objective. Interestingly, the algorithm in the Linear Arrangement case is much more involved than that in the Hierarchical Clustering case, and uses a significantly more delicate peeling.
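
    A skeleton of the recursive structure described above, for intuition only: the stopping test ("the core is dense: average distance at least a constant fraction of the diameter") follows the abstract, but the peeling rule used here (repeatedly drop the points that are on average farthest from the rest) and the constants are simplified placeholders with no approximation guarantee, not the paper's more delicate procedure.

        # Recursively peel remote points until the remaining core is "dense".
        # Returns the peeled layers from outermost to innermost core.

        import itertools

        def average_distance(points, dist):
            pairs = list(itertools.combinations(points, 2))
            return sum(dist(u, v) for u, v in pairs) / len(pairs)

        def diameter(points, dist):
            return max(dist(u, v) for u, v in itertools.combinations(points, 2))

        def multi_layer_peel(points, dist, density=0.5, peel_fraction=0.2):
            layers, core = [], list(points)
            while len(core) > 2 and average_distance(core, dist) < density * diameter(core, dist):
                # Placeholder rule: peel the peel_fraction of points whose total
                # distance to the current core is largest.
                current = list(core)
                core = sorted(current, key=lambda p: sum(dist(p, q) for q in current))
                k = max(1, int(peel_fraction * len(core)))
                layers.append(core[-k:])              # most remote points form a layer
                core = core[:-k]
            layers.append(core)                       # dense (or tiny) core
            return layers

        if __name__ == "__main__":
            # Hypothetical 1-D metric: a tight cluster plus two distant outliers.
            pts = [0.0, 0.1, 0.2, 0.3, 0.4, 10.0, 20.0]
            print(multi_layer_peel(pts, lambda a, b: abs(a - b)))
            # peels the two outliers as separate layers, leaving the cluster as the core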
