46,085 research outputs found

    Proteus: A Hierarchical Portfolio of Solvers and Transformations

    Full text link
    In recent years, portfolio approaches to solving SAT problems and CSPs have become increasingly common. There are also a number of different encodings for representing CSPs as SAT instances. In this paper, we leverage advances in both SAT and CSP solving to present a novel hierarchical portfolio-based approach to CSP solving, which we call Proteus, that does not rely purely on CSP solvers. Instead, it may decide that it is best to encode a CSP instance into SAT, selecting an appropriate encoding and a corresponding SAT solver. Our experimental evaluation used an instance of Proteus that involved four CSP solvers, three SAT encodings, and six SAT solvers, evaluated on the most challenging problem instances from the CSP solver competitions, involving global and intensional constraints. We show that Proteus achieves significant performance improvements by exploiting alternative viewpoints and solvers for combinatorial problem solving.
    Comment: 11th International Conference on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems. The final publication is available at link.springer.co
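
    A minimal sketch of the hierarchical selection idea described in the abstract, assuming pre-trained selectors and instance-feature extraction; all names and the three-level split are illustrative placeholders, not the authors' implementation:

```python
# Hypothetical sketch of a hierarchical portfolio for CSP solving:
# level 1 decides between solving the CSP directly and encoding it to SAT,
# level 2 picks a concrete CSP-to-SAT encoding, level 3 picks a solver.
# All selectors, solvers, encoders, and the feature extractor are assumed inputs.

def solve_with_proteus_style_portfolio(csp_instance, feature_extractor,
                                        route_selector, csp_selector,
                                        encoding_selector, sat_selector,
                                        encoders, time_limit=1800):
    features = feature_extractor(csp_instance)

    # Level 1: solve as a CSP or translate to SAT?
    if route_selector.predict(features) == "csp":
        csp_solver = csp_selector.predict(features)      # one of the CSP solvers
        return csp_solver.solve(csp_instance, time_limit)

    # Level 2: choose a CSP-to-SAT encoding (e.g. direct, support, order).
    encoding = encoding_selector.predict(features)
    sat_instance = encoders[encoding](csp_instance)

    # Level 3: choose a SAT solver for the chosen encoding.
    sat_solver = sat_selector.predict(features, encoding)
    return sat_solver.solve(sat_instance, time_limit)
```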

    Data-Driven Approach to Simulating Realistic Human Joint Constraints

    Full text link
    Modeling realistic human joint limits is important for applications involving physical human-robot interaction. However, setting appropriate human joint limits is challenging because it is pose-dependent: the range of joint motion varies depending on the positions of other bones. This paper introduces a new technique to accurately simulate human joint limits in physics simulation. We propose to learn an implicit equation to represent the boundary of valid human joint configurations from real human data. The function in the implicit equation is represented by a fully connected neural network whose gradients can be efficiently computed via back-propagation. Using these gradients, we can efficiently enforce realistic human joint limits through constraint forces in a physics engine or as constraints in an optimization problem.
    Comment: To appear at ICRA 2018; 6 pages, 9 figures; for associated video, see https://youtu.be/wzkoE7wCbu
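
    A minimal sketch of the general idea, assuming PyTorch: a fully connected network represents an implicit boundary function C(q), and its gradient with respect to the joint configuration (obtained by back-propagation) drives a penalty-style constraint force. The architecture, sign convention, and penalty form are illustrative assumptions, not the paper's exact setup:

```python
import torch
import torch.nn as nn

# Implicit boundary function C(q): C(q) <= 0 inside the set of valid joint
# configurations, C(q) > 0 outside. A small fully connected network stands
# in for the learned model; the architecture here is illustrative only.
class JointLimitNet(nn.Module):
    def __init__(self, num_joints):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, 1),
        )

    def forward(self, q):
        return self.net(q)

def joint_limit_force(model, q, stiffness=10.0):
    """Penalty-style constraint force pushing q back toward the valid region.

    The gradient of C with respect to q comes from back-propagation, as in
    the abstract; the penalty form and stiffness value are assumptions.
    """
    q = q.detach().requires_grad_(True)
    c = model(q)                                      # boundary value, shape (..., 1)
    (grad_q,) = torch.autograd.grad(c.sum(), q)       # dC/dq via back-propagation
    violation = torch.clamp(c.detach(), min=0.0)      # nonzero only outside the boundary
    return -stiffness * violation * grad_q
```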

    Mapping constrained optimization problems to quantum annealing with application to fault diagnosis

    Get PDF
    Current quantum annealing (QA) hardware suffers from practical limitations such as finite temperature, sparse connectivity, small qubit numbers, and control error. We propose new algorithms for mapping Boolean constraint satisfaction problems (CSPs) onto QA hardware that mitigate these limitations. In particular, we develop a new embedding algorithm for mapping a CSP onto a hardware Ising model with a fixed sparse set of interactions, and propose two new decomposition algorithms for solving problems too large to map directly into hardware. The mapping technique is locally structured: hardware-compatible Ising models are generated for each problem constraint, and variables appearing in different constraints are chained together using ferromagnetic couplings. In contrast, global embedding techniques generate a hardware-independent Ising model for all the constraints, and then use a minor-embedding algorithm to generate a hardware-compatible Ising model. We give an example of a class of CSPs for which the scaling performance of D-Wave's QA hardware using the local mapping technique is significantly better than global embedding. We validate the approach by applying D-Wave's hardware to circuit-based fault diagnosis. For circuits that embed directly, we find that the hardware is typically able to find all solutions from a min-fault diagnosis set of size N using 1000N samples, at an annealing rate that is 25 times faster than a leading SAT-based sampling method. Further, we apply the decomposition algorithms to find min-cardinality faults for circuits that are up to 5 times larger than can be solved directly on current hardware.
    Comment: 22 pages, 4 figure
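
    A minimal sketch of the locally structured mapping idea: each constraint contributes its own small Ising model over its assigned qubits, and duplicated copies of a logical variable are tied together with ferromagnetic chain couplings so they agree in low-energy states. The data structures below are illustrative and not tied to any vendor API:

```python
from collections import defaultdict

def combine_constraint_ising_models(constraint_models, chains, chain_strength=2.0):
    """Combine per-constraint Ising models into one hardware-sized Ising model.

    constraint_models: iterable of (h, J) dicts, one pair per constraint,
        each defined over the hardware qubits assigned to that constraint.
    chains: iterable of qubit lists, one list per logical variable whose
        copies must be forced to agree.
    The dict-based representation and chain_strength value are assumptions.
    """
    h = defaultdict(float)   # linear biases per qubit
    J = defaultdict(float)   # quadratic couplings per qubit pair

    for h_c, J_c in constraint_models:
        for qubit, bias in h_c.items():
            h[qubit] += bias
        for (u, v), coupling in J_c.items():
            J[(u, v)] += coupling

    # Ferromagnetic chains: a negative coupling rewards equal spins, so all
    # copies of a logical variable take the same value at low energy.
    for chain in chains:
        for u, v in zip(chain, chain[1:]):
            J[(u, v)] += -chain_strength

    return dict(h), dict(J)
```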

    Stochastic Nonlinear Model Predictive Control with Efficient Sample Approximation of Chance Constraints

    Full text link
    This paper presents a stochastic model predictive control approach for nonlinear systems subject to time-invariant probabilistic uncertainties in model parameters and initial conditions. The stochastic optimal control problem entails a cost function in terms of expected values and higher moments of the states, and chance constraints that ensure probabilistic constraint satisfaction. The generalized polynomial chaos framework is used to propagate the time-invariant stochastic uncertainties through the nonlinear system dynamics, and to efficiently sample from the probability densities of the states to approximate the satisfaction probability of the chance constraints. To increase computational efficiency by avoiding excessive sampling, a statistical analysis is proposed to systematically determine a priori the least conservative constraint tightening required at a given sample size to guarantee a desired feasibility probability of the sample-approximated chance-constrained optimization problem. In addition, a method is presented for sample-based approximation of the analytic gradients of the chance constraints, which significantly increases the optimization efficiency. The proposed stochastic nonlinear model predictive control approach is applicable to a broad class of nonlinear systems, with the sufficient condition that each term is analytic with respect to the states and separable with respect to the inputs, states, and parameters. The closed-loop performance of the proposed approach is evaluated using the Williams-Otto reactor with seven states and ten uncertain parameters and initial conditions. The results demonstrate the efficiency of the approach for real-time stochastic model predictive control and its capability to systematically account for probabilistic uncertainties, in contrast to nonlinear model predictive control approaches.
    Comment: Submitted to Journal of Process Contro
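
    A minimal sketch of the sample-based chance-constraint check with an explicit tightening back-off; the toy constraint, Gaussian parameter samples, and fixed tightening value are placeholder assumptions (in the paper the tightening comes from the statistical analysis and the samples from the polynomial-chaos surrogate of the states):

```python
import numpy as np

def sample_approx_chance_constraint(g, u, theta_samples, alpha=0.05, tightening=0.0):
    """Sample approximation of the chance constraint Pr[g(u, theta) <= 0] >= 1 - alpha.

    g: constraint function g(u, theta); u: decision input; theta_samples:
    samples of the uncertain parameters. `tightening` is the back-off that
    makes the sample approximation conservative enough; here it is simply
    passed in rather than derived.

    Returns (satisfied, empirical_violation_probability).
    """
    values = np.array([g(u, theta) for theta in theta_samples])
    # Tightened sample constraint: require g <= -tightening on every sample
    # counted as satisfied; the empirical violation rate must stay below alpha.
    violation_prob = float(np.mean(values > -tightening))
    return violation_prob <= alpha, violation_prob

# Illustrative usage with a toy constraint and Gaussian parameter samples.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    thetas = rng.normal(loc=1.0, scale=0.1, size=500)
    g = lambda u, theta: theta * u - 1.2          # toy constraint g(u, theta) <= 0
    ok, p_viol = sample_approx_chance_constraint(g, u=1.0, theta_samples=thetas,
                                                 alpha=0.05, tightening=0.02)
    print(ok, p_viol)
```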

    Improper Learning by Refuting

    Get PDF
    The sample complexity of learning a Boolean-valued function class is precisely characterized by its Rademacher complexity. This has little bearing, however, on the sample complexity of efficient agnostic learning. We introduce refutation complexity, a natural computational analog of the Rademacher complexity of a Boolean concept class, and show that it exactly characterizes the sample complexity of efficient agnostic learning. Informally, the refutation complexity of a class C is the minimum number of example-label pairs required to efficiently distinguish between the case that the labels correlate with the evaluation of some member of C (structure) and the case where the labels are i.i.d. Rademacher random variables (noise). The easy direction of this relationship was implicitly used in the recent framework for improper PAC learning lower bounds of Daniely and co-authors via connections to the hardness of refuting random constraint satisfaction problems. Our work can be seen as making the relationship between agnostic learning and refutation, implicit in their work, into an explicit equivalence. In recent, independent work, Salil Vadhan discovered a similar relationship between refutation and PAC learning in the realizable (i.e. noiseless) case.
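
    A minimal sketch of the distinguishing task that refutation complexity formalizes, using empirical correlation against a small, explicitly enumerated hypothesis class as the test statistic; the enumeration and the fixed threshold are simplifications for illustration (an efficient refutation algorithm need not enumerate the class):

```python
import numpy as np

def refutation_test(examples, labels, hypotheses, threshold):
    """Toy distinguisher between 'structure' and 'noise'.

    examples: array of inputs; labels: +/-1 array; hypotheses: finite list of
    callables mapping the examples array to +/-1 predictions (an illustrative
    stand-in for the concept class C); threshold: correlation level above
    which we declare structure.

    Returns 'structure' if some hypothesis correlates with the labels
    noticeably better than chance, else 'noise' (labels behave like i.i.d.
    Rademacher random variables).
    """
    best_corr = max(abs(float(np.mean(h(examples) * labels))) for h in hypotheses)
    return "structure" if best_corr >= threshold else "noise"
```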

    Models for robust resource allocation in project scheduling.

    Get PDF
    The vast majority of resource-constrained project scheduling efforts assumes complete information about the scheduling problem to be solved and a static, deterministic environment within which the pre-computed baseline schedule will be executed. In reality, however, project activities are subject to considerable uncertainty, which generally leads to numerous schedule disruptions. In this paper, we present a resource allocation model that protects the makespan of a given baseline schedule against activity duration variability. A branch-and-bound algorithm is developed that solves the proposed robust resource allocation problem in exact and approximate formulations. The procedure relies on constraint propagation during its search. We report on computational results obtained on a set of benchmark problems.
    Keywords: Model; Resource allocation; Scheduling
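
    A generic best-first branch-and-bound skeleton with a constraint-propagation step, in the spirit of the procedure described above; this is a textbook template with all problem-specific pieces passed in as callables, not the authors' exact algorithm:

```python
import heapq

def branch_and_bound(root, lower_bound, branch, propagate, is_complete, cost):
    """Best-first branch-and-bound with constraint propagation at every node.

    lower_bound(node): optimistic bound on any completion of `node`.
    branch(node): child nodes; propagate(node): pruned/tightened node or None
    if propagation detects a dead end; is_complete/cost: leaf test and value.
    """
    best_value, best_solution = float("inf"), None
    frontier = [(lower_bound(root), 0, root)]
    counter = 1                               # tie-breaker so nodes are never compared
    while frontier:
        bound, _, node = heapq.heappop(frontier)
        if bound >= best_value:
            continue                          # pruned by the incumbent solution
        node = propagate(node)                # constraint propagation step
        if node is None:
            continue                          # infeasible after propagation
        if is_complete(node):
            value = cost(node)
            if value < best_value:
                best_value, best_solution = value, node
            continue
        for child in branch(node):
            child_bound = lower_bound(child)
            if child_bound < best_value:
                heapq.heappush(frontier, (child_bound, counter, child))
                counter += 1
    return best_value, best_solution
```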