    Design of Sequence Family Subsets Using a Branch and Bound Technique

    Dynamic Programming for Graphs on Surfaces

    We provide a framework for the design and analysis of dynamic programming algorithms for surface-embedded graphs on n vertices with branchwidth at most k. Our technique applies to general families of problems where standard dynamic programming runs in 2^{O(k log k)} n steps. Our approach combines tools from topological graph theory and analytic combinatorics. In particular, we introduce a new type of branch decomposition called a "surface cut decomposition", which generalizes the sphere cut decompositions of planar graphs introduced by Seymour and Thomas and has nice combinatorial properties: the number of partial solutions that can be arranged on a surface cut decomposition is upper-bounded by the number of non-crossing partitions on surfaces with boundary. It follows that partial solutions can be represented by a single-exponential (in the branchwidth k) number of configurations. This proves that, when applied to surface cut decompositions, dynamic programming runs in 2^{O(k)} n steps. In this way, we considerably extend the class of problems that can be solved in running times with a single-exponential dependence on branchwidth, and we unify and improve most previous results in this direction. (28 pages, 3 figures)
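
    To make the combinatorial gap concrete: standard dynamic programming indexes partial solutions by arbitrary partitions of the (at most k) boundary vertices, counted by the Bell numbers, which grow like 2^{Theta(k log k)}, while non-crossing partitions are counted by the Catalan numbers, which grow like 4^k. The Python sketch below tabulates both counts; it illustrates only the planar (sphere cut) case, not the paper's non-crossing partitions on surfaces with boundary.

        from math import comb

        def bell(n):
            # Bell number B(n): all partitions of an n-set, via the Bell
            # triangle; grows like 2^{Theta(n log n)}
            row = [1]
            for _ in range(n):
                nxt = [row[-1]]
                for v in row:
                    nxt.append(nxt[-1] + v)
                row = nxt
            return row[0]

        def catalan(n):
            # Catalan number C(n): non-crossing partitions of n points on a
            # circle; grows like 4^n, i.e. single-exponential
            return comb(2 * n, n) // (n + 1)

        for k in (5, 10, 15, 20):
            print(f"k={k:2d}  all partitions: {bell(k):>16d}  non-crossing: {catalan(k)}")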

    Branch-and-lift algorithm for deterministic global optimization in nonlinear optimal control

    This paper presents a branch-and-lift algorithm for solving optimal control problems with smooth nonlinear dynamics and potentially nonconvex objective and constraint functionals to guaranteed global optimality. The algorithm features a direct sequential method and builds upon a generic, spatial branch-and-bound algorithm. A new operation, called lifting, is introduced, which refines the control parameterization via a Gram-Schmidt orthogonalization process while simultaneously eliminating control subregions that are either infeasible or that provably cannot contain any global optimum. Conditions are given under which the image of the control parameterization error in the state space contracts exponentially as the parameterization order is increased, thereby making the lifting operation efficient. A computational technique based on ellipsoidal calculus that satisfies these conditions is also developed. The practical applicability of branch-and-lift is illustrated with a numerical example.
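
    The lifting operation is specific to the paper, but the underlying spatial branch-and-bound skeleton can be sketched in a few lines. The toy below globally minimizes a nonconvex one-dimensional function using a Lipschitz lower bound in place of the paper's ellipsoidal state bounds; the objective, the Lipschitz constant, and the tolerance are made-up stand-ins, not the paper's method.

        import heapq, math

        def f(x):
            return x * math.sin(x)   # made-up nonconvex objective on [0, 10]

        L = 11.0   # Lipschitz constant on [0, 10]: |f'(x)| = |sin x + x cos x| <= 1 + 10

        def lipschitz_bnb(a, b, tol=1e-4):
            # best-first spatial branch-and-bound: on any [lo, hi] with midpoint c,
            # f(c) - L * (hi - lo) / 2 is a valid lower bound for f on [lo, hi]
            m = (a + b) / 2
            best_x, best_val = m, f(m)                        # incumbent
            heap = [(f(m) - L * (b - a) / 2, a, b, m)]
            while heap:
                lb, lo0, hi0, mid = heapq.heappop(heap)
                if lb >= best_val - tol:
                    continue                                  # fathom: cannot beat incumbent
                for lo, hi in ((lo0, mid), (mid, hi0)):       # branch at the midpoint
                    c = (lo + hi) / 2
                    fc = f(c)
                    if fc < best_val:
                        best_x, best_val = c, fc              # update incumbent
                    child_lb = fc - L * (hi - lo) / 2
                    if child_lb < best_val - tol:
                        heapq.heappush(heap, (child_lb, lo, hi, c))
            return best_x, best_val

        print(lipschitz_bnb(0.0, 10.0))   # global minimum near x ~ 4.91, f ~ -4.81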

    A survey of the state of the art and focused research in range systems, task 2

    Many communication, control, and information-processing subsystems are modeled as linear systems incorporating tapped delay lines (TDL). Optimizing such subsystems results in full-precision multiplications in the TDL. To reduce complexity and cost in a microprocessor implementation, these multiplications can be replaced by single-shift instructions, which are equivalent to multiplications by powers of two. Because naively rounding the infinite-precision TDL coefficients to the nearest powers of two usually yields quite poor system performance, the optimum powers-of-two coefficient solution was considered instead. Detailed explanations are given of the use of branch-and-bound algorithms for finding the optimum powers-of-two solutions. The methodology is demonstrated on the design of a linear data equalizer implemented in assembly language on an 8080 microprocessor with a 12-bit A/D converter. This simple microprocessor implementation with optimized TDL coefficients achieves system performance comparable to optimum linear equalization with full-precision multiplications at an input data rate of 300 baud. The philosophy demonstrated in this implementation is fully applicable to many other microprocessor-controlled information processing systems.
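
    As a toy illustration of why jointly optimized powers-of-two coefficients beat naive per-tap rounding (the report's branch-and-bound prunes exactly this kind of search at realistic tap counts and word lengths), the sketch below compares both on a made-up five-tap filter under a band-limited error criterion; all names and data are hypothetical.

        import cmath, math
        from itertools import product

        h = [0.93, -0.31, 0.17, -0.09, 0.05]   # hypothetical full-precision TDL taps (nonzero)

        def pow2_candidates(c):
            # the two signed powers of two bracketing |c|; assumes c != 0
            s, e = (1.0 if c >= 0 else -1.0), math.floor(math.log2(abs(c)))
            return (s * 2.0 ** e, s * 2.0 ** (e + 1))

        grid = [i * math.pi / 16 for i in range(9)]   # frequencies in the band of interest

        def freq_resp(taps, w):
            return sum(t * cmath.exp(-1j * w * k) for k, t in enumerate(taps))

        target = [freq_resp(h, w) for w in grid]

        def band_error(taps):
            # squared deviation from the full-precision response over the band
            return sum(abs(freq_resp(taps, w) - t) ** 2 for w, t in zip(grid, target))

        # naive: round each tap independently to its nearest power of two
        naive = [min(pow2_candidates(c), key=lambda p: abs(p - c)) for c in h]

        # joint: search all candidate combinations (2^5 here; branch-and-bound
        # prunes this enumeration when it is too large to do exhaustively)
        joint = min(product(*map(pow2_candidates, h)), key=band_error)

        print("naive rounding error:", band_error(naive))   # joint error is never larger
        print("joint search error:  ", band_error(joint))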

    Optimal modeling for complex system design

    The article begins with a brief introduction to the theory describing optimal data compression systems and their performance. A brief outline is then given of a representative algorithm that employs these lessons for optimal data compression system design. The implications of rate-distortion theory for practical data compression system design are then described, followed by a description of the tensions between theoretical optimality and system practicality and a discussion of common tools used in current algorithms to resolve these tensions. Next, the generalization of rate-distortion principles to the design of optimal collections of models is presented. The discussion focuses initially on data compression systems but later widens to describe how rate-distortion principles generalize to model design for a wide variety of modeling applications. The article ends with a discussion of the performance benefits achieved by using multiple-model design algorithms.

    An optimization framework for solving capacitated multi-level lot-sizing problems with backlogging

    This paper proposes two new mixed-integer programming models for capacitated multi-level lot-sizing problems with backlogging, whose linear programming relaxations provide good lower bounds on the optimal solution value. We show that both of these strong formulations yield the same lower bounds. In addition to these theoretical results, we propose a new, effective optimization framework that achieves high-quality solutions in reasonable computational time. Computational results show that the proposed optimization framework is superior to other well-known approaches on several important performance dimensions.
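
    The paper's strong formulations are not reproduced in the abstract, but the base model class is standard. Below is a minimal single-level, single-item sketch of a capacitated lot-sizing model with backlogging in PuLP; the data, costs, and the simple x[t] <= C*y[t] setup link are illustrative stand-ins, not the paper's formulations.

        from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

        d = [30, 40, 50, 20]                   # made-up demand per period
        C = 70                                 # production capacity per period
        setup, hold, back = 100.0, 1.0, 5.0    # made-up setup/holding/backlog costs
        T = range(len(d))

        prob = LpProblem("lot_sizing_with_backlogging", LpMinimize)
        x = LpVariable.dicts("x", T, lowBound=0)     # production quantity
        I = LpVariable.dicts("I", T, lowBound=0)     # end-of-period inventory
        B = LpVariable.dicts("B", T, lowBound=0)     # end-of-period backlog
        y = LpVariable.dicts("y", T, cat=LpBinary)   # setup indicator

        prob += lpSum(setup * y[t] + hold * I[t] + back * B[t] for t in T)
        for t in T:
            prev = (I[t - 1] - B[t - 1]) if t > 0 else 0
            prob += prev + x[t] - d[t] == I[t] - B[t]   # inventory/backlog balance
            prob += x[t] <= C * y[t]                    # capacity and setup link
        prob += B[len(d) - 1] == 0                      # all demand served by horizon end

        prob.solve()
        print([x[t].value() for t in T], prob.objective.value())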

    Learning to Optimize Computational Resources: Frugal Training with Generalization Guarantees

    Algorithms typically come with tunable parameters that have a considerable impact on the computational resources they consume. Too often, practitioners must hand-tune the parameters, a tedious and error-prone task. A recent line of research provides algorithms that return nearly optimal parameters from within a finite set. These algorithms can be used when the parameter space is infinite by providing a random sample of parameters as input. This data-independent discretization, however, might miss pockets of nearly optimal parameters: prior research has presented scenarios where the only viable parameters lie within an arbitrarily small region. We provide an algorithm that learns a finite set of promising parameters from within an infinite set. Our algorithm can help compile a configuration portfolio, or it can be used to select the input to a configuration algorithm for finite parameter spaces. Our approach applies to any configuration problem that satisfies a simple yet ubiquitous structure: the algorithm's performance is a piecewise-constant function of its parameters. Prior research has exhibited this structure in domains ranging from integer programming to clustering.
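
    A minimal sketch of the piecewise-constant structure the abstract exploits: if each instance's performance is a step function of the parameter, the average over a training sample is also a step function whose breakpoints are the union of the instances' breakpoints, so evaluating that finite candidate set finds an empirically optimal parameter that a data-independent random discretization could miss. Everything below (the synthetic instances, names, ranges) is made up for illustration; it is not the paper's algorithm or its generalization analysis.

        import random

        def make_instance(rng):
            # synthetic instance: runtime is a step function of the parameter p in [0, 1]
            cuts = sorted(rng.uniform(0, 1) for _ in range(3))
            vals = [rng.uniform(1, 10) for _ in range(len(cuts) + 1)]
            return cuts, (lambda p, c=cuts, v=vals: v[sum(p >= x for x in c)])

        rng = random.Random(0)
        instances = [make_instance(rng) for _ in range(50)]

        def avg_runtime(p):
            return sum(run(p) for _, run in instances) / len(instances)

        # the average runtime is constant between consecutive breakpoints, so the
        # union of breakpoints (plus 0.0 for the leftmost cell) is a finite set
        # guaranteed to contain an empirically optimal parameter
        candidates = sorted({0.0} | {c for cuts, _ in instances for c in cuts})
        best = min(candidates, key=avg_runtime)
        print(f"best parameter {best:.4f} with average runtime {avg_runtime(best):.3f}")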