    The composition of semi-finished inventories at a solid board plant

    A solid board factory produces rectangular sheets of cardboard in two different formats, namely large formats and small formats. The production process consists of two stages separated by an inventory point. In the first stage a cardboard machine produces the large formats. In the second stage part of the large formats is cut into small formats by a separate rotary cut machine. Due to very large setup times, technical restrictions, and trim losses, the cardboard machine is not able to produce these small formats directly. The company follows two policies to satisfy customer demands for rotary cut format orders. Under the first policy, for each customer order an ‘optimal’ large format (with respect to trim loss) is determined and produced on the cardboard machine. Under the second policy, a stock of a restricted number of large formats is determined in such a way that the expected trim loss is minimal; each rotary cut format order then uses the most suitable standard large format from the stock. Currently, the dimensions of the standard large formats in the semi-finished inventory are chosen intuitively, with an emphasis on minimizing trim losses. From the trim loss perspective it is most efficient to produce each rotary cut format from a specific large format. On the other hand, if there is only one large format per caliper, the variety is minimal, but the trim loss may be unacceptably high. On average, the first policy results in a lower trim loss. In order to make efficient use of the two machines and to meet customers' due dates, the company applies both policies. In this paper we concentrate on the second policy, taking into account the various objectives and restrictions of the company. The company's goal is to limit the number of different types of large formats while keeping trim loss acceptable. The problem is formulated as a minimum clique covering problem with alternatives (MCCA), which is presumed to be NP-hard. We solve the problem with an appropriate heuristic, which is built into a decision support system. Based on a set of real data, the actual composition of semi-finished inventories is determined. The paper concludes with computational experiments.
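
    The sketch below illustrates the flavor of such a stock-selection heuristic: greedily pick a limited number of large formats so that every rotary cut order fits into some stocked format with small total trim loss. All names (trim_loss, select_stock, candidate_formats) are illustrative assumptions; this is not the paper's MCCA heuristic.

```python
# Minimal sketch of a greedy stock-selection heuristic in the spirit of the
# composition problem described above. Names and data are illustrative only.

def trim_loss(large, small):
    """Waste area when a small (rotary cut) format is cut from a large format.
    Returns None when the small format does not fit (rotation allowed)."""
    lw, lh = large
    sw, sh = small
    if (sw <= lw and sh <= lh) or (sh <= lw and sw <= lh):
        return lw * lh - sw * sh
    return None

def select_stock(candidate_formats, orders, max_formats):
    """Greedily pick at most max_formats large formats so that every order can
    be cut from some stocked format and the total trim loss is (heuristically) small."""
    def total_loss(stock):
        loss = 0
        for order in orders:
            fits = [trim_loss(f, order) for f in stock]
            fits = [x for x in fits if x is not None]
            if not fits:
                return float("inf")          # some order not coverable yet
            loss += min(fits)
        return loss

    chosen = []
    while len(chosen) < max_formats:
        best, best_loss = None, total_loss(chosen)
        for f in candidate_formats:
            if f in chosen:
                continue
            cand_loss = total_loss(chosen + [f])
            if cand_loss < best_loss:
                best, best_loss = f, cand_loss
        if best is None:
            break                            # no further improvement possible
        chosen.append(best)
    return chosen

if __name__ == "__main__":
    candidates = [(120, 160), (100, 140), (90, 120)]   # hypothetical large formats
    orders = [(60, 80), (45, 60), (80, 100)]           # hypothetical rotary cut formats
    print(select_stock(candidates, orders, max_formats=2))
```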

    Efficient Subgraph Similarity Search on Large Probabilistic Graph Databases

    Many studies have sought efficient solutions for subgraph similarity search over certain (deterministic) graphs, owing to its wide application in fields including bioinformatics, social network analysis, and Resource Description Framework (RDF) data management. All these works assume that the underlying data are certain. In reality, however, graphs are often noisy and uncertain due to factors such as errors in data extraction, inconsistencies in data integration, and privacy-preservation requirements. Therefore, in this paper we study subgraph similarity search on large probabilistic graph databases. Unlike previous works that assume the edges of an uncertain graph are independent of each other, we study uncertain graphs whose edge occurrences are correlated. We formally prove that subgraph similarity search over probabilistic graphs is #P-complete; thus, we employ a filter-and-verify framework to speed up the search. In the filtering phase, we develop tight lower and upper bounds of the subgraph similarity probability based on a probabilistic matrix index, PMI. PMI is composed of discriminative subgraph features associated with tight lower and upper bounds of the subgraph isomorphism probability. Based on PMI, we can prune a large number of probabilistic graphs and maximize the pruning capability. During the verification phase, we develop an efficient sampling algorithm to validate the remaining candidates. The efficiency of our proposed solutions has been verified through extensive experiments. Comment: VLDB201
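
    As a rough illustration of the filter-and-verify pattern described above, the sketch below prunes database graphs whose upper bound falls below the probability threshold, accepts those whose lower bound already meets it, and verifies the rest by sampling. The bound functions, the sample_world() hook, and similar_under() are assumed placeholders, not the paper's PMI index or sampling algorithm.

```python
# Illustrative filter-and-verify loop plus a simple Monte Carlo verifier.
import random

def filter_and_verify(query, database, threshold, lower_bound, upper_bound, verify):
    """Return graphs whose subgraph-similarity probability is >= threshold.

    lower_bound(query, g) / upper_bound(query, g) are assumed index-derived
    bounds; verify(query, g) estimates the exact probability."""
    answers, candidates = [], []
    for g in database:
        if lower_bound(query, g) >= threshold:
            answers.append(g)            # certain answer, no verification needed
        elif upper_bound(query, g) >= threshold:
            candidates.append(g)         # undecided, verify later
        # otherwise the graph is pruned
    for g in candidates:
        if verify(query, g) >= threshold:
            answers.append(g)
    return answers

def monte_carlo_verify(query, graph, similar_under, samples=1000, seed=0):
    """Estimate Pr[a sampled possible world is subgraph-similar to query].
    graph.sample_world(rng) is an assumed hook that draws one possible world
    (it would have to respect the edge correlations)."""
    rng = random.Random(seed)
    hits = sum(bool(similar_under(query, graph.sample_world(rng)))
               for _ in range(samples))
    return hits / samples
```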

    Diversifying Top-K Results

    Top-k query processing finds a list of k results that have the largest scores with respect to a user-given query, under the assumption that all k results are independent of each other. In practice, some of the returned top-k results can be very similar to each other, and as a result some of them are redundant. In the literature, diversified top-k search has been studied to return k results that take both score and diversity into consideration. Most existing solutions for diversified top-k search assume that the scores of all search results are given, and some works solve the diversity problem only for a specific application and can hardly be extended to general cases. In this paper, we study the diversified top-k search problem. We define a general diversified top-k search problem that only considers the similarity of the search results themselves. We propose a framework such that most existing solutions for top-k query processing can be extended easily to handle diversified top-k search, by simply supplying three new functions: a sufficient stop condition sufficient(), a necessary stop condition necessary(), and an algorithm for diversified top-k search on the current set of generated results, div-search-current(). We propose three new algorithms, namely div-astar, div-dp, and div-cut, to solve the div-search-current() problem. div-astar is an A*-based algorithm; div-dp decomposes the results into components that are searched using div-astar independently and combined using dynamic programming; div-cut further decomposes the current set of generated results using cut points and combines the results using sophisticated operations. We conducted extensive performance studies using two real datasets, enwiki and reuters. Our div-cut algorithm finds the optimal solution for the diversified top-k search problem in seconds, even for k as large as 2,000. Comment: VLDB201
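
    The sketch below shows how such a plug-in framework could wrap an existing top-k engine: results arrive in non-increasing score order, necessary() decides when to re-run the diversified search on the buffered results, and sufficient() decides when no unseen result can change the answer. The stand-in functions (a greedy similarity filter and the assumed signatures) are illustrative, not the paper's div-astar, div-dp, or div-cut algorithms.

```python
# Sketch of the plug-in framework described above; signatures are assumptions.

def diversified_top_k(result_stream, k, sufficient, necessary, div_search_current):
    """result_stream yields (item, score) pairs in non-increasing score order;
    the three plug-in functions play the roles sketched in the abstract."""
    buffer, best = [], []
    for item, score in result_stream:
        buffer.append((item, score))
        if necessary(buffer, best, k, score):            # worth re-searching the buffer?
            best = div_search_current(buffer, k)
        if best and sufficient(buffer, best, k, score):  # unseen results cannot improve best
            break
    return best if best else div_search_current(buffer, k)

def make_div_search(similar):
    """Greedy stand-in for div-search-current(): take results in score order,
    skipping any result similar to one already chosen."""
    def div_search_current(buffer, k):
        chosen = []
        for item, score in sorted(buffer, key=lambda x: -x[1]):
            if all(not similar(item, c) for c, _ in chosen):
                chosen.append((item, score))
            if len(chosen) == k:
                break
        return chosen
    return div_search_current
```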

    Treewidth: computational experiments

    Independent set problems and odd-hole-preserving graph reductions

    Methods are described that implement a branch-and-price decomposition approach to solve the maximum weight independent set (MWIS) problem. The approach was first described by Warrier et al., and herein our contributions to this research are presented. The decomposition calls for the exact solution of the MWIS problem on induced subgraphs of the original graph. The focus of our contribution is the use of chordal graphs as the induced subgraphs in this solution framework. Three combinatorial branch-and-bound solvers for the MWIS problem are described. All use weighted clique covers to generate upper bounds, and all branch according to the method of Balas and Yu. One extends and speeds up the method of Babel. A second modifies a method of Balas and Xue to produce clique covers that share structural similarities with those produced by Babel. Each of these improves on its predecessor. A third solver is a hybrid of the other two and yields the best known results on some graphs. The related matter of deciding the perfection or imperfection of a graph is also addressed. With the advent of the Strong Perfect Graph Theorem, this problem reduces to the detection of odd holes and anti-holes or the proof of their absence. Techniques are provided that, for a given graph, find in polynomial time subgraphs that contain odd holes whenever they are present in the given graph. These techniques, together with some basic structural results on such subgraphs, narrow the search for odd holes. Results are reported for the performance of the three new solvers for the MWIS problem, demonstrating that the third, hybrid solver outperforms its clique-cover-based ancestors and, in some cases, the best current open-source solver. The techniques for narrowing the search for odd holes are shown to provide a polynomial-time reduction in the size of the input required to decide the perfection or imperfection of a graph.
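
    The sketch below shows the kind of weighted clique cover bound these solvers rely on: an independent set meets each clique in at most one vertex, and every vertex's weight is fully charged to the cliques containing it, so the total value of the cover bounds the maximum weight of an independent set from above. The greedy construction here is a simple illustrative stand-in, not Babel's method or the Balas-Xue procedure.

```python
# Illustrative weighted clique cover upper bound for MWIS.

def greedy_weighted_clique_cover(adj, weight):
    """adj[v] is the set of neighbours of v (no self-loops); weight[v] > 0.
    Returns a list of (clique, value) pairs that covers all vertex weight."""
    residual = dict(weight)                  # weight not yet charged to a clique
    cover = []
    while any(w > 0 for w in residual.values()):
        # seed with the heaviest uncovered vertex, then grow a clique greedily
        v = max((u for u in residual if residual[u] > 0), key=lambda u: residual[u])
        clique = {v}
        for u in sorted(adj[v], key=lambda u: -residual[u]):
            if residual[u] > 0 and clique <= adj[u]:
                clique.add(u)
        value = min(residual[u] for u in clique)
        for u in clique:
            residual[u] -= value             # at least one vertex drops to zero
        cover.append((clique, value))
    return cover

def clique_cover_bound(adj, weight):
    """Upper bound on the MWIS value implied by the greedy cover."""
    return sum(value for _, value in greedy_weighted_clique_cover(adj, weight))
```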

    A branch, price, and cut approach to solving the maximum weighted independent set problem

    The maximum weight independent set problem (MWISP) is one of the most well-known and well-studied NP-hard problems in the field of combinatorial optimization. In the first part of the dissertation, I explore efficient branch-and-price (B&P) approaches to solve MWISP exactly. B&P is a useful integer-programming tool for solving NP-hard optimization problems. Specifically, I look at vertex- and edge-disjoint decompositions of the underlying graph. MWISPs on the resulting subgraphs are, on average, less challenging to solve. I use the B&P framework to solve MWISP on the original graph G, using these specially constructed subproblems to generate columns. I demonstrate that the vertex-disjoint partitioning scheme gives an effective approach for relatively sparse graphs. I also show that the edge-disjoint approach is less effective than the vertex-disjoint scheme because the associated Dantzig-Wolfe decomposition (DWD) reformulation of the edge-disjoint scheme entails a slow rate of convergence. In the second part of the dissertation, I address convergence properties associated with DWD. I discuss prevalent methods for improving the rate of convergence of DWD, implement specific methods in the context of the edge-disjoint B&P scheme, and show that these methods improve the rate of convergence. In the third part of the dissertation, I focus on identifying new cut-generation methods within the B&P framework; such methods have not been explored in the literature. I present two new methodologies for generating generic cutting planes within the B&P framework. These techniques are not limited to MWISP and can be used in general applications of B&P. The first methodology generates cuts by identifying faces (facets) of subproblem polytopes and lifting the associated inequalities; the second computes lift-and-project (L&P) cuts within B&P. I successfully demonstrate the feasibility of both approaches and present preliminary computational tests of each.
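
    The skeleton below outlines the column-generation loop at the heart of such a branch-and-price node: solve the restricted master LP, price out a column on each subgraph using the current duals, and stop when no column has positive reduced cost. The callables solve_restricted_master and solve_pricing are assumed placeholders (an LP solver over the current columns and an exact MWIS oracle with dual-adjusted weights), not the dissertation's implementation.

```python
# Generic column-generation skeleton for a branch-and-price node.

def column_generation(subgraphs, solve_restricted_master, solve_pricing,
                      initial_columns, tol=1e-6):
    """Each column encodes an independent set of one subgraph. Iterate until no
    subproblem can supply a column with positive reduced cost."""
    columns = list(initial_columns)
    while True:
        master_value, duals = solve_restricted_master(columns)
        new_columns = []
        for g in subgraphs:
            column, reduced_cost = solve_pricing(g, duals)
            if reduced_cost > tol:
                new_columns.append(column)   # improving column found by pricing
        if not new_columns:
            return columns, master_value     # LP relaxation solved at this node
        columns.extend(new_columns)
```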

    Low-Diameter Clusters in Network Analysis

    In this dissertation, we introduce several novel tools for cluster-based analysis of complex systems and design solution approaches to solve the corresponding optimization problems. Cluster-based analysis is a subfield of network analysis which utilizes a graph representation of a system to yield meaningful insight into the system structure and functions. Clusters with low diameter are commonly used to characterize cohesive groups in applications for which easy reachability between group members is of high importance. Low-diameter clusters can be mathematically formalized using a clique and an s-club (with relatively small values of s), two concepts from graph theory. A clique is a subset of vertices adjacent to each other, and an s-club is a subset of vertices inducing a subgraph with a diameter of at most s. A clique is a special case of an s-club with s = 1 and hence has the shortest possible diameter. Two topics of this dissertation focus on graphs prone to uncertainty and disruptions, and introduce several extensions of low-diameter models. First, we introduce a robust clique model in graphs where edges may fail with a certain probability and robustness is enforced using appropriate risk measures. With regard to its ability to capture underlying system uncertainties, finding the largest robust clique is a better alternative to the problem of finding the largest clique. Moreover, it is also a hard combinatorial optimization problem, requiring effective solution techniques. To this end, we design several heuristic approaches for the detection of large robust cliques and compare their performance. Next, we consider graphs for which uncertainty is not explicitly defined, studying connectivity properties of 2-clubs. We notice that a 2-club can be very vulnerable to disruptions, so we enhance it by enforcing additional requirements on connectivity and introduce a biconnected 2-club concept. Additionally, we look at the weak 2-club counterpart, which we call a fragile 2-club (defined as a 2-club that is not biconnected). The size of the largest biconnected 2-club in a graph can help measure overall system reachability and connectivity, whereas the largest fragile 2-club can identify vulnerable parts of the graph. We show that the problem of finding the largest fragile 2-club is polynomially solvable, whereas the problem of finding the largest biconnected 2-club is NP-hard. Furthermore, for the former we design a polynomial-time algorithm, and for the latter we design combinatorial branch-and-bound and branch-and-cut algorithms. Lastly, we once again consider the s-club concept but shift our focus from finding the largest s-club in a graph to partitioning the graph into the smallest number of non-overlapping s-clubs. This problem can be applied not only to derive communities in the graph, but also to reduce the size of the graph and derive its hierarchical structure. The minimum s-club partitioning problem is a hard combinatorial optimization problem with proven complexity results and is also very hard to solve in practice. We design a combinatorial branch-and-bound algorithm and test it on the problem of minimum 2-club partitioning.
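
    The sketch below shows the basic feasibility test underlying these models: a vertex set is an s-club exactly when the subgraph it induces has diameter at most s, which can be checked with one BFS per vertex inside the induced subgraph. Function names are illustrative; the dissertation's branch-and-bound and branch-and-cut machinery is not reproduced here.

```python
# Minimal s-club membership check via BFS on the induced subgraph.
from collections import deque

def induced_adj(adj, S):
    """Adjacency of the subgraph induced by vertex set S."""
    S = set(S)
    return {v: adj[v] & S for v in S}

def eccentricity_within(sub, source):
    """Largest BFS distance from source inside the induced subgraph;
    returns None if some vertex of the subgraph is unreachable."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for u in sub[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    if len(dist) < len(sub):
        return None
    return max(dist.values())

def is_s_club(adj, S, s):
    """True iff S induces a subgraph of diameter at most s."""
    sub = induced_adj(adj, S)
    for v in sub:
        ecc = eccentricity_within(sub, v)
        if ecc is None or ecc > s:
            return False
    return True

if __name__ == "__main__":
    # A 5-cycle: the whole vertex set is a 2-club but not a clique (1-club).
    adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
    print(is_s_club(adj, {0, 1, 2, 3, 4}, 2))   # True
    print(is_s_club(adj, {0, 1, 2, 3, 4}, 1))   # False
```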