
    Scheduling Models with Additional Features: Synchronization, Pliability and Resiliency

    In this thesis we study three new extensions of scheduling models with both practical and theoretical relevance, namely synchronization, pliability and resiliency. Synchronization has previously been studied for flow shop scheduling and we now apply the concept to open shop models for the first time. Here, as opposed to the traditional models, operations that are processed together all have to be started at the same time. Operations that are completed are not removed from the machines until the longest operation in their group is finished. Pliability is a new approach to model flexibility in flow shops and open shops. In scheduling with pliability, parts of the processing load of the jobs can be redistributed between the machines in order to achieve better schedules. This is applicable, for example, if the machines represent cross-trained workers. Resiliency is a new measure for the quality of a given solution if the input data are uncertain. A resilient solution remains better than some given bound, even if the original input data are changed. The more we can perturb the input data without the solution losing too much quality, the more resilient the solution is. We also consider the assignment problem, as it is the traditional combinatorial optimization problem underlying many scheduling problems. In particular, we study a version of the assignment problem with a special cost structure derived from the synchronous open shop model and obtain new structural and complexity results. Furthermore, we study resiliency for the assignment problem. The main focus of this thesis is the study of structural properties, algorithm development and complexity. For synchronous open shop we show that for a fixed number of machines the makespan can be minimized in polynomial time; all other traditional scheduling objectives are at least as hard to optimize as in the traditional open shop model. Starting our research on pliability, we focus on the most general case of the model as well as two relevant special cases and deliver a fairly complete complexity study for all three versions of the model. Finally, for resiliency, we investigate two different questions: 'how to compute the resiliency of a given solution?' and 'how to find a most resilient solution?'. We focus on the assignment problem and single machine scheduling to minimize the total sum of completion times, and present a number of positive results for both questions. The main goal is to make a case that the concept deserves further study.
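
    A minimal sketch of the synchronization idea described above (illustrative only, not code from the thesis; the function and variable names are hypothetical): a synchronous open shop schedule is a sequence of cycles, every operation of a cycle starts at the same time, and a cycle lasts as long as its longest operation, so the makespan is simply the sum of the cycle lengths.

```python
# Illustrative sketch of synchronous open shop makespan evaluation
# (hypothetical names, not taken from the thesis).

def synchronous_makespan(p, cycles):
    """p[j][m] = processing time of job j on machine m.
    cycles    = list of dicts {machine: job}; in each cycle every machine
                processes at most one job, and all operations start together."""
    makespan = 0
    for cycle in cycles:
        # a cycle only ends when its longest operation is finished
        makespan += max(p[job][machine] for machine, job in cycle.items())
    return makespan

# Two machines, two jobs: job 0 takes (3, 1), job 1 takes (2, 4).
p = [[3, 1], [2, 4]]
cycles = [{0: 0, 1: 1},   # cycle 1: machine 0 runs job 0, machine 1 runs job 1 -> length 4
          {0: 1, 1: 0}]   # cycle 2: machine 0 runs job 1, machine 1 runs job 0 -> length 2
print(synchronous_makespan(p, cycles))  # makespan 6
```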

    Lower Complexity Adaptation for Empirical Entropic Optimal Transport

    Entropic optimal transport (EOT) presents an effective and computationally viable alternative to unregularized optimal transport (OT), offering diverse applications for large-scale data analysis. In this work, we derive novel statistical bounds for empirical plug-in estimators of the EOT cost and show that their statistical performance in the entropy regularization parameter ε and the sample size n only depends on the simpler of the two probability measures. For instance, under sufficiently smooth costs this yields the parametric rate n^{-1/2} with factor ε^{-d/2}, where d is the minimum dimension of the two population measures. This confirms that empirical EOT also adheres to the lower complexity adaptation principle, a hallmark feature only recently identified for unregularized OT. As a consequence of our theory, we show that the empirical entropic Gromov-Wasserstein distance and its unregularized version for measures on Euclidean spaces also obey this principle. Additionally, we comment on computational aspects and complement our findings with Monte Carlo simulations. Our techniques employ empirical process theory and rely on a dual formulation of EOT over a single function class. Crucial to our analysis is the observation that the entropic cost transformation of a function class does not increase its uniform metric entropy by much. Comment: 46 pages, 5 figures.
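
    As a rough illustration of the plug-in estimator discussed above (a toy under our own assumptions, not the paper's code), the following sketch computes an empirical EOT quantity between two samples with plain Sinkhorn iterations; the squared Euclidean cost, the Gaussian samples and the value of the regularization parameter are made up for the example, and the returned value is the transport cost under the entropic plan.

```python
# Toy Monte Carlo sketch of an empirical entropic OT estimator (assumptions only).
import numpy as np

def sinkhorn_eot_cost(X, Y, eps=0.5, iters=500):
    """Transport cost under the entropic plan between the empirical measures of X and Y."""
    n, m = len(X), len(Y)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)            # uniform empirical weights
    C = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=2)   # squared Euclidean cost
    K = np.exp(-C / eps)                                       # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):                                     # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                            # entropic transport plan
    return float(np.sum(P * C))

rng = np.random.default_rng(0)
for n in (100, 400, 1600):                                     # crude look at the behaviour in n
    X = rng.normal(size=(n, 2))                                # samples from a 2-d Gaussian
    Y = rng.normal(size=(n, 2)) + 1.0                          # samples from a shifted Gaussian
    print(n, sinkhorn_eot_cost(X, Y))
```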

    36th International Symposium on Theoretical Aspects of Computer Science: STACS 2019, March 13-16, 2019, Berlin, Germany


    Flow-based Partitioning and Fast Global Placement in Chip Design

    VLSI placement is one of the major steps in the chip design process and an interesting subject of research in industry and academia. Recent chips consist of several million circuits connected by millions of nets. The classical placement objective of finding positions for circuits and minimizing netlength among them is an ongoing issue in the optimization of chip performance. The increasing instance sizes and the tightness of timing and routability constraints pose a real challenge to design flows and designers, which often cannot be addressed properly without considering them explicitly within the placement. Many of the complex design methodologies follow an iterative approach, using placement several times in this process. Thus, placement runtime has a severe impact on the turnaround time in chip development. The major contributions of this thesis deal with global placement, a common relaxation of the placement problem, which computes rough positions of the circuits minimizing the total length of wires to interconnect them. Based on the idea of subsequent quadratic netlength minimization and partitioning, as in BonnPlace [BrennerStruzynaVygen:2008], we present several new algorithms, generalized data structures and a completely new implementation of this top-down placement scheme. We introduce and formalize the concept of movebounds, which are position constraints on subsets of cells. Movebounds, which can be regarded as mandatory or soft constraints, provide a mechanism to explicitly incorporate into the placement movement constraints which result from issues of timing, power and routability. With inclusive movebounds, such restrictions can be assigned to groups of circuits without any influence on other placeable objects. The other constraints, namely the exclusive movebounds, are of particular interest for semi-hierarchical approaches, as they can be used to obtain a flat view of the design and prevent cells from being placed into hierarchy units. Both provide a toolbox to the designer and allow the control of particular circuit sets without netlist manipulations. We also present a top-down partitioning scheme and extend the legalization algorithm of [BrennerVygen:2004] to be able to deal with millions of cells and dozens of movebounds efficiently. The presented algorithm can handle different types of overlapping movebounds, even in legalization, and produces significantly better results than a modern industrial tool. We present a novel partitioning algorithm for global placement. Unlike previous iterative and recursive approaches, the new method provides a global view of the problem using a novel MinCostFlow model with extremely fast and highly parallelizable local realization steps. The new flow-based partitioning can address density targets much more accurately and lowers the risk of density violations. The presented MinCostFlow model does not depend on the number of cells, making it particularly attractive for very large designs. Moreover, the embedded flow structure responds to the chip's floorplan much better than the classical global partitioning approach. Another significant advantage of this algorithm is the fact that it can be applied to any initial placement and guarantees a feasible (fractional) solution (if one exists), improving the tool's reliability, even with movebounds and starting from placements with significant density violations.
    Using this method we can extend congestion-driven placement to a combined movement, density adjustment and cell size inflation approach. This method is able to handle movebounds and is guaranteed to resolve density overloads properly. Flow-based partitioning creates the opportunity to apply local, density-unaware optimization steps within global placement and allows us to break the strict recursive structure of levels and save runtime. The extended flexibility and runtime improvement are not the only advantages. The proposed flow realization, which is a combination of local quadratic programs and local partitioning, not only yields a runtime improvement, but also seems to merge connectivity information into partitioning in a much better way than the old recursive partitioning approach. The new flow-based partitioning helps to significantly improve the results of our placement in terms of netlength as well. We provide fast data structures for hierarchically clustered netlists and extend the Clique and Star net models so they can be applied efficiently within clustered netlists. We show how shared-memory parallelization can be used to speed up various routines in placement without loss of repeatability. In addition, we address the clustering problem of finding circuit groups which should be placed in the vicinity of each other. In order to provide global information for a fast bottom-up clustering, we propose to incorporate connectivity information using random walks. To this end, we show how hitting times can be efficiently retrieved from large netlist hypergraphs. With the proposed model, parallel computation on sparse, shared-memory matrices can be used for computing hitting times to several targets simultaneously. Combined with a bottom-up clustering, even our preliminary approach significantly outperforms the popular BestChoice algorithm [Nam et al. 2005]. We conclude this thesis by providing several experimental results on a large testbed of real-world chips and benchmarks demonstrating the performance of our tool. Without movebounds, our tool performs as well as a state-of-the-art force-directed placer, but is more than 5x faster. We achieve the same speedup over the old BonnPlace, but produce significantly better results, on average by more than 8%. With movebounds, our placements are more than 30% shorter compared to the force-directed placer and our tool is 9x-20x faster. Our tool also produces the best results on the latest ISPD 2006 placement benchmarks.
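
    To make the quadratic netlength minimization step mentioned above concrete, here is a minimal sketch under our own assumptions (not BonnPlace code; the netlist and pad positions are invented): with two-pin nets of weight w, minimizing the weighted sum of squared distances in each coordinate reduces to a linear system over the movable cells, with fixed pins entering the right-hand side.

```python
# Minimal quadratic placement sketch (hypothetical netlist, not BonnPlace code).
import numpy as np

def quadratic_placement(n_movable, nets, fixed):
    """nets:  list of (i, j, w); indices < n_movable are movable cells,
              larger indices refer to entries of `fixed`.
    fixed: {index: (x, y)} positions of fixed cells/pads."""
    A = np.zeros((n_movable, n_movable))
    b = np.zeros((n_movable, 2))
    for i, j, w in nets:
        for u, v in ((i, j), (j, i)):
            if u < n_movable:                       # one equation per movable endpoint
                A[u, u] += w
                if v < n_movable:
                    A[u, v] -= w                    # coupling between movable cells
                else:
                    b[u] += w * np.array(fixed[v])  # pull towards the fixed pin
    return np.linalg.solve(A, b)                    # solves both coordinates at once

# Three movable cells (0, 1, 2) chained between two fixed pads (3 and 4):
nets = [(3, 0, 1.0), (0, 1, 1.0), (1, 2, 1.0), (2, 4, 1.0)]
print(quadratic_placement(3, nets, {3: (0.0, 0.0), 4: (4.0, 4.0)}))  # evenly spaced on the diagonal
```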

    Metric and Representation Learning

    All data has some inherent mathematical structure. I am interested in understanding the intrinsic geometric and probabilistic structure of data to design effective algorithms and tools that can be applied to machine learning and across all branches of science. The focus of this thesis is to increase the effectiveness of machine learning techniques by developing a mathematical and algorithmic framework with which, given any type of data, we can learn an optimal representation. Representation learning is done for many reasons. It could be done to repair corrupted data, or to learn a low-dimensional or simpler representation given high-dimensional data or a very complex representation of the data. It could also be that the current representation of the data does not capture its important geometric features. One of the many challenges in representation learning is determining ways to judge the quality of the representation learned. In many cases, the consensus is that if d is the natural metric on the representation, then this metric should provide meaningful information about the data. Many examples of this can be seen in areas such as metric learning, manifold learning, and graph embedding. However, most algorithms that solve these problems learn a representation in a metric space first and then extract a metric. A large part of my research explores what happens if the order is switched, that is, if we learn the appropriate metric first and the embedding later. The philosophy behind this approach is that understanding the inherent geometry of the data is the most crucial part of representation learning. Often, studying the properties of the appropriate metric on the input data indicates the type of space we should be seeking for the representation, hence giving us more robust representations. Optimizing for the appropriate metric can also help overcome issues such as missing and noisy data. My projects fall into three different areas of representation learning: 1) geometric and probabilistic analysis of representation learning methods; 2) developing methods to learn optimal metrics on large datasets; 3) applications. In the category of geometric and probabilistic analysis of representation learning methods, we have three projects. First, designing optimal training data for denoising autoencoders. Second, formulating a new optimal transport problem and understanding its geometric structure. Third, analyzing the robustness to perturbations of the solutions obtained from the classical multidimensional scaling algorithm versus that of the true solutions to the multidimensional scaling problem. For learning an optimal metric, we are given a dissimilarity matrix D̂, a function f and a subset S of the space of all metrics, and we want to find D in S that minimizes f(D, D̂). In this thesis, we consider the version of the problem where S is the space of metrics defined on a fixed graph. That is, given a graph G, we let S be the space of all metrics defined via G. For this S, we consider a sparse objective function as well as convex objective functions. We also look at the problem where we want to learn a tree. We also show how the ideas behind learning the optimal metric can be applied to dimensionality reduction in the presence of missing data. Finally, we look at an application to real-world data.
    Specifically, we try to reconstruct ancient Greek text.
    PhD, Applied and Interdisciplinary Mathematics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169738/1/rsonthal_1.pd
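
    One of the three projects above analyzes the robustness of the classical multidimensional scaling algorithm; the following sketch (an illustration under our own assumptions, not code from the thesis) shows that algorithm itself: square and double-centre the dissimilarities, then embed using the top eigenvectors.

```python
# Classical multidimensional scaling (CMDS), illustrative sketch only.
import numpy as np

def classical_mds(D, dim=2):
    """D: symmetric matrix of pairwise dissimilarities; returns an n x dim embedding."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gram matrix
    w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]              # keep the top `dim` components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Five points on a line are recovered (up to sign and translation) from their distances.
X = np.arange(5, dtype=float).reshape(-1, 1)
D = np.abs(X - X.T)
print(classical_mds(D, dim=1))
```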
