
    Sweep as a Generic Pruning Technique Applied to the Non-Overlapping Rectangles Constraint

    We first present a generic pruning technique which aggregates several constraints sharing some variables. The method is derived from an idea called "sweep" which is extensively used in computational geometry. A first benefit of this technique is that it can be applied to several families of global constraints. A second advantage is that it does not lead to any memory consumption problem, since it only requires temporary memory that can be reclaimed after each invocation of the method. We then specialize this technique to the non-overlapping rectangles constraint, describe several optimizations, and give an empirical evaluation based on six sets of test instances with different patterns.
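
    The sweep idea can be made concrete with a small sketch. The following is a minimal, discrete illustration of pruning the origin of one rectangle against fixed obstacles; the paper's algorithm is event-based and handles non-fixed rectangles, so all names and the brute-force scan below are purely illustrative.

```python
# Minimal, discrete sketch of the sweep idea for pruning the origin of one
# rectangle against fixed obstacles (illustrative only, not the paper's
# event-based algorithm).

def min_feasible_x(w, h, x_dom, y_dom, obstacles):
    """Return (x, y) with x minimal such that a w*h rectangle placed at (x, y)
    overlaps no obstacle, or None if no placement exists.
    obstacles: list of (ox, oy, ow, oh) with fixed positions."""
    xmin, xmax = x_dom
    ymin, ymax = y_dom
    for x in range(xmin, xmax + 1):                 # sweep from left to right
        # Forbidden y-intervals for the origin at this x-coordinate.
        forbidden = []
        for ox, oy, ow, oh in obstacles:
            if ox - w < x < ox + ow:                # x-overlap is unavoidable here
                forbidden.append((oy - h + 1, oy + oh - 1))
        # Find the lowest y in the domain not covered by any forbidden interval.
        y = ymin
        for lo, hi in sorted(forbidden):
            if y < lo:
                break                               # gap found below this interval
            y = max(y, hi + 1)
        if y <= ymax:
            return x, y                             # new lower bound for the x variable
    return None                                     # no support: the constraint fails

# Example: a 2x2 rectangle with x, y domains [0, 4] and one 3x3 obstacle at (0, 0).
print(min_feasible_x(2, 2, (0, 4), (0, 4), [(0, 0, 3, 3)]))   # -> (0, 3)
```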

    Mixed integer-linear formulations of cumulative scheduling constraints - A comparative study

    This paper introduces two MILP models for the cumulative scheduling constraint and associated pre-processing filters. We compare standard solver performance for these models on three sets of problems, and for two of them, where tasks have unitary resource consumption, we also compare them with two models based on a geometric placement constraint. In the experiments, the solver performance of one of the cumulative models is clearly the best, and this model is also shown to scale very well on a large-scale industrial transportation scheduling problem.
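
    As background, one classic way to encode the cumulative constraint as a MILP is the time-indexed formulation sketched below with the PuLP library. This is a generic textbook model, not necessarily one of the two models compared in the paper, and the task data is made up.

```python
# Time-indexed MILP sketch of the cumulative constraint (illustrative data).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

durations = [3, 2, 2]       # processing times
demands   = [2, 1, 2]       # resource consumption per task
capacity  = 3               # resource capacity
horizon   = sum(durations)  # trivial upper bound on the makespan

prob = LpProblem("cumulative_time_indexed", LpMinimize)
# x[i][t] = 1 iff task i starts at time t
x = [[LpVariable(f"x_{i}_{t}", cat=LpBinary) for t in range(horizon)]
     for i in range(len(durations))]
cmax = LpVariable("cmax", lowBound=0)
prob += cmax                                            # minimise the makespan

for i, d in enumerate(durations):
    prob += lpSum(x[i]) == 1                            # each task starts exactly once
    prob += cmax >= lpSum(t * x[i][t] for t in range(horizon)) + d

for t in range(horizon):                                # capacity at every time point
    prob += lpSum(demands[i] * x[i][s]
                  for i, d in enumerate(durations)
                  for s in range(max(0, t - d + 1), t + 1)) <= capacity

prob.solve()
print("makespan =", value(cmax))
```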

    Synchronized sweep algorithms for scalable scheduling constraints

    This report introduces a family of synchronized sweep-based filtering algorithms for handling scheduling problems involving resource and precedence constraints. The key idea is to filter all constraints of a scheduling problem in a synchronized way in order to scale better. In addition to the normal filtering mode, the algorithms can run in greedy mode, in which case they perform a greedy assignment of start and end times. The filtering mode achieves a significant speed-up over the decomposition into independent cumulative and precedence constraints, while the greedy mode can handle up to 1 million tasks with 64 resource constraints and 2 million precedences. These algorithms were implemented in both CHOCO and SICStus.
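
    A central building block of sweep-based cumulative filtering is a sweep over the compulsory parts of the tasks. The sketch below shows that profile sweep in isolation; it is not the synchronized algorithm of the report, and the task tuples are illustrative.

```python
# Compulsory-part sweep for one cumulative resource: build the profile of
# mandatory usage with a single pass over start/end events and fail as soon
# as the capacity is exceeded (illustrative sketch).

def sweep_check(tasks, capacity):
    """tasks: list of (est, lst, duration, demand). Returns the compulsory-part
    profile as a list of (time, height), or raises if the capacity is exceeded."""
    events = []
    for est, lst, dur, dem in tasks:
        ect = est + dur                      # earliest completion time
        if lst < ect:                        # non-empty compulsory part [lst, ect)
            events.append((lst, dem))        # usage rises at the latest start
            events.append((ect, -dem))       # and falls at the earliest completion
    events.sort()                            # ends sort before starts at equal times
    profile, height = [], 0
    for time, delta in events:
        height += delta
        if height > capacity:
            raise ValueError(f"overload at time {time}: {height} > {capacity}")
        profile.append((time, height))
    return profile

# Three tasks on a resource of capacity 2; the first two compulsory parts overlap.
print(sweep_check([(0, 1, 4, 1), (0, 2, 5, 1), (6, 9, 3, 2)], capacity=2))
# -> [(1, 1), (2, 2), (4, 1), (5, 0)]
```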

    A Balanced Solution for the Partition-based Spatial Merge join in MapReduce

    Several MapReduce frameworks have been developed in recent years in order to cope with the need to process an increasing amount of data. Moreover, some extensions of them have been proposed to deal with particular kinds of information, such as spatial data. In this paper we refer to SpatialHadoop, a spatial extension of Apache Hadoop which provides a rich set of spatial data types and operations. In the geo-spatial domain, the spatial join is considered a fundamental operation for performing data analysis. However, the join operation is generally classified as a critical task to be performed in MapReduce, since it requires processing two datasets at a time. Several different solutions have been proposed in the literature for efficiently performing a spatial join; they may or may not require the presence of a spatial index computed on both datasets or on only one of them. As already discussed in the literature, the efficiency of such an operation depends on the ability both to prune unnecessary data as soon as possible and to provide a balanced amount of work to each task executed in parallel. In this paper, we take a step forward in this direction by proposing an evolution of the Partition-based Spatial Merge Join algorithm which tries to fully exploit the parallelism offered by the MapReduce framework. In particular, we concentrate on the partition phase, which has to produce filtered, balanced and meaningful subdivisions of the original datasets.
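
    For intuition, the following single-machine sketch shows the partition-and-join idea behind PBSM: both datasets are assigned to a common grid, each cell is joined independently (a simple nested loop stands in for the plane sweep), and duplicates across cells are avoided by reporting a pair only in the cell containing a canonical corner of its intersection. Grid size and rectangles are illustrative.

```python
# Single-machine PBSM-style sketch: grid partitioning, per-cell join, and
# reference-point duplicate avoidance (illustrative, not the paper's algorithm).
from collections import defaultdict
from itertools import product

def to_cells(rid, rect, grid, index):
    """Assign rectangle (x1, y1, x2, y2) to every grid cell it overlaps."""
    x1, y1, x2, y2 = rect
    for cx in range(int(x1 // grid), int(x2 // grid) + 1):
        for cy in range(int(y1 // grid), int(y2 // grid) + 1):
            index[(cx, cy)].append((rid, rect))

def pbsm_join(R, S, grid=10.0):
    r_idx, s_idx = defaultdict(list), defaultdict(list)
    for rid, r in enumerate(R):
        to_cells(rid, r, grid, r_idx)
    for sid, s in enumerate(S):
        to_cells(sid, s, grid, s_idx)
    result = set()
    for cell in r_idx.keys() & s_idx.keys():               # only co-populated cells
        for (rid, r), (sid, s) in product(r_idx[cell], s_idx[cell]):
            ix1, iy1 = max(r[0], s[0]), max(r[1], s[1])     # lower-left corner of overlap
            ix2, iy2 = min(r[2], s[2]), min(r[3], s[3])
            if ix1 <= ix2 and iy1 <= iy2:                   # the rectangles intersect
                # duplicate avoidance: report only in the cell of (ix1, iy1)
                if (int(ix1 // grid), int(iy1 // grid)) == cell:
                    result.add((rid, sid))
    return result

R = [(0, 0, 12, 5), (20, 20, 25, 25)]
S = [(11, 4, 30, 22)]
print(pbsm_join(R, S))   # -> {(0, 0), (1, 0)} (order may vary)
```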

    A global constraint for total weighted completion time for cumulative resources

    The criterion of total weighted completion time occurs as a sub-problem of combinatorial optimization problems in such diverse areas as scheduling, container loading and storage assignment in warehouses. These applications often necessitate considering a rich set of requirements and preferences, which makes constraint programming (CP) an effective modeling and solving approach. On the other hand, basic CP techniques can be inefficient in solving models that require inference over sum-type expressions. In this paper, we address increasing the solution efficiency of constraint-based approaches to cumulative resource scheduling with the above criterion. Extending previous results for unary-capacity resources, we define the COMPLETIONm global constraint for propagating the total weighted completion time of activities that require the same cumulative resource. We present empirical results in two different problem domains: scheduling a single cumulative resource, and container loading with constraints on the location of the center of gravity. In both domains, the proposed constraint propagation algorithm outperforms existing propagation techniques.
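
    For reference, the sketch below shows the objective the constraint reasons about, together with the trivial lower bound from earliest completion times that any stronger propagator must dominate. It is not the paper's propagation algorithm, and the task data is made up.

```python
# Total weighted completion time: objective and a trivial lower bound
# (illustrative sketch only).

def weighted_completion(starts, durations, weights):
    """Total weighted completion time of a fixed schedule."""
    return sum(w * (s + d) for s, d, w in zip(starts, durations, weights))

def trivial_lower_bound(ests, durations, weights):
    """sum_i w_i * (est_i + d_i): each task completes no earlier than its
    earliest completion time, ignoring the resource capacity entirely."""
    return sum(w * (e + d) for e, d, w in zip(ests, durations, weights))

durations, weights, ests = [4, 2, 3], [3, 1, 2], [0, 0, 2]
print(trivial_lower_bound(ests, durations, weights))        # 3*4 + 1*2 + 2*5 = 24
print(weighted_completion([0, 4, 4], durations, weights))   # 3*4 + 1*6 + 2*7 = 32
```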

    Timing-Driven Macro Placement

    Placement is an important step in the process of finding physical layouts for electronic computer chips. The basic task during placement is to arrange the building blocks of the chip, the circuits, disjointly within a given chip area. Furthermore, such positions should result in short circuit interconnections which can be routed easily and which ensure that all signals arrive in time. This dissertation mostly focuses on macros, the largest circuits on a chip. In order to optimize timing characteristics during macro placement, we propose a new optimistic timing model based on geometric distance constraints. This model can be computed and evaluated efficiently in order to predict timing traits accurately in practice. Packing rectangles disjointly remains strongly NP-hard under slack maximization in our timing model. Despite this, we develop an exact, linear-time algorithm for special cases. The proposed timing model is incorporated into BonnMacro, the macro placement component of the BonnTools physical design optimization suite developed at the Research Institute for Discrete Mathematics. Using efficient formulations as mixed-integer programs, we can legalize macros locally while optimizing timing. This results in the first timing-aware macro placement tool. In addition, we provide multiple enhancements for the partitioning-based standard circuit placement algorithm BonnPlace. We find a formulation of partitioning as a minimum-cost flow problem that is provably as small as possible, which allows us to avoid instances with excessive running times. Moreover, we propose the new global placement flow Self-Stabilizing BonnPlace. This approach combines BonnPlace with a force-directed placement framework. It provides the flexibility to optimize the two involved objectives, routability and timing, directly during placement. The performance of our placement tools is confirmed on a large variety of academic benchmarks as well as real-world designs provided by our industrial partner IBM. We reduce the running time of partitioning significantly and demonstrate that Self-Stabilizing BonnPlace finds easily routable placements for challenging designs, even when simultaneously optimizing timing objectives. BonnMacro and Self-Stabilizing BonnPlace can be combined into the first timing-driven mixed-size placement flow. This combination often finds placements with competitive timing traits and even outperforms solutions that have been determined manually by experienced designers.
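
    As a rough illustration of distance-based timing estimation in placement (not the dissertation's optimistic model), the sketch below approximates net delay as a constant plus a term proportional to the Manhattan distance between driver and sink and derives the slack of a two-pin connection. All coefficients and coordinates are invented.

```python
# Distance-based delay and slack estimate for a two-pin connection
# (illustrative linear model; numbers are made up).

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def estimated_slack(driver_pos, sink_pos, arrival_at_driver, required_at_sink,
                    delay_per_unit=0.05, fixed_delay=1.0):
    """Slack of a two-pin connection under a linear distance-delay model."""
    delay = fixed_delay + delay_per_unit * manhattan(driver_pos, sink_pos)
    return required_at_sink - (arrival_at_driver + delay)

# Moving the sink closer to the driver increases the estimated slack.
print(estimated_slack((0, 0), (100, 40), arrival_at_driver=2.0, required_at_sink=12.0))  # 2.0
print(estimated_slack((0, 0), (60, 20),  arrival_at_driver=2.0, required_at_sink=12.0))  # 5.0
```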

    Efficient Large-scale Distance-Based Join Queries in SpatialHadoop

    Efficient processing of Distance-Based Join Queries (DBJQs) in spatial databases is of paramount importance in many application domains. The most representative and best-known DBJQs are the K Closest Pairs Query (KCPQ) and the ε Distance Join Query (εDJQ). These types of join queries are characterized by a number of desired pairs (K) or a distance threshold (ε) between the components of the pairs in the final result, over two spatial datasets. Both are expensive operations, since two spatial datasets are combined with additional constraints. Given the increasing volume of spatial data originating from multiple sources and stored in distributed servers, it is not always efficient to perform DBJQs on a centralized server. For this reason, this paper addresses the problem of computing DBJQs on big spatial datasets in SpatialHadoop, an extension of Hadoop that supports efficient processing of spatial queries in a cloud-based setting. We propose novel algorithms, based on plane-sweep, to perform efficient parallel DBJQs on large-scale spatial datasets in SpatialHadoop. We evaluate the performance of the proposed algorithms in several situations with large real-world as well as synthetic datasets. The experiments demonstrate the efficiency and scalability of our proposed methodologies.
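
    The kind of plane-sweep KCPQ that such algorithms run inside each pair of partitions can be sketched on a single machine as follows; k, the point sets and the bookkeeping are illustrative.

```python
# Single-machine plane-sweep K Closest Pairs sketch between two point sets.
import heapq, math

def kcpq(P, Q, k):
    """Return the k closest (distance, p, q) pairs between point sets P and Q."""
    P, Q = sorted(P), sorted(Q)                     # sort both sets by x
    heap = []                                       # max-heap (negated distances) of k best pairs
    start = 0                                       # first Q index still worth looking at
    for p in P:
        delta = -heap[0][0] if len(heap) == k else math.inf
        while start < len(Q) and Q[start][0] < p[0] - delta:
            start += 1                              # too far left for this and all later p
        idx = start
        while idx < len(Q) and Q[idx][0] <= p[0] + delta:
            q = Q[idx]
            d = math.dist(p, q)
            if len(heap) < k:
                heapq.heappush(heap, (-d, p, q))
            elif d < -heap[0][0]:
                heapq.heapreplace(heap, (-d, p, q))
            delta = -heap[0][0] if len(heap) == k else math.inf
            idx += 1
    return sorted((-nd, p, q) for nd, p, q in heap)

P = [(0, 0), (5, 5), (9, 1)]
Q = [(1, 1), (8, 0), (4, 7)]
print(kcpq(P, Q, k=2))    # the two closest cross-set pairs
```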

    Efficient Analysis in Multimedia Databases

    The rapid progress of digital technology has led to a situation where computers have become ubiquitous tools. Now we can find them in almost every environment, be it industrial or even private. With ever-increasing performance, computers have assumed more and more vital tasks in engineering, climate and environmental research, medicine and the content industry. Previously, these tasks could only be accomplished by spending enormous amounts of time and money. By using digital sensor devices, like earth observation satellites, genome sequencers or video cameras, the amount and complexity of data with a spatial or temporal relation has grown enormously. This has led to new challenges for data analysis and requires the use of modern multimedia databases. This thesis aims at developing efficient techniques for the analysis of complex multimedia objects such as CAD data, time series and videos. It is assumed that the data is modeled by commonly used representations. For example, CAD data is represented as a set of voxels, and audio and video data is represented as multi-represented, multi-dimensional time series. The main part of this thesis focuses on finding efficient methods for collision queries of complex spatial objects. One way to speed up those queries is to employ a cost-based decomposition, which uses interval groups to approximate a spatial object. For example, this technique can be used for the Digital Mock-Up (DMU) process, which helps engineers to ensure short product cycles. This thesis defines and discusses a new similarity measure for time series called threshold-similarity. Two time series are considered similar if they expose a similar behavior regarding the transgression of a given threshold value. Another part of the thesis is concerned with the efficient calculation of reverse k-nearest neighbor (RkNN) queries in general metric spaces using conservative and progressive approximations. The aim of such RkNN queries is to determine the impact of single objects on the whole database. Finally, the thesis deals with video retrieval and hierarchical genre classification of music using multiple representations. The practical relevance of the discussed genre classification approach is highlighted with a prototype tool that helps the user to organize large music collections. Both the efficiency and the effectiveness of the presented techniques are thoroughly analyzed. The benefits over traditional approaches are shown by evaluating the new methods on real-world test datasets.
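
    The threshold idea can be illustrated with a toy measure (not the thesis's exact definition): extract the time points at which each series exceeds the threshold and compare the two sets with a Jaccard-style overlap, so that series crossing the threshold at similar times count as similar.

```python
# Toy threshold-similarity measure for discrete time series (illustrative only).

def exceed_points(series, threshold):
    """Indices at which the series is strictly above the threshold."""
    return {i for i, v in enumerate(series) if v > threshold}

def threshold_similarity(a, b, threshold):
    ia, ib = exceed_points(a, threshold), exceed_points(b, threshold)
    if not ia and not ib:
        return 1.0                        # neither series ever exceeds the threshold
    return len(ia & ib) / len(ia | ib)    # overlap of the exceeding time points

s1 = [0.1, 0.4, 0.9, 1.2, 0.8, 0.2]
s2 = [0.0, 0.5, 1.1, 1.0, 0.3, 0.1]
s3 = [1.5, 1.4, 0.2, 0.1, 0.0, 0.0]
print(threshold_similarity(s1, s2, threshold=0.7))   # 2/3: exceeds at {2, 3, 4} vs {2, 3}
print(threshold_similarity(s1, s3, threshold=0.7))   # 0.0: disjoint exceeding times
```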

    Cartographic modelling for automated map generation
