    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 274, ESA 2023, Complete Volume

    LIPIcs, Volume 258, SoCG 2023, Complete Volume

    Derivative-Free Optimization with Proxy Models for Oil Production Platforms Sharing a Subsea Gas Network

    The deployment of offshore platforms for the extraction of oil and gas from subsea reservoirs presents unique challenges, particularly when multiple platforms are connected by a subsea gas network. In the Santos basin, the aim is to maximize oil production while maintaining safe and sustainable levels of CO2 content and pressure in the gas stream. To address these challenges, a novel methodology has been proposed that uses boundary conditions to coordinate the use of shared resources among the platforms. This approach decouples the optimization of oil production on the platforms from the coordination of shared resources, allowing for more efficient and effective operation of the offshore oilfield. In addition to this methodology, a fast and accurate proxy model has been developed for gas pipeline networks. This model allows for efficient optimization of the gas flow through the network, taking into account the physical and operational constraints of the system. In experiments, the use of the proposed proxy model in tandem with derivative-free optimization algorithms resulted in an average error of less than 5% in pressure calculations and a processing time up to 1000 times faster than the phenomenological simulator. These results demonstrate the effectiveness and efficiency of the proposed methodology in optimizing oil production on offshore platforms connected by a subsea gas network, while maintaining safe and sustainable levels of CO2 content and pressure in the gas stream.
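
    As a concrete illustration of the pattern described in this abstract, the Python sketch below places a cheap proxy model inside a derivative-free optimization loop in place of a slow phenomenological simulator. The quadratic pressure-drop proxy, the penalty weight, and all names are illustrative assumptions, not the thesis's actual models or tools.

    import numpy as np
    from scipy.optimize import minimize

    def proxy_pressure_drop(flow_rates):
        """Hypothetical proxy: pressure drop grows quadratically with gas flow."""
        return 0.02 * np.sum(flow_rates ** 2)

    MAX_PRESSURE_DROP = 50.0  # hypothetical operational limit for the network

    def objective(flow_rates):
        """Maximize total production; penalize pressure-constraint violations."""
        production = np.sum(flow_rates)  # stand-in for the oil-production model
        violation = max(0.0, proxy_pressure_drop(flow_rates) - MAX_PRESSURE_DROP)
        return -production + 1e3 * violation  # penalty keeps the search feasible

    # Nelder-Mead is a classic derivative-free method; the thesis's algorithms
    # may differ, but the proxy-in-the-loop structure is the same.
    start = np.full(4, 10.0)  # initial gas flow rates for four platforms
    result = minimize(objective, start, method="Nelder-Mead")
    print(result.x, -result.fun)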

    Engineering Algorithms for Dynamic and Time-Dependent Route Planning

    Efficiently computing shortest paths is an essential building block of many mobility applications, most prominently navigation devices and route planning applications. In this thesis, we apply the algorithm engineering methodology to design algorithms for route planning in dynamic (for example, considering real-time traffic) and time-dependent (for example, considering traffic predictions) problem settings. We build on and extend the popular Contraction Hierarchies (CH) speedup technique. With a few minutes of preprocessing, CH can optimally answer shortest path queries on continental-sized road networks with tens of millions of vertices and edges in less than a millisecond, i.e., around four orders of magnitude faster than Dijkstra’s algorithm. CH has already been extended to dynamic and time-dependent problem settings. However, these adaptations suffer from limitations. For example, the time-dependent variant of CH exhibits prohibitive memory consumption on large road networks with detailed traffic predictions. This thesis contains the following key contributions: First, we introduce CH-Potentials, an A*-based routing framework. CH-Potentials computes optimal distance estimates for A* using CH with a lower-bound weight function derived at preprocessing time. The framework can be applied to any routing problem where appropriate lower bounds can be obtained. The achieved speedups range between one and three orders of magnitude over Dijkstra’s algorithm, depending on how tight the lower bounds are. Second, we propose several improvements to Customizable Contraction Hierarchies (CCH), the CH adaptation for dynamic route planning. Our improvements yield speedups of up to an order of magnitude. Further, we augment CCH to efficiently support essential extensions such as turn costs, alternative route computation and point-of-interest queries. Third, we present the first space-efficient, fast and exact speedup technique for time-dependent routing. Compared to the previous time-dependent variant of CH, our technique requires up to 40 times less memory, needs at most a third of the preprocessing time, and achieves only marginally slower query running times. Fourth, we generalize A* and introduce time-dependent A* potentials. This allows us to design the first approach for routing with combined live and predicted traffic, which achieves interactive running times for exact queries while allowing live traffic updates in a fraction of a minute. Fifth, we study extended problem models for routing with imperfect data and routing for truck drivers, and present efficient algorithms for these variants. Sixth and finally, we present various complexity results for non-FIFO time-dependent routing and the extended problem models.
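
    The core idea behind CH-Potentials can be illustrated compactly: A* guided by a potential function that lower-bounds the distance to the target never prunes the optimal path. In the thesis, the potentials are obtained from a CH over a lower-bound weight function; the self-contained Python toy below instead precomputes them with a backward Dijkstra on that lower-bound graph. The graphs and weights are made-up examples, not the thesis's implementation.

    import heapq

    def dijkstra_to_target(reverse_graph, target):
        """Distances to `target` under the lower-bound weights: valid A* potentials."""
        dist = {target: 0.0}
        heap = [(0.0, target)]
        while heap:
            d, v = heapq.heappop(heap)
            if d > dist.get(v, float("inf")):
                continue  # stale queue entry
            for u, w in reverse_graph.get(v, []):
                if d + w < dist.get(u, float("inf")):
                    dist[u] = d + w
                    heapq.heappush(heap, (dist[u], u))
        return dist

    def astar(graph, source, target, pot):
        """A* on the real weights, guided by the lower-bound potentials."""
        dist = {source: 0.0}
        heap = [(pot.get(source, 0.0), source)]
        while heap:
            _, v = heapq.heappop(heap)
            if v == target:
                return dist[v]
            for u, w in graph.get(v, []):
                if dist[v] + w < dist.get(u, float("inf")):
                    dist[u] = dist[v] + w
                    heapq.heappush(heap, (dist[u] + pot.get(u, 0.0), u))
        return float("inf")

    # Toy instance: real weights (e.g. live traffic) dominate the lower bounds.
    graph = {"s": [("a", 4.0), ("b", 2.0)], "a": [("t", 3.0)], "b": [("t", 6.0)]}
    lower = {"s": [("a", 3.0), ("b", 2.0)], "a": [("t", 2.0)], "b": [("t", 5.0)]}
    reverse_lower = {}
    for v, edges in lower.items():
        for u, w in edges:
            reverse_lower.setdefault(u, []).append((v, w))
    potentials = dijkstra_to_target(reverse_lower, "t")
    print(astar(graph, "s", "t", potentials))  # 7.0 via s -> a -> t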

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    ECOS 2012

    The 8-volume set contains the Proceedings of the 25th ECOS 2012 International Conference, Perugia, Italy, June 26th to June 29th, 2012. ECOS is an acronym for Efficiency, Cost, Optimization and Simulation (of energy conversion systems and processes), summarizing the topics covered in ECOS: Thermodynamics, Heat and Mass Transfer, Exergy and Second Law Analysis, Process Integration and Heat Exchanger Networks, Fluid Dynamics and Power Plant Components, Fuel Cells, Simulation of Energy Conversion Systems, Renewable Energies, Thermo-Economic Analysis and Optimisation, Combustion, Chemical Reactors, Carbon Capture and Sequestration, Building/Urban/Complex Energy Systems, Water Desalination and Use of Water Resources, Energy Systems: Environmental and Sustainability Issues, System Operation/Control/Diagnosis and Prognosis, and Industrial Ecology.

    Scalable High-Quality Graph and Hypergraph Partitioning

    The balanced hypergraph partitioning problem (HGP) asks for a partition of the node set of a hypergraph into k blocks of roughly equal size, such that an objective function defined on the hyperedges is minimized. In this work, we optimize the connectivity metric, which is the most prominent objective function for HGP. The hypergraph partitioning problem is NP-hard and there exists no constant factor approximation. Thus, heuristic algorithms are used in practice, with the multilevel scheme as the most successful approach to solve the problem: First, the input hypergraph is coarsened to obtain a hierarchy of successively smaller and structurally similar approximations. The smallest hypergraph is then initially partitioned into k blocks, and subsequently, the contractions are reverted level by level and, on each level, local search algorithms are used to improve the partition (refinement phase). In recent years, several new techniques were developed for sequential multilevel partitioning that substantially improved solution quality at the cost of an increased running time. These developments divide the landscape of existing partitioning algorithms into systems that aim either for speed or for high solution quality, with the former often being more than an order of magnitude faster than the latter. Due to the high running times of the best sequential algorithms, it is currently not feasible to partition the largest real-world hypergraphs with the highest possible quality. Thus, it becomes increasingly important to parallelize the techniques used in these algorithms. However, existing state-of-the-art parallel partitioners currently do not achieve the same solution quality as their sequential counterparts because they use comparatively weak components that are easier to parallelize. Moreover, there has been a recent trend toward simpler methods for partitioning large hypergraphs that even omit the multilevel scheme. In contrast to this development, we present two shared-memory multilevel hypergraph partitioners with parallel implementations of techniques used by the highest-quality sequential systems. Our first multilevel algorithm uses a parallel clustering-based coarsening scheme, which uses substantially fewer locking mechanisms than previous approaches. The contraction decisions are guided by the community structure of the input hypergraph, obtained via a parallel community detection algorithm. For initial partitioning, we implement parallel multilevel recursive bipartitioning with a novel work-stealing approach and a portfolio of initial bipartitioning techniques to compute an initial solution. In the refinement phase, we use three different parallel improvement algorithms: label propagation refinement, a highly localized direct k-way FM algorithm, and a novel parallelization of flow-based refinement. These algorithms build on our highly engineered partition data structure, for which we propose several novel techniques to compute accurate gain values of node moves in the parallel setting. Our second multilevel algorithm parallelizes the n-level partitioning scheme used in the highest-quality sequential partitioner KaHyPar. Here, only a single node is contracted on each level, leading to a hierarchy with approximately n levels, where n is the number of nodes. Correspondingly, in each refinement step, only a single node is uncontracted, allowing a highly localized search for improvements.
    We show that this approach, which seems inherently sequential, can be parallelized efficiently without compromising solution quality. To this end, we design a forest-based representation of contractions from which we derive a feasible parallel schedule of the contraction operations, which we apply on-the-fly to a novel dynamic hypergraph data structure. In the uncoarsening phase, we decompose the contraction forest into batches, each containing a fixed number of nodes. We then uncontract each batch in parallel and use highly localized versions of our refinement algorithms to improve the partition around the uncontracted nodes. We further show that existing sequential partitioning algorithms struggle considerably to find balanced partitions for weighted real-world hypergraphs. To this end, we present a technique that enables partitioners based on recursive bipartitioning to reliably compute balanced solutions. The idea is to preassign a small portion of the heaviest nodes to one of the two blocks of each bipartition and to optimize the objective function on the remaining nodes. We integrated the approach into the sequential hypergraph partitioner KaHyPar and show that our new approach can compute balanced solutions for all tested instances without negatively affecting the solution quality and running time of KaHyPar. In our experimental evaluation, we compare our new shared-memory (hyper)graph partitioner Mt-KaHyPar to 25 different graph and hypergraph partitioners on over 800 (hyper)graphs with up to two billion edges/pins. The results indicate that already our fastest configuration outperforms almost all existing hypergraph partitioners with regard to both solution quality and running time. Our highest-quality configuration (n-level with flow-based refinement) achieves the same solution quality as the currently best sequential partitioner KaHyPar, while being almost an order of magnitude faster with ten threads. In addition, we optimize our data structures for graph partitioning, which improves the running times of both multilevel partitioners by almost a factor of two for graphs. As a result, Mt-KaHyPar also outperforms most of the existing graph partitioning algorithms. While the shared-memory graph partitioner KaMinPar is still faster than Mt-KaHyPar, its solutions are worse by 10% in the median. The best sequential graph partitioner KaFFPa-StrongS computes slightly better partitions than Mt-KaHyPar (median improvement of 1%), but is more than an order of magnitude slower on average.
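
    To make one of the named building blocks concrete, here is a toy Python version of label propagation refinement. It operates on plain graphs with a simple edge-cut objective rather than the connectivity metric on hypergraphs, and it is sequential; Mt-KaHyPar's parallel implementation with accurate gain computation is far more involved. All names and the instance below are illustrative.

    from collections import defaultdict

    def label_propagation_refine(adj, part, max_block_size, rounds=5):
        """Greedily move each node to the adjacent block with the largest gain."""
        size = defaultdict(int)
        for v in part:
            size[part[v]] += 1
        for _ in range(rounds):
            moved = False
            for v in part:
                count = defaultdict(int)  # v's neighbors per block
                for u in adj[v]:
                    count[part[u]] += 1
                own = count[part[v]]  # neighbors already in v's block
                best, best_gain = part[v], 0
                for b, c in count.items():
                    # Moving v to block b changes the cut by own - c,
                    # so the gain of the move is c - own.
                    if b != part[v] and c - own > best_gain and size[b] < max_block_size:
                        best, best_gain = b, c - own
                if best != part[v]:
                    size[part[v]] -= 1
                    size[best] += 1
                    part[v] = best
                    moved = True
            if not moved:  # converged: no improving balanced move remains
                break
        return part

    # Two triangles joined by a single edge, starting from a poor partition.
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
    part = {0: 0, 1: 0, 2: 1, 3: 0, 4: 1, 5: 1}
    print(label_propagation_refine(adj, part, max_block_size=4))
    # -> {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}: only the bridge edge is cut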

    SIS 2017. Statistics and Data Science: new challenges, new generations

    The 2017 SIS Conference aims to highlight the crucial role of Statistics in Data Science. In this new domain, where ‘meaning’ is extracted from data, the ever-increasing amount of data produced and stored in databases has brought new challenges. These challenges involve different fields: statistics, machine learning, information and computer science, optimization, and pattern recognition, which together contribute considerably to the analysis of big data, open data, and relational and complex data, both structured and unstructured. The aim is to collect contributions from the different domains of Statistics on high-dimensional data quality validation, sample extraction, dimensionality reduction, pattern selection, data modelling, hypothesis testing, and confirming conclusions drawn from the data.