
    Database query optimisation based on measures of regret

    The query optimiser in a database management system (DBMS) is responsible for finding a good order in which to execute the operators in a given query. However, in practice the query optimiser does not usually guarantee to find the best plan, often because precise statistical data is unavailable or the optimiser makes inaccurate assumptions. In this thesis we propose a robust approach to logical query optimisation that takes into account the unreliability of database statistics during the optimisation process. In particular, we study the ordering problem for selection operators and for join operators, where selectivities are modelled as intervals rather than exact values. As a measure of optimality, we use a concept from decision theory called minmax regret optimisation (MRO). When using interval selectivities, the decision problem for selection operator ordering turns out to be NP-hard. After investigating properties of the problem and identifying special cases which can be solved in polynomial time, we develop a novel heuristic for solving the general selection ordering problem in polynomial time. Experimental evaluation of the heuristic using synthetic data, the Star Schema Benchmark and real-world data sets shows that it outperforms other heuristics (which take an optimistic, pessimistic or midpoint approach) and produces plans whose regret is on average very close to optimal. The general join ordering problem is known to be NP-hard even for exact selectivities, so for interval selectivities we restrict our investigation to sets of join operators that form a chain and to plans that correspond to left-deep join trees. We investigate properties of the problem and use these, along with ideas from the selection ordering heuristic and other algorithms in the literature, to develop a polynomial-time heuristic tailored for the join ordering problem.
Experimental evaluation of this heuristic shows that, once again, it performs better than the optimistic, pessimistic and midpoint heuristics. In addition, the results show that it produces plans whose regret is on average even closer to optimal than for selection ordering.
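The minmax regret objective described above can be illustrated with a brute-force sketch (not the thesis's polynomial-time heuristic): enumerate candidate selection orders, evaluate each under every endpoint scenario of the selectivity intervals, and keep the order whose worst-case regret is smallest. The cost model (each selection scans the tuples surviving the previous ones, input size normalised to 1) and the restriction to interval endpoints are simplifying assumptions for illustration only.

```python
from itertools import permutations, product

def plan_cost(order, sel):
    """Cost of applying selections in the given order: each operator
    processes the fraction of tuples surviving the previous ones."""
    cost, frac = 0.0, 1.0
    for op in order:
        cost += frac          # work done by this operator
        frac *= sel[op]       # fraction of tuples passed onward
    return cost

def max_regret(order, intervals):
    """Worst-case regret of a plan over all endpoint scenarios."""
    ops = list(range(len(intervals)))
    worst = 0.0
    for sel in product(*intervals):   # scenario: lo or hi per operator
        best = min(plan_cost(p, sel) for p in permutations(ops))
        worst = max(worst, plan_cost(order, sel) - best)
    return worst

def mro_plan(intervals):
    """Brute-force minmax-regret plan (exponential; illustration only)."""
    ops = range(len(intervals))
    return min(permutations(ops), key=lambda p: max_regret(p, intervals))
```

For two selections with selectivity intervals [0.1, 0.4] and [0.5, 0.5], running the first selection first is optimal in every scenario, so its max regret is zero, while the reverse order incurs a regret of up to 0.4.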

    A lexicographic minimax approach to the vehicle routing problem with route balancing

    Vehicle routing problems generally aim at designing routes that minimize transportation costs. However, in practical settings, many companies also pay attention to how the workload is distributed among their drivers. Accordingly, two main approaches for balancing the workload have been proposed in the literature: minimizing the duration of the longest route, or minimizing the difference between the longest and the shortest routes. Recently, it has been shown on several occasions that both approaches have flaws. In order to model equity, we investigate the lexicographic minimax approach, which is rooted in social choice theory. We define the leximax vehicle routing problem, which considers the bi-objective optimization of transportation costs and workload balancing. This problem is solved by a heuristic based on the multi-directional local search framework, which involves dedicated large neighborhood search (LNS) operators. Several LNS operators are proposed and compared in experiments.
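The lexicographic minimax order mentioned above is easy to state concretely: sort each solution's route durations in non-increasing order and compare the resulting vectors lexicographically. This minimal sketch shows only the comparison, not the routing heuristic itself.

```python
def leximax_key(route_durations):
    """Sort durations in non-increasing order; comparing these tuples
    lexicographically implements the leximax (lexicographic minimax)
    order: first minimize the longest route, then the second-longest,
    and so on."""
    return tuple(sorted(route_durations, reverse=True))

def leximax_better(a, b):
    """True if duration vector a is strictly preferred to b under leximax."""
    return leximax_key(a) < leximax_key(b)
```

For example, two solutions may share the same longest route yet differ on the second-longest; plain min-max cannot distinguish them, while leximax prefers the one with the shorter second-longest route.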

    Robust job-sequencing with an uncertain flexible maintenance activity

    In this study, we address the problem of scheduling a set of jobs and one uncertain maintenance activity on a single machine, with the objective of minimizing the makespan. The maintenance activity has a given duration and must be executed within a given time window. Furthermore, the duration and time window of the maintenance are uncertain and can take different values, described by different scenarios. The problem is to determine a job sequence that performs well, in terms of makespan, independently of the possible variations in the maintenance data. A robust scheduling approach is used, in which four different measures of robustness are considered: maximum absolute regret, maximum relative regret, worst-case scenario, and ordered weighted averaging. Complexity and approximation results are presented. In particular, we show that, for all four robustness criteria, the problem is strongly NP-hard. A number of special cases are explored, and an exact pseudopolynomial algorithm based on dynamic programming is devised for the case where the number of scenarios is fixed. Two Mixed Integer Programming (MIP) models are also presented for the general problem. Several computational experiments have been conducted to evaluate the efficiency and effectiveness of the MIP models and of the dynamic programming approach.
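Given a table of scenario makespans for each candidate job sequence, the four robustness measures named above can be computed directly. This sketch assumes the makespans have already been evaluated (the scheduling model itself is not reproduced here), and uses uniform OWA weights by default.

```python
def robust_scores(makespan, weights=None):
    """makespan[seq] is a list of makespans of job sequence `seq`, one
    per scenario. Returns the four robustness objectives per sequence:
    maximum absolute regret, maximum relative regret, worst case, and
    ordered weighted averaging (OWA) over sorted makespans."""
    n_sc = len(next(iter(makespan.values())))
    # best achievable makespan in each scenario, over all sequences
    best = [min(makespan[s][sc] for s in makespan) for sc in range(n_sc)]
    scores = {}
    for s, c in makespan.items():
        abs_reg = [c[sc] - best[sc] for sc in range(n_sc)]
        rel_reg = [(c[sc] - best[sc]) / best[sc] for sc in range(n_sc)]
        w = weights or [1.0 / n_sc] * n_sc
        owa = sum(wi * ci for wi, ci in zip(w, sorted(c, reverse=True)))
        scores[s] = {"abs_regret": max(abs_reg),
                     "rel_regret": max(rel_reg),
                     "worst_case": max(c),
                     "owa": owa}
    return scores
```

With non-uniform, decreasing weights, OWA interpolates between the worst-case criterion (all weight on the largest makespan) and the average over scenarios.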

    A Free Exchange e-Marketplace for Digital Services

    The digital era is witnessing a remarkable evolution of digital services. While the prospects are countless, e-marketplaces for digital services face inherent game-theoretic and computational challenges that restrict the rational choices of bidders. Our work examines the limited bidding scope and the inefficiencies of present exchange e-marketplaces. To meet these challenges, we propose a free exchange e-marketplace that follows the free market economy. The free exchange model includes a new bidding language and a double auction mechanism. The rule-based bidding language enables the flexible expression of preferences and strategic conduct. The bidding message holds the attribute valuations and bidding rules of the selected services. The free exchange deliberates on attributes and logical bidding rules for the automatic deduction and formation of elicited services and bids, resulting in more rapid, self-managed multiple exchange trades. The double auction uses forward and reverse generalized second price auctions for the symmetric matching of multiple digital services with identical attributes and different quality levels. The proposed double auction uses tractable heuristics that secure exchange profitability, improve truthful bidding and deliver stable social efficiency. While the strongest properties of symmetric exchanges are game-theoretically unattainable, the free exchange converges rapidly to social efficiency, Nash truthful stability and weak budget balance through cross-matching over multiple quality levels, constant learning, and the information revealed by repeated thick trades. The empirical findings validate the soundness and viability of the free exchange.
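The core of any double auction is matching the highest buy bids against the lowest sell asks. The following sketch is a minimal call-market matcher with midpoint pricing; it is deliberately simpler than the forward/reverse generalized second price rules described in the abstract, and all names in it are illustrative.

```python
def match_double_auction(bids, asks):
    """Match buy bids with sell asks for one service type.
    bids/asks: lists of (trader_id, price). Bids are served from
    highest to lowest, asks from lowest to highest; each matched pair
    trades at the midpoint of its bid and ask prices."""
    bids = sorted(bids, key=lambda b: -b[1])   # best buyers first
    asks = sorted(asks, key=lambda a: a[1])    # cheapest sellers first
    trades = []
    for (buyer, bid), (seller, ask) in zip(bids, asks):
        if bid < ask:          # no further gains from trade
            break
        trades.append((buyer, seller, (bid + ask) / 2))
    return trades
```

Midpoint pricing keeps the mechanism budget-balanced for each trade, at the cost of the incentive properties that second-price-style rules are designed to recover.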

    Computable Analysis and Game Theory: From Foundations to Applications

    This body of research showcases several facets of the intersection between computer science and game theory. On the foundational side, we explore the obstructions to the computability of Nash equilibria in the setting of computable analysis. In particular, we study the Weihrauch degree of the problem of finding a Nash equilibrium for a multiplayer game in normal form. We conclude that the Weihrauch degree of Nash for multiplayer games lies between AoUC∗[0,1] and AoUC⋄[0,1] (Theorem 5.3). As a slight detour, we also explore the demarcation between computable and non-computable problems pertaining to the verification of machine learning. We demonstrate that many verification questions are computable without the need to specify a machine learning framework (Section 7.2), and we also look into the theory of learners, robustness, and sparsity of training data. On the application side, we study the use of hypergames in cybersecurity. We look into cybersecurity AND/OR attack graphs and how they can be turned into a hypergame (Section 8.1). Hyper Nash equilibrium is not an ideal solution concept for these games, so we propose a regret-minimisation based solution concept instead. In Section 8.2, we survey the area of hypergames and their connection to cybersecurity, showing that even though there is some overlap, its reach is limited. We suggest new research directions such as adaptive games, generalisation and transferability (Section 8.3).

    Algorithms for Scheduling Problems

    This edited book presents new results in the area of algorithm development for different types of scheduling problems. In eleven chapters, algorithms for single machine problems, flow-shop and job-shop scheduling problems (including their hybrid (flexible) variants), the resource-constrained project scheduling problem, scheduling problems in complex manufacturing systems and supply chains, and workflow scheduling problems are given. The chapters address such subjects as insertion heuristics for energy-efficient scheduling, the re-scheduling of train traffic in real time, control algorithms for short-term scheduling in manufacturing systems, bi-objective optimization of tortilla production, scheduling problems with uncertain (interval) processing times, workflow scheduling for digital signal processor (DSP) clusters, and many more

    Flood Management in a Complex River Basin with a Real-Time Decision Support System Based on Hydrological Forecasts

    During the last decades, the Upper Rhone River basin has been hit by several flood events causing damage in excess of 500 million Swiss Francs. In response, the 3rd Rhône river training project was planned in order to improve flood protection in the Upper Rhone River basin in the Vaud and Valais Cantons. In this framework, the MINERVE forecast system aims to contribute to better flow control during flood events in this catchment area, taking advantage of the existing hydropower multi-reservoir network. This system also fits into the OWARNA national project of the Swiss Federal Office for the Environment by establishing a national platform for natural hazard alarms. The Upper Rhone River basin is a catchment area with high mountains and large glaciers. The surface of the basin is 5521 km2 and its elevation varies between 400 and 4634 m a.s.l. Numerous hydropower schemes with large dams and reservoirs are located in the catchment area, influencing the hydrological regime. Their impact during floods can be significant, as appropriate preventive operations can decrease the peak discharges in the Rhone River and its main tributaries, thus reducing the damage. The MINERVE forecast system exploits flow measurements, data from reservoirs and hydropower plants, as well as probabilistic (COSMO-LEPS) and deterministic (COSMO-2 and COSMO-7) numerical weather predictions from MeteoSwiss. The MINERVE hydrological model of the catchment area follows a semi-distributed approach. The basin is split into 239 sub-catchments, which are further sub-divided into 500 m elevation bands, for a total of 1050 bands. For each elevation band, precipitation, temperature and potential evapotranspiration are calculated in order to describe temperature-driven processes, such as snow and glacier melt, accurately. The hydrological model was implemented in the Routing System software.
The object-oriented programming environment allows user-friendly modelling of the hydrological, hydraulic and operating processes. Numerical meteorological data (observed or predicted) are introduced as input to the model. Over the calibration and validation periods of the model, only observed data (precipitation, temperature and flows) were used. For operational flood forecasting, the observed measurements are used to update the initial conditions of the hydrological model, and the weather forecasts drive the hydrological simulations. Routing System then provides hydrological predictions for the whole catchment area. Subsequently, a warning system was developed especially for the basin to provide a flood warning report. The warning system predicts the evolution of the hydrological situation at selected main check points in the catchment area. It displays three warning levels during a flood event, depending on respective critical discharge thresholds. Furthermore, the multi-reservoir system is managed in an optimal way in order to limit or avoid damage during floods. A decision support tool called MINDS (MINERVE Interactive Decision Support System) has been developed for real-time decision making based on the hydrological forecasts. This tool defines preventive operating measures for the hydropower plants, such as turbine and bottom outlet releases, able to provide optimal water storage during the flood peak. The overall goal of MINDS is then to retain the inflowing floods in the reservoirs and to avoid spillway and turbine operations during the peak flow, taking into account all restrictions and current conditions of the network. Such a reservoir management system can therefore significantly decrease flood damage in the catchment area. The reservoir management optimisation during floods is achieved with deterministic and probabilistic forecasts. The objective function to optimise is defined with a multi-attribute decision making approach.
Then, the optimisation is performed with an iterative greedy algorithm or the SCE-UA (Shuffled Complex Evolution – University of Arizona) algorithm. The developed decision support system combines a high-quality optimisation engine with a user-friendly interface. The purpose is to help decision makers by involving them directly in the main steps of the decision-making process and by making the measures undertaken, and their consequences, understandable.
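The per-band treatment of temperature-driven processes described above can be sketched with two standard building blocks of semi-distributed snow models: a constant lapse rate to extrapolate station temperature to each elevation band, and a degree-day melt formula. This is a generic illustration, not the actual MINERVE model equations, and the parameter values are illustrative defaults.

```python
def band_temperature(t_station, z_station, z_band, lapse=-0.0065):
    """Extrapolate a station temperature (deg C) to an elevation band
    using a constant lapse rate (deg C per metre; illustrative value)."""
    return t_station + lapse * (z_band - z_station)

def degree_day_melt(t_band, ddf=5.0, t_melt=0.0):
    """Daily snowmelt (mm/day) from a degree-day factor ddf
    (mm per deg C per day): melt occurs only above the threshold."""
    return ddf * max(t_band - t_melt, 0.0)
```

For example, a 10 deg C reading at a 500 m station extrapolates to about -3 deg C in a band at 2500 m, so that band produces no melt while lower, warmer bands do.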

    Multicriteria methodologies for the appraisal of smart grid projects when flexibility competes with grid expansion

    The severe consequences expected from the increased frequency and intensity of extreme weather events call for improving the environmental sustainability of our society. The electricity sector is pivotal on the path toward a climate-neutral society. Nowadays, the massive use of renewable energy sources requires that electricity demand follow energy production. Demand has to be flexible, as do renewable generation and the grid infrastructure. The power system has to assume a decentralised structure and integrate the transportation, cooling and heating sectors. All customers connected to the electrical grid have to contribute to power system management and participate in the related markets. The power system has to become smart; all technical and market processes have to be digitalised to enable new functionalities and services. The power system transformation requires rethinking planning and operation practices to accommodate the changes and take advantage of the related opportunities. The novel features and services available in the active and flexible power system will influence customers' daily habits; therefore, the impacts generated by planning initiatives will cross the power system's borders and affect society as a whole. Since the power system will be operated closer to its technical limits, it is crucial to enhance the management of uncertainties through increased accuracy of load and generation forecasts. This thesis addresses the ongoing power system transformation by focusing on the distribution system, which will face unprecedented changes. It concerns novel approaches for appraising project initiatives based on the use of the flexibility of users connected to the grid. Traditional appraisal tools are no longer effective; therefore, decision-makers have to be supported with tools capable of capturing the complexity of the future power system, in which flexibility measures compete with grid expansion.
In this thesis, an assessment framework for smart grid initiatives which combines cost-benefit analysis and multi-criteria analysis is proposed. Based on international guidelines, this framework allows for a systematic and simultaneous assessment of tangible and intangible impacts under conflicting criteria. To complete the assessment framework, a novel methodology which combines Regret Theory and multi-criteria analysis is proposed. This methodology represents one of the main contributions of this dissertation. It supports the decision-maker in identifying the most valuable option by decomposing the complex decision-making problem of smart grid planning, and it rejects personal biases by avoiding the need to define the relevance of the evaluation criteria. However, the stakeholders' perspective can be included in terms of constraints for the minimax optimisation problem. In conclusion, the contribution of the thesis is to provide decision-making support tools for strategic power system planning. The research activities described in this document have been aimed at supporting system operators and regulatory bodies by providing tools for smart grid project appraisal and improving the accuracy of power system studies considering the novel context features.
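The regret-based selection idea outlined above, choosing an option without weighting the criteria, can be sketched as a minimax regret rule over a criteria matrix: an option's regret on a criterion is its gap to the best-performing option there, and the chosen option minimises its largest gap. This is a generic illustration of the principle, not the thesis's full methodology, and the option names are hypothetical.

```python
def minimax_regret_choice(scores):
    """scores[option] is a list of per-criterion performances (higher
    is better). No criterion weights are needed: pick the option whose
    largest shortfall against the per-criterion best is smallest."""
    n_crit = len(next(iter(scores.values())))
    best = [max(v[c] for v in scores.values()) for c in range(n_crit)]
    def worst_gap(opt):
        return max(best[c] - scores[opt][c] for c in range(n_crit))
    return min(scores, key=worst_gap)
```

For instance, an option that is mediocre on every criterion can beat one that excels on cost but fails badly on another criterion, which is exactly the bias-avoiding behaviour the text describes.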