A Solution Algorithm for Interval Transportation Problems via Time-Cost Tradeoff
In this paper, an algorithm for solving interval time-cost tradeoff transportation problems is presented. In this problem, all the demands are defined as intervals in order to determine a more realistic duration and cost. Mathematical methods can be used to convert time-cost tradeoff problems into linear programming, integer programming, dynamic programming, goal programming, or multi-objective linear programming problems for determining the optimum duration and cost. Using this approach, the algorithm is developed by converting the interval time-cost tradeoff transportation problem into a linear programming problem, taking the decision maker (DM) into consideration.
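As an illustration of the kind of reduction described above, the sketch below fixes each interval demand at a representative point (here the midpoint, a simplifying assumption rather than the paper's DM-based rule) and solves the resulting classical transportation LP with SciPy; the instance data are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2x2 instance: unit costs, supplies, and interval demands.
costs = np.array([[4.0, 6.0],
                  [5.0, 3.0]])
supply = [30.0, 40.0]
demand_iv = [(15.0, 25.0), (30.0, 40.0)]     # interval demands

# One simple reduction: fix each demand at its interval midpoint,
# then solve the resulting classical transportation LP.
demand = [(lo + hi) / 2.0 for lo, hi in demand_iv]   # [20.0, 35.0]

c = costs.flatten()                # variables x = (x11, x12, x21, x22)
A_ub = [[1, 1, 0, 0],              # shipments out of each source <= supply
        [0, 0, 1, 1]]
A_eq = [[1, 0, 1, 0],              # shipments into each destination == demand
        [0, 1, 0, 1]]
res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * 4, method="highs")
```

For this instance, each destination is served entirely by its cheapest source, giving a minimum cost of 4·20 + 3·35 = 185.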
Asymptotic Optimality of a Time Optimal Path Parametrization Algorithm
Time Optimal Path Parametrization is the problem of minimizing the time
interval during which an actuation constrained agent can traverse a given path.
Recently, an efficient linear-time algorithm for solving this problem was
proposed. However, its optimality was proved for only a strict subclass of
problems solved optimally by more computationally intensive approaches based on
convex programming. In this paper, we prove that the same linear-time algorithm
is asymptotically optimal for all problems solved optimally by convex
optimization approaches. We also characterize the optimum of the Time Optimal
Path Parametrization Problem, which may be of independent interest.
A linear programming based heuristic framework for min-max regret combinatorial optimization problems with interval costs
This work deals with a class of problems under interval data uncertainty,
namely interval robust-hard problems, composed of interval data min-max regret
generalizations of classical NP-hard combinatorial problems modeled as 0-1
integer linear programming problems. These problems are more challenging than
other interval data min-max regret problems, as solely computing the cost of
any feasible solution requires solving an instance of an NP-hard problem. The
state-of-the-art exact algorithms in the literature are based on the generation
of a possibly exponential number of cuts. As each cut separation involves the
resolution of an NP-hard classical optimization problem, the size of the
instances that can be solved efficiently is relatively small. To mitigate this
issue, we present a modeling technique for interval robust-hard problems in the
context of a heuristic framework. The heuristic obtains feasible solutions by
exploring dual information of a linearly relaxed model associated with the
classical optimization problem counterpart. Computational experiments for
interval data min-max regret versions of the restricted shortest path problem
and the set covering problem show that our heuristic is able to find optimal or
near-optimal solutions and also improves the primal bounds obtained by a
state-of-the-art exact algorithm and a 2-approximation procedure for interval
data min-max regret problems.
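As a minimal illustration of the min-max regret criterion with interval costs (not of the paper's heuristic, which targets large NP-hard instances), consider two parallel routes between the same endpoints; a standard fact is that the worst-case scenario for a chosen solution sets its own costs to their upper bounds and all other costs to their lower bounds:

```python
# Hypothetical instance: two parallel routes with interval costs.
edges = {"A": (2, 6), "B": (3, 4)}        # route -> (lower, upper) cost

def max_regret(route):
    # Worst case for `route`: its cost at the upper bound, every
    # alternative at its lower bound.
    scenario = {e: (hi if e == route else lo) for e, (lo, hi) in edges.items()}
    best = min(scenario.values())         # optimal route cost in that scenario
    return scenario[route] - best

robust = min(edges, key=max_regret)       # route minimizing worst-case regret
```

Here route "A" has max regret 6 − 3 = 3 and route "B" has 4 − 2 = 2, so "B" is the min-max regret choice even though "A" can be cheaper in favorable scenarios.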
An Evolutionary Algorithm Using Duality-Base-Enumerating Scheme for Interval Linear Bilevel Programming Problems
The interval bilevel programming problem is hard to solve due to its hierarchical structure as well as the uncertainty of its coefficients. This paper focuses on a class of interval linear bilevel programming problems, and an evolutionary algorithm based on duality bases is proposed. Firstly, the objective coefficients of the lower level and the right-hand-side vector are uniformly encoded as individuals, and the corresponding intervals are taken as the search space. Secondly, for each encoded individual, based on the duality theorem, the original problem is transformed into a single-level program involving only one nonlinear equality constraint. Further, by enumerating duality bases, this nonlinear equality is eliminated, and the single-level program is converted into several linear programs. Finally, each individual can be evaluated by solving these linear programs. Computational results on 7 examples show that the algorithm is feasible and robust.
Multi‐objective linear programming with interval coefficients
Purpose
The purpose of this paper is to extend a methodology for solving multi‐objective linear programming (MOLP) problems when the objective function and constraint coefficients are stated as interval numbers.
Design/methodology/approach
The approach proposed in this paper for the considered problem is based on the maximization of the sum of membership degrees, which are defined for each objective of the multi‐objective problem. These membership degrees are constructed based on the deviation from the optimal solutions of the individual objectives. The final model based on membership degrees is itself an interval linear program, which can be solved by existing methods.
Findings
The efficiency of the solutions obtained by the proposed method is proved: the solution obtained for an interval multi‐objective problem is shown to be Pareto optimal.
Research limitations/implications
The proposed method can be used in modeling and analyzing uncertain systems that are formulated as multi‐objective problems and in which the required information is ill-defined.
Originality/value
The paper proposes a novel and well‐defined algorithm to solve the considered problem.
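A minimal sketch of the membership-degree construction for crisp (non-interval) coefficients, which is the core step before the interval extension; the instance data and the SciPy usage are illustrative assumptions, not the paper's exact model:

```python
import numpy as np
from scipy.optimize import linprog

# Two maximization objectives over a shared feasible region.
z = np.array([[4.0, 1.0],    # z1 = 4*x1 + x2
              [1.0, 3.0]])   # z2 = x1 + 3*x2
A = [[1, 1], [1, 0], [0, 1]]
b = [4, 3, 3]
bnd = [(0, None)] * 2

# Step 1: optimize each objective individually (payoff table).
sols = []
for k in range(2):
    r = linprog(-z[k], A_ub=A, b_ub=b, bounds=bnd, method="highs")
    sols.append(r.x)
payoff = np.array([[z[k] @ x for x in sols] for k in range(2)])
best = payoff.diagonal()            # z_k at its own optimum
worst = payoff.min(axis=1)          # z_k's worst value in the payoff table

# Step 2: membership mu_k(x) = (z_k(x) - worst_k) / (best_k - worst_k);
# maximizing mu_1 + mu_2 is again a linear program.
w = (z.T / (best - worst)).sum(axis=1)   # gradient of the summed memberships
r = linprog(-w, A_ub=A, b_ub=b, bounds=bnd, method="highs")
total = -r.fun - (worst / (best - worst)).sum()
```

In this instance the summed membership attains its maximum value 1 along a whole edge of the feasible region, so the solver returns one of several Pareto-optimal compromise vertices.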
Linear programming algorithms for lower previsions
The thesis begins with a brief summary of linear programming, three methods for solving linear programs (the simplex, the affine scaling and the primal-dual methods)
and a brief review of desirability and lower previsions. The first contribution is to improve these three methods for efficiently solving the linear programming problems that arise in checking avoiding sure loss. To exploit the structure of these linear programs, I reduce their size and propose novel improvements, namely extra stopping criteria and direct ways to calculate feasible starting points in almost all cases. To benchmark the improvements, I present algorithms for generating random sets of desirable gambles that either avoid or do not avoid sure loss.
Overall, the affine scaling and primal-dual methods benefit from the improvements, and they both outperform the simplex method in most scenarios. Hence, I conclude that the simplex method is not a good choice for checking avoiding sure loss. If problems are small, then there is no tangible difference in performance between all methods. For large problems, the improved primal-dual method performs
at least three times faster than any of the other methods.
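The avoiding-sure-loss check reduces to a linear feasibility problem: a finite set of desirable gambles incurs sure loss iff some nonnegative combination of them is uniformly negative. A minimal sketch with SciPy's LP solver, using hypothetical toy gambles (the thesis develops far more refined solvers for this check):

```python
import numpy as np
from scipy.optimize import linprog

def avoids_sure_loss(gambles):
    """Check whether a finite set of desirable gambles avoids sure loss.

    gambles: list of payoff tuples, one entry per outcome.  The set incurs
    sure loss iff there are lambda_i >= 0 with
    sum_i lambda_i * f_i(w) <= -1 for every outcome w (scaling turns the
    strict inequality into an LP feasibility problem)."""
    A = np.array(gambles, dtype=float).T   # rows: outcomes, cols: gambles
    n = A.shape[1]
    res = linprog(c=np.zeros(n), A_ub=A, b_ub=-np.ones(A.shape[0]),
                  bounds=[(0, None)] * n, method="highs")
    return not res.success                 # infeasible => avoids sure loss
```

For example, the set {(1, −2), (−2, 1)} incurs sure loss (λ = (1, 1) loses 1 in every outcome), while {(1, −1), (−1, 1)} avoids it.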
The second contribution is to study checking avoiding sure loss for sets of desirable gambles derived from betting odds. Specifically, in the UK betting market, bookmakers usually provide odds and give a free coupon, which can be spent on betting, to customers who first bet with them. I investigate whether a customer can exploit these odds and the free coupon in order to make a sure gain, and if that
is possible, how that can be achieved. To answer this question, I view these odds and the free coupon as a set of desirable gambles and present an algorithm to check
whether and how such a set incurs sure loss. I show that the Choquet integral and complementary slackness can be used to answer these questions. This can inform
the customers how much should be placed on each bet in order to make a sure gain.
As an illustration, I show an example using actual betting odds in the market where all sets of desirable gambles derived from those odds avoid sure loss. However,
with a free coupon, there are some combinations of bets that the customers could place in order to make a guaranteed gain.
I also consider maximality, which is a criterion for decision making under uncertainty using lower previsions. I study two existing algorithms, one proposed by Troffaes and Hable (2014), and one by Jansen, Augustin, and Schollmeyer (2017). For the last contribution in the thesis, I present a new algorithm for finding maximal gambles and provide a new method for generating random decision problems to benchmark these algorithms on generated sets.
To find all maximal gambles, Jansen et al. solve one large linear program for each gamble, while in Troffaes and Hable, and also in my new algorithm, this is done by solving a longer sequence of smaller linear programs. For the second case, I apply the efficient methods from the first contribution to find a common feasible starting point for this sequence of linear programs. Exploiting these feasible starting points, I propose early stopping criteria that further improve the efficiency of the primal-dual method.
For benchmarking, I can generate sets of gambles with pre-specified ratios of maximal and interval-dominant gambles. I investigate the use of interval dominance at the beginning to eliminate non-maximal gambles. I find that this can make the problem smaller and benefits Jansen et al.'s algorithm, but perhaps surprisingly, not the other two algorithms. I find that my algorithm, without using interval dominance, outperforms all other algorithms in all scenarios in the benchmarking.
Improving Strategies via SMT Solving
We consider the problem of computing numerical invariants of programs by
abstract interpretation. Our method eschews two traditional sources of
imprecision: (i) the use of widening operators for enforcing convergence within
a finite number of iterations, and (ii) the use of merge operations (often, convex
hulls) at the merge points of the control flow graph. It instead computes the
least inductive invariant expressible in the domain at a restricted set of
program points, and analyzes the rest of the code en bloc. We emphasize that we
compute this inductive invariant precisely. For that we extend the strategy
improvement algorithm of [Gawlitza and Seidl, 2007]. If we applied their method
directly, we would have to solve an exponentially sized system of abstract
semantic equations, resulting in memory exhaustion. Instead, we keep the system
implicit and discover strategy improvements using SAT modulo real linear
arithmetic (SMT). For evaluating strategies we use linear programming. Our
algorithm has low polynomial space complexity and, on contrived worst-case
examples, performs exponentially many strategy improvement steps; this
is unsurprising, since we show that the associated abstract reachability
problem is Π₂ᵖ-complete.