Speeding-up Dynamic Programming with Representative Sets - An Experimental Evaluation of Algorithms for Steiner Tree on Tree Decompositions
Dynamic programming on tree decompositions is a frequently used approach to
solve otherwise intractable problems on instances of small treewidth. In recent
work by Bodlaender et al., it was shown that for many connectivity problems,
there exist algorithms that use time, linear in the number of vertices, and
single exponential in the width of the tree decomposition that is used. The
central idea is that it suffices to compute representative sets, and these can
be computed efficiently with help of Gaussian elimination.
In this paper, we give an experimental evaluation of this technique for the
Steiner Tree problem. A comparison of the classic dynamic programming algorithm
and the improved dynamic programming algorithm that employs the table reduction
shows that the new approach significantly reduces both the running time of the
algorithm and the size of the tables it computes. Thus, the rank-based approach
of Bodlaender et al. not only gives significant theoretical improvements but is
also viable in a practical setting, and it showcases the potential of exploiting
representative sets to speed up dynamic programming algorithms.
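The core operation behind the rank-based reduction is Gaussian elimination over GF(2): partial solutions whose rows are linear combinations of other rows can be discarded without losing any optimal completion. A minimal sketch, with a hypothetical toy table in which each row is a partial solution's compatibility vector against a set of possible completions (the function name and data are illustrative, not the authors' implementation):

```python
def gf2_row_basis(rows):
    """Keep only rows that are linearly independent over GF(2).

    Each row is a list of 0/1 entries. A row that is a GF(2)
    combination of kept rows is redundant: any completion it is
    compatible with is also covered by some kept row, so it can
    be dropped from the dynamic programming table.
    """
    basis = []   # reduced rows kept so far
    kept = []    # indices of representative rows
    for i, row in enumerate(rows):
        r = row[:]
        for b in basis:
            # pivot column = first nonzero entry of the basis row
            pivot = next(j for j, v in enumerate(b) if v)
            if r[pivot]:
                r = [x ^ y for x, y in zip(r, b)]  # eliminate over GF(2)
        if any(r):          # independent: keep as a representative
            basis.append(r)
            kept.append(i)
    return kept

# Toy table: 4 partial solutions, 3 completion "cuts".
table = [
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 0],  # = row 0 XOR row 1 over GF(2): redundant
    [1, 1, 1],
]
print(gf2_row_basis(table))  # → [0, 1, 3]
```

The table shrinks from four rows to three here; on real instances the bound is the rank, which is single exponential in the treewidth rather than the (larger) number of table entries.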
On optimal control policy for Probabilistic Boolean Network: a state reduction approach
BACKGROUND:
A Probabilistic Boolean Network (PBN) is a popular model for studying genetic regulatory networks. An important and practical problem is to find the optimal control policy for a PBN so as to prevent the network from entering undesirable states. A number of studies have addressed this with dynamic programming (DP) based methods. However, due to the high computational complexity of PBNs, the DP method is computationally inefficient for large networks. It is therefore natural to seek approximation methods.
RESULTS:
Inspired by state reduction strategies, we use dynamic programming in conjunction with a state reduction approach to reduce the computational cost of the DP method. Numerical examples demonstrate both the effectiveness and the efficiency of the proposed method.
CONCLUSIONS:
Finding the optimal control policy for PBNs is meaningful. The problem has been shown to be Σ₂ᵖ-hard. By taking the state reduction approach into consideration, the proposed method speeds up the dynamic programming based algorithm; in particular, it remains effective for larger networks.
Dynamic programming algorithm for the vehicle routing problem with time windows and EC social legislation
In practice, apart from the problem of vehicle routing, schedulers also face the problem of finding feasible driver schedules complying with complex restrictions on drivers' driving and working hours. To address this complex interdependent problem of vehicle routing and break scheduling, we propose a dynamic programming approach for the vehicle routing problem with time windows including the EC social legislation on drivers' driving and working hours. Our algorithm includes all optional rules in these legislations, which are generally ignored in the literature. To include the legislation in the dynamic programming algorithm we propose a break scheduling method that does not increase the time complexity of the algorithm. This is a remarkable effect that generally does not hold for local search methods, which have proved very successful in solving less restricted vehicle routing problems. Computational results show that our method finds solutions to benchmark instances with 18% fewer vehicles and 5% less travel distance than state-of-the-art approaches. Furthermore, they show that including all optional rules of the legislation leads to an additional reduction of 4% in the number of vehicles and of 1.5% in travel distance. Therefore, the optional rules should be exploited in practice.
An improving dynamic programming algorithm to solve the shortest path problem with time windows
An efficient use of dynamic programming requires a substantial reduction of the number of labels. In this paper we propose an efficient way of reducing both the number of labels saved and the dominance computing time. Our approach is validated by experiments on instances of the shortest path problem with time windows.
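In labeling algorithms for this problem, the standard reduction device is dominance: a label (cost, time) at a node is pruned when another label at the same node is at least as good in both components. A minimal sketch, assuming a tiny hypothetical instance (the graph, windows, and function name are illustrative, not the paper's algorithm):

```python
import heapq

def spptw(n, arcs, windows, source, target):
    """Labeling algorithm for the shortest path problem with time
    windows, with dominance-based label reduction.

    arcs: dict (u, v) -> (cost, travel_time)
    windows[v] = (earliest, latest) service time at node v.
    A label (cost, time) at v is pruned if a stored label at v has
    both lower-or-equal cost and lower-or-equal time: it can never
    be extended into a strictly better feasible path.
    """
    labels = {v: [] for v in range(n)}   # non-dominated (cost, time)
    heap = [(0, windows[source][0], source)]
    best = None
    while heap:
        cost, time, u = heapq.heappop(heap)
        if u == target:
            best = cost if best is None else min(best, cost)
            continue
        for (a, b), (c, t) in arcs.items():
            if a != u:
                continue
            nt = max(time + t, windows[b][0])   # wait if arriving early
            if nt > windows[b][1]:
                continue                        # time window violated
            nc = cost + c
            # dominance check: skip if some stored label dominates
            if any(lc <= nc and lt <= nt for lc, lt in labels[b]):
                continue
            # discard stored labels that the new label dominates
            labels[b] = [(lc, lt) for lc, lt in labels[b]
                         if not (nc <= lc and nt <= lt)]
            labels[b].append((nc, nt))
            heapq.heappush(heap, (nc, nt, b))
    return best

arcs = {(0, 1): (1, 2), (0, 2): (5, 1), (1, 3): (1, 5), (2, 3): (1, 1)}
windows = {0: (0, 10), 1: (0, 10), 2: (0, 10), 3: (0, 4)}
print(spptw(4, arcs, windows, 0, 3))  # → 6
```

The cheap path 0-1-3 (cost 2) is cut off by node 3's window, so the dearer but timely path 0-2-3 wins; the dominance test is what keeps the label lists, and hence the running time, small.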
A HJB-POD approach for the control of nonlinear PDEs on a tree structure
The Dynamic Programming approach makes it possible to compute a feedback
control for nonlinear problems, but it suffers from the curse of
dimensionality. The computation of the control relies on the resolution of a
nonlinear PDE, the Hamilton-Jacobi-Bellman equation, of the same dimension as
the original problem. Recently, a new numerical method to compute the value
function on a tree structure has been introduced. The method works without a
structured grid and avoids any interpolation.
Here, we aim to test the algorithm on nonlinear two-dimensional PDEs. We
apply model order reduction to decrease the computational complexity, since
the tree structure algorithm requires solving many PDEs. Furthermore, we prove
an error estimate which guarantees the convergence of the proposed method.
Finally, we show the efficiency of the method through numerical tests.
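The idea of a value function on a tree can be sketched in one dimension: from the initial state, each discrete control spawns a child node via an Euler step, and the value is obtained by backward induction on the resulting tree, with no grid and no interpolation. A toy sketch under those assumptions (the function name, dynamics, and cost are illustrative, not the paper's PDE setting):

```python
def tree_value(x0, controls, f, cost, dt, steps):
    """Value function on a tree of trajectories (toy 1-D sketch).

    From x0, each node branches once per discrete control via the
    explicit Euler step x + dt * f(x, u). Values live on tree nodes,
    so no spatial grid or interpolation is needed.
    Returns the value at the root by backward induction.
    """
    # forward sweep: build the tree level by level
    levels = [[x0]]
    for _ in range(steps):
        levels.append([x + dt * f(x, u)
                       for x in levels[-1] for u in controls])
    # backward sweep: V(leaf) = 0, then the Bellman recursion
    V = [0.0] * len(levels[-1])
    m = len(controls)
    for k in range(steps - 1, -1, -1):
        # child of node i under control j sits at index i * m + j
        V = [min(dt * cost(x, u) + V[i * m + j]
                 for j, u in enumerate(controls))
             for i, x in enumerate(levels[k])]
    return V[0]

v = tree_value(1.0, [-1.0, 0.0, 1.0],
               f=lambda x, u: u, cost=lambda x, u: x * x,
               dt=0.5, steps=2)
print(v)  # → 0.625
```

The tree has |controls|^steps leaves, which is exactly why the paper combines this construction with model order reduction: each node of the real problem requires solving a (reduced) PDE.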
Risk and optimal policies in bandit experiments
This paper provides a decision theoretic analysis of bandit experiments. The
bandit setting corresponds to a dynamic programming problem, but solving this
directly is typically infeasible. Working within the framework of diffusion
asymptotics, we define a suitable notion of asymptotic Bayes risk for bandit
settings. For normally distributed rewards, the minimal Bayes risk can be
characterized as the solution to a nonlinear second-order partial differential
equation (PDE). Using a limit of experiments approach, we show that this PDE
characterization also holds asymptotically under both parametric and
non-parametric distributions of the rewards. The approach further identifies
the state variables to which it is asymptotically sufficient to restrict
attention, and
therefore suggests a practical strategy for dimension reduction. The upshot is
that we can approximate the dynamic programming problem defining the bandit
setting with a PDE which can be efficiently solved using sparse matrix
routines. We derive near-optimal policies from the numerical solutions to these
equations. The proposed policies substantially dominate existing methods such
as Thompson sampling. The framework also allows for substantial
generalizations of the bandit problem, such as time discounting and pure
exploration motives.
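For context, the Thompson sampling baseline the paper compares against is straightforward for normally distributed rewards: maintain a Gaussian posterior per arm, sample a mean from each, and pull the argmax. A minimal sketch, assuming a N(0, σ²) prior on each arm's mean and known noise σ (the function name and parameters are illustrative):

```python
import numpy as np

def thompson_gaussian(means, horizon, sigma=1.0, seed=0):
    """Thompson sampling for Gaussian bandits (baseline sketch).

    Assumes a N(0, sigma^2) prior on each arm's mean and rewards
    N(mean, sigma^2), so the posterior of arm i after n_i pulls with
    reward sum s_i is N(s_i / (n_i + 1), sigma^2 / (n_i + 1)).
    Returns total reward and the per-arm pull counts.
    """
    rng = np.random.default_rng(seed)
    k = len(means)
    n = np.zeros(k)              # pull counts
    s = np.zeros(k)              # sums of observed rewards
    total = 0.0
    for _ in range(horizon):
        post_mean = s / (n + 1)
        post_std = sigma / np.sqrt(n + 1)
        # sample one mean per arm from its posterior, pull the argmax
        arm = int(np.argmax(rng.normal(post_mean, post_std)))
        reward = rng.normal(means[arm], sigma)
        n[arm] += 1
        s[arm] += reward
        total += reward
    return total, n

total, pulls = thompson_gaussian([0.0, 1.0], horizon=2000)
```

The heuristic needs no PDE solve, which is what makes it the natural baseline; the paper's point is that policies derived from the limiting PDE achieve lower Bayes risk.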