
    Algorithms for the continuous nonlinear resource allocation problem---new implementations and numerical studies

    Patriksson (2008) provided a then up-to-date survey on the continuous, separable, differentiable and convex resource allocation problem with a single resource constraint. Since the publication of that paper, interest in the problem has grown: several new applications have arisen in which the problem constitutes a subproblem, and several new algorithms have been developed for its efficient solution. This paper therefore serves three purposes. First, it provides an up-to-date extension of the survey of the literature of the field, complementing the survey in Patriksson (2008) with more than 20 books and articles. Second, it contributes improvements of some of these algorithms, in particular an improvement of the pegging (that is, variable fixing) process in the relaxation algorithm, and an improved means to evaluate subsolutions. Third, it numerically evaluates several relaxation (primal) and breakpoint (dual) algorithms, incorporating a variety of pegging strategies, as well as a quasi-Newton method. Our conclusion is that our modification of the relaxation algorithm performs the best. At least for problem sizes up to 30 million variables, the practical time complexity of the breakpoint and relaxation algorithms is linear.
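    The single-resource structure described above admits a simple dual method: at an optimal multiplier for the resource constraint, each variable solves its own one-dimensional problem in closed form. Below is a minimal numpy sketch of that multiplier-search idea for a quadratic instance; the function name, the quadratic objective, and the bisection search are illustrative choices, not the paper's improved relaxation or breakpoint algorithms.

```python
import numpy as np

def allocate(a, b, lo, hi, r, tol=1e-10, iters=200):
    """Minimal sketch: solve  min sum_i 0.5*a_i*x_i^2 - b_i*x_i
    s.t.  sum_i x_i = r,  lo_i <= x_i <= hi_i,  by bisection on the
    multiplier of the resource constraint (a simple dual method,
    not the paper's improved relaxation/pegging algorithm)."""
    def x_of(lam):
        # KKT stationarity: x_i(lam) = clip((b_i - lam) / a_i, lo_i, hi_i)
        return np.clip((b - lam) / a, lo, hi)

    # Bracket lam so that g(lam) = sum x_i(lam) - r changes sign;
    # g is nonincreasing in lam.
    lam_lo = np.min(b - a * hi)   # x(lam_lo) = hi  ->  g >= 0
    lam_hi = np.max(b - a * lo)   # x(lam_hi) = lo  ->  g <= 0
    for _ in range(iters):
        lam = 0.5 * (lam_lo + lam_hi)
        g = x_of(lam).sum() - r
        if abs(g) < tol:
            break
        if g > 0:
            lam_lo = lam
        else:
            lam_hi = lam
    return x_of(lam)

# Small usage check: components stay in [0, 1] and sum to r.
rng = np.random.default_rng(0)
n = 5
a, b = rng.uniform(1, 2, n), rng.uniform(0, 1, n)
x = allocate(a, b, np.zeros(n), np.ones(n), r=2.0)
print(x, x.sum())
```

    A breakpoint method would replace the bisection with a search over the sorted multiplier values at which variables hit their bounds, while a relaxation (pegging) algorithm repeatedly solves the equality-constrained problem and fixes violating variables at their bounds.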

    Phase transitions in project scheduling

    The analysis of the complexity of combinatorial optimization problems has led to the distinction between problems that are solvable in a polynomially bounded amount of time (classified in P) and problems for which no polynomial-time algorithm is known (the NP-complete and NP-hard problems). The latter are therefore considered hard to solve, whereas the problems in P are not. However, this analysis is based on worst-case scenarios: the fact that a decision problem is shown to be NP-complete, or that an optimization problem is shown to be NP-hard, implies only that, in the worst case, solving it is very hard. Recent computational results obtained with a well-known NP-hard problem, namely the resource-constrained project scheduling problem, indicate that many instances are actually easy to solve. These results are in line with those recently obtained by researchers in the area of artificial intelligence, which show that many NP-complete problems exhibit so-called phase transitions, resulting in a sudden and dramatic change of computational complexity based on one or more order parameters that are characteristic of the system as a whole. In this paper we provide evidence for the existence of phase transitions in various resource-constrained project scheduling problems. We discuss the use of network complexity measures and resource parameters as potential order parameters. We show that while the network complexity measures seem to reveal continuous easy-hard or hard-easy phase transitions, the resource parameters exhibit an easy-hard-easy transition behaviour.
    Keywords: Networks; Problems; Scheduling; Algorithms
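    The easy-hard-easy pattern is easy to reproduce on a toy NP-complete problem. The sketch below sweeps an order parameter and records average search effort, which peaks between the under- and over-constrained regimes; it uses subset sum purely for illustration, not the paper's project scheduling experiments, and all names are made up here.

```python
import random

def nodes_to_decide(weights, target):
    """Count the nodes a plain depth-first search visits to decide whether
    some subset of `weights` sums exactly to `target` (a toy NP-complete
    decision problem; illustrative only)."""
    count = 0
    def dfs(i, remaining):
        nonlocal count
        count += 1
        if remaining == 0:
            return True
        # Prune: overshoot, or too little weight left to reach the target.
        if i == len(weights) or remaining < 0 or sum(weights[i:]) < remaining:
            return False
        return dfs(i + 1, remaining - weights[i]) or dfs(i + 1, remaining)
    dfs(0, target)
    return count

# Sweep an 'order parameter' (target as a fraction of the total weight) and
# record mean search effort; the peak near the middle is the easy-hard-easy
# pattern the abstract describes for the resource parameters.
random.seed(1)
for frac in (0.1, 0.3, 0.5, 0.7, 0.9):
    effort = []
    for _ in range(20):
        w = sorted((random.randint(1, 10**5) for _ in range(20)), reverse=True)
        effort.append(nodes_to_decide(w, int(frac * sum(w))))
    print(f"target fraction {frac:.1f}: mean nodes {sum(effort) / len(effort):.0f}")
```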

    Sidescan Sonar Image Enhancement Using a Decomposition Based on Orthogonal Functions. Applications with Chebyshev Polynomials

    A method is presented to remove, from sidescan sonar images of the seafloor, artifacts that are clearly unrelated to the backscattering properties of the seafloor. A spectral analysis performed on a ping-by-ping basis proved to be well suited to the problem. The technique relies on a decomposition using Chebyshev polynomials. This stochastic method does not require a priori knowledge of deterministic parameters. It deals with the low spatial frequency components of the image, whose wavelengths are not very small compared to the swath width. Applications to sidescan sonar images obtained with the SeaMARC II system are presented.
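    One plausible reading of the technique: fit a low-order Chebyshev expansion across each ping and remove the fitted trend, so that only across-track components with wavelengths comparable to the swath width are affected. The numpy sketch below follows that reading; the polynomial degree, the divide-out normalization, and all names are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def remove_low_freq(image, degree=6, eps=1e-6):
    """Per-ping detrending with Chebyshev polynomials: fit a low-order
    polynomial to each ping (row) and divide the fitted trend out,
    flattening the low-spatial-frequency artifact while leaving the
    fine-scale backscatter texture largely untouched (illustrative)."""
    rows, cols = image.shape
    t = np.linspace(-1.0, 1.0, cols)            # natural Chebyshev domain
    out = np.empty_like(image, dtype=float)
    for k in range(rows):                       # ping-by-ping processing
        coef = C.chebfit(t, image[k], degree)   # least-squares fit
        trend = C.chebval(t, coef)              # low-frequency component
        out[k] = image[k] / np.maximum(trend, eps)
    return out
```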

    Second order adjoints for solving PDE-constrained optimization problems

    Inverse problems are of utmost importance in many fields of science and engineering. In the variational approach, inverse problems are formulated as PDE-constrained optimization problems, where the optimal estimate of the uncertain parameters is the minimizer of a certain cost functional subject to the constraints posed by the model equations. The numerical solution of such optimization problems requires the computation of derivatives of the model output with respect to model parameters. The first order derivatives of a cost functional (defined on the model output) with respect to a large number of model parameters can be calculated efficiently through first order adjoint sensitivity analysis. Second order adjoint models give second derivative information in the form of matrix-vector products between the Hessian of the cost functional and user-defined vectors. Traditionally, the construction of second order derivatives for large scale models has been considered too costly. Consequently, data assimilation applications employ optimization algorithms that use only first order derivative information, like nonlinear conjugate gradients and quasi-Newton methods. In this paper we discuss the mathematical foundations of second order adjoint sensitivity analysis and show that it provides an efficient approach to obtain Hessian-vector products. We study the benefits of using second order information in the numerical optimization process for data assimilation applications. The numerical studies are performed in a twin experiment setting with a two-dimensional shallow water model. Different scenarios are considered with different discretization approaches, observation sets, and noise levels. Optimization algorithms that employ second order derivatives are tested against widely used methods that require only first order derivatives. Conclusions are drawn regarding the potential benefits and the limitations of using high-order information in large scale data assimilation problems.
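    The key identity behind Hessian-vector products is that they equal the directional derivative of the gradient, so the Hessian never needs to be formed. The sketch below checks that identity with a finite-difference surrogate on a toy quadratic cost functional; a second order adjoint model would deliver the same product exactly and with machine-precision accuracy, and everything here (the function names, the quadratic J) is illustrative rather than taken from the paper.

```python
import numpy as np

def hvp_fd(grad, x, v, eps=1e-6):
    """Hessian-vector product via  H(x) v = d/dt grad(x + t*v) |_{t=0},
    approximated with central finite differences of two gradient calls.
    A second order adjoint computes the same product exactly at comparable
    cost; this numpy surrogate only illustrates the principle."""
    return (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)

# Toy cost functional J(x) = 0.5 * x^T A x with known Hessian A.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T                        # symmetric positive semidefinite Hessian
grad = lambda x: A @ x             # exact gradient of J
x, v = rng.standard_normal(4), rng.standard_normal(4)
print(np.allclose(hvp_fd(grad, x, v), A @ v, atol=1e-4))  # True
```

    This is exactly the quantity a truncated-Newton (Newton-CG) optimizer consumes: it needs repeated products H v, never the full Hessian, which is what makes second order information affordable at large scale.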