A parallel computation approach for solving multistage stochastic network problems
This paper presents a parallel computation approach for the efficient solution of very
large multistage linear and nonlinear network problems with random parameters. These
problems result from particular instances of models for the robust optimization of network
problems with uncertainty in the values of the right-hand side and the objective function
coefficients. The methodology considered here models the uncertainty using scenarios to
characterize the random parameters. A scenario tree is generated and, through the use of
full-recourse techniques, an implementable solution is obtained for each group of scenarios
at each stage along the planning horizon.
As a consequence of the size of the resulting problems, and the special structure of their
constraints, these models are particularly well-suited for the application of decomposition
techniques, and the solution of the corresponding subproblems in a parallel computation
environment. An augmented Lagrangian decomposition algorithm has been implemented
on a distributed computation environment, and a static load balancing approach has been
chosen for the parallelization scheme, given the subproblem structure of the model. Large
problems (9000 scenarios and 14 stages, with a deterministic equivalent nonlinear model
having 166000 constraints and 230000 variables) are solved in 45 minutes on a cluster of
four small (11 Mflops) workstations. An extensive set of computational experiments is
reported; the numerical results and running times obtained for our test set, composed of
large-scale real-life problems, confirm the efficiency of this procedure.
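The abstract does not spell out the decomposition itself, so the following is only a minimal sketch of a generic augmented Lagrangian scenario decomposition of the kind described: each scenario subproblem is solved independently (the step that parallelizes), and multipliers on the nonanticipativity constraints are updated until the per-scenario first-stage decisions agree. The quadratic objective, scenario data, and penalty parameter are all assumptions for illustration, not the paper's model.

import numpy as np

# Hypothetical two-stage example: scenario s has objective
# f_s(x) = 0.5 * (x - a[s])**2 with probability p[s]; nonanticipativity
# requires every scenario's first-stage decision x[s] to equal z.
a = np.array([1.0, 3.0, 8.0])   # scenario-dependent data (assumed)
p = np.array([0.5, 0.3, 0.2])   # scenario probabilities (assumed)
rho = 1.0                       # augmented Lagrangian penalty (assumed)

lam = np.zeros_like(a)          # multipliers on x[s] - z = 0
z = 0.0
for it in range(100):
    # Subproblem for scenario s (solvable in parallel):
    #   min_x f_s(x) + lam[s]*(x - z) + (rho/2)*(x - z)**2
    # which has a closed form for this quadratic f_s:
    x = (a - lam + rho * z) / (1.0 + rho)
    z = np.dot(p, x)            # probability-weighted consensus value
    lam += rho * (x - z)        # multiplier (dual) update
print("implementable first-stage decision:", z)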
Machine Learning for Fluid Mechanics
The field of fluid mechanics is rapidly advancing, driven by unprecedented
volumes of data from field measurements, experiments and large-scale
simulations at multiple spatiotemporal scales. Machine learning offers a wealth
of techniques to extract information from data that could be translated into
knowledge about the underlying fluid mechanics. Moreover, machine learning
algorithms can augment domain knowledge and automate tasks related to flow
control and optimization. This article presents an overview of the history,
current developments, and emerging opportunities of machine learning for fluid
mechanics. It outlines fundamental machine learning methodologies and discusses
their uses for understanding, modeling, optimizing, and controlling fluid
flows. The strengths and limitations of these methods are addressed from the
perspective of scientific inquiry that considers data as an inherent part of
modeling, experimentation, and simulation. Machine learning provides a powerful
information processing framework that can enrich, and possibly even transform,
current lines of fluid mechanics research and industrial applications.
Comment: To appear in the Annual Review of Fluid Mechanics, 2020
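Overviews of this kind typically count data-driven modal decompositions among the fundamental methodologies; as an illustration only (not taken from the article), here is a minimal proper orthogonal decomposition (POD) of synthetic flow snapshots via the SVD. The snapshot data and its two-mode structure are invented for the example.

import numpy as np

# Hypothetical snapshot matrix: each column is one flow-field snapshot
# (synthetic data built from two spatial modes plus small noise).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
t = np.linspace(0.0, 10.0, 80)
X = (np.outer(np.sin(x), np.cos(t)) +
     0.5 * np.outer(np.sin(2 * x), np.sin(3 * t)) +
     0.01 * rng.standard_normal((x.size, t.size)))

# POD: subtract the mean flow, then take the SVD; left singular vectors
# are spatial modes, and squared singular values rank their energy.
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy captured by first two modes:", energy[:2].sum())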
Convex Relaxations for Gas Expansion Planning
Expansion of natural gas networks is a critical process involving substantial
capital expenditures with complex decision-support requirements. Given the
non-convex nature of gas transmission constraints, global optimality and
infeasibility guarantees can only be offered by global optimisation approaches.
Unfortunately, state-of-the-art global optimisation solvers are unable to scale
up to real-world size instances. In this study, we present a convex
mixed-integer second-order cone relaxation for the gas expansion planning
problem under steady-state conditions. The underlying model offers tight lower
bounds with high computational efficiency. In addition, the optimal solution of
the relaxation can often be used to derive high-quality solutions to the
original problem, leading to provably tight optimality gaps and, in some cases,
globally optimal solutions. The convex relaxation is based on a few key ideas,
including the introduction of flux direction variables, exact McCormick
relaxations, on/off constraints, and integer cuts. Numerical experiments are
conducted on the traditional Belgian gas network, as well as on other, larger real
networks. The results demonstrate both the accuracy and computational speed of
the relaxation and its ability to produce high-quality solutions.
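For reference, the exact McCormick relaxation the authors mention is the standard convex envelope of a bilinear term, a textbook construction rather than anything specific to this paper: for w = xy with x in [x^L, x^U] and y in [y^L, y^U], it consists of four linear inequalities, written below in LaTeX.

\begin{align}
w &\ge x^L y + y^L x - x^L y^L, \\
w &\ge x^U y + y^U x - x^U y^U, \\
w &\le x^U y + y^L x - x^U y^L, \\
w &\le x^L y + y^U x - x^L y^U.
\end{align}

When the flux direction variables fix the sign of flow, the variable bounds tighten and these envelopes can become exact, which is what makes the relaxation strong.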
Research and Education in Computational Science and Engineering
Over the past two decades the field of computational science and engineering
(CSE) has penetrated both basic and applied research in academia, industry, and
laboratories to advance discovery, optimize systems, support decision-makers,
and educate the scientific and engineering workforce. Informed by centuries of
theory and experiment, CSE performs computational experiments to answer
questions that neither theory nor experiment alone is equipped to answer. CSE
provides scientists and engineers of all persuasions with algorithmic
inventions and software systems that transcend disciplines and scales. Carried
on a wave of digital technology, CSE brings the power of parallelism to bear on
troves of data. Mathematics-based advanced computing has become a prevalent
means of discovery and innovation in essentially all areas of science,
engineering, technology, and society; and the CSE community is at the core of
this transformation. However, a combination of disruptive
developments---including the architectural complexity of extreme-scale
computing, the data revolution that engulfs the planet, and the specialization
required to follow the applications to new frontiers---is redefining the scope
and reach of the CSE endeavor. This report describes the rapid expansion of CSE
and the challenges to sustaining its bold advances. The report also presents
strategies and directions for CSE research and education for the next decade.
Comment: Major revision, to appear in SIAM Review
An Alternating Trust Region Algorithm for Distributed Linearly Constrained Nonlinear Programs, Application to the AC Optimal Power Flow
A novel trust region method for solving linearly constrained nonlinear
programs is presented. The proposed technique is amenable to a distributed
implementation, as its salient ingredient is an alternating projected gradient
sweep in place of the Cauchy point computation. It is proven that the algorithm
yields a sequence that globally converges to a critical point. As a result of
some changes to the standard trust region method, namely a proximal
regularisation of the trust region subproblem, it is shown that the local
convergence rate is linear with an arbitrarily small ratio. Thus, convergence
is locally almost superlinear, under standard regularity assumptions. The
proposed method is successfully applied to compute local solutions to
alternating current optimal power flow problems in transmission and
distribution networks. Moreover, the new mechanism for computing a Cauchy point
compares favourably against the standard projected search in terms of its
activity detection properties.
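The paper's alternating projected gradient sweep is not detailed in the abstract, so the sketch below only shows the two ingredients it relates: the classical Cauchy point that the sweep replaces, computed for a hypothetical quadratic model, and a single projected gradient step onto box constraints to convey the flavour of the alternative. The model data, step size, and bounds are assumptions for illustration.

import numpy as np

def cauchy_point(g, B, delta):
    # Classical Cauchy point: minimiser of the model
    # m(p) = g.p + 0.5 p.B.p along -g, clipped to the
    # trust region ||p|| <= delta.
    gBg = g @ B @ g
    gnorm = np.linalg.norm(g)
    if gBg <= 0.0:          # model decreases without bound along -g
        tau = 1.0
    else:
        tau = min(gnorm**3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g

def projected_gradient_step(x, g, lo, hi, alpha=0.1):
    # One projected gradient step onto the box lo <= x <= hi
    # (a stand-in for one pass of an alternating sweep).
    return np.clip(x - alpha * g, lo, hi)

# Hypothetical model data.
g = np.array([1.0, -2.0])
B = np.array([[2.0, 0.0], [0.0, 1.0]])
print("Cauchy point:", cauchy_point(g, B, delta=1.0))
print("projected step:", projected_gradient_step(np.zeros(2), g,
                                                 lo=-0.5 * np.ones(2),
                                                 hi=0.5 * np.ones(2)))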