Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy
[Abstract]
Background
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods require the tuning of several adjustable search parameters, which demands initial exploratory runs and therefore further increases computation times.
Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies.
Results
The performance and robustness of saCeSS are illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network.
The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used.
Conclusions
The new parallel cooperative method presented here allows the solution of medium- and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
Ministerio de Economía y Competitividad: DPI2011-28112-C04-03, DPI2011-28112-C04-04, DPI2014-55276-C5-2-R, TIN2013-42148-P, TIN2016-75845-P
Galicia. Consellería de Cultura, Educación e Ordenación Universitaria: R2014/041, R2016/045, GRC2013/05
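The parameter-estimation task described in this abstract can be sketched on a toy kinetic model. Everything below is an illustrative stand-in, not the saCeSS algorithm: a single hypothetical rate constant k in dx/dt = -k*x is fitted by least squares using a crude random global search in place of the scatter-search metaheuristic.

```python
import random

def simulate(k, x0=1.0, dt=0.01, steps=200):
    # Forward-Euler integration of the toy kinetic model dx/dt = -k * x
    xs, x = [], x0
    for _ in range(steps):
        xs.append(x)
        x += dt * (-k * x)
    return xs

# "Observed" data generated with a known rate constant (illustrative only)
k_true = 0.8
data = simulate(k_true)

def cost(k):
    # Sum-of-squares mismatch between model prediction and the data
    pred = simulate(k)
    return sum((p - d) ** 2 for p, d in zip(pred, data))

# Crude global search: sample candidate parameters at random and keep the
# best one (a stand-in for the scatter-search metaheuristic in the paper)
random.seed(0)
best_k = min((random.uniform(0.0, 2.0) for _ in range(500)), key=cost)
print(best_k)  # close to k_true = 0.8
```

In a real problem the model has many parameters and stiff dynamics, which is exactly why the computational cost motivates the parallel, cooperative strategy the abstract describes.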
Efficient Nonlinear Optimization with Rigorous Models for Large Scale Industrial Chemical Processes
Large scale nonlinear programming (NLP) has proven to be an effective framework
for obtaining profit gains through optimal process design and operations in
chemical engineering. While the classical SQP and Interior Point methods have been
successfully applied to solve many optimization problems, the focus of both academia
and industry on larger and more complicated problems requires further development
of numerical algorithms which can provide improved computational efficiency.
The primary purpose of this dissertation is to develop effective problem formulations
and advanced numerical algorithms for the efficient solution of these challenging
problems. As problem sizes increase, there is a need for tailored algorithms that
can exploit problem specific structure. Furthermore, computer chip manufacturers
are no longer focusing on increased clock-speeds, but rather on hyperthreading and
multi-core architectures. Therefore, to see continued performance improvement, we
must focus on algorithms that can exploit emerging parallel computing architectures.
In this dissertation, we develop an advanced parallel solution strategy for nonlinear
programming problems with block-angular structure. The effectiveness of this strategy
and of modern off-the-shelf tools is demonstrated on a wide range of problem classes.
Here, we treat optimal design, optimal operation, dynamic optimization, and
parameter estimation. Two case studies (air separation units and heat-integrated columns) are investigated to address design under uncertainty using rigorous models.
For optimal operation, this dissertation takes cryogenic air separation units as
a primary case study and focuses on formulations for handling uncertain product
demands, contractual constraints on customer satisfaction levels, and variable power
pricing. Multiperiod formulations provide operating plans that consider inventory to
meet customer demands and improve profits.
In the area of dynamic optimization, optimal reference trajectories are determined
for load changes in an air separation process. A multiscenario programming
formulation is again used, this time with large-scale discretized dynamic models.
Finally, to emphasize a different decomposition approach, we address a problem
with significant spatial complexity. Unknown water demands within a large scale
city-wide distribution network are estimated. This problem admits a different
decomposition mechanism than the multiscenario or multiperiod problems; nevertheless,
our parallel approach achieves effective speedup.
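The block-angular structure exploited throughout this dissertation can be illustrated on a hypothetical toy problem (not taken from the work itself): each scenario block has its own variables, coupled only through a few shared variables, so for fixed coupling values the blocks decouple and could be solved independently in parallel, with an outer loop coordinating the coupling.

```python
# Toy block-angular problem (illustrative assumption): per-scenario targets,
# one shared coupling variable z linking the scenario blocks.
targets = [1.0, 3.0, 8.0]

def solve_block(t, z):
    # Block subproblem: min_x (x - t)^2 + (x - z)^2, solved in closed form.
    # With z fixed, each block is independent (parallelizable in practice).
    x = (t + z) / 2.0
    return (x - t) ** 2 + (x - z) ** 2, x

def total_cost(z):
    # Sum of optimal block costs for a given coupling value
    return sum(solve_block(t, z)[0] for t in targets)

# Crude outer search over the single coupling variable z
best_z = min((i * 0.01 for i in range(1001)), key=total_cost)
print(best_z)  # the mean of the targets, 4.0
```

Real multiperiod and multiscenario NLPs replace the closed-form block solve with a full nonlinear subproblem and the outer scan with a structured interior-point or decomposition scheme, but the separability being exploited is the same.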
Research and Education in Computational Science and Engineering
Over the past two decades the field of computational science and engineering
(CSE) has penetrated both basic and applied research in academia, industry, and
laboratories to advance discovery, optimize systems, support decision-makers,
and educate the scientific and engineering workforce. Informed by centuries of
theory and experiment, CSE performs computational experiments to answer
questions that neither theory nor experiment alone is equipped to answer. CSE
provides scientists and engineers of all persuasions with algorithmic
inventions and software systems that transcend disciplines and scales. Carried
on a wave of digital technology, CSE brings the power of parallelism to bear on
troves of data. Mathematics-based advanced computing has become a prevalent
means of discovery and innovation in essentially all areas of science,
engineering, technology, and society; and the CSE community is at the core of
this transformation. However, a combination of disruptive
developments---including the architectural complexity of extreme-scale
computing, the data revolution that engulfs the planet, and the specialization
required to follow the applications to new frontiers---is redefining the scope
and reach of the CSE endeavor. This report describes the rapid expansion of CSE
and the challenges to sustaining its bold advances. The report also presents
strategies and directions for CSE research and education for the next decade.
Comment: Major revision, to appear in SIAM Review
Optimization Methods for Inverse Problems
Optimization plays an important role in solving many inverse problems.
Indeed, the task of inversion often either involves or is fully cast as a
solution of an optimization problem. In this light, the sheer non-linear,
non-convex, and large-scale nature of many of these inversions gives rise to
some very challenging optimization problems. The inverse problem community has
long been developing various techniques for solving such optimization tasks.
However, other, seemingly disjoint communities, such as that of machine
learning, have developed, almost in parallel, interesting alternative methods
which might have stayed under the radar of the inverse problem community. In
this survey, we aim to change that. In doing so, we first discuss current
state-of-the-art optimization methods widely used in inverse problems. We then
survey recent related advances in addressing similar challenges in problems
faced by the machine learning community, and discuss their potential advantages
for solving inverse problems. By highlighting the similarities among the
optimization challenges faced by the inverse problem and the machine learning
communities, we hope that this survey can serve as a bridge in bringing
together these two communities and encourage cross-fertilization of ideas.
Comment: 13 pages
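The survey's framing of inversion as an optimization problem can be sketched on the simplest case, a linear forward operator with Tikhonov regularization; the operator, data, and gradient-descent solver below are illustrative assumptions, since real inversions are nonlinear and vastly larger.

```python
# Toy linear inverse problem: recover x from y = A x by minimizing the
# regularized least-squares objective  ||A x - y||^2 + lam * ||x||^2.
A = [[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # hypothetical forward operator (3x2)
x_true = [1.0, -2.0]
y = [sum(a * xt for a, xt in zip(row, x_true)) for row in A]  # noiseless data

lam = 1e-3          # small Tikhonov regularization weight
x = [0.0, 0.0]      # initial guess
for _ in range(2000):
    # residual r = A x - y
    r = [sum(a * xi for a, xi in zip(row, x)) - yi for row, yi in zip(A, y)]
    # gradient of the objective: 2 A^T r + 2 lam x
    g = [2 * sum(A[i][j] * r[i] for i in range(3)) + 2 * lam * x[j]
         for j in range(2)]
    x = [xi - 0.05 * gi for xi, gi in zip(x, g)]  # fixed-step gradient descent

print(x)  # close to x_true = [1, -2]
```

The machine-learning methods the survey discusses (stochastic and accelerated first-order schemes) target exactly this kind of objective when the residual is a sum over many data points.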
Tensor Computation: A New Framework for High-Dimensional Problems in EDA
Many critical EDA problems suffer from the curse of dimensionality, i.e. the
very fast-scaling computational burden produced by large number of parameters
and/or unknown variables. This phenomenon may be caused by multiple spatial or
temporal factors (e.g. 3-D field solvers discretizations and multi-rate circuit
simulation), nonlinearity of devices and circuits, large number of design or
optimization parameters (e.g. full-chip routing/placement and circuit sizing),
or extensive process variations (e.g. variability/reliability analysis and
design for manufacturability). The computational challenges generated by such
high dimensional problems are generally hard to handle efficiently with
traditional EDA core algorithms that are based on matrix and vector
computation. This paper presents "tensor computation" as an alternative general
framework for the development of efficient EDA algorithms and tools. A tensor
is a high-dimensional generalization of a matrix and a vector, and is a natural
choice for both storing and solving efficiently high-dimensional EDA problems.
This paper gives a basic tutorial on tensors, demonstrates some recent examples
of EDA applications (e.g., nonlinear circuit modeling and high-dimensional
uncertainty quantification), and suggests further open EDA problems where the
use of tensor computation could be advantageous.
Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems
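The storage advantage behind tensor methods can be shown in a few lines; the rank-1 three-way tensor below is a minimal illustrative case (real EDA applications use higher ranks and dimensions), assuming only that a tensor entry factors as a product of vector entries.

```python
# A rank-1 "CP" representation stores a d1 x d2 x d3 tensor with
# d1 + d2 + d3 numbers instead of d1 * d2 * d3 -- the basic compression
# that tensor frameworks exploit against the curse of dimensionality.
a = [1.0, 2.0]
b = [3.0, 4.0, 5.0]
c = [0.5, -1.0]

# Dense tensor T[i][j][k] = a[i] * b[j] * c[k], i.e. what a matrix/vector
# based tool would have to store explicitly
T = [[[ai * bj * ck for ck in c] for bj in b] for ai in a]

dense_size = len(a) * len(b) * len(c)       # 12 stored entries
factored_size = len(a) + len(b) + len(c)    # 7 stored entries
print(T[1][2][0], dense_size, factored_size)
```

For a d-way tensor with mode size n, the dense cost n**d versus the factored cost d*n is exactly the gap that makes tensor decompositions attractive for high-dimensional EDA problems.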