Second order conic approximation for disassembly line design with joint probabilistic constraints
A problem of profit-oriented disassembly line design and balancing with possible partial disassembly and presence of hazardous parts is studied. The objective is to design a production line providing maximal revenue with a balanced workload. Task times are assumed to be random variables with known normal probability distributions. The cycle time constraints are to be jointly satisfied with at least a predetermined probability level. An AND/OR graph is used to model the precedence relationships among tasks. Several lower- and upper-bounding schemes are developed using second order cone programming and convex piecewise linear approximation. To show the relevance and applicability of the proposed approach, a set of instances from the literature is solved to optimality.
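The deterministic equivalent behind the conic reformulation can be illustrated for a single workstation: with independent normally distributed task times, P(sum of T_i <= C) >= p reduces to mu_total + z_p * sigma_total <= C, where z_p is the standard normal p-quantile; this is the second-order cone form the paper's bounding schemes exploit. A minimal sketch for one station (task data and cycle time are illustrative, not from the paper; the joint constraint across all stations, which couples the stations, is what requires the full SOCP and piecewise-linear machinery):

```python
from math import sqrt
from statistics import NormalDist

def station_feasible(mus, sigmas, cycle_time, p):
    """Check P(sum of independent N(mu_i, sigma_i^2) task times <= cycle_time) >= p.

    Deterministic equivalent for one station:
        mu_total + z_p * sigma_total <= cycle_time,
    where z_p is the p-quantile of the standard normal distribution.
    """
    z_p = NormalDist().inv_cdf(p)
    mu_total = sum(mus)
    sigma_total = sqrt(sum(s * s for s in sigmas))
    return mu_total + z_p * sigma_total <= cycle_time

# Illustrative data: two tasks assigned to one station.
print(station_feasible([2.0, 3.0], [0.5, 0.5], cycle_time=6.0, p=0.9))  # enough slack
print(station_feasible([2.0, 3.0], [0.5, 0.5], cycle_time=5.5, p=0.9))  # too tight
```

Raising p raises z_p and therefore tightens the constraint, which is the revenue/reliability trade-off the design problem balances.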
Coexistence versus extinction in the stochastic cyclic Lotka-Volterra model
Cyclic dominance of species has been identified as a potential mechanism to
maintain biodiversity, see e.g. B. Kerr, M. A. Riley, M. W. Feldman and B. J.
M. Bohannan [Nature {\bf 418}, 171 (2002)] and B. Kirkup and M. A. Riley
[Nature {\bf 428}, 412 (2004)]. Through analytical methods supported by
numerical simulations, we address this issue by studying the properties of a
paradigmatic non-spatial three-species stochastic system, namely the
`rock-paper-scissors' or cyclic Lotka-Volterra model. While the deterministic
approach (rate equations) predicts the coexistence of the species resulting in
regular (yet neutrally stable) oscillations of the population densities, we
demonstrate that fluctuations arising in the system with a \emph{finite number
of agents} drastically alter this picture and are responsible for extinction:
After long enough time, two of the three species die out. As main findings we
provide analytic estimates and numerical computation of the extinction
probability at a given time. We also discuss the implications of our results
for a broad class of competing population systems.
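The finite-size extinction mechanism described above is easy to reproduce with a direct stochastic simulation. A minimal agent-based sketch (the Moran-like pairwise update rule, population size, and species labels are illustrative choices, not the paper's exact formulation):

```python
import random

def simulate_rps(n_per_species=10, max_steps=200_000, seed=1):
    """Cyclic dominance ('rock-paper-scissors') in a finite well-mixed population.

    Each step, two random agents meet; the dominant species converts the other.
    The deterministic rate equations predict neutral oscillations, but with a
    finite number of agents fluctuations eventually drive two species extinct.
    """
    random.seed(seed)
    pop = ['R'] * n_per_species + ['P'] * n_per_species + ['S'] * n_per_species
    beats = {'R': 'S', 'P': 'R', 'S': 'P'}  # rock beats scissors, etc.
    for step in range(max_steps):
        if len(set(pop)) == 1:              # absorbing state: one species left
            return pop[0], step
        i, j = random.sample(range(len(pop)), 2)
        a, b = pop[i], pop[j]
        if beats[a] == b:
            pop[j] = a
        elif beats[b] == a:
            pop[i] = b
    return None, max_steps                  # not absorbed within the step budget

survivor, steps = simulate_rps()
print(survivor, steps)
```

For a population of 30 agents the absorbing boundary is typically reached after a few hundred interactions, consistent with the abstract's point that extinction times remain finite for any finite number of agents.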
Decomposition techniques for large scale stochastic linear programs
Stochastic linear programming is an effective and often used technique for incorporating uncertainties about future events into decision making processes. Stochastic linear programs tend to be significantly larger than other types of linear programs and generally require sophisticated decomposition solution procedures. Detailed algorithms based upon Dantzig-Wolfe and L-Shaped decomposition are developed and implemented. These algorithms allow for solutions to within an arbitrary tolerance on the gap between the lower and upper bounds on a problem's objective function value. Special procedures and implementation strategies are presented that enable many multi-period stochastic linear programs to be solved with two-stage, instead of nested, decomposition techniques. Consequently, a broad class of large scale problems, with tens of millions of constraints and variables, can be solved on a personal computer. Myopic decomposition algorithms based upon a shortsighted view of the future are also developed. Although unable to guarantee an arbitrary solution tolerance, myopic decomposition algorithms may yield very good solutions in a fraction of the time required by Dantzig-Wolfe/L-Shaped decomposition based algorithms. In addition, derivations are given for statistics, based upon Mahalanobis squared distances, that can be used to provide measures for a random sample's effectiveness in approximating a parent distribution. Results and analyses are provided for the applications of the decomposition procedures and sample effectiveness measures to a multi-period market investment model.
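The bound-gap convergence that the L-Shaped method relies on can be sketched on a toy two-stage problem: a first-stage capacity choice with linear recourse cost for unmet demand. All data below are illustrative, and the one-dimensional master problem is minimized by grid search rather than an LP solver, purely to keep the sketch self-contained:

```python
def expected_recourse(x, demands, q=3.0):
    """Q(x): expected second-stage cost, q per unit of unmet demand."""
    return sum(q * max(d - x, 0.0) for d in demands) / len(demands)

def subgradient(x, demands, q=3.0):
    """A subgradient of Q at x (Q is convex piecewise linear)."""
    return sum(-q for d in demands if d > x) / len(demands)

def l_shaped(demands, c=1.0, x_max=10.0, tol=1e-9, max_iters=20):
    """Two-stage L-Shaped (Benders) loop: min_x c*x + Q(x), x in [0, x_max].

    Each iteration adds the optimality cut  theta >= Q(x_k) + g_k*(x - x_k)
    and re-solves the relaxed master  min c*x + theta  subject to all cuts.
    The master is solved by grid search here; a real implementation uses an
    LP solver. Stop when the upper/lower bound gap is within tolerance.
    """
    grid = [i * 0.25 for i in range(int(x_max / 0.25) + 1)]
    cuts = []                     # (intercept, slope) pairs: theta >= a + g*x
    x = 0.0
    for _ in range(max_iters):
        g = subgradient(x, demands)
        a = expected_recourse(x, demands) - g * x
        cuts.append((a, g))
        master = lambda xv: c * xv + max(ai + gi * xv for ai, gi in cuts)
        x = min(grid, key=master)
        lower = master(x)                               # master value: lower bound
        upper = c * x + expected_recourse(x, demands)   # true cost: upper bound
        if upper - lower < tol:
            break
    return x, upper

x_opt, cost = l_shaped([2.0, 4.0, 6.0])
print(x_opt, cost)
```

On this toy instance the gap closes in four iterations; the same add-a-cut, re-solve-the-master loop is what scales, with a proper LP solver, to the large multi-period programs the abstract describes.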
Signatures of arithmetic simplicity in metabolic network architecture
Metabolic networks perform some of the most fundamental functions in living
cells, including energy transduction and building block biosynthesis. While
these are the best characterized networks in living systems, understanding
their evolutionary history and complex wiring constitutes one of the most
fascinating open questions in biology, intimately related to the enigma of
life's origin itself. Is the evolution of metabolism subject to general
principles, beyond the unpredictable accumulation of multiple historical
accidents? Here we search for such principles by applying to an artificial
chemical universe some of the methodologies developed for the study of genome
scale models of cellular metabolism. In particular, we use metabolic flux
constraint-based models to exhaustively search for artificial chemistry
pathways that can optimally perform an array of elementary metabolic functions.
Despite the simplicity of the model employed, we find that the ensuing pathways
display a surprisingly rich set of properties, including the existence of
autocatalytic cycles and hierarchical modules, the appearance of universally
preferable metabolites and reactions, and a logarithmic trend of pathway length
as a function of input/output molecule size. Some of these properties can be
derived analytically, borrowing methods previously used in cryptography. In
addition, by mapping biochemical networks onto a simplified carbon atom
reaction backbone, we find that several of the properties predicted by the
artificial chemistry model hold for real metabolic networks. These findings
suggest that optimality principles and arithmetic simplicity might lie beneath
some aspects of biochemical complexity.
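The logarithmic pathway-length trend has a simple combinatorial analogue: assembling an n-carbon molecule from one-carbon units by pairwise ligation, with reuse of intermediates, is the addition-chain problem familiar from cryptographic exponentiation, and the minimal number of steps grows like log2(n). A toy sketch of this analogy (an illustration, not the paper's actual artificial-chemistry model):

```python
from collections import deque

def min_ligation_steps(target):
    """Fewest pairwise 'ligation' steps to build a target-size molecule from
    size-1 units, reusing intermediates -- i.e. a shortest addition chain.

    Breadth-first search over the set of available sizes guarantees that the
    first pathway found has minimal length.
    """
    if target == 1:
        return 0
    start = frozenset({1})
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        sizes, depth = queue.popleft()
        for a in sizes:
            for b in sizes:
                c = a + b                     # ligate an a-mer and a b-mer
                if c == target:
                    return depth + 1
                if c < target:
                    nxt = frozenset(sizes | {c})
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, depth + 1))

for n in (2, 8, 15, 16):
    print(n, min_ligation_steps(n))
```

Powers of two are cheapest (repeated doubling), while sizes like 15 need extra steps; minimal pathway length tracks log2 of the product size, mirroring the trend the abstract reports for input/output molecule size.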