An optimal-control based integrated model of supply chain
Problems of supply chain scheduling are challenged by high complexity, a combination of continuous and discrete processes, integrated production and transportation operations, as well as dynamics and the resulting requirements for adaptability and stability analysis. Modern control theory, and optimal program control in particular, opens a possibility to address these issues. Based on a combination of fundamental results of modern optimal program control theory and operations research, an original approach to supply chain scheduling is developed in order to answer the challenges of complexity, dynamics, uncertainty, and adaptivity. Supply chain schedule generation is represented as an optimal program control problem in combination with mathematical programming and interpreted as a dynamic process of operations control within an adaptive framework. The calculation procedure is based on applying Pontryagin’s maximum principle, with the resulting essential reduction in the dimensionality of the problem that has to be solved at each instant of time. With the developed model, important categories of supply chain analysis such as stability and adaptability can be taken into consideration. Besides, the dimensionality of operations research-based problems can be relieved by distributing model elements between an operations research model (static aspects) and a control model (dynamic aspects). In addition, operations control and flow control models are integrated and applicable to both discrete and continuous processes. Keywords: supply chain, model of supply chain scheduling, optimal program control theory, Pontryagin’s maximum principle, operations research model.
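As a rough illustration of the scheduling idea described above, the Python sketch below (with made-up jobs, resources, rates, and weights; not the paper's model) shows how Pontryagin's maximum principle turns the dynamic schedule into a sequence of small static problems: at each instant the assignment maximizing the Hamiltonian is chosen and the flow state is integrated forward.

    # Illustrative sketch (not the paper's model): supply chain scheduling as
    # optimal program control. At each instant, maximizing the Hamiltonian
    # reduces the decision to a small static assignment problem: pick the
    # job-to-resource assignment with the largest weighted processing rate,
    # then integrate the flow state forward. All data below are made up.
    import itertools

    rates = {("jobA", "res1"): 4.0, ("jobA", "res2"): 2.0,
             ("jobB", "res1"): 3.0, ("jobB", "res2"): 5.0}
    remaining = {"jobA": 20.0, "jobB": 30.0}   # state x(t): unfinished volume
    costate = {"jobA": 1.0, "jobB": 1.0}       # psi(t): constant for a terminal-volume objective
    dt, horizon, t = 0.5, 12.0, 0.0

    while t < horizon and any(v > 1e-9 for v in remaining.values()):
        # Instantaneous static problem: each resource serves at most one job.
        best, best_h = None, float("-inf")
        for assign in itertools.permutations(["jobA", "jobB"]):  # jobs mapped to (res1, res2)
            h = sum(costate[j] * rates[(j, r)]
                    for j, r in zip(assign, ["res1", "res2"]) if remaining[j] > 0)
            if h > best_h:
                best, best_h = assign, h
        # Integrate the flow dynamics dx/dt = -u under the chosen control.
        for j, r in zip(best, ["res1", "res2"]):
            remaining[j] = max(0.0, remaining[j] - rates[(j, r)] * dt)
        t += dt

    print(remaining)   # unfinished volume at the planning horizon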
Development of an unsteady aerodynamics model to improve correlation of computed blade stresses with test data
A reliable, operational rotor aeroelastic analysis that correctly predicts the vibration levels of a helicopter is used to test various unsteady aerodynamics models with the objective of improving the correlation between test and theory. This analysis, the Rotor Aeroelastic Vibration (RAVIB) computer program, is based on a frequency-domain forced response analysis that uses transfer matrix techniques to model helicopter/rotor dynamic systems of varying degrees of complexity. The results for the AH-1G helicopter rotor were compared with flight test data during high-speed operation and indicated a reasonably good correlation for the beamwise and chordwise blade bending moments, but the correlation for torsional moments was poor. As a result, a new aerodynamics model based on unstalled synthesized data derived from large-amplitude oscillating airfoil experiments was developed and tested.
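The transfer matrix technique mentioned above can be pictured with a small stand-alone sketch. The Python fragment below (illustrative only; it is not the RAVIB program, and the element values are invented) propagates a [displacement, force] state through lumped spring and mass elements and evaluates the resonance condition of a fixed-free chain.

    # Minimal sketch of the transfer-matrix idea behind a frequency-domain
    # forced-response analysis (illustrative only; not the RAVIB program).
    # The state vector [displacement, force] is propagated through lumped
    # mass and spring elements; chaining the matrices models the assembly.
    import numpy as np

    def mass_matrix(m, omega):
        # Point mass: displacement unchanged, force picks up the inertial term.
        return np.array([[1.0, 0.0], [-m * omega**2, 1.0]])

    def spring_matrix(k):
        # Massless spring: force unchanged, displacement changes by force/k.
        return np.array([[1.0, 1.0 / k], [0.0, 1.0]])

    def chain(omega, elements):
        T = np.eye(2)
        for kind, value in elements:
            T = (mass_matrix(value, omega) if kind == "m" else spring_matrix(value)) @ T
        return T

    # Example two-degree-of-freedom chain: spring-mass-spring-mass (made-up values).
    elements = [("k", 1.0e5), ("m", 2.0), ("k", 1.0e5), ("m", 2.0)]
    for omega in (50.0, 150.0, 300.0):
        T = chain(omega, elements)
        # For a fixed-free chain, resonance occurs where T[1, 1] crosses zero.
        print(omega, T[1, 1])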
A new implementation of the programming system for structural synthesis (PROSSS-2)
This new implementation of the PROgramming System for Structural Synthesis (PROSSS-2) combines a general-purpose finite element computer program for structural analysis, a state-of-the-art optimization program, and several user-supplied, problem-dependent computer programs. The result is flexibility in the organization of the optimization procedure and versatility in the formulation of constraints and design variables. The analysis-optimization process yields a minimized objective function, typically the mass. The analysis and optimization programs are executed repeatedly by looping through the system until the process is stopped by a user-defined termination criterion. However, some of the analysis, such as model definition, need only be performed once, and the results are saved for later use. The user must write some small, simple FORTRAN programs to interface between the analysis and optimization programs. One of these programs, the front processor, converts the design variables output by the optimizer into a format suitable for input to the analyzer. Another, the end processor, retrieves the behavior variables and, optionally, their gradients from the analysis program and evaluates the objective function and constraints and, optionally, their gradients. These quantities are output in a format suitable for input to the optimizer. These user-supplied programs are problem-dependent because they depend primarily on which finite elements are used in the model. PROSSS-2 differs from the original PROSSS in that the optimizer and the front and end processors have been integrated into the finite element computer program. This was done to reduce the complexity and increase the portability of the system, and to take advantage of the data handling features found in the finite element program.
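The loop described above can be sketched in a hedged way. In the Python fragment below, front_processor, analyzer, end_processor, and optimizer are hypothetical stand-ins with toy formulas, not the actual NASA programs; they only show how data cycles through the system until a user-defined termination criterion is met.

    # Hedged sketch of a PROSSS-2-style analysis-optimization loop. The
    # front/end processors, analyzer, and optimizer below are hypothetical
    # stand-ins (toy formulas), meant only to show how data flows around
    # the loop until a user-defined termination criterion is met.

    def front_processor(design_vars):
        # Convert optimizer output into analyzer input (e.g., member sizing).
        return {"thickness": design_vars}

    def analyzer(model):
        # Stand-in structural analysis: behavior variables (member stresses).
        return [1.0 / t for t in model["thickness"]]

    def end_processor(design_vars, stresses, allowable=2.0):
        # Evaluate the objective (mass ~ sum of thicknesses) and constraints.
        mass = sum(design_vars)
        constraints = [s - allowable for s in stresses]   # feasible if <= 0
        return mass, constraints

    def optimizer(design_vars, constraints, step):
        # Toy update: thicken members with violated constraints, thin the rest.
        return [t + step if g > 0 else max(t - step, 0.1)
                for t, g in zip(design_vars, constraints)]

    design = [1.0, 1.0, 1.0]
    for iteration in range(50):
        behavior = analyzer(front_processor(design))
        mass, constraints = end_processor(design, behavior)
        new_design = optimizer(design, constraints, step=0.5 / (iteration + 1))
        if max(abs(a - b) for a, b in zip(design, new_design)) < 1e-4:
            break                                 # termination criterion met
        design = new_design

    print(round(mass, 3), [round(t, 3) for t in design])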
A Simple and Scalable Static Analysis for Bound Analysis and Amortized Complexity Analysis
We present the first scalable bound analysis that achieves amortized complexity analysis. In contrast to earlier work, our bound analysis is not based on general-purpose reasoners such as abstract interpreters, software model checkers, or computer algebra tools. Rather, we derive bounds directly from abstract program models, which we obtain from programs by comparatively simple invariant generation and symbolic execution techniques. As a result, we obtain an analysis that is more predictable and more scalable than earlier approaches. Our experiments demonstrate that our analysis is fast and at the same time able to compute bounds for challenging loops in a large real-world benchmark. Technically, our approach is based on lossy vector addition systems (VASS). Our bound analysis first computes a lexicographic ranking function that proves the termination of a VASS and then derives a bound from this ranking function. Our methodology achieves amortized analysis based on a new insight into how lexicographic ranking functions can be used for bound analysis.
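A toy example may help make the VASS-based bound derivation concrete. In the Python sketch below (an assumed example, not the paper's tool), a two-counter system with an "outer" and an "inner" transition is checked against a candidate lexicographic ranking function, and an amortized bound is read off from the initial counter values plus the total increments.

    # Illustrative sketch of the ranking-function idea for a lossy VASS.
    # Transitions add integer vectors to counters. A lexicographic ranking
    # function assigns each transition a component that it strictly decreases
    # while not increasing earlier components; a bound is then read off from
    # initial counter values plus the total increments to each component.

    transitions = {         # example: the outer step also "feeds" the inner counter
        "outer": (-1, +1),  # x -= 1, y += 1
        "inner": (0, -1),   # y -= 1
    }
    lex_component = {"outer": 0, "inner": 1}   # candidate lexicographic assignment

    def is_lex_ranking(transitions, lex_component):
        for name, delta in transitions.items():
            c = lex_component[name]
            if delta[c] >= 0:                        # must strictly decrease its component
                return False
            if any(delta[i] > 0 for i in range(c)):  # must not increase earlier components
                return False
        return True

    def amortized_bound(x0, y0):
        # "outer" runs at most x0 times; each run adds 1 to y, so "inner"
        # runs at most y0 + x0 times: an amortized rather than nested bound.
        return x0 + (y0 + x0)

    print(is_lex_ranking(transitions, lex_component))   # True
    print(amortized_bound(10, 3))                       # 23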
Analysis of Climate Policy Targets under Uncertainty
Abstract and PDF report are also available on the MIT Joint Program on the Science and Policy of Global Change website (http://globalchange.mit.edu/). Although policymaking in response to climate change is essentially a challenge of risk management, most studies of the relation of emissions targets to desired climate outcomes are either deterministic or subject to a limited representation of the underlying uncertainties. Monte Carlo simulation, applied to the MIT Integrated Global System Model (an integrated economic and earth system model of intermediate complexity), is used to analyze the uncertain outcomes that flow from a set of century-scale emissions targets developed originally for a study by the U.S. Climate Change Science Program. Results are shown for atmospheric concentrations, radiative forcing, sea ice cover, and temperature change, along with estimates of the odds of achieving particular target levels, and for the global costs of the associated mitigation policy. Comparisons with other studies of climate targets are presented as evidence of the value, in understanding the climate challenge, of a more complete analysis of uncertainties in human emissions and climate system response. This study received support from the MIT Joint Program on the Science and Policy of Global Change, which is funded by a consortium of government, industry, and foundation sponsors.
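The Monte Carlo step can be illustrated independently of the IGSM. The sketch below (illustrative Python; the distribution, forcing target, and response relation are invented and are not the MIT model) samples an uncertain climate sensitivity and reports the odds of staying under a warming threshold for a fixed forcing target.

    # Hedged illustration of the Monte Carlo idea (not the MIT IGSM): sample
    # an uncertain climate-sensitivity parameter, propagate a fixed forcing
    # target through a one-line response relation, and report the odds of
    # staying below a warming level. All numbers below are made up.
    import random, math

    random.seed(0)
    target_forcing = 3.7          # W/m^2, illustrative stabilization level
    threshold = 2.0               # degrees C warming target
    n = 100_000

    def sample_sensitivity():
        # Log-normal equilibrium climate sensitivity (deg C per CO2 doubling).
        return math.exp(random.gauss(math.log(3.0), 0.4))

    hits = 0
    for _ in range(n):
        ecs = sample_sensitivity()
        warming = ecs * target_forcing / 3.7      # equilibrium response scaling
        if warming <= threshold:
            hits += 1

    print(f"P(warming <= {threshold} C) ~ {hits / n:.2%}")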
Analysing Parallel Complexity of Term Rewriting
We revisit parallel-innermost term rewriting as a model of parallel computation on inductive data structures and provide a corresponding notion of runtime complexity parametric in the size of the start term. We propose automatic techniques to derive both upper and lower bounds on the parallel complexity of rewriting that enable a direct reuse of existing techniques for sequential complexity. The applicability and the precision of the method are demonstrated by the relatively light effort in extending the program analysis tool AProVE and by experiments on numerous benchmarks from the literature. Comment: Extended authors' accepted manuscript for a paper accepted for publication in the Proceedings of the 32nd International Symposium on Logic-based Program Synthesis and Transformation (LOPSTR 2022). 27 pages.
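The notion of parallel-innermost runtime can be illustrated with a toy evaluator. The Python sketch below (not AProVE; the rewrite system for a tree "size" function is an assumed example) counts sequential steps as a sum over argument positions and parallel steps as a maximum, which is the distinction behind a parallel complexity notion.

    # Toy illustration of parallel-innermost rewriting cost: innermost redexes
    # in independent argument positions rewrite simultaneously, so parallel
    # complexity is a max over arguments where sequential complexity is a sum.
    # Terms are nested tuples: ("node", left, right) or ("leaf",).

    def costs(term):
        # Returns (sequential_steps, parallel_steps) to rewrite size(term) to a number.
        if term[0] == "leaf":
            return 1, 1                               # size(leaf) -> 0 : one step
        _, left, right = term
        sl, pl = costs(left)
        sr, pr = costs(right)
        # size(node(l, r)) -> size(l) + size(r) + 1: the two recursive calls
        # are innermost redexes at parallel positions.
        return sl + sr + 1, max(pl, pr) + 1

    # A left-leaning chain and a balanced tree of comparable size: the
    # balanced tree shows the parallel speed-up.
    chain = ("node", ("node", ("node", ("leaf",), ("leaf",)), ("leaf",)), ("leaf",))
    balanced = ("node", ("node", ("leaf",), ("leaf",)), ("node", ("leaf",), ("leaf",)))
    print(costs(chain), costs(balanced))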
Synthesis of sup-interpretations: a survey
In this paper, we survey the complexity of distinct methods that allow the programmer to synthesize a sup-interpretation, a function providing an upper bound on the size of the output values computed by a program. It is a static space analysis tool that does not take time consumption into account. Although clearly related, sup-interpretation is independent of termination, since it only provides an upper bound on the terminating computations. First, we study some undecidable properties of sup-interpretations from a theoretical point of view. Next, we fix term rewriting systems as our computational model and show that a sup-interpretation can be obtained through the use of a well-known termination technique, polynomial interpretations. The drawback is that such a method only applies to total functions (strongly normalizing programs). To overcome this problem, we also study sup-interpretations through the notion of quasi-interpretation. Quasi-interpretations also suffer from a drawback that lies in the subterm property, which drastically restricts the shape of the functions that can be considered. Again we overcome this problem by introducing a new notion of interpretation based mainly on the dependency pairs method. We study the decidability and complexity of the sup-interpretation synthesis problem for all three of these tools over sets of polynomials. Finally, we build on previous work on termination and runtime complexity to infer sup-interpretations. Comment: (2012
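A sup-interpretation can be spot-checked by hand or with a few lines of code. The Python sketch below (an assumed example, not one of the surveyed synthesis methods) checks the usual inequality [lhs] >= [rhs] for a candidate interpretation of a small rewrite system defining double.

    # Hand-checkable sketch of a sup-interpretation (not a synthesis tool):
    # assign each symbol a monotone polynomial that bounds output size.
    # Example rewrite system: double(0) -> 0, double(s(x)) -> s(s(double(x))).
    # Candidate interpretation: [0] = 0, [s](X) = X + 1, [double](X) = 2 * X.

    interp = {
        "0": lambda: 0,
        "s": lambda x: x + 1,
        "double": lambda x: 2 * x,
    }

    def check_rule(lhs, rhs, samples=range(0, 50)):
        # Sup-interpretation condition on this rule: [lhs](x) >= [rhs](x)
        # for all argument sizes (spot-checked here on sample values).
        return all(lhs(x) >= rhs(x) for x in samples)

    # Rule double(s(x)) -> s(s(double(x))): 2*(x+1) >= (2*x) + 2 holds.
    lhs = lambda x: interp["double"](interp["s"](x))
    rhs = lambda x: interp["s"](interp["s"](interp["double"](x)))
    print(check_rule(lhs, rhs))                                # True
    # Rule double(0) -> 0: 0 >= 0 holds.
    print(interp["double"](interp["0"]()) >= interp["0"]())    # True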
Parallelizing Deadlock Resolution in Symbolic Synthesis of Distributed Programs
Previous work has shown that there are two major complexity barriers in the synthesis of fault-tolerant distributed programs: (1) generation of the fault-span, the set of states reachable in the presence of faults, and (2) resolving deadlock states, from which the program has no outgoing transitions. Of these, the former closely resembles model checking and, hence, techniques for efficient verification are directly applicable to it. We therefore focus on expediting the latter with the use of multi-core technology.
We present two approaches to parallelization based on different design choices. The first approach is based on the computation of equivalence classes of program transitions (called group computation) that are needed due to the issue of distribution (i.e., the inability of processes to atomically read and write all program variables). We show that in most cases the speedup of this approach is close to the ideal speedup, and in some cases it is superlinear. The second approach uses the traditional technique of partitioning deadlock states among multiple threads. However, our experiments show that the speedup for this approach is small. Consequently, our analysis demonstrates that a simple approach of parallelizing the group computation is likely to be the more effective method for using multi-core computing in the context of deadlock resolution.
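The second, state-partitioning approach is easy to sketch in isolation. In the Python fragment below, resolve_deadlock is a hypothetical stand-in and the deadlock states are placeholders; the point is only how the set of deadlock states is split among worker threads (the group-computation approach depends on the symbolic program representation and is not reproduced here).

    # Hedged sketch of partitioning deadlock states among workers.
    # resolve_deadlock is a hypothetical stand-in for the actual resolution
    # step (adding recovery transitions or eliminating states).
    from concurrent.futures import ThreadPoolExecutor

    def resolve_deadlock(state):
        # Toy classification standing in for the real resolution decision.
        return (state, "add_recovery" if state % 2 == 0 else "eliminate")

    def resolve_chunk(states):
        return [resolve_deadlock(s) for s in states]

    deadlock_states = list(range(1000))   # placeholder: states with no outgoing transition
    num_workers = 4
    chunks = [deadlock_states[i::num_workers] for i in range(num_workers)]

    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        results = [r for part in pool.map(resolve_chunk, chunks) for r in part]

    print(len(results), results[:3])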