Quantization in Control Systems and Forward Error Analysis of Iterative Numerical Algorithms
The use of control theory to study iterative algorithms, which can be considered as dynamical systems, opens many opportunities to find new tools for the analysis of algorithms. In this paper we show that results from the study of quantization effects in control systems can be used to find systematic ways for forward error analysis of iterative algorithms. The proposed schemes are applied to the classical iterative methods for solving a system of linear equations. The obtained bounds are compared with bounds given in the numerical analysis literature.
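The setting this abstract describes, a classical iterative solver viewed as a discrete dynamical system whose forward error can be tracked step by step, can be sketched with Jacobi iteration; the test system, iteration count, and error norm below are illustrative choices, not taken from the paper:

```python
import numpy as np

def jacobi_forward_error(A, b, x_star, iters=50):
    """Jacobi iteration x_{k+1} = D^{-1} (b - R x_k), viewed as a
    discrete dynamical system, tracking the forward error
    ||x_k - x*||_inf at every step."""
    D = np.diag(A)            # diagonal part of A (as a vector)
    R = A - np.diag(D)        # off-diagonal remainder
    x = np.zeros_like(b)
    errors = []
    for _ in range(iters):
        x = (b - R @ x) / D
        errors.append(np.linalg.norm(x - x_star, ord=np.inf))
    return x, errors

# Small diagonally dominant system (hypothetical example data).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
x_star = np.array([1.0, -1.0, 2.0])
b = A @ x_star
x, errors = jacobi_forward_error(A, b, x_star)
```

For a diagonally dominant matrix the error sequence contracts at each step, which is exactly the kind of trajectory a quantization-style analysis would bound.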
A dynamic convergence control scheme for the solution of the radial equilibrium equation in through-flow analyses
One of the most frequently encountered numerical problems in scientific analyses
is the solution of non-linear equations. Often the analysis of complex phenomena
falls beyond the range of applicability of the numerical methods available in
the public domain, and demands the design of dedicated algorithms that will
approximate, to a specified precision, the mathematical solution of specific
problems. These algorithms can be developed from scratch or through the
amalgamation of existing techniques. The accurate solution of the full radial
equilibrium equation (REE) in streamline curvature (SLC) through-flow analyses
presents such a case. This article discusses the development, validation, and
application of an 'intelligent' dynamic convergence control (DCC) algorithm for
the fast, accurate, and robust numerical solution of the non-linear equations of
motion for two-dimensional flow fields. The algorithm was developed to eliminate
the extensive user intervention usually required by standard numerical
methods. The DCC algorithm was integrated into a turbomachinery design and
performance simulation software tool and was tested rigorously, particularly at
compressor operating regimes traditionally exhibiting convergence difficulties
(i.e. far off-design conditions). Typical error histories and comparisons of
simulated results against experimental data are presented in this article for a
particular case study. For all case studies examined, it was found that the
algorithm could successfully 'guide' the solution down to the specified error
tolerance, at the expense of a slightly slower iteration process (compared to a
conventional Newton-Raphson scheme). This hybrid DCC algorithm can also find use
in many other engineering and scientific applications that require the robust
solution of mathematical problems by numerical instead of analytical means.
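The abstract does not spell out the DCC algorithm itself, but the general idea of dynamically controlling the convergence of a Newton-Raphson scheme can be sketched with adaptive under-relaxation; everything below (the damping rule, the test equation, the tolerances) is a hypothetical illustration, not the paper's method:

```python
import numpy as np

def damped_newton(f, df, x0, tol=1e-10, max_iter=100):
    """Newton-Raphson with a simple dynamic convergence control:
    the step is under-relaxed (halved) whenever a full step would
    increase the residual, then the relaxation factor is cautiously
    restored. A generic sketch, not the paper's DCC scheme."""
    x = x0
    omega = 1.0                      # relaxation factor
    r = abs(f(x))
    for _ in range(max_iter):
        if r < tol:
            break
        step = f(x) / df(x)
        # shrink the step until the residual actually decreases
        while omega > 1e-4 and abs(f(x - omega * step)) > r:
            omega *= 0.5
        x = x - omega * step
        r = abs(f(x))
        omega = min(1.0, 2.0 * omega)
    return x

# A steep scalar equation where an undamped Newton step overshoots badly.
root = damped_newton(lambda x: np.tanh(10.0 * x),
                     lambda x: 10.0 / np.cosh(10.0 * x) ** 2,
                     x0=0.4)
```

As the abstract notes for the DCC scheme, the damped iteration trades a few extra iterations for robustness: the undamped Newton step here would jump far from the root, while the controlled version converges.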
Interpolatory methods for $\mathcal{H}_\infty$ model reduction of multi-input/multi-output systems
We develop here a computationally effective approach for producing
high-quality $\mathcal{H}_\infty$-approximations to large-scale linear
dynamical systems having multiple inputs and multiple outputs (MIMO). We extend
an approach for model reduction introduced by Flagg,
Beattie, and Gugercin for the single-input/single-output (SISO) setting, which
combined ideas originating in interpolatory $\mathcal{H}_2$-optimal model
reduction with complex Chebyshev approximation. Retaining this framework, our
approach to the MIMO problem has its principal computational cost dominated by
(sparse) linear solves, and so it can remain an effective strategy in many
large-scale settings. We are able to avoid computationally demanding
$\mathcal{H}_\infty$ norm calculations that are normally required to monitor
progress within each optimization cycle through the use of "data-driven"
rational approximations that are built upon previously computed function
samples. Numerical examples are included that illustrate our approach. We
produce high-fidelity reduced models having consistently better
$\mathcal{H}_\infty$ performance than models produced via balanced truncation;
these models often are as good as (and occasionally better than) models
produced using optimal Hankel norm approximation as well. In all cases
considered, the method described here produces reduced models at far lower cost
than is possible with either balanced truncation or optimal Hankel norm
approximation.
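The core interpolatory-projection mechanism underlying such methods, in which reduced models built from shifted linear solves match the full transfer function at chosen points, can be sketched for a SISO toy system; the one-sided Galerkin projection, interpolation points, and test system below are illustrative assumptions, not the paper's MIMO algorithm:

```python
import numpy as np

def transfer(A, b, c, s):
    """Evaluate the transfer function H(s) = c^T (sI - A)^{-1} b."""
    return c @ np.linalg.solve(s * np.eye(A.shape[0]) - A, b)

def interpolatory_reduce(A, b, c, sigmas):
    """One-sided interpolatory (Galerkin) projection for a SISO system:
    V spans (sigma_i I - A)^{-1} b, so the reduced transfer function
    matches the full one at every sigma_i. The dominant cost is the
    shifted linear solves, which stay cheap when A is sparse."""
    V = np.column_stack([np.linalg.solve(s * np.eye(A.shape[0]) - A, b)
                         for s in sigmas])
    V, _ = np.linalg.qr(V)  # orthonormal basis with the same span
    return V.T @ A @ V, V.T @ b, V.T @ c

# Stable diagonal toy system (hypothetical data, not from the paper).
rng = np.random.default_rng(0)
n = 50
A = -np.diag(rng.uniform(1.0, 10.0, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)
sigmas = [1.0, 5.0]
Ar, br, cr = interpolatory_reduce(A, b, c, sigmas)
```

The order-2 reduced model reproduces the full transfer function exactly at both interpolation points, which is the property the larger framework exploits while steering the points toward good approximations.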
Sparse Recovery via Differential Inclusions
In this paper, we recover sparse signals from their noisy linear measurements
by solving nonlinear differential inclusions, an approach based on the notion of
inverse scale space (ISS) developed in applied mathematics. Our goal here is to
bring this idea to address a challenging problem in statistics, \emph{i.e.}
finding the oracle estimator which is unbiased and sign-consistent using
dynamics. We call our dynamics \emph{Bregman ISS} and \emph{Linearized Bregman
ISS}. A well-known shortcoming of LASSO and other convex regularization
approaches lies in the bias of estimators. However, we show that under proper
conditions, there exists a bias-free and sign-consistent point on the solution
paths of such dynamics, which corresponds to a signal that is an unbiased
estimate of the true signal and whose entries have the same signs as those of
the true signal, \emph{i.e.} the oracle estimator. Therefore, their solution
paths are regularization paths better than the LASSO regularization path, since
the points on the latter path are biased when sign-consistency is reached. We
also show how to efficiently compute their solution paths in both continuous
and discretized settings: the full solution paths can be exactly computed piece
by piece, and a discretization leads to \emph{Linearized Bregman iteration},
which is a simple iterative thresholding rule and easy to parallelize.
Theoretical guarantees such as sign-consistency and minimax optimal $\ell_2$-error
bounds are established in both continuous and discrete settings for specific
points on the paths. Early-stopping rules for identifying these points are
given. The key treatment relies on the development of differential inequalities
for differential inclusions and their discretizations, which extends the
previous results and leads to exponentially fast recovering of sparse signals
before selecting wrong ones.
Comment: In Applied and Computational Harmonic Analysis, 201
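The Linearized Bregman iteration mentioned above is indeed a simple thresholding rule; a minimal sketch on a noiseless toy problem follows, with the step size, kappa, iteration count, and test data chosen as plausible illustrative values rather than taken from the paper:

```python
import numpy as np

def shrink(v, lam):
    """Soft thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def linearized_bregman(A, b, kappa=10.0, iters=3000):
    """Linearized Bregman iteration, a discretization of Bregman ISS:
        z_{k+1} = z_k + step * A^T (b - A x_k)
        x_{k+1} = kappa * shrink(z_{k+1}, 1)
    Each entry is thresholded independently, so the rule is easy to
    parallelize, as the abstract notes."""
    step = 1.0 / (kappa * np.linalg.norm(A, 2) ** 2)  # conservative step
    z = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = z + step * (A.T @ (b - A @ x))
        x = kappa * shrink(z, 1.0)
    return x

# Noiseless sparse-recovery toy problem (hypothetical data, not from the paper).
rng = np.random.default_rng(1)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [2.0, -3.0, 1.5]
b = A @ x_true
x_hat = linearized_bregman(A, b)
```

Entries whose accumulated dual variable has not yet crossed the threshold stay exactly zero, which is how the path can identify the correct signs before selecting wrong ones.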
A novel numerical framework for simulation of multiscale spatio-temporally non-linear systems in additive manufacturing processes
New computationally efficient numerical techniques have been formulated for multi-scale analysis in order to bridge mesoscopic and macroscopic scales of the thermal and mechanical responses of a material. These numerical techniques will reduce the computational effort required to simulate metal-based Additive Manufacturing (AM) processes. Given the availability of physics-based constitutive models for response at mesoscopic scales, these techniques will help in the evaluation of the thermal response and mechanical properties during layer-by-layer processing in AM. Two classes of numerical techniques have been explored. The first class has been developed for evaluating the periodic spatio-temporal thermal response involving multiple time and spatial scales at the continuum level. The second class is targeted at modeling multi-scale, multi-energy dissipative phenomena during the solid-state Ultrasonic Consolidation process. This includes bridging the mesoscopic response of a crystal plasticity finite element framework at inter- and intragranular scales and a point at the macroscopic scale. This response has been used to develop an energy-dissipative constitutive model for a multi-surface interface at the macroscopic scale. An adaptive dynamic meshing strategy, part of the first class of numerical techniques, has been developed which reduces computational cost through efficient node-element renumbering and assembly of stiffness matrices. This strategy has reduced the computational cost of thermal simulation of the Selective Laser Melting (SLM) process by a factor of roughly 100. The method is not limited to SLM and can be extended to any other fusion-based additive manufacturing process and, more generally, to any moving-energy-source finite element problem. Novel FEM-based beam theories have been formulated which are more general in nature than traditional beam theories for solid deformation.
These theories are the first to simulate thermal problems using an approach analogous to solid beam analysis. They are more general in nature and are capable of simulating beams of general cross-section, with the ability to match results from a complete three-dimensional analysis. In addition, a traditional Cholesky decomposition algorithm has been modified to reduce the computational cost of solving the simultaneous equations involved in FEM simulations. Solid-state processes have been simulated with crystal plasticity based nonlinear finite element algorithms. This algorithm has been further sped up by the introduction of an interfacial contact constitutive model formulation. The framework is supported by a novel methodology that solves contact problems without the additional computational overhead of incorporating constraint equations, avoiding the use of penalty springs.
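For context, the traditional Cholesky solve that the abstract takes as its baseline can be sketched as follows; the thesis's cost-reducing modification is not described in the abstract and is not reproduced here:

```python
import numpy as np

def cholesky_solve(A, b):
    """Solve A x = b for symmetric positive definite A with the
    textbook Cholesky factorization A = L L^T followed by forward
    and backward substitution. This is the conventional baseline
    algorithm that the thesis modifies."""
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    y = np.zeros(n)                      # forward: L y = b
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)                      # backward: L^T x = y
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - L[i + 1:, i] @ x[i + 1:]) / L[i, i]
    return x

# A small SPD system shaped like a 1-D FEM stiffness matrix.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
f = np.array([1.0, 0.0, 1.0])
u = cholesky_solve(K, f)
```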