Numerical investigation of Differential Biological-Models via GA-Kansa Method Inclusive Genetic Strategy
In this paper, we use the Kansa method to solve systems of differential equations arising in biology. A central challenge in the Kansa method is selecting an optimal value of the shape parameter in the radial basis functions, because no analytical approach is available for determining it. For this reason, we design a genetic algorithm that finds a near-optimal shape parameter. The experimental results show that this strategy is effective for systems of differential models in biology, such as HIV and influenza models. Furthermore, we prove that using a pseudo-combination formula for crossover in the genetic strategy leads to convergence to a nearly optimal choice of shape parameter. Comment: 42 figures, 23 pages
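As a rough illustration of the idea in this abstract, the sketch below tunes the shape parameter of a 1D multiquadric RBF interpolant with a minimal real-coded genetic search. The test function, the blend crossover and all numerical values are illustrative assumptions, not the paper's actual pseudo-combination operator or its biological models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: interpolate f on a handful of nodes with multiquadric RBFs.
f = lambda x: np.sin(2 * np.pi * x)
nodes = np.linspace(0.0, 1.0, 15)
test = np.linspace(0.0, 1.0, 200)

def rbf_error(eps):
    """Max interpolation error on a fine grid for shape parameter eps."""
    phi = lambda r: np.sqrt(1.0 + (eps * r) ** 2)   # multiquadric RBF
    A = phi(np.abs(nodes[:, None] - nodes[None, :]))
    w = np.linalg.solve(A, f(nodes))
    approx = phi(np.abs(test[:, None] - nodes[None, :])) @ w
    return np.max(np.abs(approx - f(test)))

# Minimal real-coded GA: truncation selection, blend crossover, mutation.
pop = rng.uniform(0.1, 20.0, size=20)
for _ in range(30):
    fit = np.array([rbf_error(e) for e in pop])
    parents = pop[np.argsort(fit)[:10]]             # keep the 10 fittest
    kids = [np.clip(0.5 * (a + b) + rng.normal(0.0, 0.2), 0.05, 25.0)
            for a, b in (rng.choice(parents, 2, replace=False)
                         for _ in range(10))]
    pop = np.concatenate([parents, np.array(kids)])

best = pop[np.argmin([rbf_error(e) for e in pop])]
print("near-optimal shape parameter:", best)
```

The fitness here is the interpolation error on a fine grid; the paper instead measures how well the RBF solution satisfies the differential system, but the GA wrapper is the same shape.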
A Survey on Intelligent Iterative Methods for Solving Sparse Linear Algebraic Equations
Efficiently solving sparse linear algebraic equations is an important research topic in numerical simulation. Commonly used approaches include direct methods and iterative methods. Compared with direct methods, iterative methods have lower computational complexity and memory consumption, and are thus often used to solve large-scale sparse linear equations. However, there are numerous iterative methods, parameters and components that need to be carefully chosen, and an inappropriate combination may lead to an inefficient solution process in practice. With the development of deep learning, intelligent iterative methods have become popular in recent years; they can automatically select a sufficiently good combination and optimize the parameters and components according to the properties of the input matrix. This survey reviews these intelligent iterative methods. For clarity, we divide the discussion into three aspects: a method aspect, a component aspect and a parameter aspect. Moreover, we summarize existing work and propose potential research directions that may deserve deeper investigation.
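To illustrate why parameter choice matters, the hedged sketch below counts how many SOR iterations a small 1D Poisson system needs for several relaxation factors; picking such parameters automatically from matrix properties is the kind of decision the surveyed intelligent methods learn to make. The matrix, tolerance and omega values are illustrative assumptions.

```python
import numpy as np

def sor_iterations(A, b, omega, tol=1e-6, max_iter=10000):
    """Iterations SOR needs to drop the relative residual below tol."""
    x = np.zeros_like(b)
    n = len(b)
    for k in range(max_iter):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return k + 1
    return max_iter

# 1D Poisson matrix: the same method's cost varies enormously with the
# relaxation parameter, so an inappropriate choice is very expensive.
n = 30
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
iters = {omega: sor_iterations(A, b, omega) for omega in (1.0, 1.5, 1.8)}
print(iters)
```

For this matrix the near-optimal relaxation factor cuts the iteration count by more than an order of magnitude compared with plain Gauss-Seidel (omega = 1.0).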
Integrated computational intelligent paradigm for nonlinear electric circuit models using neural networks, genetic algorithms and sequential quadratic programming
© 2019, Springer-Verlag London Ltd., part of Springer Nature. In this paper, a novel application of a biologically inspired computing paradigm is presented for solving the initial value problem (IVP) of electric circuits based on a nonlinear RL model, exploiting the accurate-modeling competency of a feedforward artificial neural network (FF-ANN), the global search efficacy of genetic algorithms (GA) and the rapid local search of sequential quadratic programming (SQP). The fitness function for the IVP of the associated nonlinear RL circuit is developed by exploiting approximation theory in the mean-squared-error sense using an approximate FF-ANN model. Training of the networks is conducted by an integrated computational heuristic based on GA aided with SQP, i.e., GA-SQP. The designed methodology is evaluated on variants of nonlinear RL systems under both AC and DC excitations for a number of scenarios with different voltage, resistance and inductance parameters. Comparative studies of the proposed results against Adams numerical solutions in terms of various performance measures verify the accuracy of the scheme. Statistical results based on Monte Carlo simulations validate the accuracy, convergence, stability and robustness of the designed scheme for solving problems in nonlinear circuit theory.
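A minimal sketch of the fitness construction described above, under illustrative assumptions: a tiny tanh network stands in for the FF-ANN, a hill-climbing search stands in for the GA-SQP hybrid, and the circuit values R, L, V are arbitrary. The mean-squared ODE residual over collocation points mirrors the paper's mean-squared-error fitness.

```python
import numpy as np

rng = np.random.default_rng(1)

# Series RL circuit with DC excitation: L di/dt + R i = V, i(0) = 0.
R, L, V = 1.0, 0.5, 2.0
t = np.linspace(0.0, 3.0, 40)                    # collocation points

def trial(tt, p):
    """i_hat(t) = t * N(t; p): a tiny tanh network scaled by t, so the
    initial condition i(0) = 0 holds by construction."""
    w, b, v = p[:5], p[5:10], p[10:15]
    return tt * (np.tanh(np.outer(np.atleast_1d(tt), w) + b) @ v)

def fitness(p, h=1e-5):
    """Mean-squared ODE residual over the collocation points."""
    di = (trial(t + h, p) - trial(t - h, p)) / (2 * h)
    return np.mean((L * di + R * trial(t, p) - V) ** 2)

# Crude hill-climbing stand-in for the paper's GA-SQP hybrid search.
p = rng.normal(0.0, 0.5, 15)
f0 = fitness(p)
for _ in range(2000):
    q = p + rng.normal(0.0, 0.05, 15)
    if fitness(q) < fitness(p):
        p = q

exact = (V / R) * (1.0 - np.exp(-R * t / L))
print("residual MSE:", fitness(p), "started at:", f0)
print("max deviation from exact solution:", np.max(np.abs(trial(t, p) - exact)))
```

Building the initial condition into the trial solution, rather than penalising it, keeps the fitness a single unconstrained mean-squared residual.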
A review of surrogate models and their application to groundwater modeling
The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes, which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection-based, and hierarchical approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical, or multifidelity, methods the surrogate is created by simplifying the representation of the physical system, for example by ignoring certain processes or reducing the numerical resolution. In discussing the application of these methods to groundwater modeling, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks of the methods; only a fraction of the literature focuses on creating surrogates that reproduce the outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods have yet to be fully applied in a groundwater modeling context.
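A hedged sketch of the data-driven category: fit a quadratic response surface by least squares to sampled input-output pairs of a stand-in "expensive" model. The model, the parameter names k and r, and the sample size are hypothetical; a real study would sample a groundwater simulator instead.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "expensive" model: a nonlinear map from two hypothetical
# parameters (conductivity k, recharge r) to an output of interest.
# A real application would run the groundwater simulator here.
def expensive_model(k, r):
    return np.log1p(r) / np.sqrt(k) + 0.1 * np.sin(5 * k)

# Sample the expensive model to build an input-output training set.
k = rng.uniform(0.5, 2.0, 200)
r = rng.uniform(0.0, 10.0, 200)
y = expensive_model(k, r)

# Data-driven surrogate: a quadratic response surface fitted by least
# squares, capturing the input-output mapping of the original model.
def features(k, r):
    return np.column_stack([np.ones_like(k), k, r, k * r, k**2, r**2])

coef, *_ = np.linalg.lstsq(features(k, r), y, rcond=None)

def surrogate(k, r):
    return features(np.atleast_1d(k), np.atleast_1d(r)) @ coef

print(expensive_model(1.2, 4.0), surrogate(1.2, 4.0)[0])
```

Once fitted, the surrogate is evaluated in microseconds, which is what makes calibration and uncertainty analysis with thousands of evaluations feasible; the review's caveat is that such fits can silently misbehave outside the sampled parameter range.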
A hybrid MGA-MSGD ANN training approach for approximate solution of linear elliptic PDEs
We introduce a hybrid "Modified Genetic Algorithm-Multilevel Stochastic Gradient Descent" (MGA-MSGD) training algorithm that considerably improves the accuracy and efficiency of solving 3D mechanical problems, described in strong form by PDEs, via ANNs (artificial neural networks). The presented approach allows the selection of a number of locations of interest at which the state variables are expected to fulfil the governing equations of the physical problem. Unlike classical PDE approximation methods such as finite differences or the finite element method, there is no need to establish and reconstruct the physical field quantity throughout the computational domain in order to predict the mechanical response at specific locations of interest. The basic idea of MGA-MSGD is to manipulate the components of the learnable parameters responsible for error explosion, so that the network can be trained with relatively large learning rates while avoiding entrapment in local minima. The proposed training approach is less sensitive to the learning-rate value, the density and distribution of training points, and the random initial parameters. The distance function to be minimised is where the PDEs, including any physical laws and conditions, are introduced (the so-called physics-informed ANN). The genetic algorithm is modified to suit this type of ANN: a coarse-level stochastic gradient descent (CSGD) is exploited to decide offspring qualification. With the presented approach, a considerable improvement in both accuracy and efficiency is observed compared with standard training algorithms such as classical SGD and the Adam optimiser. The local displacement accuracy is studied and verified against Finite Element Method (FEM) results on a sufficiently fine mesh, used as reference displacements. A slightly more complex problem is also solved to demonstrate the method's feasibility.
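A minimal sketch of the residual-at-selected-locations idea, under stated assumptions: a 1D Poisson problem with a tiny tanh trial function, and a plain finite-difference gradient descent with backtracking standing in for the paper's 3D problems and MGA-MSGD optimiser.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1D model problem: -u'' = pi^2 sin(pi x) on (0,1), u(0) = u(1) = 0,
# with exact solution u(x) = sin(pi x). The loss is the squared PDE
# residual at a handful of selected locations of interest only.
xc = np.array([0.1, 0.3, 0.5, 0.7, 0.9])

def u_hat(x, p):
    """Trial solution x(1-x) * N(x; p): boundary conditions hold exactly."""
    w, b, v = p[:4], p[4:8], p[8:12]
    x = np.atleast_1d(x)
    return x * (1 - x) * (np.tanh(np.outer(x, w) + b) @ v)

def loss(p, h=1e-4):
    upp = (u_hat(xc + h, p) - 2 * u_hat(xc, p) + u_hat(xc - h, p)) / h**2
    return np.mean((-upp - np.pi**2 * np.sin(np.pi * xc)) ** 2)

# Gradient descent with a finite-difference gradient and backtracking:
# a plain stand-in for MGA-MSGD operating on the same collocation loss.
p = rng.normal(0.0, 0.5, 12)
loss0 = loss(p)
for _ in range(200):
    g = np.array([(loss(p + 1e-6 * e) - loss(p - 1e-6 * e)) / 2e-6
                  for e in np.eye(12)])
    step = 0.1
    while step > 1e-8 and loss(p - step * g) >= loss(p):
        step *= 0.5                      # backtrack until the loss drops
    if step > 1e-8:
        p = p - step * g

print("collocation loss:", loss(p), "started at:", loss0)
```

Note that nothing here discretises the whole domain: the response can be queried at any point by evaluating u_hat, which is the contrast with finite differences or FEM that the abstract draws.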
Hybrid Analog-Digital Co-Processing for Scientific Computation
In the past 10 years computer architecture research has moved to more heterogeneity and less adherence to conventional abstractions. Scientists and engineers hold an unshakable belief that computing holds keys to unlocking humanity's Grand Challenges. Acting on that belief, they have looked deeper into computer architecture to find specialized support for their applications. Likewise, computer architects have looked deeper into circuits and devices in search of untapped performance and efficiency. The lines between computer architecture layers---applications, algorithms, architectures, microarchitectures, circuits and devices---have blurred. Against this backdrop, a menagerie of computer architectures is on the horizon, ones that forgo basic assumptions about computer hardware and require new thinking about how such hardware supports problems and algorithms.
This thesis is about revisiting hybrid analog-digital computing in support of diverse modern workloads. Hybrid computing had extensive applications in early computing history, and has been revisited for small-scale applications in embedded systems. But architectural support for using hybrid computing in modern workloads, at scale and with high accuracy solutions, has been lacking.
I demonstrate solving a variety of scientific computing problems, including stochastic ODEs, partial differential equations, linear algebra, and nonlinear systems of equations, as case studies in hybrid computing. I solve these problems on a system of multiple prototype analog accelerator chips built by a team at Columbia University. On that team I made contributions toward programming the chips, building the digital interface, and validating the chips' functionality. The analog accelerator chip is intended for use in conjunction with a conventional digital host computer.
The appeal and motivation for using an analog accelerator are efficiency and performance, but these come with limitations in accuracy and problem size that we have to work around.
The first problem is how to solve problems in this unconventional computation model. Scientific computing phrases problems as differential equations and algebraic equations. Differential equations are a continuous view of the world, while algebraic equations are a discrete one. Prior work in analog computing focused mostly on differential equations; algebraic equations played only a minor role. The secret to using the analog accelerator to support modern workloads on conventional computers is that these two viewpoints are interchangeable. The algebraic equations that underlie most workloads can be solved as differential equations, and differential equations are naturally solvable in the analog accelerator chip. A hybrid analog-digital computer architecture can therefore focus on solving linear and nonlinear algebra problems to support many workloads.
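The interchangeability of the two viewpoints can be sketched in a few lines: a linear algebraic system Ax = b is solved by integrating an ODE to its steady state, which is what an analog integrator network does in continuous time. The matrix, right-hand side and step size below are illustrative.

```python
import numpy as np

# For symmetric positive-definite A, the ODE  dx/dt = b - A x  flows to
# the steady state x* = A^{-1} b. An analog integrator network realises
# this dynamics in continuous time; forward Euler is its digital stand-in.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.zeros(2)
dt = 0.1                 # small enough for stability: dt < 2 / lambda_max
for _ in range(500):
    x += dt * (b - A @ x)

print(x, np.linalg.solve(A, b))
```

On the actual analog hardware the integration has no time-stepping at all: the state simply settles, and the "iteration count" becomes a settling time.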
The second problem is how to get accurate solutions using hybrid analog-digital computing. The analog computation model gives less accurate solutions because it gives up representing numbers as digital binary values, instead using the full range of analog voltage and current to represent real numbers. Prior work has established that encoding data in analog signals gives an energy-efficiency advantage as long as the analog data precision is limited. While the analog accelerator alone may be useful for energy-constrained applications where inputs and outputs are imprecise, we are more interested in using analog in conjunction with digital to obtain precise solutions. This thesis gives the novel insight that the trick to doing so is to solve nonlinear problems, where low-precision guesses are useful starting points for conventional digital algorithms.
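A hedged sketch of that insight: a low-precision guess, standing in for an analog result, seeds Newton's method, which a digital host then runs to full double precision in a few steps. The cubic and the guess are illustrative, not taken from the thesis.

```python
# Hybrid accuracy workflow sketch: a rough "analog" root estimate is
# polished by a digital Newton iteration to full double precision.
f = lambda x: x**3 - 2 * x - 5        # illustrative test cubic
df = lambda x: 3 * x**2 - 2

x = 2.1                               # pretend two-digit analog output
for _ in range(6):
    x -= f(x) / df(x)                 # digital refinement: quadratic convergence

print(x)                              # converges to the real root near 2.0945515
```

Because Newton's method converges quadratically near a root, even a two-digit starting guess reaches machine precision in a handful of iterations, so the analog stage's limited precision costs almost nothing.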
The third problem is how to solve large problems using hybrid analog-digital computing. The analog computation model cannot handle large problems because it gives up step-by-step, discrete-time operation, instead allowing variables to evolve smoothly in continuous time. To make that happen, the analog accelerator chains hardware for mathematical operations end-to-end. During computation, analog data flows through the hardware with no control-logic or memory-access overheads. The downside is that the required hardware size grows with the problem size. While scientific computing researchers have long split large problems into smaller subproblems to fit the constraints of digital computers, this thesis is a first attempt to treat these divide-and-conquer algorithms as an essential tool for using the analog model of computation.
As we enter the post-Moore's law era of computing, unconventional architectures will offer specialized models of computation that uniquely support specific problem types. Two prominent examples are deep neural networks and quantum computers. Recent trends in computer science research show these unconventional architectures will soon have broad adoption. In this thesis I show that analog accelerators for scientific computing are another such specialized, unconventional architecture. Computer architecture researchers will discover other important models of computation in the future. This thesis is an example of the process of discovering, implementing, and evaluating how an unconventional architecture supports specialized workloads.
A survey on low-thrust trajectory optimization approaches
In this paper, we provide a survey of available numerical approaches for solving low-thrust trajectory optimization problems. First, a general mathematical framework based on hybrid optimal control will be presented. This formulation and its elements, namely the objective function, continuous and discrete states and controls, and discrete and continuous dynamics, will serve as a basis for discussion throughout the whole manuscript. Thereafter, solution approaches for classical continuous optimal control problems will be briefly introduced and their application to low-thrust trajectory optimization will be discussed. Special emphasis will be placed on the extension of the classical techniques to solve hybrid optimal control problems. Finally, an extensive review of traditional and state-of-the-art methodologies and tools will be presented, categorized according to their solution approach, objective function, state variables, dynamical model, and application to planetocentric or interplanetary transfers.