Tensor Computation: A New Framework for High-Dimensional Problems in EDA
Many critical EDA problems suffer from the curse of dimensionality, i.e. the
very fast-scaling computational burden produced by a large number of parameters
and/or unknown variables. This phenomenon may be caused by multiple spatial or
temporal factors (e.g. 3-D field-solver discretizations and multi-rate circuit
simulation), nonlinearity of devices and circuits, a large number of design or
optimization parameters (e.g. full-chip routing/placement and circuit sizing),
or extensive process variations (e.g. variability/reliability analysis and
design for manufacturability). The computational challenges generated by such
high-dimensional problems are generally hard to handle efficiently with
traditional EDA core algorithms based on matrix and vector computation. This
paper presents "tensor computation" as an alternative general framework for the
development of efficient EDA algorithms and tools. A tensor is a
high-dimensional generalization of a matrix and a vector, and is a natural
choice for efficiently storing and solving high-dimensional EDA problems.
This paper gives a basic tutorial on tensors, demonstrates some recent examples
of EDA applications (e.g., nonlinear circuit modeling and high-dimensional
uncertainty quantification), and suggests further open EDA problems where the
use of tensor computation could be of advantage.
Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems
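The abstract's central storage argument can be made concrete with a minimal sketch (hypothetical example, not taken from the paper): a 3-way tensor of shape (n, n, n) holds n**3 entries, while a CP rank-1 tensor is fully described by just three length-n vectors.

```python
import numpy as np

# Hypothetical illustration of why tensors help with high-dimensional storage:
# a dense 3-way tensor needs n**3 entries, a rank-1 factored form only 3*n.
n = 4
a, b, c = (np.arange(1, n + 1, dtype=float) for _ in range(3))

# Full tensor via outer products: T[i, j, k] = a[i] * b[j] * c[k]
T = np.einsum('i,j,k->ijk', a, b, c)

print(T.shape)     # (4, 4, 4): n**3 = 64 stored entries
print(T[1, 2, 3])  # 2 * 3 * 4 = 24.0
# The factored form (a, b, c) carries the same information in 3 * n = 12
# numbers -- this storage gap is what tensor decompositions exploit.
```

The same gap grows exponentially with the number of dimensions, which is the "curse of dimensionality" the abstract refers to.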
Tensor Network alternating linear scheme for MIMO Volterra system identification
This article introduces two Tensor Network-based iterative algorithms for the
identification of high-order discrete-time nonlinear multiple-input
multiple-output (MIMO) Volterra systems. The system identification problem is
rewritten in terms of a Volterra tensor, which is never explicitly constructed,
thus avoiding the curse of dimensionality. It is shown how each iteration of
the two identification algorithms involves solving a linear system of low
computational complexity. The proposed algorithms are guaranteed to
monotonically converge and numerical stability is ensured through the use of
orthogonal matrix factorizations. The performance and accuracy of the two
identification algorithms are illustrated by numerical experiments, where
accurate degree-10 MIMO Volterra models are identified in about 1 second in
MATLAB on a standard desktop PC.
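For readers unfamiliar with the model class, the following is a minimal sketch of what a truncated discrete-time Volterra model computes (a hypothetical degree-2 SISO example; the paper's Tensor Network identification algorithm itself is not reproduced here):

```python
import numpy as np

def volterra_siso(u, h1, h2):
    """Evaluate a truncated degree-2 discrete-time Volterra model:
    y[t] = sum_i h1[i] u[t-i] + sum_{i,j} h2[i,j] u[t-i] u[t-j],
    with hypothetical kernels h1 (length M) and h2 (M x M), zero initial state.
    """
    M = len(h1)
    y = np.zeros(len(u))
    for t in range(len(u)):
        # Past-input window u[t], u[t-1], ..., u[t-M+1], zero-padded at the start.
        w = np.array([u[t - i] if t - i >= 0 else 0.0 for i in range(M)])
        y[t] = h1 @ w + w @ h2 @ w
    return y

# Sanity check: identity first-order kernel, zero second-order kernel -> y == u.
u = np.array([1.0, 2.0, 3.0])
h1 = np.array([1.0, 0.0])
h2 = np.zeros((2, 2))
print(volterra_siso(u, h1, h2))  # [1. 2. 3.]
```

A degree-p kernel with memory length M has on the order of M**p entries, which is exactly the exponential blow-up that the abstract's implicit Volterra tensor representation avoids constructing.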
Identifying nonlinear wave interactions in plasmas using two-point measurements: a case study of Short Large Amplitude Magnetic Structures (SLAMS)
A framework is described for estimating linear growth rates and spectral
energy transfers in turbulent wave-fields using two-point measurements. This
approach, which is based on Volterra series, is applied to dual-satellite data
gathered in the vicinity of the Earth's bow shock, where Short Large Amplitude
Magnetic Structures (SLAMS) supposedly play a leading role. The analysis
attests to the dynamic evolution of the SLAMS and reveals an energy cascade
toward high-frequency waves.
Comment: 26 pages, 13 figures
Status of the differential transformation method
Further to a recent controversy on whether the differential transformation
method (DTM) for solving a differential equation is purely and solely the
traditional Taylor series method, it is emphasized that the DTM is currently
used, often exclusively, as a technique for (analytically) calculating the
power series of the solution (in terms of the initial value parameters).
Sometimes, a piecewise analytic continuation process is implemented either in a
numerical routine (e.g., within a shooting method) or in a semi-analytical
procedure (e.g., to solve a boundary value problem). It is also emphasized
that, at the time of its invention, the currently used basic ingredients of the
DTM (which transform a differential equation into a difference equation of the
same order that is iteratively solvable) had already been known for a long time
by "traditional" Taylor-method users (notably in the elaboration of software
packages -- numerical routines -- for automatically solving ordinary
differential equations). To this day, the proponents of the DTM still ignore
the much better developed studies of the "traditional" Taylor-method users,
who in turn seem similarly unaware of the existence of the DTM. The DTM has
been given an apparently strong formalization (set on the same footing as the
Fourier, Laplace, or Mellin transformations). Though often used trivially, it
is easily attainable and easily adaptable to different kinds of differentiation
procedures, which has made it very attractive. Hence, applications to various
problems of the Taylor method, and more generally of the power series method
(including noninteger powers), have been sketched. It seems that its potential
has not been exploited as it could be. After a discussion of the reasons for
the "misunderstandings" that caused the controversy, the preceding topics are
concretely illustrated.
Comment: To appear in Applied Mathematics and Computation, 29 pages,
references and further considerations added
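The "difference equation of the same order that is iteratively solvable" can be illustrated with the textbook example (my choice of example, not the paper's): the DTM maps y'(x) = y(x), y(0) = 1 to the recurrence (k + 1) Y(k + 1) = Y(k) on the Taylor coefficients Y(k), which reproduces Y(k) = 1/k!, i.e. the classical Taylor series of exp(x).

```python
from math import factorial, isclose

# DTM recurrence for y' = y, y(0) = 1:
#   (k + 1) Y(k + 1) = Y(k),  Y(0) = y(0) = 1,
# where Y(k) is the k-th Taylor coefficient of the solution about x = 0.
N = 10
Y = [1.0]                     # Y(0)
for k in range(N):
    Y.append(Y[k] / (k + 1))  # the iteratively solvable difference equation

# The recurrence reproduces the classical Taylor coefficients 1/k!.
assert all(isclose(Y[k], 1 / factorial(k)) for k in range(N + 1))

# Evaluating the partial sum at x = 1 approximates e = exp(1).
approx_e = sum(Y)
print(round(approx_e, 6))     # 2.718282
```

This makes the paper's point tangible: the recurrence is precisely the classical Taylor-coefficient recursion, restated under the DTM's transform formalism.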
On Designing Multicore-aware Simulators for Biological Systems
The stochastic simulation of biological systems is an increasingly popular
technique in bioinformatics. It is often an enlightening technique, which may
however be computationally expensive. We discuss the main opportunities to
speed it up on multi-core platforms, which pose new challenges for
parallelisation techniques. These opportunities are developed into two general
families of solutions, involving both a single simulation and a bulk of
independent simulations (either replicas or runs derived from a parameter
sweep). The proposed solutions are tested on the parallelisation of the CWC
simulator (Calculus of Wrapped Compartments), carried out according to the
proposed solutions by way of the FastFlow programming framework, making fast
development and efficient execution on multi-cores possible.
Comment: 19 pages + cover page
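The "bulk of independent simulations" family is embarrassingly parallel, which a short sketch can show (a hypothetical toy simulation in Python's standard library, standing in for a CWC run; the paper itself uses the C++ FastFlow framework):

```python
import random
from multiprocessing import Pool

def simulate(args):
    """Hypothetical stand-in for one stochastic simulation run
    (e.g. one trajectory); returns a summary statistic."""
    seed, rate = args
    rng = random.Random(seed)     # per-run seed -> reproducible replicas
    x = 0.0
    for _ in range(100):
        x += rate * rng.gauss(0.0, 1.0)
    return x

if __name__ == '__main__':
    # Bulk of independent runs: replicas (varying seed) crossed with a
    # parameter sweep (varying rate) -- each run needs no communication,
    # so a worker pool scales across cores.
    jobs = [(seed, rate) for seed in range(2) for rate in (0.5, 1.0)]
    with Pool() as pool:
        results = pool.map(simulate, jobs)
    print(len(results))           # one summary per independent run
```

Parallelising a *single* simulation, the other family the abstract mentions, is harder because the simulation steps are sequentially dependent; that is where frameworks like FastFlow earn their keep.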
Optimization Monte Carlo: Efficient and Embarrassingly Parallel Likelihood-Free Inference
We describe an embarrassingly parallel, anytime Monte Carlo method for
likelihood-free models. The algorithm starts with the view that the
stochasticity of the pseudo-samples generated by the simulator can be
controlled externally by a vector of random numbers u, in such a way that the
outcome, knowing u, is deterministic. For each instantiation of u we run an
optimization procedure to minimize the distance between summary statistics of
the simulator and the data. After reweighting these samples using the prior and
the Jacobian (accounting for the change of volume in transforming from the
space of summary statistics to the space of parameters), we show that this
weighted ensemble represents a Monte Carlo estimate of the posterior
distribution. The procedure can be run in an embarrassingly parallel fashion
(each node handling one sample) and anytime (by allocating resources to the
worst-performing sample). The procedure is validated on six experiments.
Comment: NIPS 2015 camera ready
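The fix-u-then-optimize idea can be sketched in one dimension (a hypothetical toy model of my own, not one of the paper's six experiments): with simulator f(theta, u) = theta + u and noise u fixed externally, the distance-minimizing theta for each u is found in closed form, and each sample is reweighted by the prior and the Jacobian of the summary statistic.

```python
import random
import math

# Toy Optimization Monte Carlo sketch: simulator f(theta, u) = theta + u,
# externally controlled noise u ~ N(0, 1), observed summary statistic y_obs,
# standard normal prior on theta. Exact posterior is N(y_obs / 2, 1/2).
random.seed(0)
y_obs = 1.5

def prior(t):
    return math.exp(-0.5 * t * t)  # unnormalized N(0, 1) density

samples, weights = [], []
for _ in range(5000):
    u = random.gauss(0.0, 1.0)     # fix the simulator's randomness externally
    theta = y_obs - u              # "optimization": closed form, distance = 0
    jac = 1.0                      # |d f / d theta| for this linear simulator
    samples.append(theta)
    weights.append(prior(theta) / jac)  # reweight by prior and Jacobian

# Weighted posterior mean; should approach the exact value y_obs / 2 = 0.75.
post_mean = sum(w * t for w, t in zip(weights, samples)) / sum(weights)
print(round(post_mean, 2))
```

Because each (u, optimization) pair is independent, the loop body is exactly the per-node unit of work that makes the method embarrassingly parallel.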