A comparative numerical study of meshing functionals for variational mesh adaptation
We present a comparative numerical study for three functionals used for
variational mesh adaptation. One of them is a generalisation of Winslow's
variable diffusion functional while the others are based on equidistribution
and alignment. These functionals are known to have nice theoretical properties
and work well for most mesh adaptation problems either as a stand-alone
variational method or combined within the moving mesh framework. Their
performance is investigated numerically in terms of equidistribution and
alignment mesh quality measures. Numerical results in 2D and 3D are presented.

Comment: Additional example (H1), journal reference
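As a rough sketch of the two principles being compared, in standard notation assumed here (not quoted from the paper), a Winslow-type variable diffusion functional and the equidistribution condition can be written as:

```latex
% Sketch with assumed notation: \xi = computational coordinates,
% w = variable diffusion (monitor) function, \rho = mesh density,
% N = number of mesh cells K.
I_W[\xi] \;=\; \frac{1}{2}\int_\Omega \frac{1}{w(x)}
    \sum_{i=1}^{d}\lvert\nabla\xi_i\rvert^2 \,\mathrm{d}x,
\qquad
\int_K \rho(x)\,\mathrm{d}x \;\approx\; \frac{1}{N}\int_\Omega \rho(x)\,\mathrm{d}x
\quad\text{for each cell } K.
```

Alignment additionally controls the shape and orientation of cells, typically through a metric tensor; the mesh quality measures used in the study quantify how closely a mesh satisfies these conditions.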
Application of Fredholm integral equations inverse theory to the radial basis function approximation problem
This paper reveals and examines the relationship between the solution and stability of Fredholm integral equations and radial basis function approximation or interpolation. The underlying system (kernel) matrices are shown to have a smoothing property which depends on the choice of kernel. Instead of using the condition number to describe the ill-conditioning, which considers only the largest and smallest singular values of the matrix, techniques from inverse theory, particularly the Picard condition, show that understanding the exponential decay of the singular values is critical for interpreting and mitigating instability. Results on the spectra of certain classes of kernel matrices are reviewed, verifying the exponential decay of the singular values. Numerical results illustrating the application of integral equation inverse theory are also provided and demonstrate that interpolation weights may be regarded as samplings of a weighted solution of an integral equation. This is then relevant for mapping from one set of radial basis function centers to another set. Techniques for the solution of integral equations can be further exploited in future studies to find stable solutions and to reduce the impact of errors in the data.
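The exponential singular-value decay that drives this analysis is easy to observe numerically. A minimal sketch (the centers, shape parameter, and Gaussian kernel below are illustrative assumptions, not the paper's test cases):

```python
import numpy as np

# Gaussian RBF kernel matrix on n equally spaced centers in [0, 1].
n = 30
x = np.linspace(0.0, 1.0, n)
eps = 2.0  # shape parameter (assumed value)
A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)

# Singular values, sorted in decreasing order by numpy.
s = np.linalg.svd(A, compute_uv=False)

# The discrete Picard condition compares the decay of |u_i^T b|
# against the decay of s_i; here we simply confirm the rapid,
# ill-conditioning-inducing decay of the singular values themselves.
ratio = s[0] / s[-1]  # effective condition number
print(f"sigma_max / sigma_min ~ {ratio:.2e}")
```

In practice the smallest singular values of such kernel matrices reach machine precision quickly, which is why condition-number diagnostics alone obscure what the Picard-condition viewpoint makes explicit.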
Deep Learning Techniques for Music Generation -- A Survey
This paper is a survey and an analysis of different ways of using deep
learning (deep artificial neural networks) to generate musical content. We
propose a methodology based on five dimensions for our analysis:
Objective - What musical content is to be generated? Examples are: melody,
polyphony, accompaniment or counterpoint. - For what destination and for what
use? To be performed by humans (in the case of a musical score), or by a
machine (in the case of an audio file).
Representation - What are the concepts to be manipulated? Examples are:
waveform, spectrogram, note, chord, meter and beat. - What format is to be
used? Examples are: MIDI, piano roll or text. - How will the representation be
encoded? Examples are: scalar, one-hot or many-hot.
Architecture - What types of deep neural network are to be used? Examples
are: feedforward network, recurrent network, autoencoder or generative
adversarial network.
Challenge - What are the limitations and open challenges? Examples are:
variability, interactivity and creativity.
Strategy - How do we model and control the process of generation? Examples
are: single-step feedforward, iterative feedforward, sampling or input
manipulation.
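The encoding choice in the Representation dimension (scalar, one-hot or many-hot) can be illustrated with a small sketch; the MIDI pitch range and example pitches below are illustrative assumptions:

```python
# One-hot: a single melody note. Many-hot: a chord of simultaneous notes.
PITCHES = 128  # MIDI pitch numbers 0-127

def one_hot(pitch: int) -> list[int]:
    """Encode a single MIDI pitch as a one-hot vector."""
    v = [0] * PITCHES
    v[pitch] = 1
    return v

def many_hot(pitches: list[int]) -> list[int]:
    """Encode a chord (several simultaneous pitches) as a many-hot vector."""
    v = [0] * PITCHES
    for p in pitches:
        v[p] = 1
    return v

melody_note = one_hot(60)         # middle C
c_major = many_hot([60, 64, 67])  # C-E-G triad
print(sum(melody_note), sum(c_major))  # → 1 3
```

A scalar encoding would instead store the pitch number 60 directly; the one-hot and many-hot forms trade compactness for representations that feed more naturally into network output layers.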
For each dimension, we conduct a comparative analysis of various models and
techniques and we propose a tentative multidimensional typology. This
typology is bottom-up, based on the analysis of many existing deep-learning
based systems for music generation selected from the relevant literature. These
systems are described and are used to exemplify the various choices of
objective, representation, architecture, challenge and strategy. The last
section includes some discussion and some prospects.

Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
An informational approach to the global optimization of expensive-to-evaluate functions
In many global optimization problems motivated by engineering applications,
the number of function evaluations is severely limited by time or cost. To
ensure that each evaluation contributes to the localization of good candidates
for the role of global minimizer, a sequential choice of evaluation points is
usually carried out. In particular, when Kriging is used to interpolate past
evaluations, the uncertainty associated with the lack of information on the
function can be expressed and used to compute a number of criteria accounting
for the interest of an additional evaluation at any given point. This paper
introduces minimizer entropy as a new Kriging-based criterion for the
sequential choice of points at which the function should be evaluated. Based on
\emph{stepwise uncertainty reduction}, it accounts for the informational gain
on the minimizer expected from a new evaluation. The criterion is approximated
using conditional simulations of the Gaussian process model behind Kriging, and
then inserted into an algorithm similar in spirit to the \emph{Efficient Global
Optimization} (EGO) algorithm. An empirical comparison is carried out between
our criterion and \emph{expected improvement}, one of the reference criteria in
the literature. Experimental results indicate major evaluation savings over
EGO. Finally, the method, which we call IAGO (for Informational Approach to
Global Optimization) is extended to robust optimization problems, where both
the factors to be tuned and the function evaluations are corrupted by noise.

Comment: Accepted for publication in the Journal of Global Optimization (this is the revised version, with additional details on computational problems, and some grammatical changes)
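The reference criterion the paper compares against, expected improvement, has a simple closed form under a Gaussian posterior. A minimal sketch (minimization convention; the minimizer-entropy criterion itself requires conditional GP simulations and is not reproduced here):

```python
import math

def expected_improvement(mu: float, sigma: float, f_min: float) -> float:
    """Expected improvement at a candidate point under a Kriging
    (Gaussian process) posterior with mean mu and standard deviation
    sigma, relative to the best evaluation so far, f_min."""
    if sigma <= 0.0:
        return 0.0  # no posterior uncertainty: nothing to gain
    z = (f_min - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # Phi(z)
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # phi(z)
    return (f_min - mu) * cdf + sigma * pdf

# A point predicted well below the incumbent best scores much higher
# than one predicted at the incumbent with the same uncertainty.
print(expected_improvement(0.5, 0.2, f_min=1.0))  # ≈ 0.50
print(expected_improvement(1.0, 0.2, f_min=1.0))  # ≈ 0.08
```

Both EGO and IAGO maximize a criterion of this kind at each step; they differ in what the criterion measures (improvement over the incumbent versus information gained about the minimizer's location).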
Scalable computation of thermomechanical turbomachinery problems
A commonly held view in the turbomachinery community is that finite element
methods are not well-suited for very large-scale thermomechanical simulations.
We seek to dispel this notion by presenting performance data for a collection
of realistic, large-scale thermomechanical simulations. We describe the
necessary technology to compute problems with up to billions of
degrees-of-freedom, and emphasise what is required to achieve near linear
computational complexity with good parallel scaling. Performance data is
presented for turbomachinery components with up to 3.3 billion
degrees-of-freedom. The software libraries used to perform the simulations are
freely available under open source licenses. The performance demonstrated in
this work opens up the possibility of system-level thermomechanical modelling,
and lays the foundation for further research into high-performance formulations
for even larger problems and for other physical processes, such as contact,
that are important in turbomachinery analysis.

The support of Mitsubishi Heavy Industries is gratefully acknowledged. CNR is supported by EPSRC Grant EP/N018877/1.