Inner product preconditioned trust-region methods for frequency-domain full waveform inversion
Full waveform inversion is a seismic imaging method that requires solving a large-scale minimization problem, typically through local optimization techniques. Most local optimization methods are built from two choices: the update direction and the strategy used to control its length. In the context of full waveform inversion, this strategy is very often a line search. We propose instead a trust-region method, in combination with non-standard inner products that act as preconditioners.
More specifically, a line-search variant and several trust-region variants of steepest descent, the limited-memory BFGS algorithm, and the inexact Newton method are presented and compared. Strong emphasis is placed on the choice of inner product: its link with preconditioning the update direction and its role in the trust-region constraint are highlighted.
A first numerical test is performed on a 2D synthetic model; a second configuration, containing two close reflectors, is then studied. The latter configuration is known to be challenging because of multiple reflections. Based on these two case studies, the importance of an appropriate inner product choice is highlighted, the best trust-region method is selected, and it is compared to the line-search method. In particular, we demonstrate that an appropriate inner product greatly improves the convergence of all the presented methods, and that inexact Newton methods should be combined with trust-region methods to increase their convergence speed.
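As a rough illustration of the inner-product idea (a minimal sketch under assumed toy data, not the authors' implementation): choosing an inner product <u, v>_M = u'Mv changes the steepest-descent direction into the preconditioned direction -M^{-1}g and measures the trust-region radius in the M-norm. The matrices, gradient, and radius below are hypothetical.

```python
import numpy as np

def preconditioned_cauchy_step(g, H, M, delta):
    """Steepest-descent (Cauchy) trust-region step measured in the norm
    ||p||_M = sqrt(p' M p) induced by the inner product <u, v>_M = u' M v.
    Choosing M is equivalent to preconditioning: the steepest-descent
    direction with respect to <., .>_M is d = -M^{-1} g."""
    d = -np.linalg.solve(M, g)            # preconditioned descent direction
    d_norm_M = np.sqrt(d @ M @ d)         # length of d in the M-norm
    t_max = delta / d_norm_M              # largest step inside the region
    curvature = d @ H @ d
    if curvature > 0:
        t = min(-(g @ d) / curvature, t_max)  # 1-D minimizer, clipped
    else:
        t = t_max                         # nonpositive curvature: go to boundary
    return t * d

# toy quadratic model m(p) = g'p + 0.5 p'H p with a badly scaled Hessian
H = np.array([[100.0, 0.0], [0.0, 1.0]])
g = np.array([1.0, 1.0])
p_id = preconditioned_cauchy_step(g, H, np.eye(2), delta=0.5)  # Euclidean
p_M = preconditioned_cauchy_step(g, H, H, delta=0.5)           # M = Hessian
```

On this ill-conditioned toy example, taking M equal to the Hessian aligns the step with the Newton direction and yields a much larger model decrease than the Euclidean Cauchy step, which is the effect the abstract attributes to a good inner-product choice.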
Measuring RocksDB performance and adaptive sampling for model estimation
This thesis focuses on two topics: statistical learning, and the prediction of key performance indicators for the performance evaluation of a key-value storage engine.
The statistical-learning part presents a novel algorithm that adjusts the sample size of the Monte Carlo approximation of the function to be minimized, ensuring a decrease of the true function with a given probability, at a lower computational cost.
The sampling strategy is embedded in a trust-region algorithm that uses the Fisher information matrix, also called the BHHH approximation, to approximate the Hessian matrix. The strategy is tested on a logit model fitted to synthetic data generated from the same model.
Numerical results show a significant reduction in the time required to optimize the model when adequate smoothing is applied to the function.
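A minimal sketch of the BHHH idea mentioned above, assuming a binary logit model with labels in {-1, +1} (the data, sample size, and single-step scheme are illustrative, not the thesis's experiment): the Hessian of the sampled negative log-likelihood is replaced by the average outer product of per-observation score vectors.

```python
import numpy as np

def bhhh_hessian(w, X, y):
    """BHHH (outer product of scores) approximation of the Hessian of the
    average negative log-likelihood of a binary logit model with labels
    y in {-1, +1}.  Per-observation score: s_i = -y_i * sigma(-y_i x_i'w) * x_i;
    the approximation is the average of s_i s_i'."""
    z = y * (X @ w)
    p = 1.0 / (1.0 + np.exp(z))          # sigma(-z_i)
    S = (-y * p)[:, None] * X            # one score row per observation
    return S.T @ S / len(y)

# synthetic logit data (hypothetical setup)
rng = np.random.default_rng(0)
n, d = 2000, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = np.where(rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true)), 1.0, -1.0)

# one BHHH (Gauss-Newton-like) step from w = 0 on a random subsample,
# showing how the score outer product stands in for the true Hessian
idx = rng.choice(n, size=500, replace=False)
w0 = np.zeros(d)
z = y[idx] * (X[idx] @ w0)
grad = np.mean((-(y[idx] / (1.0 + np.exp(z))))[:, None] * X[idx], axis=0)
B = bhhh_hessian(w0, X[idx], y[idx])
w1 = w0 - np.linalg.solve(B, grad)
```

A convenient property of this model: at w = 0 the BHHH matrix coincides exactly with the true Hessian 0.25 * X'X / n, so the first step is a genuine Newton step; away from the optimum it remains a positive semi-definite surrogate that avoids second derivatives.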
The key-performance-indicator prediction part describes a novel strategy for selecting RocksDB settings that optimize its throughput: log files are analyzed to identify suboptimal parameters, opening the possibility of greatly accelerating the tuning of modern storage engines.
Inexact restoration with subsampled trust-region methods for finite-sum minimization
Convex and nonconvex finite-sum minimization arises in many scientific computing and machine learning applications. Recently, first-order and second-order methods where objective functions, gradients and Hessians are approximated by randomly sampling components of the sum have received great attention. We propose a new trust-region method which employs suitable approximations of the objective function, gradient and Hessian built via random subsampling techniques. The choice of the sample size is deterministic and ruled by the inexact restoration approach. We discuss local and global properties for finding approximate first- and second-order optimal points, as well as function evaluation complexity results. Numerical experience shows that the new procedure is more efficient, in terms of overall computational cost, than the standard trust-region scheme with subsampled Hessians.
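A simplified sketch of a subsampled trust-region iteration for f(x) = (1/N) sum_i f_i(x), on a hypothetical least-squares toy problem. The sample-growth rule here (double the sample after a rejected step) is a crude stand-in for the paper's inexact-restoration rule, which governs the sample size deterministically; step computation uses a plain Cauchy point.

```python
import numpy as np

def subsampled_trust_region(f_i, g_i, h_i, N, x0, iters=40,
                            n0=20, delta0=1.0, eta=0.1, seed=0):
    """Trust-region iteration with subsampled objective, gradient and
    Hessian for f(x) = (1/N) sum_i f_i(x).  The sample grows after a
    rejected step (a simplified stand-in for inexact restoration)."""
    rng = np.random.default_rng(seed)
    x, delta, n = np.asarray(x0, float), delta0, n0
    for _ in range(iters):
        idx = rng.choice(N, size=n, replace=False)
        fS = lambda z: np.mean([f_i(i, z) for i in idx])  # sampled objective
        g = np.mean([g_i(i, x) for i in idx], axis=0)     # sampled gradient
        H = np.mean([h_i(i, x) for i in idx], axis=0)     # sampled Hessian
        gnorm = np.linalg.norm(g)
        if gnorm < 1e-10:
            break
        # Cauchy point: minimize the quadratic model along -g within delta
        t, gHg = delta / gnorm, g @ H @ g
        if gHg > 0:
            t = min(t, (g @ g) / gHg)
        p = -t * g
        pred = -(g @ p + 0.5 * p @ H @ p)        # predicted model decrease
        rho = (fS(x) - fS(x + p)) / pred         # actual vs predicted ratio
        if rho >= eta:                           # accept: expand the region
            x, delta = x + p, min(2 * delta, 10.0)
        else:                                    # reject: shrink, grow sample
            delta, n = delta / 2, min(2 * n, N)
    return x

# toy finite sum: f_i(x) = 0.5 * (a_i'x - b_i)^2 with a known solution
rng = np.random.default_rng(1)
N, d = 500, 4
A = rng.normal(size=(N, d))
x_star = rng.normal(size=d)
b = A @ x_star
f_i = lambda i, x: 0.5 * (A[i] @ x - b[i]) ** 2
g_i = lambda i, x: (A[i] @ x - b[i]) * A[i]
h_i = lambda i, x: np.outer(A[i], A[i])
x_hat = subsampled_trust_region(f_i, g_i, h_i, N, np.zeros(d))
```

Because the acceptance ratio is computed with the same sample as the model, a real implementation must also control how well the sampled quantities track the full sum; that is precisely the role the inexact-restoration approach plays in the paper.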