
    Regression with the Optimised Combination Technique

    We consider the sparse grid combination technique for regression, which we regard as a problem of function reconstruction in a given function space. We use a regularised least squares approach, discretised by sparse grids and solved using the so-called combination technique, in which a certain sequence of conventional grids is employed. The sparse grid solution is then obtained by adding the partial solutions with combination coefficients that depend on the grids involved. This approach shows instabilities in certain situations and is not guaranteed to converge at higher discretisation levels. In this article we apply the recently introduced optimised combination technique, which repairs these instabilities. The combination coefficients now also depend on the function to be reconstructed, resulting in a non-linear approximation method that achieves very competitive results. We show that the computational complexity of the improved method still scales only linearly with the number of data points.
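    The classical combination technique behind this can be sketched in a few lines. The code below is a minimal illustration (not the optimised variant from the article): it computes the standard combination coefficients, +1 on the finest diagonal of anisotropic grids and -1 on the next-coarser one in 2D, which sum to one.

```python
from itertools import product
from math import comb

def combination_coefficients(level, dim=2):
    """Classical combination technique coefficients:
    sum_{q=0}^{d-1} (-1)^q * C(d-1, q) over grids l with |l|_1 = level + (d-1) - q.
    Returns a dict mapping grid-level tuples to their coefficient."""
    coeffs = {}
    for q in range(dim):
        c = (-1) ** q * comb(dim - 1, q)
        for l in product(range(1, level + 1), repeat=dim):
            if sum(l) == level + (dim - 1) - q:
                coeffs[l] = coeffs.get(l, 0) + c
    return coeffs
```

    The sparse grid solution is then the sum of the partial solutions on these grids, weighted by the returned coefficients; in the optimised variant described above, the coefficients would instead be fitted to the data rather than fixed by this formula.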

    Suboptimal feedback control of PDEs by solving HJB equations on adaptive sparse grids

    An approach to solving finite time horizon suboptimal feedback control problems for partial differential equations is proposed, based on solving dynamic programming equations on adaptive sparse grids. The approach is illustrated for the wave equation, and an extension to equations of Schrödinger type is indicated. A semi-discrete optimal control problem is introduced, and the feedback control is derived from the corresponding value function. The value function can be characterised as the solution of an evolutionary Hamilton-Jacobi-Bellman (HJB) equation defined over a state space whose dimension equals that of the underlying semi-discrete system. Besides a low-dimensional semi-discretisation, it is important to solve the HJB equation efficiently to address the curse of dimensionality. We propose to apply a semi-Lagrangian scheme using spatially adaptive sparse grids. Sparse grids allow the discretisation of the value function in (higher) space dimensions, since the curse of dimensionality of full grid methods arises to a much smaller extent. For additional efficiency, an adaptive grid refinement procedure is explored. We present several numerical examples studying the effect that the parameters characterising the sparse grid have on the accuracy of the value function and the optimal trajectory.
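    The semi-Lagrangian idea itself can be illustrated on a plain 1D grid (rather than the adaptive sparse grids used in the paper; the dynamics, costs and discretisation parameters below are invented for the example). Each backward step evaluates the value function, by interpolation, at the point reached under each candidate control, and takes the minimum.

```python
import numpy as np

def semi_lagrangian_hjb(xs, us, dt, steps, running_cost, dynamics, terminal):
    """Backward semi-Lagrangian iteration for a 1D finite-horizon HJB:
    V_k(x) = min_u [ dt * cost(x, u) + V_{k+1}(x + dt * f(x, u)) ],
    with linear interpolation of V on the grid xs."""
    V = terminal(xs)
    for _ in range(steps):
        candidates = []
        for u in us:
            x_next = np.clip(xs + dt * dynamics(xs, u), xs[0], xs[-1])
            candidates.append(dt * running_cost(xs, u) + np.interp(x_next, xs, V))
        V = np.min(np.stack(candidates), axis=0)
    return V

# toy problem: integrator dynamics x' = u, quadratic costs (illustrative only)
xs = np.linspace(-1.0, 1.0, 201)
us = np.linspace(-1.0, 1.0, 21)
V = semi_lagrangian_hjb(xs, us, dt=0.05, steps=40,
                        running_cost=lambda x, u: x**2 + u**2,
                        dynamics=lambda x, u: u,
                        terminal=lambda x: x**2)
```

    A spatially adaptive sparse grid scheme replaces the uniform grid xs and the global interpolation with hierarchical sparse grid interpolation, which is what keeps the approach feasible in higher state space dimensions.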

    Machine Learning through Function Reconstruction with Generalised Sparse Grids (Maschinelles Lernen durch Funktionsrekonstruktion mit verallgemeinerten dünnen Gittern)

    This thesis is concerned with a new approach to the classification problem in machine learning through function reconstruction. It is based on the regularisation network approach, but in contrast to other methods, which use ansatz functions associated with data points, a so-called sparse grid in the usually high-dimensional data space is employed to discretise the minimisation problem. More precisely, the sparse grid combination technique is used, in which the classification problem is discretised and solved on a certain sequence of conventional grids with uniform mesh width in each coordinate direction. The sparse grid solution is then obtained as a linear combination of the solutions on these different grids. This approach yields a machine learning method that scales linearly in the number of data points while still representing a non-linear function. However, the number of dimensions that can be handled is limited by an exponential dependence of the complexity on the effective dimension of the problem. In many practical applications the dimension of the resulting problem, possibly after some preprocessing steps, is moderate, while the amount of data is usually very large. Such applications are the typical use cases for the new learning method presented in this thesis. The method is applied to a number of benchmark data sets; the results obtained are comparable to those of the best machine learning methods. Furthermore, building on recent work, this thesis presents a dimension-adaptive combination technique for function reconstruction in classification and regression problems in machine learning. Experiments with the dimension-adaptive combination technique demonstrate its basic potential in the field of machine learning.

    Explorative In-situ Analysis of Turbulent Flow Data Based on a Data-Driven Approach

    The Proper Orthogonal Decomposition (POD) has been used for several years in the post-processing of highly resolved Computational Fluid Dynamics (CFD) simulations. While the POD can provide valuable insights into the spatio-temporal behaviour of single transient flows, it can be challenging to evaluate and compare results when it is applied to multiple simulations. Therefore, we propose a workflow based on data-driven techniques, namely dimensionality reduction and clustering, to extract knowledge from large bundles of transient CFD simulations. We apply this workflow to investigate the flow around two cylinders, which contains complex modal structures in the wake region. A special emphasis lies on the formulation of in-situ algorithms that compute the data-driven representations during the run-time of the simulation. This can reduce the amount of data input and output and enables simulation monitoring to reduce computational effort. Finally, a classifier is trained to predict characteristic physical behaviour in the flow based only on the input parameters.
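    The snapshot POD at the core of such a workflow reduces, in essence, to a thin SVD of the mean-subtracted snapshot matrix. A minimal sketch with a synthetic rank-2 "flow" (all data below is invented for illustration):

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Snapshot POD via thin SVD. snapshots: (n_points, n_times).
    Returns the temporal mean, the leading spatial modes, all singular
    values, and the low-dimensional temporal coefficients."""
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                         # fluctuation field
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = U[:, :n_modes]                       # spatial POD modes
    coeffs = s[:n_modes, None] * Vt[:n_modes]    # reduced representation over time
    return mean, modes, s, coeffs

# synthetic transient field: two standing-wave structures -> rank-2 fluctuations
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 2 * np.pi, 100)
field = (np.sin(x)[:, None] * np.cos(5 * t)[None, :]
         + 0.3 * np.cos(2 * x)[:, None] * np.sin(3 * t)[None, :])
mean, modes, s, coeffs = pod_modes(field, n_modes=2)
recon = mean + modes @ coeffs                    # exact here, since rank is 2
```

    In the workflow above, the low-dimensional coefficients (rather than the full fields) would then feed the clustering and classification steps, which is also what makes an in-situ, streaming formulation attractive.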

    Unsupervised Representation Learning for Diverse Deformable Shape Collections

    We introduce a novel learning-based method for encoding and manipulating 3D surface meshes. Our method is specifically designed to create an interpretable embedding space for deformable shape collections. Unlike previous 3D mesh autoencoders that require meshes to be in 1-to-1 correspondence, our approach is trained on diverse meshes in an unsupervised manner. Central to our method is a spectral pooling technique that establishes a universal latent space, breaking free from traditional constraints of mesh connectivity and shape categories. The entire process consists of two stages. In the first stage, we employ the functional map paradigm to extract point-to-point (p2p) maps between a collection of shapes in an unsupervised manner. These p2p maps are then utilized to construct a common latent space, which ensures straightforward interpretation and independence from mesh connectivity and shape category. Through extensive experiments, we demonstrate that our method achieves excellent reconstructions and produces more realistic and smoother interpolations than baseline approaches. Comment: Accepted at the International Conference on 3D Vision 202
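    The basic mechanism behind spectral pooling, projecting per-vertex features onto a low-frequency Laplacian eigenbasis to obtain a fixed-size code that does not depend on the mesh connectivity, can be sketched as follows. This is not the authors' implementation; a toy graph Laplacian stands in for a mesh here.

```python
import numpy as np

def spectral_pool(features, laplacian, k):
    """Project per-vertex features onto the first k Laplacian eigenvectors,
    yielding a fixed-size (k, n_features) spectral representation that is
    independent of the number of vertices."""
    evals, evecs = np.linalg.eigh(laplacian)   # eigenvalues in ascending order
    basis = evecs[:, :k]                       # low-frequency basis (n_vertices, k)
    return basis.T @ features                  # spectral coefficients

# toy "mesh": the graph Laplacian of a cycle on 8 vertices
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
coords = np.random.default_rng(0).normal(size=(n, 3))  # per-vertex features
latent = spectral_pool(coords, L, k=4)                 # shape (4, 3)
```

    Two meshes with different vertex counts and connectivity still map to codes of the same shape (k, n_features), which is what makes a shared latent space across a diverse collection possible.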

    Efficient Higher Order Time Discretization Schemes for Hamilton-Jacobi-Bellman Equations Based on Diagonally Implicit Symplectic Runge-Kutta Methods

    We consider a semi-Lagrangian approach for the computation of the value function of a Hamilton-Jacobi-Bellman equation. This problem arises when one solves optimal feedback control problems for evolutionary partial differential equations. A time discretization with Runge-Kutta methods leads in general to a complexity of the optimization problem for the control that is exponential in the number of stages of the time scheme. Motivated by this, we introduce a time discretization based on Runge-Kutta composition methods, which achieves higher order approximation with respect to time, but for which the overall optimization costs increase only linearly with respect to the number of stages of the employed Runge-Kutta method. In numerical tests we empirically confirm an approximately linear complexity with respect to the number of stages. The presented algorithm is of particular interest for optimal control problems that involve a costly minimization over the control set.
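    The composition principle, reaching higher order by chaining scaled applications of a low-order symmetric base step so that cost grows only linearly in the number of stages, can be illustrated outside the HJB setting with the classical "triple jump" applied to a Störmer-Verlet step (the test problem and step sizes below are illustrative):

```python
import numpy as np

def verlet_step(q, p, h, grad_V):
    """One Störmer-Verlet step (2nd order, symplectic) for H = p^2/2 + V(q)."""
    p = p - 0.5 * h * grad_V(q)
    q = q + h * p
    p = p - 0.5 * h * grad_V(q)
    return q, p

def composed_step(q, p, h, grad_V):
    """Triple-jump composition of the 2nd-order base step, giving 4th order
    at only 3x the base cost: stages scale linearly, not exponentially."""
    g = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    for gamma in (g, 1.0 - 2.0 * g, g):      # note the negative middle sub-step
        q, p = verlet_step(q, p, gamma * h, grad_V)
    return q, p

# harmonic oscillator V(q) = q^2 / 2, exact solution q(t) = cos(t)
grad_V = lambda q: q
q, p, h = 1.0, 0.0, 0.01
for _ in range(int(round(1.0 / h))):
    q, p = composed_step(q, p, h, grad_V)
```

    In the HJB context the payoff is analogous: each stage of the composed scheme requires one minimization over the control set, so the optimization cost grows with the number of stages rather than exponentially in them.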

    Wavelet-Packet Powered Deepfake Image Detection

    As neural networks become more capable of generating realistic artificial images, they have the potential to improve movies, music, and video games and to make the internet an even more creative and inspiring place. Yet, at the same time, the latest technology potentially enables new digital ways to lie. In response, the need arises for a diverse and reliable toolbox to identify artificial images and other content. Previous work primarily relies on pixel-space CNNs or the Fourier transform. To the best of our knowledge, wavelet-based GAN analysis and detection methods have been absent thus far. This paper aims to fill this gap and describes a wavelet-based approach to GAN-generated image analysis and detection. We evaluate our method on FFHQ, CelebA, and LSUN source identification problems and find improved or competitive performance. Comment: Source code is available at https://github.com/gan-police/frequency-forensic
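    The frequency "fingerprint" idea can be illustrated with a single-level 2D Haar transform in plain NumPy. This is a simplified stand-in for the paper's wavelet packets (which recursively decompose all subbands, typically with longer filters); the subband names and the log-energy feature are illustrative.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform. img: (H, W) with even H and W.
    Returns the LL (approximation), LH, HL, HH (detail) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def subband_energies(img):
    """Log-energy per subband: a simple frequency-domain feature vector
    of the kind a detector could be trained on."""
    return np.array([np.log(1e-12 + np.mean(b ** 2)) for b in haar2d(img)])

rng = np.random.default_rng(0)
features = subband_energies(rng.normal(size=(64, 64)))  # shape (4,)
```

    The intuition is that generated images tend to exhibit systematic deviations in their high-frequency subband statistics, which such wavelet features can expose to a downstream classifier.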

    In-situ Estimation of Time-averaging Uncertainties in Turbulent Flow Simulations

    The statistics obtained from turbulent flow simulations are generally uncertain due to finite time averaging. The techniques available in the literature to accurately estimate these uncertainties typically only work in an offline mode, that is, they require access to all available samples of a time series at once. In addition to making online monitoring of uncertainties during the course of a simulation impossible, such an offline approach can lead to input/output (I/O) deficiencies and large storage/memory requirements, which can be problematic for large-scale simulations of turbulent flows. Here, we designed, implemented and tested a framework for estimating time-averaging uncertainties in turbulence statistics in an in-situ (online/streaming/updating) manner. The proposed algorithm relies on a novel low-memory update formula for computing the sample-estimated autocorrelation functions (ACFs). Based on this, smooth modeled ACFs of turbulence quantities can be generated to accurately estimate the time-averaging uncertainties in the corresponding sample mean estimators. The resulting uncertainty estimates are highly robust, accurate, and quantitatively the same as those obtained by standard offline estimators. Moreover, the computational overhead added by the in-situ algorithm is found to be negligible. The framework is completely general: it can be used with any flow solver, including simulations over conformal and complex meshes created using adaptive mesh refinement techniques. The results of the study are encouraging for the further development of the in-situ framework for other uncertainty quantification and data-driven analyses relevant not only to large-scale turbulent flow simulations, but also to the simulation of other dynamical systems leading to time-varying quantities with autocorrelated samples.
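    A generic low-memory streaming ACF estimator in this spirit (not the paper's specific update formula) keeps only running sums and a short ring buffer of recent samples, yet reproduces the offline sample-based estimate exactly:

```python
import numpy as np

class StreamingACF:
    """Streaming estimate of the autocorrelation function up to max_lag.
    Memory cost is O(max_lag), independent of the number of samples seen."""
    def __init__(self, max_lag):
        self.K = max_lag
        self.n = 0
        self.sum = 0.0
        self.sumsq = 0.0
        self.cross = np.zeros(max_lag + 1)   # cross[k] = sum_t x_t * x_{t-k}
        self.buf = np.zeros(max_lag)         # last K samples, most recent first

    def update(self, x):
        """Fold one new sample into the running sums (O(max_lag) work)."""
        self.n += 1
        self.sum += x
        self.sumsq += x * x
        self.cross[0] += x * x
        m = min(self.n - 1, self.K)
        self.cross[1:m + 1] += x * self.buf[:m]
        self.buf = np.roll(self.buf, 1)
        self.buf[0] = x

    def acf(self):
        """Sample ACF estimate rho[0..max_lag] from the running sums."""
        mean = self.sum / self.n
        var = self.sumsq / self.n - mean * mean
        rho = np.empty(self.K + 1)
        for k in range(self.K + 1):
            rho[k] = (self.cross[k] / (self.n - k) - mean * mean) / var
        return rho
```

    From such an ACF estimate, the variance of the sample mean (and hence the time-averaging uncertainty) follows from the usual correlated-samples correction, which is the quantity the in-situ framework monitors during the run.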