8 research outputs found

    PREPRINT: Comparison of deep learning and hand crafted features for mining simulation data

    Computational Fluid Dynamics (CFD) simulations are an important tool for many industrial applications, such as the aerodynamic optimization of engineering designs like car shapes and airplane parts. The output of such simulations, in particular the calculated flow fields, is usually complex and hard to interpret for realistic three-dimensional applications, especially when time-dependent simulations are investigated. Automated data analysis methods are therefore warranted, but a non-trivial obstacle is the very high dimensionality of the data: a flow field typically consists of six measured values at each point of the computational grid in 3D space and time (the velocity vector components, turbulent kinetic energy, pressure and viscosity). In this paper we address the task of extracting meaningful results from such high-dimensional data sets in an automated manner. We propose deep learning methods that can process such data and can be trained to solve relevant tasks on simulation data, e.g. predicting the drag and lift forces acting on an airfoil. We also propose an adaptation of the classical hand-crafted features known from computer vision to the same problem and compare a large variety of descriptors and detectors. Finally, we compile a large dataset of 2D simulations of the flow field around airfoils, containing 16,000 flow fields, on which we test and compare the approaches. Our results show that both the deep learning-based methods and the hand-crafted feature-based approaches describe the content of the CFD simulation output accurately on the proposed dataset.
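    As a rough illustration of the deep-learning route described above (not the paper's architecture; the channel layout, layer sizes and names below are assumptions), a small convolutional network could map a multi-channel 2D flow field to scalar drag and lift predictions, e.g.:

        # Hypothetical sketch: a small CNN regressor from 2D flow fields to (drag, lift).
        # Channel layout and layer sizes are assumptions, not the paper's architecture.
        import torch
        import torch.nn as nn

        class FlowFieldRegressor(nn.Module):
            def __init__(self, in_channels=6):   # e.g. velocity components, TKE, pressure, viscosity
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),      # global pooling -> fixed-size feature vector
                )
                self.head = nn.Linear(64, 2)      # predict drag and lift

            def forward(self, x):                 # x: (batch, 6, H, W) flow-field grid
                return self.head(self.features(x).flatten(1))

        model = FlowFieldRegressor()
        fields = torch.randn(4, 6, 128, 128)      # a batch of flow fields sampled on a regular grid
        drag_lift = model(fields)                 # shape (4, 2)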

    Robust Physics Informed Neural Networks

    We introduce a robust version of Physics-Informed Neural Networks (RPINNs) to approximate the solution of Partial Differential Equations (PDEs). Standard Physics-Informed Neural Networks (PINNs) take the governing physical laws described by the PDE into account during the learning process, and the network is trained on a data set consisting of randomly selected points in the physical domain and on its boundary. PINNs have been successfully applied to solve various problems described by PDEs with boundary conditions. The loss function in traditional PINNs is based on the strong residuals of the PDEs; it is generally not robust with respect to the true error and can be far from it, which makes the training process more difficult. In particular, we do not know whether the training process has already converged to a solution of the required accuracy, especially when the exact solution is unknown and the true error cannot be estimated during training. This paper introduces a different way of defining the loss function: it incorporates the residual and the inverse of the Gram matrix, computed using the energy norm. We test our RPINN algorithm on two Laplace problems and one advection-diffusion problem in two spatial dimensions. We conclude that RPINN is a robust method: the proposed loss coincides well with the true error of the solution, as measured in the energy norm. Thus, we know whether the training process is going well, and we know when to stop training to obtain a neural network approximation of the PDE solution with the required accuracy. Comment: 33 pages, 18 figures.
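    Read literally, the loss described above weights the PDE residual by the inverse Gram matrix of a set of test functions in the energy norm; one plausible formalization (the notation is assumed, not taken verbatim from the paper) is:

        % Standard PINN loss: sum of squared strong residuals at collocation points x_i
        \mathcal{L}_{\mathrm{PINN}}(\theta) = \sum_{i} \big| \mathcal{R}[u_\theta](x_i) \big|^2
        % Robust PINN loss: residual vector weighted by the inverse Gram matrix
        \mathcal{L}_{\mathrm{RPINN}}(\theta) = \mathbf{r}(\theta)^{\top} G^{-1} \mathbf{r}(\theta),
        \qquad
        r_j(\theta) = \langle \mathcal{R}[u_\theta], \phi_j \rangle,
        \qquad
        G_{jk} = \langle \phi_j, \phi_k \rangle_{E}

    where \{\phi_j\} is a set of test (basis) functions and \langle\cdot,\cdot\rangle_E denotes the energy inner product; the proposed loss then tracks the true error of the solution as measured in the energy norm.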

    A survey of traditional and deep learning-based feature descriptors for high dimensional data in computer vision

    Higher-dimensional data such as video and 3D are at the leading edge of multimedia retrieval and computer vision research. In this survey, we give a comprehensive overview of, and key insights into, the state of the art in higher-dimensional features from both deep learning and traditional approaches. Current approaches frequently use 3D information from the sensor, or use 3D representations for modeling and understanding the 3D world. With the growth of prevalent application areas such as 3D games, self-driving automobiles, health monitoring and sports activity training, a wide variety of new sensors has allowed researchers to develop feature description models beyond 2D. Although higher-dimensional data enhance the performance of methods on numerous tasks, they can also introduce new challenges and problems. The higher dimensionality often leads to more complicated structures, which present additional problems both in extracting meaningful content and in adapting it to current machine learning algorithms. Given the major importance of the evaluation process, we also present an overview of the current datasets and benchmarks. Moreover, based on the more than 330 papers covered in this study, we present the major challenges and future directions. Computer Systems, Imagery and Media

    Deep Multilayer Convolution Frameworks for Data-Driven Learning of Nonlinear Dynamics in Fluid Flows

    The abundance of measurement and simulation data has led to a proliferation of machine learning tools for model-based analysis and prediction of fluid flows over the past few years. In this work we explore globally optimal multilayer convolution models, such as feed-forward neural networks (FFNNs), for learning and predicting dynamics from transient fluid flow data. While machine learning in general depends on the quality of the data relative to the underlying dynamics of the system, it is important for a given data-driven learning architecture to make the most of the available information. To this end, we cast the suite of recently popular data-driven learning approaches that approximate Markovian dynamics through a linear model in a higher-dimensional feature space as a multilayer architecture similar to neural networks, but with layer-wise locally optimal convolution mappings. As a contrast, we also represent traditional neural networks, with some slight modifications, as a multilayer architecture whose convolution maps are optimized to minimize the global learning cost (i.e., not the cost of learning across two immediate layers). We show, through examples of data-driven learning of canonical fluid flows, that globally optimal FFNN-like methods owe their success to leveraging the extended learning parameter space available in multilayer models to minimize the training cost function while incorporating nonlinear function maps between layers. Locally optimal multilayer models, on the other hand, also benefit from the same factors, but behave like shallow neural networks and require much larger hidden layers to achieve comparable learning and prediction accuracy. While locally optimal methods allow for forward-backward convolutions, standard globally optimal FFNNs can only handle forward maps, which prevents their use as Koopman approximation tools. To address this, we developed a novel deep learning architecture, the deep Koopman network, which overcomes this limitation of symmetry through the addition of a penalty network. Further, we explored the feasibility of deep autoencoder networks (DAENs) as data-driven mappings into an observable space in which the dynamics of the system can be approximated as a linear time-invariant (LTI) system. The eigenmodes and eigenvalues of the Koopman operator provide information about structures in the data that are associated with unique growth rates and frequencies. Naturally, the relevance of these structures and eigenvalues to the real system represented by the data is tied to how closely the Markov linear or Koopman operator-based model approximates the real dynamics, which in turn depends on the choice of observable. Traditional approaches for non-local optimization, such as those in neural networks and deep learning, are gradient-based and hence limited to convolution basis functions whose derivatives are either known or can be computed accurately by numerical means. To realize the full potential of this deep learning framework, these algorithms need to be extended to an arbitrary choice of convolution basis; to this end, we explored the use of gradient-free optimization techniques for learning with a wider choice of functions. We illustrate these ideas by learning the dynamics from snapshots of training data and predicting the temporal evolution of canonical nonlinear fluid flows, including the transient limit-cycle attractor in a cylinder wake and the instability-driven dynamics of a buoyant Boussinesq flow. Mechanical and Aerospace Engineering
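    The "layer-wise locally optimal" models referenced above include DMD/EDMD-style methods that fit a linear operator on lifted observables by least squares; the sketch below is a generic illustration of that idea (the feature map and function names are assumptions, not the thesis code):

        # Generic EDMD-style sketch: fit a linear Markovian model z_{k+1} = K z_k
        # on lifted observables of sequential flow snapshots.
        import numpy as np

        def lift(x):
            # observable: the state together with its element-wise squares (illustrative choice)
            return np.concatenate([x, x**2])

        def fit_linear_model(snapshots):
            """snapshots: (n_states, n_time) array of sequential measurements."""
            Z = np.stack([lift(snapshots[:, k]) for k in range(snapshots.shape[1])], axis=1)
            X, Y = Z[:, :-1], Z[:, 1:]
            Kt, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)   # solves X.T @ K.T = Y.T in least squares
            return Kt.T

        def predict(K, x0, steps):
            z, traj = lift(x0), []
            for _ in range(steps):
                z = K @ z
                traj.append(z)
            return np.stack(traj, axis=1)

        # The eigenvalues and eigenvectors of K play the role of the Koopman spectrum:
        # each identified mode carries a growth rate and a frequency, as discussed above.

    A deep Koopman network or autoencoder, as explored in the work, would replace the fixed lift(.) above with a learned, data-driven mapping into the observable space.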

    Exploring images with deep learning for classification, retrieval and synthesis

    In 2018, the number of mobile phone users was expected to reach about 4.9 billion. Assuming an average of 5 photos taken per day with the built-in cameras, this would result in about 9 trillion photos annually. Thus, it becomes challenging to mine semantic information from such a huge amount of visual data. To address this challenge, deep learning, an important sub-field of machine learning, has achieved impressive developments in recent years. Inspired by its success, this thesis develops new deep learning approaches to explore and analyze image data across three research themes: classification, retrieval and synthesis. In summary, the research in this thesis contributes at three levels: models and algorithms, practical scenarios, and empirical analysis. First, this work presents new deep learning-based approaches that address eight research questions across the three themes. In addition, it adapts these approaches to practical real-world scenarios. Furthermore, this thesis provides numerous experiments and in-depth analyses, which can help motivate further research on the three themes. Computer Vision; Multimedia Applications; Deep Learning; China Scholarship Council (CSC); Computer Systems, Imagery and Media
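    The 9-trillion figure quoted above is a back-of-the-envelope product, which a quick check reproduces:

        # Sanity check of the estimate quoted above
        users = 4.9e9            # mobile phone users in 2018
        photos_per_day = 5
        print(users * photos_per_day * 365)   # about 8.9e12, i.e. roughly 9 trillion photos per year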