
    A neural network approach for the blind deconvolution of turbulent flows

    We present a single-layer feedforward artificial neural network architecture, trained through a supervised learning approach, for the deconvolution of flow variables from their coarse-grained computations, such as those encountered in large eddy simulations. We stress that the deconvolution procedure proposed in this investigation is blind, i.e., the deconvolved field is computed without any pre-existing information about the filtering procedure or kernel. This may be conceptually contrasted with the celebrated approximate deconvolution approaches, where a filter shape is predefined for an iterative deconvolution process. We demonstrate that the proposed blind deconvolution network performs exceptionally well in a priori testing of both two-dimensional Kraichnan and three-dimensional Kolmogorov turbulence, and shows promise in forming the backbone of a physics-augmented data-driven closure for the Navier-Stokes equations.
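As a rough illustration of the blind setting (not the authors' architecture, data, or training scheme), the sketch below trains a single-hidden-layer network to map a small stencil of box-filtered 1-D signal values back to the unfiltered value, using only (filtered, true) pairs and never the kernel itself. The output layer is solved by least squares as an extreme-learning-machine shortcut, and the signal, filter width, and stencil size are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D field with a sharp component that box filtering damages.
n = 512
t = np.arange(n) / n
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 60 * t)
kernel = np.ones(5) / 5.0            # the "unknown" filter: never shown to the net
xf = np.convolve(x, kernel, mode="same")

# Supervised pairs: a 7-point stencil of the filtered field -> true pointwise value.
s = 3
X = np.stack([xf[i - s:i + s + 1] for i in range(s, n - s)])
y = x[s:n - s]

# Single-hidden-layer net: random tanh features with the output layer solved by
# least squares (an extreme-learning-machine shortcut, so no training loop needed).
h = 32
W1 = 0.2 * rng.standard_normal((7, h))
b1 = 0.1 * rng.standard_normal(h)
A = np.tanh(X @ W1 + b1)
A1 = np.column_stack([A, np.ones(len(A))])
W2, *_ = np.linalg.lstsq(A1, y, rcond=None)

mse_net = np.mean((A1 @ W2 - y) ** 2)
mse_filtered = np.mean((xf[s:n - s] - y) ** 2)
assert mse_net < mse_filtered        # blind deconvolution beats the filtered field
```

The point of the sketch is only that pointwise recovery can be learned from paired data without ever inspecting the kernel; a-priori testing on turbulence fields works on the same principle in higher dimensions.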

    Semi-supervised Embedding Learning for High-dimensional Bayesian Optimization

    Bayesian optimization is a broadly applied methodology for optimizing expensive black-box functions. Despite its success, it still faces the challenge of high-dimensional search spaces. To alleviate this problem, we propose a novel Bayesian optimization framework (termed SILBO) that iteratively finds a low-dimensional space in which to perform Bayesian optimization through semi-supervised dimension reduction. SILBO incorporates both labeled points and unlabeled points acquired from the acquisition function to guide the learning of the embedding space. To accelerate the learning procedure, we present a randomized method for generating the projection matrix. Furthermore, to map from the low-dimensional space back to the high-dimensional original space, we propose two mapping strategies, SILBO_FZ and SILBO_FX, chosen according to the evaluation overhead of the objective function. Experimental results on both synthetic functions and hyperparameter optimization tasks demonstrate that SILBO outperforms existing state-of-the-art high-dimensional Bayesian optimization methods.
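The core trick shared by embedding-based methods like this one can be sketched in a few lines: generate a randomized projection matrix, search in the low-dimensional embedding, and map candidate points back to the ambient space for evaluation. Everything below (the objective, the dimensions, the use of random search in place of the GP/acquisition loop) is invented for illustration and is not SILBO itself.

```python
import numpy as np

rng = np.random.default_rng(1)

D, d = 100, 2                        # ambient and embedding dimensions
# Invented objective: only two hidden coordinates of the 100-D input matter.
idx = np.array([7, 42])
def f(x):
    return -np.sum((x[idx] - 0.5) ** 2)

# Randomized projection matrix: Gaussian entries, columns orthonormalised by QR.
# It maps a low-dimensional point z back into the ambient search space.
B = rng.standard_normal((D, d))
Q, _ = np.linalg.qr(B)               # D x d with orthonormal columns

def to_ambient(z):
    return np.clip(Q @ z, 0.0, 1.0)  # embed, then respect the box constraints

# Optimize in the d-dim embedding; plain random search stands in here for the
# Gaussian-process / acquisition-function loop of a real BO implementation.
best = -np.inf
for _ in range(500):
    z = rng.uniform(-2.0, 2.0, size=d)
    best = max(best, f(to_ambient(z)))

assert best > f(to_ambient(np.zeros(d)))   # the embedding search made progress
```

SILBO's contribution is in *learning* the embedding semi-supervisedly rather than fixing it at random, but the map-down/evaluate-up skeleton is the same.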

    Machine Learning Phase Transition: An Iterative Proposal

    We propose an iterative procedure to estimate critical points of statistical models from configurations by combining machine-learning tools. Firstly, phase scenarios and preliminary phase boundaries are obtained by dimensionality-reduction techniques; this step not only provides labelled samples for the subsequent step but is also necessary for applying the method to novel statistical models. Secondly, using these samples as a training set, neural networks are employed to assign labels to the samples between the phase boundaries in an iterative manner. Newly labelled samples are added to the training set used in subsequent training, and the phase boundaries are updated as well. The average of the phase boundaries is expected to converge to the critical temperature. In concrete examples, we implement this proposal to estimate the critical temperatures of two q-state Potts models, with continuous and first-order phase transitions respectively. Linear and manifold dimensionality-reduction techniques are employed in the first step. Both a convolutional neural network and a bidirectional recurrent neural network with long short-term memory units perform well for the two Potts models in the second step. The convergent behavior of the estimates reflects the type of phase transition, and the results indicate that our proposal may be used to explore phase transitions in new statistical models.
    Comment: We focus on the iterative strategy rather than the concrete tools, such as the specific dimension-reduction techniques, CNN, and BLSTM used in this work. Other machine-learning tools with similar functions may be applied to new statistical models with this proposal.
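The iterative strategy itself is simple enough to sketch on a toy system. Below, a noisy 1-D order parameter with a hidden critical temperature replaces real spin configurations, and a midpoint-threshold classifier stands in for the CNN/BLSTM; everything is invented for the demo. Confident labels at the two ends seed the loop, the classifier labels the samples at each current boundary, and the boundaries move inward until the window closes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented 1-D stand-in for configurations: a noisy order parameter per
# temperature, changing sign at a hidden critical temperature Tc = 2.0.
Tc = 2.0
T = np.linspace(1.0, 3.0, 201)
m = np.tanh(3.0 * (Tc - T)) + 0.05 * rng.standard_normal(T.size)

# Step 1: confident labels only at the two ends of the temperature range.
labels = np.full(T.size, -1)         # -1 = unlabelled
labels[T < 1.3] = 1                  # ordered phase
labels[T > 2.7] = 0                  # disordered phase

# Step 2: iteratively fit a classifier on the labelled data (a simple midpoint
# threshold stands in for the CNN/BLSTM) and label samples at both boundaries,
# moving the two phase boundaries inward each iteration.
for _ in range(200):
    unl = np.where(labels == -1)[0]
    if unl.size == 0:
        break
    thr = 0.5 * (m[labels == 1].mean() + m[labels == 0].mean())
    for i in (unl[0], unl[-1]):
        labels[i] = int(m[i] > thr)

# The average of the final phase boundaries estimates Tc.
tc_est = 0.5 * (T[labels == 1].max() + T[labels == 0].min())
assert abs(tc_est - Tc) < 0.2
```

In the paper's setting the "threshold" is a trained network and the samples are Potts configurations, but the label-then-retrain-then-relabel loop has the same shape.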

    Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers

    Deep learning has recently seen rapid development and received significant attention due to its state-of-the-art performance on problems previously thought to be hard. However, because of the internal complexity and nonlinear structure of deep neural networks, the underlying decision-making processes behind this performance are challenging, and sometimes mystifying, to interpret. As deep learning spreads across domains, it is of paramount importance that we equip users of deep learning with tools for understanding when a model works correctly, when it fails, and ultimately how to improve its performance. Standardized toolkits for building neural networks have helped democratize deep learning; visual analytics systems have now been developed to support model explanation, interpretation, debugging, and improvement. We present a survey of the role of visual analytics in deep learning research, which highlights its short yet impactful history and thoroughly summarizes the state-of-the-art using a human-centered interrogative framework, focusing on the Five W's and How (Why, Who, What, How, When, and Where). We conclude by highlighting research directions and open research problems. This survey helps researchers and practitioners in both visual analytics and deep learning to quickly learn key aspects of this young and rapidly growing body of research, whose impact spans a diverse range of domains.
    Comment: Under review for IEEE Transactions on Visualization and Computer Graphics (TVCG).

    Adversarial Examples: Opportunities and Challenges

    Deep neural networks (DNNs) have shown great superiority over humans in image recognition, speech processing, autonomous vehicles, and medical diagnosis. However, recent studies indicate that DNNs are vulnerable to adversarial examples (AEs), which are designed by attackers to fool deep learning models. Unlike real examples, AEs can mislead a model into predicting incorrect outputs while being nearly indistinguishable to the human eye, and therefore threaten security-critical deep-learning applications. In recent years, the generation of and defense against AEs have become a research hotspot in the field of artificial intelligence (AI) security. This article reviews the latest research progress on AEs. First, we introduce the concept, causes, characteristics, and evaluation metrics of AEs, then survey state-of-the-art AE generation methods and discuss their advantages and disadvantages. After that, we review existing defenses and discuss their limitations. Finally, we consider future research opportunities and challenges.
    Comment: 16 pages, 13 figures, 5 tables.

    Augmented Artificial Intelligence: a Conceptual Framework

    All artificial intelligence (AI) systems make errors. These errors are unexpected and often differ from typical human mistakes ("non-human" errors). AI errors should be corrected without damaging existing skills and, ideally, without direct human expertise. This paper presents an initial summary report of a project taking a new and systematic approach to improving the intellectual effectiveness of individual AIs by communities of AIs. We combine ideas from learning in heterogeneous multiagent systems with new and original mathematical approaches for non-iterative correction of errors in legacy AI systems. The mathematical foundations of non-destructive AI correction are presented, and a series of new stochastic separation theorems is proven. These theorems provide a new instrument for the development, analysis, and assessment of machine learning methods and algorithms in high dimension. They demonstrate that in high dimensions, even for exponentially large samples, linear classifiers in their classical Fisher form are powerful enough to separate errors from correct responses with high probability and to provide an efficient solution to the non-destructive corrector problem. In particular, we prove some hypotheses formulated in our paper 'Stochastic Separation Theorems' (Neural Networks, 94, 255--259, 2017), and answer one general problem published by Donoho and Tanner in 2009.
    Comment: The mathematical part is significantly extended. New stochastic separation theorems are proven for log-concave distributions. Some previously formulated hypotheses are confirmed.
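The flavor of the separation phenomenon is easy to see numerically. In the toy experiment below, an i.i.d. Gaussian sample stands in for a system's responses and one extra point marks an observed error; a single linear functional then isolates the error from the entire sample. The dimensions and distribution are chosen only for illustration (the theorems themselves cover broader, e.g. log-concave, settings).

```python
import numpy as np

rng = np.random.default_rng(3)

# A high-dimensional i.i.d. Gaussian sample stands in for the responses of a
# legacy AI system, plus one point marking an observed error.
d, n = 200, 2000
data = rng.standard_normal((n, d))
err = rng.standard_normal(d)

# Fisher-style linear corrector: the functional y -> <err, y>, thresholded at
# half of ||err||^2, isolates the single error from the entire sample with
# high probability in high dimension -- no retraining of the system required.
w = err
threshold = 0.5 * (w @ err)
scores = data @ w

assert w @ err > threshold           # the error lies above the threshold...
assert np.all(scores < threshold)    # ...and every sample point lies below it
```

Here ||err||^2 concentrates near d = 200 while the inner products with independent sample points have standard deviation near sqrt(d) ≈ 14, so even 2000 points stay far below the threshold, which is the non-destructive corrector idea in miniature.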

    Molecular enhanced sampling with autoencoders: On-the-fly collective variable discovery and accelerated free energy landscape exploration

    Macromolecular and biomolecular folding landscapes typically contain high free energy barriers that impede efficient sampling of configurational space by standard molecular dynamics simulation. Biased sampling can artificially drive the simulation along pre-specified collective variables (CVs), but success depends critically on the availability of good CVs associated with the important collective dynamical motions. Nonlinear machine learning techniques can identify such CVs but typically do not furnish an explicit relationship with the atomic coordinates necessary to perform biased sampling. In this work, we employ auto-associative artificial neural networks ("autoencoders") to learn nonlinear CVs that are explicit and differentiable functions of the atomic coordinates. Our approach offers substantial speedups in the exploration of configurational space and is distinguished from existing approaches by its capacity to simultaneously discover and directly accelerate along data-driven CVs. We demonstrate the approach in simulations of alanine dipeptide and Trp-cage, and have developed an open-source and freely available implementation within OpenMM.
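The defining property here, that the CV is an explicit, differentiable function of the coordinates, is what a bottlenecked auto-associative network gives almost for free. The toy sketch below uses invented "configurations" (10-D points near a 1-D circle) and a purely linear encoder/decoder so it fits in a few lines; the paper's networks are nonlinear, and nothing below is their implementation.

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented "configurations": 400 ten-dimensional points that actually live near
# a one-dimensional circle, a stand-in for coordinates with one slow motion.
theta = rng.uniform(0.0, 2.0 * np.pi, 400)
X = np.column_stack(
    [np.cos(theta), np.sin(theta)]
    + [0.05 * rng.standard_normal(400) for _ in range(8)]
)

# Auto-associative network with a 2-unit bottleneck. Encoder and decoder are
# linear here, so the trained CV is z = X @ We: an explicit, differentiable
# function of the coordinates that biased sampling could act on directly.
k = 2
We = 0.1 * rng.standard_normal((10, k))   # encoder weights
Wd = 0.1 * rng.standard_normal((k, 10))   # decoder weights
lr = 0.01
mse0 = np.mean((X @ We @ Wd - X) ** 2)
for _ in range(3000):
    Z = X @ We                            # collective variables
    R = Z @ Wd - X                        # reconstruction residual
    We -= lr * X.T @ (R @ Wd.T) / len(X)  # gradient steps on the MSE loss
    Wd -= lr * Z.T @ R / len(X)

mse1 = np.mean((X @ We @ Wd - X) ** 2)
assert mse1 < mse0                        # the bottleneck captures the slow motion
```

Because z and its gradient with respect to the inputs are in closed form, a biasing potential applied to z can be differentiated back to forces on the coordinates, which is the capability the abstract emphasizes.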

    Variational training of neural network approximations of solution maps for physical models

    A novel solve-training framework is proposed to train neural networks to represent low-dimensional solution maps of physical models. The solve-training framework uses the neural network as the ansatz of the solution map and trains the network variationally via loss functions derived from the underlying physical models. It thereby avoids the expensive data preparation of the traditional supervised training procedure, which must prepare labels for the input data, yet still achieves an effective representation of the solution map adapted to the input data distribution. The efficiency of the solve-training framework is demonstrated by obtaining solution maps for linear and nonlinear elliptic equations, and maps from potentials to ground states of linear and nonlinear Schrödinger equations.
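The label-free principle can be demonstrated on the smallest possible "physical model": a linear system A u = f. Below, a linear map u(f) = W f stands in for the network, and the loss is the model residual ||A W f - f||^2 over sampled right-hand sides, so no solutions u = A^{-1} f are ever precomputed. The matrix, sizes, and learning rate are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear "physical model": A u = f, with A well-conditioned and SPD.
m = 5
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)

# Ansatz for the solution map: u(f) = W f, a linear stand-in for the network.
# Solve-training: minimise ||A W f - f||^2 over sampled right-hand sides f --
# no labels u = A^{-1} f are ever prepared.
W = np.zeros((m, m))
lr = 1e-3
for _ in range(5000):
    F = rng.standard_normal((m, 32))      # a batch of right-hand sides
    R = A @ W @ F - F                     # residual of the physical model
    W -= lr * (A.T @ R @ F.T) / 32        # gradient of 0.5*||R||_F^2 w.r.t. W

# The trained map should approximate the true solution operator A^{-1}.
inv = np.linalg.inv(A)
rel_err = np.linalg.norm(W - inv) / np.linalg.norm(inv)
assert rel_err < 0.05
```

Note that the learned W adapts to the distribution of sampled f; with an identity-covariance distribution it recovers A^{-1} itself, which mirrors the abstract's point about adapting the solution map to the input data distribution.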

    Low dose CT reconstruction assisted by an image manifold prior

    X-ray computed tomography (CT) is an important tool in medical imaging for obtaining a direct visualization of patient anatomy. However, x-ray radiation exposure raises concerns about lifetime cancer risk. Low-dose CT scans reduce the radiation exposure to the patient, but image quality is usually degraded by noise and artifacts. Numerous studies have been conducted on regularizing CT images for better image quality, yet exploring the underlying manifold on which real CT images reside remains an open problem. In this paper, we propose a fully data-driven manifold learning approach that incorporates emerging deep-learning technology. An encoder-decoder convolutional neural network is established to map a CT image to an inherent low-dimensional manifold and to restore the CT image from its corresponding manifold representation. A novel reconstruction algorithm assisted by the learnt manifold prior is developed to achieve high-quality low-dose CT reconstruction. To demonstrate the effectiveness of the proposed framework, network training, testing, and a comprehensive simulation study were performed using patient abdomen CT images. The trained encoder-decoder CNN is capable of restoring high-quality CT images with an average error of ~20 HU. Furthermore, the proposed manifold-prior-assisted reconstruction scheme achieves high-quality low-dose CT reconstruction, with an average reconstruction error of < 30 HU, more than five times and two times lower than that of the filtered back-projection method and the total-variation-based iterative reconstruction method, respectively.

    Solving the Quantum Many-Body Problem with Artificial Neural Networks

    The challenge posed by the many-body problem in quantum physics originates from the difficulty of describing the non-trivial correlations encoded in the exponential complexity of the many-body wave function. Here we demonstrate that systematic machine learning of the wave function can reduce this complexity to a tractable computational form for some notable cases of physical interest. We introduce a variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons. A reinforcement-learning scheme is then demonstrated, capable of either finding the ground state or describing the unitary time evolution of complex interacting quantum systems. We show that this approach achieves very high accuracy in the description of equilibrium and dynamical properties of prototypical interacting spin models in both one and two dimensions, thus offering a new powerful tool to solve the quantum many-body problem.
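The variational representation in question is a restricted-Boltzmann-machine form of the wave function. The sketch below writes it down for a tiny 6-spin system with small random real parameters, enumerates all configurations, and evaluates a diagonal observable exactly; the actual method uses complex weights, Monte Carlo sampling instead of enumeration, and a variational/reinforcement-learning loop to tune the parameters, all omitted here.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)

# Restricted-Boltzmann-machine ansatz for an N-spin wave function:
#   psi(s) = exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ji s_i)
# Small random real parameters stand in for trained (complex) ones.
N, M = 6, 12                      # visible spins and hidden units
a = 0.1 * rng.standard_normal(N)
b = 0.1 * rng.standard_normal(M)
W = 0.1 * rng.standard_normal((M, N))

def psi(s):
    s = np.asarray(s, dtype=float)
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(b + W @ s))

# Enumerate all 2^N configurations (possible only for tiny N) and normalise.
configs = np.array(list(product([-1.0, 1.0], repeat=N)))
amps = np.array([psi(s) for s in configs])
probs = amps**2 / np.sum(amps**2)

# Expectation of a diagonal observable, e.g. the Ising coupling sum_i s_i s_{i+1}.
E_zz = np.sum(probs * np.sum(configs * np.roll(configs, -1, axis=1), axis=1))
assert np.isclose(probs.sum(), 1.0)
```

The key feature is that the hidden units are traced out analytically (the cosh product), so the ansatz stores at most N + M + N*M parameters while the raw wave function would need 2^N amplitudes.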