
    Deep Divergence-Based Approach to Clustering

    A promising direction in deep learning research is to learn representations and simultaneously discover cluster structure in unlabeled data by optimizing a discriminative loss function. In contrast to supervised deep learning, this line of research is in its infancy, and how to design and optimize suitable loss functions to train deep neural networks for clustering is still an open question. Our contribution to this emerging field is a new deep clustering network that leverages the discriminative power of information-theoretic divergence measures, which have been shown to be effective in traditional clustering. We propose a novel loss function that incorporates geometric regularization constraints, thus avoiding degenerate structures in the resulting clustering partition. Experiments on synthetic benchmarks and real datasets show that the proposed network achieves competitive performance with respect to other state-of-the-art methods, scales well to large datasets, and does not require pre-training steps.
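
    A minimal sketch of the kind of objective described above is given below. It assumes a softmax cluster-assignment head, a Cauchy-Schwarz-style divergence between soft cluster assignments measured under a Gaussian kernel over the hidden features, and a simple geometric regularizer that discourages collapsed partitions; the network sizes, kernel bandwidth, regularizer, and weighting are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a deep clustering head trained with a
# divergence-based loss between soft cluster assignments, plus a simple geometric
# regularizer that discourages degenerate (collapsed) partitions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(x, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix over hidden representations.
    d2 = torch.cdist(x, x).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def cs_divergence_loss(assign, kernel, eps=1e-9):
    # Cauchy-Schwarz-style divergence between pairs of cluster soft-assignment
    # vectors, weighted by the kernel similarity of the hidden features.
    # Lower values correspond to clusters that are well separated in feature space.
    k = assign.shape[1]
    num = assign.t() @ kernel @ assign            # (k, k) between-cluster affinities
    diag = torch.sqrt(torch.diag(num) + eps)
    cs = num / (diag[:, None] * diag[None, :] + eps)
    off_diag = cs - torch.diag(torch.diag(cs))
    return off_diag.sum() / (k * (k - 1))

def geometric_regularizer(assign):
    # Penalize degenerate partitions (all points in one cluster) by keeping the
    # batch-averaged assignment close to uniform.
    mean_assign = assign.mean(dim=0)
    uniform = torch.full_like(mean_assign, 1.0 / assign.shape[1])
    return F.mse_loss(mean_assign, uniform)

class DeepClusteringNet(nn.Module):
    def __init__(self, in_dim, hidden_dim=128, n_clusters=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.cluster_head = nn.Linear(hidden_dim, n_clusters)

    def forward(self, x):
        h = self.encoder(x)
        assign = F.softmax(self.cluster_head(h), dim=1)  # soft cluster assignments
        return h, assign

# One illustrative training step on a random unlabeled mini-batch.
if __name__ == "__main__":
    model = DeepClusteringNet(in_dim=784, n_clusters=10)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(64, 784)
    opt.zero_grad()
    h, assign = model(x)
    loss = cs_divergence_loss(assign, gaussian_kernel(h)) + 0.1 * geometric_regularizer(assign)
    loss.backward()
    opt.step()
```

    The point of the sketch is only to show how a divergence-based clustering loss can be optimized end-to-end with the representation, without labels or a pre-training stage.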

    Sub-grid modelling for two-dimensional turbulence using neural networks

    In this investigation, a data-driven turbulence closure framework is introduced and deployed for the sub-grid modelling of Kraichnan turbulence. The novelty of the proposed method lies in the fact that snapshots of high-fidelity numerical data are used to inform artificial neural networks that predict the turbulence source term from localized grid-resolved information. In particular, the proposed methodology establishes a map between inputs given by stencils of the vorticity and the streamfunction, together with information from two well-known eddy-viscosity kernels. Through this, we predict the sub-grid vorticity forcing in a temporally and spatially dynamic fashion. Our study is both a-priori and a-posteriori in nature. In the former, we present an extensive hyper-parameter optimization analysis in addition to quantifying learning through probability-density-function-based validation of the sub-grid predictions. In the latter, we analyse the performance of our framework for flow evolution in a classical decaying two-dimensional turbulence test case in the presence of errors related to temporal and spatial discretization. Statistical assessments in the form of angle-averaged kinetic energy spectra demonstrate the promise of the proposed methodology for sub-grid quantity inference. In addition, it is observed that some measure of a-posteriori error must be considered during optimal model selection for greater accuracy. The results in this article thus represent a promising development in the formalization of a framework for the generation of heuristic-free turbulence closures from data.
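
    To make the input-output mapping concrete, the sketch below wires a small feed-forward network from local stencils of grid-resolved vorticity and streamfunction, plus two eddy-viscosity kernel values, to a pointwise sub-grid vorticity forcing, trained against filtered high-fidelity targets in an a-priori fashion. The stencil size, layer widths, and all variable names are assumptions for illustration rather than the authors' configuration.

```python
# Illustrative sketch (not the authors' code): a feed-forward network that maps a
# local stencil of grid-resolved vorticity and streamfunction, plus two
# eddy-viscosity-style kernel values, to a pointwise sub-grid vorticity forcing.
# Stencil size, layer widths, and the kernel inputs are assumptions for illustration.
import torch
import torch.nn as nn

STENCIL = 3  # 3x3 neighbourhood of grid-resolved values around the target point

class SubGridClosureNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Inputs: 9 vorticity values + 9 streamfunction values + 2 eddy-viscosity
        # kernel values (e.g. Smagorinsky- and Leith-type) at the target point.
        in_dim = 2 * STENCIL * STENCIL + 2
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),     # predicted sub-grid vorticity source term
        )

    def forward(self, vort_stencil, psi_stencil, ev_kernels):
        x = torch.cat(
            [vort_stencil.flatten(1), psi_stencil.flatten(1), ev_kernels], dim=1
        )
        return self.net(x)

# A-priori style training on snapshots: targets would be the sub-grid forcing
# computed by filtering high-fidelity data; random stand-ins are used here.
if __name__ == "__main__":
    model = SubGridClosureNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    vort = torch.randn(256, STENCIL, STENCIL)   # vorticity stencils
    psi = torch.randn(256, STENCIL, STENCIL)    # streamfunction stencils
    ev = torch.randn(256, 2)                    # two eddy-viscosity kernel values
    target = torch.randn(256, 1)                # filtered sub-grid forcing (stand-in)

    opt.zero_grad()
    pred = model(vort, psi, ev)
    loss = loss_fn(pred, target)
    loss.backward()
    opt.step()
```

    In an a-posteriori setting, a trained network of this kind would be queried at every grid point and time step of the coarse solver to supply the sub-grid forcing dynamically.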