
    TopologyNet: Topology based deep convolutional neural networks for biomolecular property predictions

    Although deep learning approaches have had tremendous success in image, video and audio processing, computer vision, and speech recognition, their applications to three-dimensional (3D) biomolecular structural data sets have been hindered by entangled geometric and biological complexity. We introduce element-specific persistent homology (ESPH) to untangle these two sources of complexity. ESPH represents complex 3D geometry by one-dimensional (1D) topological invariants and retains crucial biological information via a multichannel image representation. It is able to reveal hidden structure-function relationships in biomolecules. We further integrate ESPH and convolutional neural networks to construct a multichannel topological neural network (TopologyNet) for predicting protein-ligand binding affinities and protein stability changes upon mutation. To overcome the limitations that small and noisy training sets impose on deep learning, we present a multitask topological convolutional neural network (MT-TCNN). We demonstrate that the present TopologyNet architectures outperform other state-of-the-art methods in predicting protein-ligand binding affinities, globular protein mutation impacts, and membrane protein mutation impacts. Comment: 20 pages, 8 figures, 5 tables.
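    The ESPH idea of turning 3D geometry into stacked 1D topological invariants can be pictured with a small sketch. The NumPy snippet below (not the authors' code; the barcodes and channel names are invented toy data) converts per-channel persistence barcodes into Betti curves and stacks them into a multichannel 1D "image" of the kind a 1D CNN would consume.

```python
import numpy as np

def betti_curve(bars, grid):
    """Count how many (birth, death) intervals are alive at each filtration
    value: a simple 1D topological invariant of the barcode."""
    return np.array([sum(b <= t < d for b, d in bars) for t in grid])

# Hypothetical barcodes for two element-specific channels (e.g. C-C, C-N pairs).
channels = {
    "C-C": [(0.0, 1.5), (0.2, 0.9)],
    "C-N": [(0.1, 0.4)],
}
grid = np.linspace(0.0, 2.0, 8)

# Stack the per-channel curves into a (channels, length) multichannel 1D image.
image = np.stack([betti_curve(bars, grid) for bars in channels.values()])
print(image.shape)  # (2, 8)
```

In the paper this multichannel representation is the CNN input; here it only illustrates how 3D structure collapses to fixed-size 1D channels.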

    DIMAL: Deep Isometric Manifold Learning Using Sparse Geodesic Sampling

    This paper explores a fully unsupervised deep learning approach for computing distance-preserving maps that generate low-dimensional embeddings for a certain class of manifolds. We use a Siamese configuration to train a neural network to solve the least-squares multidimensional scaling problem, generating maps that approximately preserve geodesic distances. By training with only a few landmarks, we show significantly improved local and nonlocal generalization of the isometric mapping compared to analogous non-parametric counterparts. Importantly, the combination of a deep-learning framework with a multidimensional scaling objective enables a numerical analysis of network architectures to aid in understanding their representational power. This provides a geometric perspective on the generalizability of deep learning. Comment: 10 pages, 11 figures.
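    The training objective described here reduces to a least-squares MDS stress over sampled landmark pairs. A minimal NumPy sketch of that loss follows (toy 3-4-5 triangle data, not the paper's network; in DIMAL the embedding would come from the Siamese network rather than being given directly):

```python
import numpy as np

def stress_loss(embed, pairs, geo):
    """Least-squares MDS stress: squared mismatch between embedded distances
    and target geodesic distances over the sampled landmark pairs."""
    return sum(
        (np.linalg.norm(embed[i] - embed[j]) - d) ** 2
        for (i, j), d in zip(pairs, geo)
    )

# Toy landmarks: a 3-4-5 triangle already isometrically embedded, so stress ~ 0.
embed = np.array([[0.0, 0.0], [3.0, 0.0], [3.0, 4.0]])
pairs = [(0, 1), (1, 2), (0, 2)]
geo = [3.0, 4.0, 5.0]
print(stress_loss(embed, pairs, geo))
```

Minimizing this stress with respect to network weights, using only a sparse set of geodesic distances, is what drives the distance-preserving embedding.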

    LDMNet: Low Dimensional Manifold Regularized Neural Networks

    Deep neural networks have proved very successful on archetypal tasks for which large training sets are available, but when the training data are scarce, their performance suffers from overfitting. Many existing methods of reducing overfitting are data-independent, and their efficacy is often limited when the training set is very small. Data-dependent regularizations are mostly motivated by the observation that data of interest lie close to a manifold, which is typically hard to parametrize explicitly and often requires human input of tangent vectors. These methods typically focus only on the geometry of the input data, and do not necessarily encourage the networks to produce geometrically meaningful features. To resolve this, we propose a new framework, the Low-Dimensional-Manifold-regularized neural Network (LDMNet), which incorporates a feature regularization method that accounts for the geometry of both the input data and the output features. In LDMNet, we regularize the network by encouraging the combination of the input data and the output features to sample a collection of low-dimensional manifolds, which are searched efficiently without explicit parametrization. To achieve this, we directly use the manifold dimension as a regularization term in a variational functional. The resulting Euler-Lagrange equation is a Laplace-Beltrami equation over a point cloud, which is solved by the point integral method without increasing the computational complexity. We demonstrate two benefits of LDMNet in our experiments. First, we show that LDMNet significantly outperforms widely used network regularizers such as weight decay and Dropout. Second, we show that LDMNet can be designed to extract common features of an object imaged via different modalities, which proves very useful in real-world applications such as cross-spectral face recognition.
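    To make the "manifold dimension as a regularizer" idea concrete: the paper minimizes dimension through a variational functional solved by the point integral method, but a loose proxy for the quantity being penalized is the average local PCA dimension of the (input, feature) point cloud. The NumPy sketch below is that proxy only, with an invented threshold parameter, and is not the LDMNet algorithm:

```python
import numpy as np

def local_dim_penalty(points, k=5, var_thresh=0.95):
    """Rough proxy penalty: average local dimension of a point cloud,
    estimated as the number of principal components needed to explain
    var_thresh of the variance in each point's k-nearest-neighbour patch."""
    dims = []
    for p in points:
        dist = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dist)[:k]]            # k nearest neighbours
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)     # local covariance
        ev = np.sort(np.linalg.eigvalsh(cov))[::-1]    # descending eigenvalues
        ratio = np.cumsum(ev) / ev.sum()               # explained variance
        dims.append(int(np.searchsorted(ratio, var_thresh)) + 1)
    return float(np.mean(dims))

# Points on a straight line in 3-D: the local dimension should come out as 1.
line_pts = np.array([[float(t), 0.0, 0.0] for t in range(10)])
print(local_dim_penalty(line_pts))  # 1.0
```

A low value of such a penalty corresponds to the data-plus-features cloud concentrating on low-dimensional manifolds, which is the geometric behaviour LDMNet rewards.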

    A Novel Design Approach to X-Band Minkowski Reflectarray Antennas using the Full-Wave EM Simulation-based Complete Neural Model with a Hybrid GA-NM Algorithm

    In this work, a novel multi-objective design optimization procedure is presented for Minkowski reflectarray (RA) antennas, using a complete 3-D CST Microwave Studio (MWS)-based multilayer perceptron neural network (MLP NN) model that includes the substrate dielectric constant εr, together with a hybrid genetic algorithm (GA) and Nelder-Mead (NM) algorithm. The MLP NN provides an accurate and fast model that expresses the reflection phase of a unit Minkowski RA element as a continuous function over the input domain, covering substrates with 1 ≤ εr ≤ 6 and 0.5 mm ≤ h ≤ 3 mm in the frequency range 8 GHz ≤ f ≤ 12 GHz. This design procedure enables a designer to obtain not only the optimum Minkowski RA design throughout the X-band, but also the optimum Minkowski RAs on the selected substrates. Moreover, the design of a fully optimized X-band 15×15 Minkowski RA antenna is given as a worked example, together with a tolerance analysis, and its performance is compared with those of optimized RAs on selected traditional substrates. Finally, it may be concluded that the presented robust and systematic multi-objective design procedure can conveniently be applied to microstrip reflectarray antennas constructed from advanced patch elements.
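    The hybrid GA-NM pattern described above is a standard two-stage optimization: a genetic algorithm explores the design space globally over a fast surrogate model, and a local method then polishes the best candidate. The sketch below is only an illustration of that pattern: the quadratic `surrogate_error` stands in for the paper's MLP NN reflection-phase model, and a simple stochastic hill-climb stands in for Nelder-Mead; all names and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_error(x):
    """Toy stand-in for the MLP-NN surrogate's design error; minimum at (2, 1)."""
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def ga_then_local(f, bounds, pop=30, gens=40, local_steps=200):
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):                       # stage 1: GA global search
        fit = np.array([f(x) for x in P])
        parents = P[np.argsort(fit)[: pop // 2]]          # elitist selection
        children = parents + rng.normal(0.0, 0.1, parents.shape)  # mutation
        P = np.vstack([parents, np.clip(children, lo, hi)])
    best = min(P, key=f)
    for _ in range(local_steps):                # stage 2: local polish
        cand = best + rng.normal(0.0, 0.01, best.shape)   # (Nelder-Mead stand-in)
        if f(cand) < f(best):
            best = cand
    return best

x = ga_then_local(surrogate_error, [(0.0, 6.0), (0.0, 6.0)])
print(x)  # close to the surrogate's optimum at (2, 1)
```

In the actual procedure the surrogate is the trained MLP NN and the local stage is Nelder-Mead; the two-stage structure, global exploration followed by local refinement, is the same.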