
    Neural networks and support vector machines based bio-activity classification

    Classification of compounds into their respective biological activity classes is important in drug discovery, particularly for early-phase virtual compound filtering and screening. In this work, two types of neural networks, the multilayer perceptron (MLP) and the radial basis function (RBF) network, together with support vector machines (SVM), were employed to classify three types of biologically active enzyme inhibitors. Both networks were trained with the backpropagation learning method on chemical compounds whose inhibition activities were previously known. A group of topological indices, selected with the help of principal component analysis (PCA), was used as descriptors. The results of all three classification methods show that both neural networks outperform the SVM.
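    As a rough illustration of the kind of pipeline this abstract describes, here is a minimal scikit-learn sketch, not the paper's implementation: PCA-reduced descriptors are fed to an MLP and an SVM and compared by cross-validation. The data, descriptor counts, and model settings below are placeholders.

```python
# Sketch only: random placeholder data standing in for topological descriptors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))    # 300 compounds x 50 descriptors (made up)
y = rng.integers(0, 3, size=300)  # three inhibitor classes (made up)

for name, clf in [
    ("MLP", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
    ("SVM", SVC(kernel="rbf")),
]:
    # Standardise, reduce the descriptors with PCA, then classify.
    model = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```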

    Deep Reinforcement Learning for Swarm Systems

    Recently, deep reinforcement learning (RL) methods have been applied successfully to multi-agent scenarios. Typically, these methods rely on a concatenation of agent states to represent the information content required for decentralized decision making. However, concatenation scales poorly to swarm systems with a large number of homogeneous agents, as it does not exploit two fundamental properties inherent to these systems: (i) the agents in the swarm are interchangeable, and (ii) the exact number of agents in the swarm is irrelevant. Therefore, we propose a new state representation for deep multi-agent RL based on mean embeddings of distributions. We treat the agents as samples of a distribution and use the empirical mean embedding as input for a decentralized policy. We define different feature spaces for the mean embedding using histograms, radial basis functions, and a neural network learned end to end. We evaluate the representation on two well-known problems from the swarm literature (rendezvous and pursuit evasion), in both globally and locally observable setups. For the local setup, we furthermore introduce simple communication protocols. Of all the approaches, the mean embedding representation using neural network features enables the richest information exchange between neighboring agents, facilitating the development of more complex collective strategies.
    Comment: 31 pages, 12 figures, version 3 (published in JMLR Volume 20).
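    The core idea of the mean-embedding representation lends itself to a short sketch. The following is an assumed, simplified version, not the authors' architecture: each neighbor state is mapped through a fixed RBF feature function and the features are averaged, so the embedding has a fixed dimension independent of agent ordering and of the number of agents.

```python
import numpy as np

def rbf_features(x, centers, bandwidth=1.0):
    """phi(x): RBF features of a single agent state against fixed centers."""
    sq_dists = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mean_embedding(neighbor_states, centers):
    """Empirical mean embedding: average of the per-agent feature vectors."""
    feats = np.array([rbf_features(s, centers) for s in neighbor_states])
    return feats.mean(axis=0)

centers = np.random.default_rng(0).normal(size=(16, 2))  # fixed RBF centers
small_swarm = np.random.default_rng(1).normal(size=(5, 2))
large_swarm = np.random.default_rng(2).normal(size=(50, 2))

# Both embeddings are 16-dimensional, regardless of swarm size or ordering.
print(mean_embedding(small_swarm, centers).shape)
print(mean_embedding(large_swarm, centers).shape)
```

    Permutation invariance follows because averaging is symmetric in its arguments; the paper additionally learns the feature map end to end as a neural network rather than fixing it as above.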

    Forecasting the geomagnetic activity of the Dst Index using radial basis function networks

    The Dst index is a key parameter that characterises the disturbance of the geomagnetic field during magnetic storms. Modelling the Dst index is thus very important for the analysis of the geomagnetic field. A data-based modelling approach, aimed at obtaining efficient models from limited input-output observational data, provides a powerful tool for analysing and forecasting geomagnetic activity, including prediction of the Dst index. Radial basis function (RBF) networks are an important and popular network model for nonlinear system identification and dynamical modelling. A novel generalised multiscale RBF (MSRBF) network is introduced for Dst index modelling. The proposed MSRBF network can easily be converted into a linear-in-the-parameters form, and the training of the linear network model can then be implemented using an orthogonal least squares (OLS) type algorithm. One advantage of the new MSRBF network, compared with traditional single-scale RBF networks, is that it is more flexible for describing complex nonlinear dynamical systems.
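    The linear-in-the-parameters property is easy to demonstrate. Below is a minimal sketch, not the paper's MSRBF/OLS algorithm: with fixed centres at several widths (scales), the output weights solve an ordinary linear least-squares problem; the paper instead uses an orthogonal least squares type algorithm that also selects terms. The time series here is synthetic, not Dst data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
y = np.sin(t) + 0.1 * rng.normal(size=t.size)  # synthetic stand-in series

centres = np.linspace(0.0, 10.0, 15)
widths = np.array([0.5, 1.0, 2.0])  # several scales, in the multiscale spirit

# Design matrix: one column per (width, centre) Gaussian basis function.
Phi = np.column_stack([
    np.exp(-(t - c) ** 2 / (2.0 * w ** 2)) for w in widths for c in centres
])

# Linear in the parameters: the output weights come from linear least squares.
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ weights
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```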

    A representer theorem for deep kernel learning

    In this paper we provide a finite-sample and an infinite-sample representer theorem for the concatenation of (linear combinations of) kernel functions of reproducing kernel Hilbert spaces. These results serve as a mathematical foundation for the analysis of machine learning algorithms based on compositions of functions. As a direct consequence in the finite-sample case, the corresponding infinite-dimensional minimization problems can be recast as (nonlinear) finite-dimensional minimization problems, which can be tackled with nonlinear optimization algorithms. Moreover, we show how concatenated machine learning problems can be reformulated as neural networks and how our representer theorem applies to a broad class of state-of-the-art deep learning methods.
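    For orientation, the classical single-layer representer theorem, which this paper generalises to concatenations of kernels, can be stated as follows (generic notation, not the paper's):

```latex
\min_{f \in \mathcal{H}_k} \sum_{i=1}^{n} L\bigl(y_i, f(x_i)\bigr)
  + \lambda \lVert f \rVert_{\mathcal{H}_k}^{2}
\quad\Longrightarrow\quad
f^{*}(x) = \sum_{i=1}^{n} \alpha_i \, k(x, x_i)
```

    The infinite-dimensional search over the reproducing kernel Hilbert space \(\mathcal{H}_k\) collapses to finding \(n\) coefficients \(\alpha_i\); the paper's finite-sample result establishes an analogous reduction when such kernel expansions are composed in layers.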