
    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest of researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: optimization of the weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article summarizes a broad spectrum of FNN optimization methodologies, covering both conventional and metaheuristic approaches. It also connects the various research directions that emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, and quantum NNs. Additionally, it identifies interesting research challenges for future work to cope with the present information-processing era.
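    The core idea of metaheuristic FNN training is to treat the weight vector as a candidate solution and improve it by mutation and selection rather than gradients. The sketch below uses a simple (1+λ) evolution strategy on a tiny tanh network and toy XOR data; the network size, task, and mutation scale are illustrative assumptions, not the method of any particular paper.

```python
# Minimal sketch: optimizing the weights of a tiny feedforward network with a
# (1 + lambda) evolution strategy instead of gradient descent.
# All choices (2-4-1 network, XOR task, mutation scale) are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR data set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_HIDDEN = 4
N_PARAMS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # W1, b1, W2, b2 = 17

def forward(params, x):
    """Evaluate the 2-4-1 network with tanh hidden units."""
    W1 = params[:8].reshape(2, 4)
    b1 = params[8:12]
    W2 = params[12:16]
    b2 = params[16]
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def loss(params):
    return float(np.mean((forward(params, X) - y) ** 2))

# (1 + lambda) evolution strategy: mutate the incumbent, keep the best.
best = rng.normal(scale=0.5, size=N_PARAMS)
best_loss = loss(best)
for generation in range(2000):
    candidates = best + rng.normal(scale=0.2, size=(20, N_PARAMS))
    losses = [loss(c) for c in candidates]
    i = int(np.argmin(losses))
    if losses[i] < best_loss:
        best, best_loss = candidates[i], losses[i]

print(f"final MSE: {best_loss:.4f}")
```

    Because selection only needs the loss value, the same loop works for non-differentiable architectures or activation functions, which is one reason these methods remain attractive despite their higher evaluation cost.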

    Life-Space Foam: a Medium for Motivational and Cognitive Dynamics

    General stochastic dynamics, developed in the framework of Feynman path integrals, is applied to Lewinian field-theoretic psychodynamics, resulting in a new concept of life-space foam (LSF) as a natural medium for motivational and cognitive psychodynamics. In the LSF formalism, the classic Lewinian life space can be macroscopically represented as a smooth manifold with steady force fields and behavioral paths, while at the microscopic level it is more realistically represented as a collection of wildly fluctuating force fields, (loco)motion paths, and local geometries (and topologies with holes). A set of least-action principles is used to model the smoothness of global, macro-level LSF paths, fields, and geometry. To model the corresponding local, micro-level LSF structures, an adaptive path integral is used, defining a multi-phase, multi-path (multi-field and multi-geometry) transition process from intention to goal-driven action. Application examples of this new approach include (but are not limited to) information processing, motivational fatigue, learning, memory, and decision-making.
    Comment: 25 pages, 2 figures

    TopologyNet: Topology based deep convolutional neural networks for biomolecular property predictions

    Although deep learning approaches have had tremendous success in image, video and audio processing, computer vision, and speech recognition, their applications to three-dimensional (3D) biomolecular structural data sets have been hindered by entangled geometric and biological complexity. We introduce topology, i.e., element-specific persistent homology (ESPH), to untangle geometric complexity from biological complexity. ESPH represents 3D complex geometry by one-dimensional (1D) topological invariants and retains crucial biological information via a multichannel image representation. It is able to reveal hidden structure-function relationships in biomolecules. We further integrate ESPH and convolutional neural networks to construct a multichannel topological neural network (TopologyNet) for the prediction of protein-ligand binding affinities and protein stability changes upon mutation. To overcome the limitations of deep learning arising from small and noisy training sets, we present a multitask topological convolutional neural network (MT-TCNN). We demonstrate that the present TopologyNet architectures outperform other state-of-the-art methods in the prediction of protein-ligand binding affinities, globular protein mutation impacts, and membrane protein mutation impacts.
    Comment: 20 pages, 8 figures, 5 tables
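    The multitask idea the abstract appeals to is that related prediction tasks can share a common learned representation, so scarce or noisy data for one task is regularized by the other. The sketch below illustrates this in its simplest form: two regression tasks trained jointly through a shared linear "trunk" with one head per task. The toy data, linear trunk, and training loop are illustrative assumptions, not the MT-TCNN architecture itself.

```python
# Minimal sketch of multitask learning with a shared representation:
# two regression tasks share a trunk W and each has its own head.
# All data and scales here are synthetic, illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Two toy regression tasks that share an underlying 3-dim latent structure.
n, d, k = 200, 10, 3
U = rng.normal(size=(d, k))                 # shared ground-truth subspace
X = rng.normal(size=(n, d))
Z = X @ U
y1 = Z @ rng.normal(scale=0.3, size=k) + 0.1 * rng.normal(size=n)
y2 = Z @ rng.normal(scale=0.3, size=k) + 0.1 * rng.normal(size=n)

# Shared trunk W (d x k) plus one linear head per task.
W = rng.normal(scale=0.1, size=(d, k))
h1 = rng.normal(scale=0.1, size=k)
h2 = rng.normal(scale=0.1, size=k)

lr = 0.01
for step in range(2000):
    F = X @ W                               # shared features for both tasks
    r1 = F @ h1 - y1                        # task-1 residual
    r2 = F @ h2 - y2                        # task-2 residual
    # Gradients of the summed mean-squared-error losses (constant factors
    # absorbed into the learning rate).
    gW = (X.T @ np.outer(r1, h1) + X.T @ np.outer(r2, h2)) / n
    W -= lr * gW
    h1 -= lr * (F.T @ r1 / n)
    h2 -= lr * (F.T @ r2 / n)

mse1 = float(np.mean((X @ W @ h1 - y1) ** 2))
mse2 = float(np.mean((X @ W @ h2 - y2) ** 2))
print(f"task-1 MSE {mse1:.3f}, task-2 MSE {mse2:.3f}")
```

    The key point is that the gradient on the trunk `W` accumulates signal from both tasks, which is what lets a small, noisy task borrow statistical strength from its sibling.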

    Importance sampling for option pricing with feedforward neural networks

    We study the problem of reducing the variance of Monte Carlo estimators by performing suitable changes of the sampling measure induced by feedforward neural networks. To this end, building on the concept of vector stochastic integration, we characterize the Cameron-Martin spaces of a large class of Gaussian measures induced by vector-valued continuous local martingales with deterministic covariation. We prove that feedforward neural networks enjoy, up to an isometry, the universal approximation property in these topological spaces. We then prove that sampling measures generated by feedforward neural networks can approximate the optimal sampling measure arbitrarily well. We conclude with a comprehensive numerical study pricing path-dependent European options for asset price models that incorporate factors such as changing business activity, knock-out barriers, dynamic correlations, and high-dimensional baskets.
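    The underlying mechanism is classical importance sampling: simulate under a shifted measure that puts more mass where the payoff is nonzero, then reweight each sample by the likelihood ratio so the estimator stays unbiased. The sketch below prices a deep out-of-the-money Black-Scholes call with a constant drift shift standing in for the neural-network-induced measure change; all parameter values are illustrative assumptions.

```python
# Minimal sketch of importance sampling for Monte Carlo option pricing:
# sample terminal log-returns under a shifted normal so more paths land
# in the money, and reweight with the Gaussian likelihood ratio.
# A constant drift shift stands in for the NN-induced change of measure.
import numpy as np

rng = np.random.default_rng(42)

S0, K, r, sigma, T = 100.0, 150.0, 0.02, 0.2, 1.0   # deep OTM call
n = 100_000

def payoff(z):
    """Discounted call payoff for a standard normal draw z."""
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

# Plain Monte Carlo under the risk-neutral measure.
z = rng.standard_normal(n)
plain = payoff(z)

# Importance sampling: draw z ~ N(theta, 1) and reweight by the ratio
# dN(0,1)/dN(theta,1)(z) = exp(-theta*z + theta^2/2). Here theta is chosen
# so the shifted mean sits at the strike's log-moneyness.
theta = (np.log(K / S0) - (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
z_shift = rng.standard_normal(n) + theta
weights = np.exp(-theta * z_shift + 0.5 * theta**2)
shifted = payoff(z_shift) * weights

print(f"plain   : {plain.mean():.4f} +/- {plain.std(ddof=1)/np.sqrt(n):.4f}")
print(f"shifted : {shifted.mean():.4f} +/- {shifted.std(ddof=1)/np.sqrt(n):.4f}")
```

    Both estimators target the same price, but the shifted one has a much smaller standard error because far fewer samples contribute a zero payoff. Replacing the constant `theta` with a state-dependent drift (e.g. one parameterized by a feedforward network) is the natural generalization for the path-dependent payoffs the abstract describes.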