
    Generation of Explicit Knowledge from Empirical Data through Pruning of Trainable Neural Networks

    This paper presents a generalized technology for the extraction of explicit knowledge from data. The main ideas are: 1) maximal reduction of network complexity (not only the removal of neurons or synapses, but the removal of all unnecessary elements and signals, and a reduction of the complexity of the elements themselves); 2) an adjustable and flexible pruning process (the pruning sequence should not be predetermined; the user should be able to prune the network in their own way to achieve a desired network structure for extracting rules of the desired type and form); and 3) extraction of rules not in a predetermined form but in any desired form. Considerations concerning the network architecture, the training process, and the applicability of currently developed pruning techniques and rule-extraction algorithms are also discussed. This technology, which we have been developing for more than 10 years, has allowed us to create dozens of knowledge-based expert systems. In this paper we present a generalized three-step technology for extracting explicit knowledge from empirical data. Comment: 9 pages. The talk was given at IJCNN '99 (Washington, DC, July 1999).
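    The paper's pruning process is adjustable and user-driven; as a point of reference, the most common fixed strategy it generalizes is simple magnitude-based pruning. A minimal sketch of that baseline (not the authors' method; function name and threshold rule are illustrative assumptions):

    ```python
    import numpy as np

    def prune_by_magnitude(weights, fraction):
        """Zero out the smallest-magnitude `fraction` of weights.

        A generic magnitude-pruning step; the paper's adjustable,
        user-controlled pruning process is more general than this.
        """
        flat = np.abs(weights).ravel()
        k = int(len(flat) * fraction)
        pruned = weights.copy()
        if k == 0:
            return pruned
        # k-th smallest absolute value becomes the pruning threshold.
        threshold = np.partition(flat, k - 1)[k - 1]
        pruned[np.abs(pruned) <= threshold] = 0.0
        return pruned

    # Example: prune half of a small weight matrix.
    w = np.array([[0.9, -0.1], [0.05, -1.2]])
    print(prune_by_magnitude(w, 0.5))
    ```

    An adjustable process, as advocated in the paper, would instead let the user choose which elements to remove at each step rather than applying a fixed threshold.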

    Knowledge discovery for friction stir welding via data driven approaches: Part 2 – multiobjective modelling using fuzzy rule based systems

    In this final part of this extensive study, a new systematic data-driven fuzzy modelling approach has been developed, taking into account both modelling accuracy and interpretability (transparency) as attributes. For the first time, a data-driven modelling framework has been proposed, designed, and implemented in order to model the intricate friction stir welding (FSW) behaviours of the AA5083 aluminium alloy, covering grain size, mechanical properties, and internal process properties. As a result, ‘Pareto-optimal’ predictive models have been successfully elicited which, through validation on real data for the aluminium alloy AA5083, have been shown to be accurate, transparent, and generic despite the limited number of data points used for model training and testing. Compared with analytically based methods, the proposed data-driven modelling approach provides a more effective way to construct prediction models for FSW when there is an apparent lack of fundamental process knowledge.

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers have adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future work to cope with the present information-processing era.
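    The contrast the review draws can be made concrete with the simplest metaheuristic weight optimizer: a (1+1) evolution strategy that mutates a flat parameter vector and keeps the child whenever it is no worse, needing no gradients at all. A toy sketch on XOR (the network size, mutation scale, and iteration budget are illustrative assumptions, not values from the article):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: XOR, which a linear model cannot fit.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 0.0])

    def forward(params, X):
        """One-hidden-layer FNN (2-4-1, tanh); params is a flat vector of 17 values."""
        W1 = params[:8].reshape(2, 4)
        b1 = params[8:12]
        W2 = params[12:16]
        b2 = params[16]
        h = np.tanh(X @ W1 + b1)
        return h @ W2 + b2

    def loss(params):
        return float(np.mean((forward(params, X) - y) ** 2))

    # (1+1) evolution strategy: mutate, keep the child if it is no worse.
    params = rng.normal(scale=0.5, size=17)
    best = loss(params)
    for _ in range(5000):
        child = params + rng.normal(scale=0.1, size=17)
        f = loss(child)
        if f <= best:
            params, best = child, f

    print(f"final MSE: {best:.4f}")
    ```

    Population-based metaheuristics (genetic algorithms, particle swarm optimization) follow the same evaluate-perturb-select loop but maintain many candidate weight vectors at once.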

    Towards a Learning Theory of Cause-Effect Inference

    We pose causal inference as the problem of learning to classify probability distributions. In particular, we assume access to a collection $\{(S_i, l_i)\}_{i=1}^n$, where each $S_i$ is a sample drawn from the probability distribution of $X_i \times Y_i$, and $l_i$ is a binary label indicating whether "$X_i \to Y_i$" or "$X_i \leftarrow Y_i$". Given these data, we build a causal inference rule in two steps. First, we featurize each $S_i$ using the kernel mean embedding associated with some characteristic kernel. Second, we train a binary classifier on such embeddings to distinguish between causal directions. We present generalization bounds showing the statistical consistency and learning rates of the proposed approach, and provide a simple implementation that achieves state-of-the-art cause-effect inference. Furthermore, we extend our ideas to infer causal relationships between more than two variables.
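    The two-step recipe above can be sketched end to end: approximate each sample's kernel mean embedding with random Fourier features for the RBF kernel, then fit a binary classifier on those embeddings. This is a minimal illustration, not the paper's implementation; the synthetic mechanism ($Y = X^2 + $ noise), the feature dimension, and the plain logistic-regression readout are all assumptions of the sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def featurize(S, W, b):
        """Approximate kernel mean embedding of a sample S (shape n x 2)
        using random Fourier features for the Gaussian kernel."""
        return np.mean(np.cos(S @ W + b), axis=0)

    def make_pair(n=100):
        """Synthetic cause-effect pair with known direction (an assumption
        of this sketch): Y = X^2 + noise, coordinates swapped for label 0."""
        x = rng.normal(size=n)
        y = x ** 2 + 0.1 * rng.normal(size=n)
        if rng.random() < 0.5:
            return np.column_stack([x, y]), 1   # "X -> Y"
        return np.column_stack([y, x]), 0       # "X <- Y"

    # Random Fourier feature parameters, shared across all samples.
    D = 200
    W = rng.normal(size=(2, D))
    b = rng.uniform(0, 2 * np.pi, size=D)

    pairs = [make_pair() for _ in range(300)]
    Phi = np.array([featurize(S, W, b) for S, _ in pairs])
    labels = np.array([l for _, l in pairs], dtype=float)

    # Step 2: logistic regression on the embeddings, trained by
    # full-batch gradient descent.
    w = np.zeros(D)
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(Phi @ w)))
        w -= 0.5 * Phi.T @ (p - labels) / len(labels)

    acc = np.mean(((Phi @ w) > 0) == (labels == 1))
    print(f"training accuracy: {acc:.2f}")
    ```

    Swapping the coordinates of a pair changes its joint distribution, hence its mean embedding, which is what lets a classifier on embeddings tell the two directions apart.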