
    Towards learning domain-independent planning heuristics

    Automated planning remains one of the most general paradigms in Artificial Intelligence, providing means of solving problems from a wide variety of domains. One of the key factors restricting the applicability of planning is its computational complexity, which results from exponentially large search spaces. Heuristic approaches are necessary to solve all but the simplest problems. In this work, we explore the possibility of obtaining domain-independent heuristic functions using machine learning. This is part of a wider research program whose objective is to improve the practical applicability of planning in systems whose planning domains evolve at run time. The challenge is therefore the learning of (corrections of) domain-independent heuristics that can be reused across different planning domains.
    Comment: Accepted for the IJCAI-17 Workshop on Architectures for Generality and Autonomy
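
    As an illustrative sketch only, not the authors' method: one way to frame learning a correction of a heuristic is to fit an off-the-shelf regressor to the gap between a cheap base heuristic and the true cost-to-go observed on solved training instances. The features, data, and learned_heuristic helper below are hypothetical stand-ins.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Hypothetical training data: domain-independent state features, a cheap
    # base heuristic value h_base(s), and the true cost-to-go h*(s) observed
    # on solved training instances.
    features = rng.random((500, 8))
    h_base = 2.0 * features.sum(axis=1)
    h_star = h_base + rng.normal(1.0, 0.5, size=500)  # base heuristic is off by an error term

    # Learn the correction h*(s) - h_base(s) rather than h*(s) itself, so the
    # model only needs to capture the base heuristic's error.
    model = GradientBoostingRegressor().fit(features, h_star - h_base)

    def learned_heuristic(state_features, base_value):
        # Corrected estimate: base heuristic plus the predicted correction.
        return base_value + model.predict(state_features.reshape(1, -1))[0]

    print(learned_heuristic(rng.random(8), base_value=8.0))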

    Neural Networks for Target Selection in Direct Marketing

    Partly due to a growing interest in direct marketing, it has become an important application field for data mining. Many techniques have been applied to select targets in commercial applications, such as statistical regression, regression trees, neural computing, fuzzy clustering, and association rules. Modeling of charity donations has also recently been considered. The availability of a large number of techniques for analyzing the data may seem overwhelming, and ultimately unnecessary, at first. However, the amount of data used in direct marketing is tremendous. Further, there are different types of data and likely strong nonlinear relations among different groups within the data. Therefore, it is unlikely that a single method can be used under all circumstances. For that reason, it is important to have access to a range of different target selection methods that can be used in a complementary fashion. In this respect, learning systems such as neural networks have the advantage that they can adapt to the nonlinearity in the data and capture its complex relations. This is an important motivation for applying neural networks to target selection. In this report, neural networks are applied to target selection in the modeling of charity donations. The various stages of model building are described using data from a large Dutch charity organization as a case study. The results are compared with those of more traditional methods for target selection, such as logistic regression and CHAID.
    Keywords: neural networks; data mining; direct mail; direct marketing; target selection
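
    A minimal sketch of the kind of comparison described above, assuming a binary "responded to the mailing" label: prospects are ranked by predicted response probability, and a neural network is set against a logistic regression baseline. The synthetic data stands in for the charity dataset, which is not reproduced here.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in: imbalanced responses, as in most mailing campaigns.
    X, y = make_classification(n_samples=2000, n_features=10,
                               weights=[0.9], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Rank prospects by predicted response probability; mail the top deciles.
    for name, clf in [("logistic", LogisticRegression(max_iter=1000)),
                      ("mlp", MLPClassifier(hidden_layer_sizes=(16,),
                                            max_iter=2000, random_state=0))]:
        clf.fit(X_tr, y_tr)
        scores = clf.predict_proba(X_te)[:, 1]
        print(name, "AUC:", round(roc_auc_score(y_te, scores), 3))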

    Learning with Delayed Synaptic Plasticity

    The plasticity property of biological neural networks allows them to learn and optimize their behavior by changing their configuration. Inspired by biology, plasticity can be modeled in artificial neural networks using Hebbian learning rules, i.e. rules that update synapses based on neuron activations and reinforcement signals. However, the distal reward problem arises when reinforcement signals are not available immediately after each network output, making it difficult to associate the signal with the neuron activations that contributed to receiving it. In this work, we extend Hebbian plasticity rules to allow learning in distal reward cases. We propose the use of neuron activation traces (NATs), additional data storage in each synapse that keeps track of neuron activations. Delayed reinforcement signals are provided after each episode, based on the network's performance relative to the previous episode. We employ genetic algorithms to evolve delayed synaptic plasticity (DSP) rules that perform synaptic updates based on NATs and delayed reinforcement signals. We compare DSP with an analogous hill-climbing (HC) algorithm that does not incorporate the domain knowledge introduced by the NATs, and show that the synaptic updates performed by the DSP rules yield more effective training than the HC algorithm.
    Comment: GECCO 2019
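
    A minimal sketch of the NAT idea under simplifying assumptions: each synapse keeps a decaying trace of co-activation during an episode, and weights are only updated when the delayed reward arrives. The decaying trace and hand-written end-of-episode rule below are illustrative stand-ins for the evolved DSP rules.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 4, 2
    W = rng.normal(0.0, 0.1, (n_out, n_in))
    nat = np.zeros((n_out, n_in))            # per-synapse activation trace

    def step(x, decay=0.9):
        # Forward pass; accumulate co-activation in the NATs instead of
        # updating W immediately.
        global nat
        y = np.tanh(W @ x)
        nat = decay * nat + np.outer(y, x)   # record which synapses were active
        return y

    def end_of_episode(reward, lr=0.01):
        # Delayed reinforcement: update each synapse from its trace and the
        # reward, then reset the traces for the next episode.
        global W, nat
        W += lr * reward * nat               # stand-in for an evolved DSP rule
        nat[:] = 0.0

    for t in range(10):                      # one toy episode
        step(rng.random(n_in))
    end_of_episode(reward=+1.0)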

    Some Theorems for Feed Forward Neural Networks

    In this paper we introduce a new method which employs the concept of "Orientation Vectors" to train a feed-forward neural network; it is suitable for problems involving large dimensions where the clusters are characteristically sparse. The new method does not become NP-hard as the problem size increases. We `derive' the method by starting from Kolmogorov's method and then relaxing some of its stringent conditions. We show that for most classification problems three layers are sufficient and that the network size depends on the number of clusters. We prove that as the number of clusters increases from N to N + dN, the number of processing elements in the first layer increases only by d(log N), and that these elements are proportional in number to the number of classes; the method is therefore not NP-hard. Many examples are solved to demonstrate that the method of Orientation Vectors requires much less computational effort than Radial Basis Function methods and other techniques in which distance computations are required; in fact, the cost of the present method increases logarithmically with problem size, compared to the Radial Basis Function method and other methods that depend on distance computations, e.g. statistical methods where probabilistic distances are calculated. A practical method of applying the concept of Occam's razor to choose between two architectures that solve the same classification problem is also described. The ramifications of these findings for the field of Deep Learning are briefly investigated; we find that they directly imply the existence of certain types of NN architectures that can be used as a "mapping engine" with the property of "invertibility", improving the prospect of their deployment for problems involving Deep Learning and hierarchical classification. The latter possibility has considerable future scope in the areas of machine learning and cloud computing.
    Comment: 15 pages, 13 figures
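
    A small worked illustration of the claimed scaling only, not of the Orientation Vector method itself: first-layer growth of order log N against the roughly linear growth in centers typical of RBF-style approaches. The logarithm base is unspecified in the abstract; the natural log below is an arbitrary choice for illustration.

    import math

    for N in [10, 100, 1000, 10000]:
        log_units = math.ceil(math.log(N))   # claimed: grows like log N
        rbf_units = N                        # RBF: roughly one center per cluster
        print(f"N={N:>6}  log-scale units={log_units:>3}  RBF centers={rbf_units:>6}")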