
    Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions

    A fundamental aspect of learning in biological neural networks is the plasticity property, which allows them to modify their configurations during their lifetime. Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons. However, the emergence of a coherent global learning behavior from local Hebbian plasticity rules is not well understood. The goal of this work is to discover interpretable local Hebbian learning rules that can provide autonomous global learning. To achieve this, we use a discrete representation to encode the learning rules in a finite search space. These rules are then used to perform synaptic changes, based on the local interactions of the neurons. We employ genetic algorithms to optimize these rules to allow learning on two separate tasks (a foraging and a prey-predator scenario) in online lifetime learning settings. The resulting evolved rules converged into a set of well-defined interpretable types, which are thoroughly discussed. Notably, the performance of these rules, while adapting the ANNs during the learning tasks, is comparable to that of offline learning methods such as hill climbing.
    Comment: Evolutionary Computation Journal
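The local Hebbian rules described above can be sketched as a parameterized synaptic update whose coefficients are drawn from a small discrete set, giving the finite search space a genetic algorithm can explore. The generalized-Hebbian form and coefficient set below are illustrative assumptions, not the authors' exact encoding:

```python
# A minimal sketch of a discretized local Hebbian plasticity rule.
# Generic form: dw = eta * (A*pre*post + B*pre + C*post + D), where the
# coefficients (A, B, C, D) are restricted to a small discrete set so a
# genetic algorithm can search the resulting finite rule space.
DISCRETE_LEVELS = (-1.0, 0.0, 1.0)  # illustrative coefficient alphabet

def hebbian_update(w, pre, post, rule, eta=0.1):
    """Apply one local synaptic update using only pre/post activity."""
    A, B, C, D = rule
    return w + eta * (A * pre * post + B * pre + C * post + D)

# Example: the pure Hebbian rule (A=1, B=C=D=0) strengthens a synapse
# whose pre- and post-synaptic neurons are co-active.
w = 0.5
w = hebbian_update(w, pre=1.0, post=1.0, rule=(1.0, 0.0, 0.0, 0.0))
# w is now 0.6
```

Because the rule depends only on locally available quantities, the same evolved coefficient tuple can be applied at every synapse, which is what makes the discovered rules interpretable as global learning behaviors.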

    Analysis of ensemble learning using simple perceptrons based on online learning theory

    Ensemble learning of K nonlinear perceptrons, which determine their outputs by sign functions, is discussed within the framework of online learning and statistical mechanics. One purpose of statistical learning theory is to theoretically obtain the generalization error. This paper shows that the ensemble generalization error can be calculated by using two order parameters, that is, the similarity between a teacher and a student, and the similarity among students. The differential equations that describe the dynamical behaviors of these order parameters are derived in the case of general learning rules. The concrete forms of these differential equations are derived analytically in the cases of three well-known rules: Hebbian learning, perceptron learning, and AdaTron learning. Ensemble generalization errors of these three rules are calculated by using the results determined by solving their differential equations. As a result, these three rules show different characteristics in their affinity for ensemble learning, that is, "maintaining variety among students." Results show that AdaTron learning is superior to the other two rules with respect to that affinity.
    Comment: 30 pages, 17 figures
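The three update rules compared in the paper can be sketched for a single student perceptron learning a teacher's sign function; the step forms follow the standard statistical-mechanics formulations, with normalization and learning-rate details simplified here as an assumption:

```python
import numpy as np

# Sketch of the three online rules for a student weight vector J learning
# a teacher B from inputs x with label y = sign(B . x). Simplified forms;
# exact normalizations differ in the statistical-mechanics analysis.

def hebbian(J, x, y):
    # Hebbian: update on every example, correct or not.
    return J + y * x

def perceptron(J, x, y):
    # Perceptron: update only when the student misclassifies.
    return J + y * x if np.sign(J @ x) != y else J

def adatron(J, x, y):
    # AdaTron: on a mistake, correct proportionally to the local field.
    u = (J @ x) * y
    return J - u * y * x if u < 0 else J

rng = np.random.default_rng(0)
B = rng.standard_normal(10)
B /= np.linalg.norm(B)          # unit-norm teacher
J = np.zeros(10)                # student starts at the origin
for _ in range(200):
    x = rng.standard_normal(10)
    y = np.sign(B @ x)
    J = perceptron(J, x, y)

# The teacher-student overlap (an order parameter of the analysis)
# grows as learning proceeds.
overlap = (J @ B) / np.linalg.norm(J)
```

The quantity `overlap` plays the role of the teacher-student similarity order parameter; the student-student similarity is its analogue computed between ensemble members.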

    Learning about monetary policy rules

    We study macroeconomic systems with forward-looking private sector agents and a monetary authority that is trying to control the economy through the use of a linear policy feedback rule. A typical finding in the burgeoning literature in this area is that policymakers should be relatively aggressive in responding to available information about the macroeconomy (more aggressive than they appear to be in reality). A natural question to ask about this result is whether policy responses which are too aggressive might actually destabilize the economy. We use stability under recursive learning, à la Evans and Honkapohja (1999a), as a criterion for evaluating monetary policy rules in this context. We find that considering learning can substantially alter the evaluation of model economies in some situations. We also find that a certain type of rule is robustly associated with both determinacy and learnability. This is an active, Taylor-type rule, with only a small positive reaction to variables other than inflation.
    Keywords: Monetary policy; Macroeconomics; Taylor's rule
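An "active" Taylor-type rule of the kind the abstract describes can be written down in a few lines; the coefficients below are the illustrative textbook values, not the paper's estimates:

```python
# An active Taylor-type rule: the nominal rate reacts more than
# one-for-one to inflation (phi_pi > 1, the Taylor principle) and only
# weakly to other variables (small phi_y). Coefficients are illustrative.

def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0,
                phi_pi=1.5, phi_y=0.1):
    """Nominal interest rate implied by a linear policy feedback rule."""
    return (r_star + inflation
            + phi_pi * (inflation - pi_star)
            + phi_y * output_gap)

# With inflation at 3% and a 0.5% output gap:
rate = taylor_rule(inflation=3.0, output_gap=0.5)
# rate = 2 + 3 + 1.5*(3-2) + 0.1*0.5 = 6.55
```

The paper's robust rule corresponds to keeping `phi_pi` above one while shrinking the reaction coefficients on variables other than inflation toward zero.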

    Simultaneous Evolution of Learning Rules and Strategies

    We study a model of local evolution. Players are located on a network and play games against their neighbors. Players are characterized by three properties: (1) the stage game strategies they use against their neighbors; (2) the repeated game strategy that determines the former; (3) a learning rule that selects the repeated game strategy, on the basis of the player's own and the neighbors' payoffs and repeated game strategies. The dynamics that specifies learning rules is given exogenously. Players sample their neighbors' learning rules and their respective payoffs. Then they construct a model that relates parameters of the learning rules to payoffs. Given this model, they choose an optimal learning rule. We find that under this dynamics, learning rules emerge in the long run which behave deterministically but which are asymmetric, in the sense that while learning they put more weight on the learning player's own experience than on the observed player's. Nevertheless, stage game behavior under these learning rules is similar to behavior under symmetric learning rules.
    Keywords: Evolutionary game theory; networks
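The rule-selection dynamic (sample neighbors' rule parameters and payoffs, fit a model relating parameter to payoff, pick the best feasible parameter) can be sketched as follows; the least-squares model and the parameter grid are illustrative assumptions, not the paper's specification:

```python
import numpy as np

# Sketch of the exogenous rule-selection dynamic: a player observes
# neighbors' learning-rule parameters and realized payoffs, fits a simple
# model (here an ordinary least-squares line, an illustrative choice)
# relating parameter to payoff, then picks the best parameter on a grid.

def choose_rule(params, payoffs, grid):
    """Fit payoff ~ a*param + b and return the grid point maximizing it."""
    a, b = np.polyfit(params, payoffs, 1)
    return max(grid, key=lambda p: a * p + b)

# If observed payoffs rise with the parameter, the player adopts the
# largest feasible parameter value.
best = choose_rule(params=[0.1, 0.5, 0.9],
                   payoffs=[1.0, 2.0, 3.0],
                   grid=[0.0, 0.5, 1.0])
# best = 1.0
```

In the paper's long-run outcome, this kind of selection produces deterministic but asymmetric rules, weighting the learner's own experience more heavily than observed neighbors'.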

    Expedient and Monotone Learning Rules

    This paper considers learning rules for environments in which little prior and feedback information is available to the decision-maker. Two properties of such learning rules, absolute expediency and monotonicity, are studied. The paper provides some necessary and some sufficient conditions for these properties. A number of examples show that there is quite a large variety of learning rules which have these properties. It is also shown that all learning rules that have these properties are, in some sense, related to the replicator dynamics of evolutionary game theory.
    Keywords: Absolute expediency; monotonicity; learning rule; decision making
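The replicator dynamics that these learning rules relate to has a simple discrete-time form: each action's probability is reweighted by its payoff relative to the average. The payoff values below are illustrative (this form assumes positive payoffs):

```python
# Discrete-time replicator dynamics: probability mass flows toward
# actions with above-average payoff. Assumes strictly positive payoffs.

def replicator_step(probs, payoffs):
    """One replicator update: p_i <- p_i * u_i / average payoff."""
    avg = sum(p * u for p, u in zip(probs, payoffs))
    return [p * u / avg for p, u in zip(probs, payoffs)]

# Two actions with payoffs 2 and 1: the better action gains mass.
p = replicator_step([0.5, 0.5], [2.0, 1.0])
# p = [2/3, 1/3]
```

Monotonicity corresponds to exactly this property: actions with higher payoffs never lose probability relative to actions with lower payoffs.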

    Learning to Customize Network Security Rules

    Security is a major concern for organizations who wish to leverage cloud computing. In order to reduce security vulnerabilities, public cloud providers offer firewall functionalities. When properly configured, a firewall protects cloud networks from cyber-attacks. However, proper firewall configuration requires intimate knowledge of the protected system, high expertise and on-going maintenance. As a result, many organizations do not use firewalls effectively, leaving their cloud resources vulnerable. In this paper, we present a novel supervised learning method, and prototype, which compute recommendations for firewall rules. Recommendations are based on sampled network traffic meta-data (NetFlow) collected from a public cloud provider. Labels are extracted from firewall configurations deemed to be authored by experts. NetFlow is collected from network routers, avoiding expensive collection from cloud VMs, as well as relieving privacy concerns. The proposed method captures network routines and dependencies between resources and firewall configuration. The method predicts IPs to be allowed by the firewall. A grouping algorithm is subsequently used to generate a manageable number of IP ranges. Each range is a parameter for a firewall rule. We present results of experiments on real data, showing ROC AUC of 0.92, compared to 0.58 for an unsupervised baseline. The results prove the hypothesis that firewall rules can be automatically generated based on router data, and that an automated method can be effective in blocking a high percentage of malicious traffic.
    Comment: 5 pages, 5 figures, one table
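The post-prediction grouping step (collapsing individually predicted "allow" IPs into a manageable number of ranges, each a firewall-rule parameter) can be sketched with standard CIDR aggregation; the merging criterion here is an illustrative stand-in for the paper's grouping algorithm:

```python
import ipaddress

# Sketch of the grouping step: collapse individual IPs predicted as
# "allow" into a small number of CIDR ranges. Each resulting range would
# parameterize one firewall rule. ipaddress.collapse_addresses merges
# adjacent and contained networks into minimal CIDRs.

def group_ips(allowed_ips):
    nets = [ipaddress.ip_network(ip)  # each bare IP becomes a /32
            for ip in sorted(allowed_ips, key=ipaddress.ip_address)]
    return [str(n) for n in ipaddress.collapse_addresses(nets)]

# Four consecutive predicted-allow addresses collapse into one rule.
rules = group_ips(["10.0.0.0", "10.0.0.1", "10.0.0.2", "10.0.0.3"])
# rules = ["10.0.0.0/30"]
```

A real recommender would trade off range width against over-permissiveness: wider CIDRs mean fewer rules but may admit IPs the classifier never predicted as benign.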

    Hierarchical meta-rules for scalable meta-learning

    The Pairwise Meta-Rules (PMR) method proposed in [18] has been shown to improve the predictive performance of several meta-learning algorithms for the algorithm ranking problem. Given m target objects (e.g., algorithms), the training complexity of the PMR method with respect to m is quadratic: m(m-1)/2 pairwise models, i.e., O(m^2). This is usually not a problem when m is moderate, such as when ranking 20 different learning algorithms. However, for problems with a much larger m, such as the meta-learning-based parameter ranking problem, where m can be 100+, the PMR method is less efficient. In this paper, we propose a novel method named Hierarchical Meta-Rules (HMR), which is based on the theory of orthogonal contrasts. The proposed HMR method has a linear training complexity with respect to m, providing a way of dealing with a large number of objects that the PMR method cannot handle efficiently. Our experimental results demonstrate the benefit of the new method in the context of meta-learning.
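The complexity gap motivating HMR is easy to make concrete: a pairwise method trains one model per unordered pair of targets, while a contrast-based hierarchy over m objects admits only m-1 orthogonal contrasts. The model counts below illustrate the scaling, not the paper's exact constants:

```python
# Model counts behind the quadratic-vs-linear training complexity.

def pmr_models(m):
    # Pairwise Meta-Rules: one binary model per unordered pair of targets.
    return m * (m - 1) // 2

def hmr_models(m):
    # Hierarchical Meta-Rules: at most m - 1 orthogonal contrasts for
    # m objects (illustrative count under a binary-split hierarchy).
    return m - 1

# Ranking 20 algorithms vs. 100 parameter settings:
counts = [(pmr_models(m), hmr_models(m)) for m in (20, 100)]
# counts = [(190, 19), (4950, 99)]
```

At m = 20 the quadratic cost is tolerable; at m = 100+ the pairwise method trains roughly fifty times more models than the hierarchical one, which is the regime where HMR's linear complexity pays off.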