
    Active Bayesian Optimization: Minimizing Minimizer Entropy

    The ultimate goal of optimization is to find the minimizer of a target function. However, typical criteria for active optimization often ignore the uncertainty about the minimizer. We propose a novel criterion for global optimization and an associated sequential active learning strategy using Gaussian processes. Our criterion is the reduction of uncertainty in the posterior distribution of the function minimizer. It can also flexibly incorporate multiple global minimizers. We implement a tractable approximation of the criterion and demonstrate that it locates the global minimizer more accurately than conventional Bayesian optimization criteria.
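
    The abstract does not spell out the tractable approximation, so the following is only a minimal sketch of the general idea on a 1-D grid: draw samples from the Gaussian-process posterior, treat the histogram of per-sample arg-minima as the minimizer distribution, and score a candidate query by the expected entropy of that distribution after "fantasizing" an observation there. The grid, kernel, toy objective, and Monte Carlo sizes are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def minimizer_entropy(gp, grid, n_draws=200, seed=0):
        # Entropy of the arg-min distribution estimated from posterior draws on `grid`.
        samples = gp.sample_y(grid, n_samples=n_draws, random_state=seed)
        p = np.bincount(samples.argmin(axis=0), minlength=len(grid)) / n_draws
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    def expected_entropy_after(gp, X, y, x_new, grid, n_fantasies=10):
        # Average minimizer entropy after fantasizing observations at x_new.
        mu, sigma = gp.predict(x_new.reshape(1, -1), return_std=True)
        total = 0.0
        for _ in range(n_fantasies):
            y_fant = np.random.normal(mu[0], sigma[0])
            gp_k = GaussianProcessRegressor(kernel=gp.kernel_, optimizer=None).fit(
                np.vstack([X, x_new]), np.append(y, y_fant))
            total += minimizer_entropy(gp_k, grid)
        return total / n_fantasies

    # One step of the active loop on a toy 1-D objective (not from the paper).
    f = lambda x: np.sin(3 * x) + 0.5 * x
    grid = np.linspace(-2.0, 2.0, 61).reshape(-1, 1)
    X = np.array([[-1.5], [0.0], [1.2]])
    y = f(X).ravel()
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
    scores = [expected_entropy_after(gp, X, y, x, grid) for x in grid]
    print("next query point:", grid[int(np.argmin(scores))])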

    Efficiency of attack strategies on complex model and real-world networks

    We investigated the efficiency of attack strategies on network nodes when targeting several complex model and real-world networks. We tested 5 attack strategies, 3 of which are introduced in this work for the first time, on 3 model networks (the Erdos and Renyi model, the Barabasi and Albert preferential attachment model, and the scale-free configuration model) and 3 real networks (the Gnutella peer-to-peer network, the email network of the University of Rovira i Virgili, and the immunoglobulin interaction network). Nodes were removed sequentially according to the importance criterion defined by the attack strategy. We used the size of the largest connected component (LCC) as a measure of network damage. We found that the efficiency of an attack strategy (the fraction of nodes that must be deleted for a given reduction of the LCC size) depends on the topology of the network, although attacks based on the number of connections of a node and on betweenness centrality were often the most efficient strategies. Sequential deletion of nodes in decreasing order of betweenness centrality was the most efficient attack strategy when targeting real-world networks. In particular, for networks with a power-law degree distribution, we observed that the most efficient strategy changes during the sequential removal of nodes. Comment: 18 pages, 4 figures
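
    As a rough illustration of the experimental protocol (not the paper's code, and without its three new strategies), the sketch below sequentially removes nodes by a chosen importance criterion and records the LCC size after each deletion, using networkx; the Barabasi-Albert test graph and the two criteria shown (degree and betweenness centrality) are illustrative choices.

    import networkx as nx

    def lcc_size(G):
        return max((len(c) for c in nx.connected_components(G)), default=0)

    def attack(G, rank):
        # Remove nodes in decreasing order of rank(G), recomputing the ranking
        # after every deletion (sequential attack); record LCC size after each removal.
        G = G.copy()
        sizes = []
        while G.number_of_nodes() > 0:
            scores = rank(G)
            target = max(G.nodes, key=lambda n: scores[n])
            G.remove_node(target)
            sizes.append(lcc_size(G))
        return sizes

    G = nx.barabasi_albert_graph(200, 3, seed=1)
    results = {
        "degree": attack(G, lambda H: dict(H.degree())),
        "betweenness": attack(G, nx.betweenness_centrality),
    }
    # Fraction of nodes that must be removed to halve the LCC under each criterion.
    half = G.number_of_nodes() // 2
    for name, sizes in results.items():
        removed = next(i for i, s in enumerate(sizes, 1) if s <= half)
        print(f"{name}: {removed / G.number_of_nodes():.1%} of nodes removed to halve the LCC")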

    Normalisation for Dynamic Pattern Calculi

    The Pure Pattern Calculus (PPC) extends the lambda-calculus, as well as the family of algebraic pattern calculi, with first-class patterns; that is, patterns can be passed as arguments, evaluated, and returned as results. The notion of matching failure in PPC not only provides a mechanism to define functions by pattern matching on cases but also supplies PPC with parallel-or-like, non-sequential behaviour. Therefore, devising normalising strategies for PPC to obtain well-behaved implementations turns out to be challenging. This paper focuses on normalising reduction strategies for PPC. We define a (multistep) strategy and show that it is normalising. The strategy generalises the leftmost-outermost strategy for the lambda-calculus and is strictly finer than parallel-outermost. The normalisation proof is based on the notion of a necessary set of redexes, a generalisation of the notion of needed redex encompassing non-sequential reduction systems.
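
    The PPC strategy itself is too involved for a short sketch, but the leftmost-outermost (normal-order) strategy that it generalises can be shown for the plain lambda-calculus. The sketch below uses naive substitution and therefore assumes all bound variables carry distinct names (no capture avoidance), a deliberate simplification.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Var:
        name: str

    @dataclass(frozen=True)
    class Lam:
        param: str
        body: object

    @dataclass(frozen=True)
    class App:
        fun: object
        arg: object

    def subst(t, x, s):
        # t[x := s]; assumes no variable capture can occur (distinct bound names).
        if isinstance(t, Var):
            return s if t.name == x else t
        if isinstance(t, Lam):
            return t if t.param == x else Lam(t.param, subst(t.body, x, s))
        return App(subst(t.fun, x, s), subst(t.arg, x, s))

    def step(t):
        # One leftmost-outermost step; returns None when t is in normal form.
        if isinstance(t, App):
            if isinstance(t.fun, Lam):              # contract the outermost redex first
                return subst(t.fun.body, t.fun.param, t.arg)
            r = step(t.fun)                         # otherwise descend leftmost first
            if r is not None:
                return App(r, t.arg)
            r = step(t.arg)
            return None if r is None else App(t.fun, r)
        if isinstance(t, Lam):
            r = step(t.body)
            return None if r is None else Lam(t.param, r)
        return None

    def normalise(t, limit=1000):
        for _ in range(limit):
            r = step(t)
            if r is None:
                return t
            t = r
        raise RuntimeError("no normal form reached within the step limit")

    # K applied to `a` and to a diverging term still normalises, because the
    # outermost redex is contracted before the argument is ever evaluated.
    K = Lam("x", Lam("y", Var("x")))
    omega_half = Lam("z", App(Var("z"), Var("z")))
    print(normalise(App(App(K, Var("a")), App(omega_half, omega_half))))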

    Electromagnetic device design based on RBF models and two new sequential optimization strategies

    We present two new strategies for the sequential optimization method (SOM) to deal with optimization design problems of electromagnetic devices. One is a new space reduction strategy; the other is a model selection strategy. In addition, radial basis function (RBF) and compactly supported RBF models are investigated to extend the types of approximate models available to SOM. The Monte Carlo method is then employed to demonstrate the efficiency and superiority of the new space reduction strategy. Five commonly used approximate models are considered in the discussion of the model selection strategy. Furthermore, two TEAM benchmark examples show that SOM with the proposed strategies and models can significantly speed up the optimization design process, and that the efficiency of SOM depends only slightly on the type of approximate model. © 2006 IEEE
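
    The abstract does not give the SOM update rules, so the following only sketches the generic pattern it refers to: fit an RBF surrogate to the evaluated designs, take the surrogate minimizer as the next design to evaluate, then shrink the search box around the best design found so far (the space-reduction step). The shrink factor, sample counts, toy objective, and the use of scipy's RBFInterpolator are illustrative assumptions, not the paper's settings.

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    def expensive_objective(x):            # stand-in for a costly field simulation
        return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10.0 * x[0])

    def som(bounds, n_init=10, n_iter=15, shrink=0.7, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds[0], float), np.array(bounds[1], float)
        X = rng.uniform(lo, hi, size=(n_init, len(lo)))
        y = np.array([expensive_objective(x) for x in X])
        for _ in range(n_iter):
            # Fit the cheap RBF surrogate (small smoothing guards against duplicates).
            surrogate = RBFInterpolator(X, y, smoothing=1e-8)
            res = minimize(lambda x: surrogate(x[None, :])[0],
                           x0=(lo + hi) / 2.0, bounds=list(zip(lo, hi)))
            X = np.vstack([X, res.x])
            y = np.append(y, expensive_objective(res.x))
            # Space reduction: recentre the box on the best design and shrink it.
            best = X[np.argmin(y)]
            half = shrink * (hi - lo) / 2.0
            lo = np.maximum(np.array(bounds[0], float), best - half)
            hi = np.minimum(np.array(bounds[1], float), best + half)
        return X[np.argmin(y)], y.min()

    x_best, f_best = som(bounds=([0.0, 0.0], [1.0, 1.0]))
    print("best design:", x_best, "objective:", f_best)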

    Smoothed Efficient Algorithms and Reductions for Network Coordination Games

    Worst-case hardness results for most equilibrium computation problems have raised the need for beyond-worst-case analysis. To this end, we study the smoothed complexity of finding pure Nash equilibria in Network Coordination Games, a PLS-complete problem in the worst case. This is a potential game where the sequential-better-response algorithm is known to converge to a pure NE, albeit in exponential time. First, we prove polynomial (resp. quasi-polynomial) smoothed complexity when the underlying game graph is a complete (resp. arbitrary) graph, and every player has constantly many strategies. We note that the complete graph case is reminiscent of perturbing all parameters, a common assumption in most known smoothed analysis results. Second, we define a notion of smoothness-preserving reduction among search problems, and obtain reductions from 2-strategy network coordination games to local-max-cut, and from k-strategy games (with arbitrary k) to local-max-cut up to two flips. The former, together with the recent result of [BCC18], gives an alternate O(n^8)-time smoothed algorithm for the 2-strategy case. This notion of reduction allows for the extension of smoothed efficient algorithms from one problem to another. For the first set of results, we develop techniques to bound the probability that an (adversarial) better-response sequence makes slow improvements on the potential. Our approach combines and generalizes the local-max-cut approaches of [ER14,ABPW17] to handle the multi-strategy case: it requires a careful definition of the matrix which captures the increase in potential, a tighter union bound on adversarial sequences, and balancing it with good enough rank bounds. We believe that the approach and notions developed herein could be of interest in addressing the smoothed complexity of other potential and/or congestion games.
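
    For concreteness, the sketch below sets up a 2-strategy network coordination game with randomly perturbed payoffs (the smoothed setting) and runs the sequential better-response dynamics whose convergence time the paper analyses; the random graph, perturbation scale, and sweep order are illustrative choices, not the paper's model parameters.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20
    edges = [(i, j) for i, j in itertools.combinations(range(n), 2) if rng.random() < 0.3]
    # payoff[e][a, b]: payoff both endpoints of edge e receive when they play (a, b);
    # a small random perturbation is added to every entry (the smoothed setting).
    payoff = {e: rng.integers(0, 3, size=(2, 2)).astype(float) + 0.01 * rng.random((2, 2))
              for e in edges}

    def potential(s):
        return sum(payoff[(i, j)][s[i], s[j]] for (i, j) in edges)

    def player_utility(s, v):
        return sum(payoff[(i, j)][s[i], s[j]] for (i, j) in edges if v in (i, j))

    s = list(rng.integers(0, 2, size=n))        # initial strategy profile
    steps, improved = 0, True
    while improved:
        improved = False
        for v in range(n):                      # sequential better-response sweep
            current = player_utility(s, v)
            s[v] ^= 1                           # try the other strategy
            if player_utility(s, v) > current:
                improved, steps = True, steps + 1
            else:
                s[v] ^= 1                       # revert if not an improvement
    print(f"pure NE after {steps} improving moves, potential = {potential(s):.3f}")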

    Decision making under time pressure: an independent test of sequential sampling models

    Choice probability and choice response time data from a risk-taking decision-making task were compared with predictions made by a sequential sampling model. The behavioral data, consistent with the model, showed that participants were less likely to take an action as risk levels increased, and that time pressure did not have a uniform effect on choice probability. Under time pressure, participants were more conservative at the lower risk levels but more prone to take risks at the higher risk levels. This crossover interaction reflected a reduction of the threshold within a single decision strategy rather than a switch between decision strategies. Response time data, as predicted by the model, showed that participants took more time to make decisions at the moderate risk levels, and that time pressure reduced response time across all risk levels, particularly at those risk levels that took the longest without time pressure. Finally, response time data were used to rule out the hypothesis that time pressure effects could be explained by a fast-guess strategy.
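
    The abstract does not state the model equations, so the following is only a minimal random-walk sequential sampling sketch: evidence accumulates toward a "take the risk" or "decline" boundary, and time pressure is modelled purely as a reduced boundary, in line with the threshold-reduction account. The drift values, noise level, and thresholds are illustrative assumptions, not the study's fitted parameters.

    import numpy as np

    def simulate(drift, theta, noise=1.0, dt=0.01, max_t=10.0, n_trials=1000, seed=0):
        # Return (P(take the risk), mean response time) for one risk level.
        rng = np.random.default_rng(seed)
        choices, rts = [], []
        for _ in range(n_trials):
            x, t = 0.0, 0.0
            while abs(x) < theta and t < max_t:
                x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            choices.append(x >= theta)          # timed-out trials count as "decline"
            rts.append(t)
        return np.mean(choices), np.mean(rts)

    # Higher risk level -> more negative drift (more evidence against acting).
    risk_levels = {"low": 0.8, "moderate": 0.0, "high": -0.8}
    for label, drift in risk_levels.items():
        p_slow, rt_slow = simulate(drift, theta=2.0)   # no time pressure
        p_fast, rt_fast = simulate(drift, theta=1.0)   # time pressure: reduced threshold
        print(f"{label:>8}: P(act) {p_slow:.2f} -> {p_fast:.2f}, "
              f"RT {rt_slow:.2f}s -> {rt_fast:.2f}s")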