
    Unbiased Black-Box Complexities of Jump Functions

    We analyze the unbiased black-box complexity of jump functions with small, medium, and large sizes of the fitness plateau surrounding the optimal solution. Among other results, we show that when the jump size is $(1/2 - \varepsilon)n$, that is, when only a small constant fraction of the fitness values is visible, the unbiased black-box complexities for arities $3$ and higher are of the same order as those for the simple \textsc{OneMax} function. Even for the extreme jump function, in which all but the two fitness values $n/2$ and $n$ are blanked out, polynomial-time mutation-based (i.e., unary unbiased) black-box optimization algorithms exist. This is quite surprising given that for the extreme jump function almost the whole search space (all but a $\Theta(n^{-1/2})$ fraction) is a plateau of constant fitness. To prove these results, we introduce new tools for the analysis of unbiased black-box complexities, for example, selecting the new parent individual not only by comparing the fitnesses of the competing search points, but also by taking into account the (empirical) expected fitnesses of their offspring.
    Comment: This paper is based on results presented in the conference versions [GECCO 2011] and [GECCO 2014].
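    As a concrete illustration of the landscape described above, here is a minimal Python sketch of a jump-style fitness function with a blanked plateau. The exact gap convention is an assumption inferred from the abstract (visible values are the optimum and a middle band around $n/2$), not the paper's formal definition.

        def jump(x, ell):
            """Jump fitness of a bit string x with plateau parameter ell (illustrative)."""
            n = len(x)
            ones = sum(x)                 # the underlying OneMax value
            if ones == n:                 # the unique optimum stays visible
                return n
            if ell < ones < n - ell:      # middle band of visible fitness values
                return ones
            return 0                      # everything else is a constant plateau

        # Extreme jump on n = 8 bits: only the OneMax values n/2 and n stay visible.
        n = 8
        print(jump([1] * n, n // 2 - 1))              # 8 (optimum)
        print(jump([1, 0] * (n // 2), n // 2 - 1))    # 4 (OneMax = n/2)
        print(jump([1] * (n - 1) + [0], n // 2 - 1))  # 0 (blanked out)

    With ell = (1/2 - epsilon) * n, only the roughly 2 * epsilon * n fitness values around n/2 (plus the optimum) remain visible, matching the "small constant fraction" regime of the abstract.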

    Better Fixed-Arity Unbiased Black-Box Algorithms

    In their GECCO'12 paper, Doerr and Doerr proved that the $k$-ary unbiased black-box complexity of OneMax on $n$ bits is $O(n/k)$ for $2 \le k \le O(\log n)$. We propose an alternative strategy for achieving this unbiased black-box complexity when $3 \le k \le \log_2 n$. While it is based on the same idea of block-wise optimization, it uses $k$-ary unbiased operators in a different way. For each block of size $2^{k-1} - 1$ we set up, in $O(k)$ queries, a virtual coordinate system, which enables us to use an arbitrary unrestricted algorithm to optimize this block. This is possible because the coordinate system introduces a bijection between unrestricted queries and a subset of $k$-ary unbiased operators. We note that this technique does not depend on OneMax being solved and can be used in more general contexts. Together this constitutes an algorithm which is conceptually simpler than the one by Doerr and Doerr and at the same time achieves better constant factors in the asymptotic notation. Our algorithm works in $(2 + o(1)) \cdot n/(k-1)$ queries, where the $o(1)$ term relates to $k$. Our experimental evaluation of this algorithm shows its efficiency already for $3 \le k \le 6$.
    Comment: An extended abstract will appear at GECCO'1
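    For readers unfamiliar with the unbiasedness constraint, the following Python sketch (illustrative, not taken from the paper) shows a unary unbiased operator: it may choose how many bits to change, but the positions are sampled uniformly, so the operator is invariant under bit permutations and XOR shifts of the search space.

        import random

        def unary_unbiased_flip(x, r):
            """Return a copy of bit string x with r uniformly chosen bits flipped."""
            y = list(x)
            for i in random.sample(range(len(x)), r):
                y[i] ^= 1                 # flipping treats 0 and 1 symmetrically
            return y

        parent = [0] * 16
        child = unary_unbiased_flip(parent, 3)
        print(sum(a != b for a, b in zip(parent, child)))  # always 3

    Higher-arity ($k$-ary) unbiased operators generalize this by taking $k$ parent strings as input while keeping the same symmetry requirements; the virtual coordinate system above is built entirely from such operators.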

    Quantum-classical generative models for machine learning

    The combination of quantum and classical computational resources towards more effective algorithms is one of the most promising research directions in computer science. In such a hybrid framework, existing quantum computers can be used to their fullest extent and for practical applications. Generative modeling is one of the applications that could benefit the most, either by speeding up the underlying sampling methods or by unlocking more general models. In this work, we design a number of hybrid generative models and validate them on real hardware and datasets. The quantum-assisted Boltzmann machine is trained to generate realistic artificial images on quantum annealers. Several challenges in state-of-the-art annealers must be overcome before one can assess their actual performance. We attack some of the most pressing challenges, such as the sparse qubit-to-qubit connectivity, the unknown effective temperature, and the noise on the control parameters. In order to handle datasets of realistic size and complexity, we include latent variables and obtain a more general model called the quantum-assisted Helmholtz machine. In the context of gate-based computers, the quantum circuit Born machine is trained to encode a target probability distribution in the wavefunction of a set of qubits. We implement this model on a trapped ion computer using low-depth circuits and native gates. We use the generative modeling performance on the canonical Bars-and-Stripes dataset to design a benchmark for hybrid systems. It is reasonable to expect that quantum data, i.e., datasets of wavefunctions, will become available in the future. We derive a quantum generative adversarial network that works with quantum data. Here, two circuits are optimized in tandem: one tries to generate suitable quantum states, the other tries to distinguish between target and generated states.
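    The Bars-and-Stripes benchmark mentioned above is simple to reconstruct; here is a minimal Python sketch (the 2x2 grid size is only an example):

        import itertools
        import numpy as np

        def bars_and_stripes(rows, cols):
            """All patterns whose rows are constant (stripes) or whose columns are constant (bars)."""
            patterns = set()
            for bits in itertools.product([0, 1], repeat=rows):
                patterns.add(tuple(np.repeat(bits, cols)))  # stripes: row i is all bits[i]
            for bits in itertools.product([0, 1], repeat=cols):
                patterns.add(tuple(np.tile(bits, rows)))    # bars: column j is all bits[j]
            return np.array(sorted(patterns))

        data = bars_and_stripes(2, 2)
        print(data.shape)  # (6, 4): 2**rows + 2**cols - 2 distinct flattened patterns

    Because the valid patterns form a sparse, structured subset of all binary images, the dataset makes it easy to measure how well a generative model concentrates probability mass on the target distribution.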

    Cities Awash in a Sea of Governments: How Does Political Fragmentation Affect Cities and Their Regions?

    Is political fragmentation within the metropolitan area and within central city government a cause of central city decline, or just the benign evolution of governance? Advocates of regional governance consider political fragmentation, the number and types of governments in a metropolitan area, a causal factor in decline. However, a multiplicity of governments offers individual households greater choice and variety; in other words, fragmentation represents the will of the people. All metropolitan areas are fragmented to some degree, and whether or not this is harmful to cities and their regions is the empirical question considered here. Political explanations of the impact of fragmentation fall into two overarching groups. One school of thought argues that regions struggle and experience slow growth or decline because the problems of the central city act as an anchor pulling the region down, while the other school believes cities struggle due to competition from other governments in their metropolitan area for residents and economic investment. This dissertation tests the long-term effects of political fragmentation across metropolitan areas on region-wide segregation, population, and own-source revenue in 100 central cities from 1950 through 2000. Political fragmentation is broken down into horizontal and vertical fragmentation, which consider the impact of geographically coterminous governments and jurisdictional overlap, and internal fragmentation, which is the division of governing authority among elected officials. The results of the analyses show that horizontal fragmentation increases segregation across metropolitan areas and reduces the city's share of regional population. Both vertical and horizontal fragmentation are shown to increase the own-source revenue of central cities, and evidence is presented that internal fragmentation also increases own-source revenue. Essentially, city residents pay more in taxes when living in cities with more elected officials, surrounded by greater numbers of governments and jurisdictional overlap. Fragmentation at the metropolitan level is complex, but it is clear that high levels can pose problems for both the city and its region. The implications of these results are thoughtfully analyzed and recommendations are made for future research.

    Performing folk punk : agonistic performances of intersectionality

    The overarching goal of this project is to argue that folk punk performances offer spaces where a listening audience is exposed to a radical and intersectional politics, and that they enable that audience to identify with those views. By considering the performances of Inky Skulls, Pussy Riot!, and Against Me!, this study looks at the ways in which these folk punk exemplars highlight elements of the radical politics of the American left and of the history of folk and punk music. In particular, this project considers the intersections of race and class, women and nonhuman animals, and queerness and anarchism as points of ideological convergence. The secondary goals of this project are twofold. The first aim is to articulate a performative approach to folk punk music as a scene worthy of academic consideration. The second aim is to consider the ways in which my personal experiences at folk punk shows highlight the idiosyncratic and utopian ways in which small performatives in the genre shape the identities of audience members and fans.

    Neural Networks and the Natural Gradient

    Neural network training algorithms have always suffered from the problem of local minima. The advent of natural gradient algorithms promised to overcome this shortcoming by finding better local minima. However, they require additional training parameters and computational overhead. Using a new formulation of the natural gradient, we describe an algorithm that uses less memory and processing time than previous algorithms while achieving comparable performance.
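    As a point of reference for the discussion above, here is a generic natural gradient step in Python, using a damped empirical Fisher matrix built from per-example gradients. This is a standard textbook formulation, not the memory-saving reformulation the abstract refers to.

        import numpy as np

        def natural_gradient_step(theta, per_example_grads, lr=0.1, damping=1e-3):
            """One step of theta <- theta - lr * F^{-1} grad with a damped empirical Fisher."""
            G = np.asarray(per_example_grads)          # shape (N, d)
            grad = G.mean(axis=0)                      # ordinary gradient
            fisher = G.T @ G / len(G)                  # empirical Fisher estimate, (d, d)
            fisher += damping * np.eye(len(theta))     # damping keeps the solve well-posed
            direction = np.linalg.solve(fisher, grad)  # F^{-1} grad without explicit inversion
            return theta - lr * direction

        theta = np.zeros(3)
        grads = np.random.randn(32, 3)                 # stand-in for per-example gradients
        print(natural_gradient_step(theta, grads))

    The d x d Fisher matrix is exactly the memory and compute bottleneck that reformulations such as the one described above aim to avoid.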

    A Unified Framework for Gradient-based Hyperparameter Optimization and Meta-learning

    Machine learning algorithms and systems are progressively becoming part of our societies, leading to a growing need to build a vast multitude of accurate, reliable, and interpretable models that ideally exploit similarities among tasks. Automating segments of machine learning itself is a natural step toward delivering increasingly capable systems that perform well in both the big-data and the few-shot learning regimes. Hyperparameter optimization (HPO) and meta-learning (MTL) constitute two building blocks of this growing effort. We explore these two topics under a unifying perspective, presenting a mathematical framework linked to bilevel programming that captures existing similarities and translates into procedures of practical interest rooted in algorithmic differentiation. We discuss the derivation, applicability, and computational complexity of these methods and establish several approximation properties for a class of objective functions of the underlying bilevel programs. In HPO, these algorithms generalize and extend previous work on gradient-based methods. In MTL, the resulting framework subsumes classic and emerging strategies and provides a starting basis from which to build and analyze novel techniques. A series of examples and numerical simulations offer insight and highlight some limitations of these approaches. Experiments on larger-scale problems show the potential gains of the proposed methods in real-world applications. Finally, we develop two extensions of the basic algorithms, apt to optimize a class of discrete hyperparameters (graph edges) in an application to relational learning and to tune online learning rate schedules for training neural network models, an old but crucially important issue in machine learning.
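    To make the bilevel structure concrete, here is a toy Python sketch of gradient-based hyperparameter optimization on a one-dimensional problem: the inner loss is L_in(w) = 0.5*(w - a)**2 + 0.5*lam*w**2 with hyperparameter lam, the outer (validation) loss is L_out(w) = 0.5*(w - b)**2, and the hypergradient is computed by forward-mode differentiation through the unrolled inner gradient descent. The framework in the thesis covers far more general and more efficient variants of this idea.

        def hypergradient(lam, a=2.0, b=1.5, eta=0.1, steps=100):
            """d L_out / d lam, differentiating through the unrolled inner optimization."""
            w, dw_dlam = 0.0, 0.0
            for _ in range(steps):
                grad_in = (w - a) + lam * w  # gradient of the inner loss in w
                # differentiate the update w <- w - eta * grad_in with respect to lam
                dw_dlam = dw_dlam - eta * ((1.0 + lam) * dw_dlam + w)
                w = w - eta * grad_in
            return (w - b) * dw_dlam         # chain rule through the inner solution w(lam)

        lam = 0.5
        for _ in range(50):                  # outer gradient descent on the hyperparameter
            lam -= 1.0 * hypergradient(lam)
        print(lam)  # approaches a/b - 1 = 1/3, where the inner optimum a/(1+lam) equals b

    Reverse-mode differentiation (backpropagation through the inner loop) and implicit differentiation yield the same hypergradient at different memory/compute trade-offs, which is one of the axes such a unified framework organizes.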