
    Integrating Evolutionary Computation with Neural Networks

    There is tremendous interest in the development of evolutionary computation techniques, as they are well suited to optimizing functions with a large number of variables. This paper presents a brief review of evolutionary computing techniques. It also briefly discusses the hybridization of evolutionary computation and neural networks and presents a solution to a classical problem using neural computing and evolutionary computing techniques.
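    To make the hybridization concrete, here is a minimal sketch (not the paper's method) of one common combination: a simple (1+lambda) evolution strategy evolving the weights of a tiny feed-forward network on the classical XOR problem. The network size, mutation scale, and population size are illustrative assumptions.

```python
# Minimal sketch: neuroevolution of a 2-2-1 network on XOR with a (1+lambda) ES.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def unpack(theta):
    # 2-2-1 network: W1 (2x2), b1 (2), w2 (2), b2 (1) -> 9 parameters
    return theta[:4].reshape(2, 2), theta[4:6], theta[6:8], theta[8]

def forward(theta, x):
    W1, b1, w2, b2 = unpack(theta)
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))      # sigmoid output

def fitness(theta):
    return -np.mean((forward(theta, X) - y) ** 2)    # negative MSE

rng = np.random.default_rng(0)
best = rng.normal(size=9)
for gen in range(500):
    offspring = best + 0.3 * rng.normal(size=(20, 9))  # Gaussian mutation
    scores = np.array([fitness(o) for o in offspring])
    if scores.max() > fitness(best):                   # keep the best so far
        best = offspring[scores.argmax()]

print("final MSE:", -fitness(best))
```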

    Natural evolution strategies and variational Monte Carlo

    A notion of quantum natural evolution strategies is introduced, which provides a geometric synthesis of a number of known quantum/classical algorithms for performing classical black-box optimization. Recent work of Gomes et al. [2019] on heuristic combinatorial optimization using neural quantum states is pedagogically reviewed in this context, emphasizing the connection with natural evolution strategies. The algorithmic framework is illustrated for approximate combinatorial optimization problems, and a systematic strategy is found for improving the approximation ratios. In particular, it is found that natural evolution strategies can achieve approximation ratios competitive with widely used heuristic algorithms for Max-Cut, at the expense of increased computation time.
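    The following is a minimal sketch of the classical ingredient described above: evolution-strategies optimization of Max-Cut using a factorized Bernoulli search distribution over bitstrings, with a plain score-function gradient standing in for the natural gradient of full NES, and a random graph in place of the paper's benchmarks. It does not use neural quantum states.

```python
# Minimal sketch: gradient ascent on the expected cut value of a random graph,
# sampling bitstrings from independent Bernoulli variables parameterized by logits.
import numpy as np

rng = np.random.default_rng(1)
n = 10
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.4]

def cut_value(x):
    # number of edges crossing the partition defined by the bitstring x
    return sum(1 for (i, j) in edges if x[i] != x[j])

theta = np.zeros(n)          # logits of the independent Bernoulli variables
lr, pop = 0.1, 64
for step in range(300):
    p = 1.0 / (1.0 + np.exp(-theta))
    samples = (rng.random((pop, n)) < p).astype(float)
    f = np.array([cut_value(s) for s in samples], dtype=float)
    baseline = f.mean()      # simple variance reduction
    # score-function (REINFORCE-style) gradient of E[f] w.r.t. the logits
    grad = ((f - baseline)[:, None] * (samples - p)).mean(axis=0)
    theta += lr * grad

best = (theta > 0).astype(int)
print("cut:", cut_value(best), "of", len(edges), "edges")
```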

    Incorporating characteristics of human creativity into an evolutionary art algorithm (journal article)

    A perceived limitation of evolutionary art and design algorithms is that they rely on human intervention; the artist selects the most aesthetically pleasing variants of one generation to produce the next. This paper discusses how computer-generated art and design can become more creatively human-like with respect to both process and outcome. As an example of a step in this direction, we present an algorithm that overcomes the above limitation by employing an automatic fitness function. The goal is to evolve abstract portraits of Darwin, using our second-generation fitness function, which rewards genomes that not only produce a likeness of Darwin but also exhibit certain strategies characteristic of human artists. We note that in human creativity, change comes less from choosing amongst randomly generated variants and more from capitalizing on the associative structure of a conceptual network to home in on a vision. We discuss how to achieve this fluidity algorithmically.
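    As a minimal sketch of replacing interactive selection with an automatic fitness function, the following genetic algorithm scores candidates purely by likeness to a target image; the paper's fitness additionally rewards strategies characteristic of human artists, which is not modeled here, and the target, population size, and mutation scale are assumptions for illustration.

```python
# Minimal sketch: a GA with an automatic likeness-based fitness instead of a human judge.
import numpy as np

rng = np.random.default_rng(2)
target = rng.random((16, 16))                 # stand-in for a target portrait

def fitness(genome):
    return -np.abs(genome - target).mean()    # likeness term only

pop = [rng.random((16, 16)) for _ in range(30)]
for gen in range(200):
    scores = [fitness(g) for g in pop]
    order = np.argsort(scores)[::-1]
    parents = [pop[i] for i in order[:10]]            # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = rng.choice(len(parents), size=2, replace=False)
        mask = rng.random(target.shape) < 0.5          # uniform crossover
        child = np.where(mask, parents[a], parents[b])
        child += 0.05 * rng.normal(size=target.shape)  # Gaussian mutation
        children.append(np.clip(child, 0, 1))
    pop = parents + children

print("best likeness error:", -max(fitness(g) for g in pop))
```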

    BigDataBench: a Big Data Benchmark Suite from Internet Services

    As the architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must cover diverse data and workloads. Most state-of-the-art big data benchmarking efforts target specific types of applications or system software stacks, and hence they cannot serve this broader purpose. This paper presents our joint research efforts on this issue with several industrial partners. Our big data benchmark suite, BigDataBench, not only covers broad application scenarios but also includes diverse and representative data sets. BigDataBench is publicly available from http://prof.ict.ac.cn/BigDataBench . We also comprehensively characterize 19 big data workloads included in BigDataBench with varying data inputs. On a typical state-of-practice processor, an Intel Xeon E5645, we make the following observations. First, in comparison with traditional benchmarks, including PARSEC, HPCC, and SPEC CPU, big data applications have very low operation intensity. Second, the volume of the data input has a non-negligible impact on micro-architecture characteristics, which may impose challenges for simulation-based big data architecture research. Last but not least, corroborating the observations in CloudSuite and DCBench (which use smaller data inputs), we find that the number of L1 instruction cache misses per 1000 instructions of the big data applications is higher than in the traditional benchmarks; we also find that L3 caches are effective for the big data applications, corroborating the observation in DCBench.
    Comment: 12 pages, 6 figures, The 20th IEEE International Symposium On High Performance Computer Architecture (HPCA-2014), February 15-19, 2014, Orlando, Florida, US
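    For reference, the two metrics behind these observations can be computed as follows; the counter values below are assumed for illustration and are not BigDataBench measurements.

```python
# Minimal sketch: operation intensity and L1i misses per kilo-instruction (MPKI),
# computed from hypothetical hardware-counter readings.
instructions   = 4.2e11   # retired instructions (assumed)
compute_ops    = 1.1e11   # computation operations (assumed)
bytes_accessed = 3.0e12   # memory traffic in bytes (assumed)
l1i_misses     = 9.5e9    # L1 instruction cache misses (assumed)

operation_intensity = compute_ops / bytes_accessed          # ops per byte
l1i_mpki = l1i_misses / (instructions / 1000.0)             # misses per 1000 instructions

print(f"operation intensity: {operation_intensity:.3f} ops/byte")
print(f"L1i MPKI: {l1i_mpki:.1f}")
```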

    On the relationship between continuous- and discrete-time quantum walk

    Quantum walks are one of the main tools for quantum algorithms. Defined by analogy to classical random walks, a quantum walk is a time-homogeneous quantum process on a graph. Both random and quantum walks can be defined either in continuous or discrete time. But whereas a continuous-time random walk can be obtained as the limit of a sequence of discrete-time random walks, the two types of quantum walk appear fundamentally different, owing to the need for extra degrees of freedom in the discrete-time case. In this article, I describe a precise correspondence between continuous- and discrete-time quantum walks on arbitrary graphs. Using this correspondence, I show that the continuous-time quantum walk can be obtained as an appropriate limit of discrete-time quantum walks. The correspondence also leads to a new technique for simulating Hamiltonian dynamics, giving efficient simulations even in cases where the Hamiltonian is not sparse. The complexity of the simulation is linear in the total evolution time, an improvement over simulations based on high-order approximations of the Lie product formula. As applications, I describe a continuous-time quantum walk algorithm for element distinctness and show how to optimally simulate continuous-time query algorithms of a certain form in the conventional quantum query model. Finally, I discuss limitations of the method for simulating Hamiltonians with negative matrix elements, and present two problems that motivate attempting to circumvent these limitations.
    Comment: 22 pages. v2: improved presentation, new section on Hamiltonian oracles; v3: published version, with improved analysis of phase estimation.
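    The continuous-time walk referred to above has a compact textbook definition, |psi(t)> = exp(-iAt) |psi(0)> with A the adjacency matrix (or Laplacian) of the graph; the sketch below simulates it on a 4-cycle and is not the paper's simulation technique.

```python
# Minimal sketch: a continuous-time quantum walk on a 4-cycle, evolving the
# initial state under the adjacency matrix via a matrix exponential.
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=complex)   # adjacency matrix of the 4-cycle

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                                 # walker starts on vertex 0

t = 1.5
psi_t = expm(-1j * A * t) @ psi0              # unitary evolution exp(-iAt)
probs = np.abs(psi_t) ** 2                    # vertex occupation probabilities

print("probabilities:", np.round(probs, 4), "sum =", probs.sum().round(6))
```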