
    Hybridizing Non-dominated Sorting Algorithms: Divide-and-Conquer Meets Best Order Sort

    Many production-grade algorithms benefit from combining an asymptotically efficient algorithm, which solves big problem instances by splitting them into smaller ones, with an asymptotically inefficient algorithm that has a very small implementation constant and handles the small subproblems. A well-known example is stable sorting, where mergesort is often combined with insertion sort to achieve a constant but noticeable speed-up. We apply this idea to non-dominated sorting. Namely, we combine the divide-and-conquer algorithm, which has the currently best known asymptotic runtime of O(N (\log N)^{M-1}), with the Best Order Sort algorithm, which has a runtime of O(N^2 M) but demonstrates the best practical performance among quadratic algorithms. Empirical evaluation shows that the hybrid's running time is typically no worse than that of either original algorithm, while for large numbers of points it outperforms them by at least 20%. For smaller numbers of objectives, the speedup can be as large as four times. Comment: A two-page abstract of this paper will appear in the proceedings companion of the 2017 Genetic and Evolutionary Computation Conference (GECCO 2017).
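
    The hybridization pattern described above, in which a simple quadratic method takes over below a size cutoff, can be illustrated with the stable-sorting example the abstract cites: mergesort combined with insertion sort. The Python sketch below is illustrative only; the cutoff value and function names are assumptions, and it is not the paper's non-dominated sorting hybrid.

        # Hybrid stable sort: divide-and-conquer for large ranges, a quadratic
        # algorithm with a tiny constant for small ones. CUTOFF is an assumed value.
        CUTOFF = 32

        def insertion_sort(a, lo, hi):
            # O(n^2) on the range, but very fast when the range is short.
            for i in range(lo + 1, hi):
                key, j = a[i], i - 1
                while j >= lo and a[j] > key:
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = key

        def hybrid_sort(a, lo=0, hi=None):
            if hi is None:
                hi = len(a)
            if hi - lo <= CUTOFF:            # small subproblem: simple algorithm
                insertion_sort(a, lo, hi)
                return
            mid = (lo + hi) // 2             # large subproblem: divide and conquer
            hybrid_sort(a, lo, mid)
            hybrid_sort(a, mid, hi)
            merged, i, j = [], lo, mid       # stable merge of the two sorted halves
            while i < mid and j < hi:
                if a[j] < a[i]:
                    merged.append(a[j]); j += 1
                else:
                    merged.append(a[i]); i += 1
            merged.extend(a[i:mid])
            merged.extend(a[j:hi])
            a[lo:hi] = merged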

    Connectionist Theory Refinement: Genetically Searching the Space of Network Topologies

    An algorithm that learns from a set of examples should ideally be able to exploit the available resources of (a) abundant computing power and (b) domain-specific knowledge to improve its ability to generalize. Connectionist theory-refinement systems, which use background knowledge to select a neural network's topology and initial weights, have proven to be effective at exploiting domain-specific knowledge; however, most do not exploit available computing power. This weakness occurs because they lack the ability to refine the topology of the neural networks they produce, thereby limiting generalization, especially when given impoverished domain theories. We present the REGENT algorithm, which uses (a) domain-specific knowledge to help create an initial population of knowledge-based neural networks and (b) the genetic operators of crossover and mutation (specifically designed for knowledge-based networks) to continually search for better network topologies. Experiments on three real-world domains indicate that our new algorithm is able to significantly increase generalization compared to a standard connectionist theory-refinement system, as well as our previous algorithm for growing knowledge-based networks. Comment: See http://www.jair.org/ for any accompanying files.
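
    As a rough illustration of a genetic search over network topologies, the Python sketch below evolves simple topology encodings with crossover and mutation. The encoding (a list of hidden-layer sizes), the toy fitness function, and all parameter values are assumptions made for illustration; REGENT itself operates on knowledge-based networks derived from a domain theory.

        import random

        def fitness(topology):
            # Placeholder: in practice this would train a network with the given
            # topology and return its validation accuracy. Assumed toy objective.
            return -abs(sum(topology) - 24)

        def crossover(a, b):
            cut = random.randint(1, min(len(a), len(b)) - 1)
            return a[:cut] + b[cut:]

        def mutate(t):
            t = list(t)
            i = random.randrange(len(t))
            t[i] = max(1, t[i] + random.choice([-2, -1, 1, 2]))  # perturb one layer size
            return t

        def genetic_search(initial_population, generations=50):
            pop = list(initial_population)
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: len(pop) // 2]          # keep the fitter half
                children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(len(pop) - len(parents))]
                pop = parents + children
            return max(pop, key=fitness)

        # Initial population: seeded from domain knowledge in the paper, random here.
        seed = [[random.randint(2, 16) for _ in range(random.randint(2, 3))] for _ in range(10)]
        best = genetic_search(seed)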

    Renyi’s entropy based multilevel thresholding using a novel meta-heuristics algorithm

    Multi-level image thresholding is the most direct and effective method for image segmentation, which is a key step in image analysis and computer vision. However, as the number of threshold values increases, exhaustive search becomes inefficient, and evolutionary algorithms often fall into local optima. In this paper, a meta-heuristic algorithm based on the breeding mechanism of Chinese hybrid rice is proposed to seek the optimal multi-level thresholds for image segmentation, with Renyi’s entropy used as the fitness function. Experiments were run on four scanning electron microscope images of cement and four standard images, and the method is compared with six other classical and novel evolutionary algorithms: the genetic algorithm, particle swarm optimization, differential evolution, the ant lion optimization algorithm, the whale optimization algorithm, and the salp swarm algorithm. Several indicators, including the average fitness value, standard deviation, peak signal-to-noise ratio, and the structural similarity index, are used as evaluation criteria. The experimental results show that the proposed method prevails over the other algorithms on most indicators and can segment cement scanning electron microscope images effectively.
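
    A common way to formulate the fitness in entropy-based multilevel thresholding is to split the grayscale histogram at the candidate thresholds and sum the Renyi entropies of the normalized class distributions. The Python sketch below follows that standard formulation; the entropy order alpha, the histogram handling, and the function name are assumptions for illustration and may differ from the exact definition used in the paper.

        import numpy as np

        def renyi_fitness(hist, thresholds, alpha=0.7):
            # hist: 256-bin grayscale histogram; thresholds: candidate gray levels.
            p = hist / hist.sum()                      # normalize histogram to probabilities
            edges = [0] + sorted(thresholds) + [len(p)]
            total = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                w = p[lo:hi].sum()                     # class probability mass
                if w <= 0:
                    continue
                q = p[lo:hi] / w                       # within-class distribution
                total += np.log((q[q > 0] ** alpha).sum()) / (1.0 - alpha)
            return total                               # the optimizer maximizes this value

        # Example: score two candidate thresholds on a random 8-bit image.
        img = np.random.randint(0, 256, size=(64, 64))
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        print(renyi_fitness(hist.astype(float), [85, 170]))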

    Optimizing production scheduling of steel plate hot rolling for economic load dispatch under time-of-use electricity pricing

    Time-of-Use (TOU) electricity pricing provides an opportunity for industrial users to cut electricity costs. Although many methods for Economic Load Dispatch (ELD) under TOU pricing have been proposed for continuous industrial processes, batch-type processes remain difficult because their power load units are not directly adjustable and depend nonlinearly on production planning and scheduling. In this paper, for hot rolling, a typical batch-type and energy-intensive process in the steel industry, a production scheduling optimization model for ELD under TOU pricing is proposed, in which the objective is to minimize electricity costs while accounting for penalties caused by jumps between adjacent slabs. An NSGA-II-based multi-objective production scheduling algorithm is developed to obtain Pareto-optimal solutions, and TOPSIS-based multi-criteria decision-making is then performed to recommend an optimal solution and facilitate field operation. Experimental results and analyses show that the proposed method cuts electricity costs in production, especially when a certain increase in the penalty score is allowed. Further analyses show that the proposed method also contributes to peak-load regulation of the power grid. Comment: 13 pages, 6 figures, 4 tables
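
    To make the decision-making step concrete, the Python sketch below applies a standard TOPSIS ranking to a small Pareto front such as one produced by a multi-objective optimizer. The two criteria (electricity cost and jump penalty), the equal weights, and the example numbers are assumptions for illustration, not values from the paper.

        import numpy as np

        def topsis(matrix, weights, benefit):
            # matrix: alternatives x criteria; benefit[j] is True if larger is better.
            norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))        # vector normalization
            v = norm * weights
            ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))   # best value per criterion
            anti = np.where(benefit, v.min(axis=0), v.max(axis=0))    # worst value per criterion
            d_best = np.sqrt(((v - ideal) ** 2).sum(axis=1))
            d_worst = np.sqrt(((v - anti) ** 2).sum(axis=1))
            closeness = d_worst / (d_best + d_worst)
            return closeness.argsort()[::-1]                          # indices, best alternative first

        # Assumed Pareto front: columns = [electricity cost, jump penalty], both to be minimized.
        front = np.array([[1020.0, 14.0], [980.0, 22.0], [1100.0, 9.0]])
        ranking = topsis(front, weights=np.array([0.5, 0.5]), benefit=np.array([False, False]))
        print(ranking)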

    Optimized pulses for the control of uncertain qubits

    Constructing high-fidelity control fields that are robust to control, system, and/or surrounding-environment uncertainties is a crucial objective for quantum information processing. Using the two-state Landau-Zener model for illustrative simulations of a controlled qubit, we generate optimal controls for π/2- and π-pulses, and investigate their inherent robustness to uncertainty in the magnitude of the drift Hamiltonian. Next, we construct a quantum-control protocol to improve system-drift robustness by combining environment-decoupling pulse criteria and optimal control theory for unitary operations. By perturbatively expanding the unitary time-evolution operator for an open quantum system, previous analysis of environment-decoupling control pulses has calculated explicit control-field criteria to suppress environment-induced errors up to (but not including) third order from π/2- and π-pulses. We systematically integrate these criteria with optimal control theory, incorporating an estimate of the uncertain parameter, to produce improvements in gate fidelity and robustness, demonstrated via a numerical example based on double quantum dot qubits. For the qubit model used in this work, post facto analysis of the resulting controls suggests that realistic control-field fluctuations and noise may contribute just as significantly to gate errors as system and environment fluctuations. Comment: 38 pages, 15 figures, RevTeX 4.1, minor modifications to the previous version
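
    As a rough illustration of probing a pulse's robustness to drift uncertainty in a two-level model, the Python sketch below computes the fidelity of a rectangular π-pulse about x as the drift detuning is varied. The Hamiltonian convention, the rectangular pulse shape, and the trace-based fidelity measure are assumptions for illustration; the paper's optimized, environment-decoupling pulses are not reproduced here.

        import numpy as np
        from scipy.linalg import expm

        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)

        def rect_pulse_unitary(detuning, rabi_amp, duration):
            # Constant (rectangular) pulse under H = (detuning/2) sz + (rabi/2) sx.
            H = 0.5 * detuning * sz + 0.5 * rabi_amp * sx
            return expm(-1j * H * duration)

        def gate_fidelity(U, target):
            # Normalized trace fidelity |Tr(target^dagger U)|^2 / d^2 for d = 2.
            return abs(np.trace(target.conj().T @ U)) ** 2 / 4.0

        target_pi = expm(-1j * (np.pi / 2) * sx)           # ideal pi rotation about x
        for delta in [0.0, 0.05, 0.10]:                    # assumed drift-detuning uncertainty
            U = rect_pulse_unitary(detuning=delta, rabi_amp=1.0, duration=np.pi)
            print(delta, round(gate_fidelity(U, target_pi), 5))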