
    Designing Competent Mutation Operators via Probabilistic Model Building of Neighborhoods

    This paper presents a competent selectomutative genetic algorithm (GA) that adapts linkage and solves hard problems quickly, reliably, and accurately. A probabilistic model building process is used to automatically identify the key building blocks (BBs) of the search problem. The mutation operator uses the probabilistic model of linkage groups to find the best among competing building blocks. The competent selectomutative GA successfully solves additively separable problems of bounded difficulty, requiring only a subquadratic number of function evaluations. The results show that for additively separable problems the probabilistic model building BB-wise mutation scales as O(2^k m^{1.5}) and requires O(k^{0.5} log m) fewer function evaluations than its selectorecombinative counterpart, confirming theoretical results reported elsewhere (Sastry & Goldberg, 2004). Comment: Genetic and Evolutionary Computation Conference (GECCO-2004)
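
    A minimal sketch of the BB-wise mutation idea described above, assuming the linkage groups have already been identified by the probabilistic model; names and the binary maximization setting are illustrative, not the authors' implementation. Each pass enumerates the 2^k settings of every one of the m building blocks and keeps the best, which is where the 2^k m factor in the scaling comes from.

        # Hedged sketch: greedy BB-wise mutation over known linkage groups.
        # Assumes a binary encoding and a maximization problem.
        import itertools

        def bb_wise_mutation(individual, linkage_groups, fitness):
            """individual: list of 0/1 genes; linkage_groups: list of index tuples."""
            best = list(individual)
            best_fit = fitness(best)
            for group in linkage_groups:                       # m building blocks
                for combo in itertools.product([0, 1], repeat=len(group)):  # 2^k settings
                    trial = list(best)
                    for idx, allele in zip(group, combo):
                        trial[idx] = allele
                    f = fitness(trial)
                    if f > best_fit:                           # keep the best competing BB
                        best, best_fit = trial, f
            return best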

    Effective linkage learning using low-order statistics and clustering

    The adoption of probabilistic models for the best individuals found so far is a powerful approach in evolutionary computation. Increasingly complex models have been used by estimation of distribution algorithms (EDAs), often resulting in better effectiveness at finding the global optima of hard optimization problems. Supervised and unsupervised learning of Bayesian networks are very effective options, since those models are able to capture high-order interactions among the variables of a problem. Diversity preservation, through niching techniques, has also been shown to be very important for identifying the problem structure as well as for keeping several global optima. Recently, clustering was evaluated as an effective niching technique for EDAs, but the performance of simpler low-order EDAs was not shown to be much improved by clustering, except for some simple multimodal problems. This work proposes and evaluates a combination operator guided by a measure from information theory which allows a clustered low-order EDA to effectively solve a comprehensive range of benchmark optimization problems. Comment: Submitted to IEEE Transactions on Evolutionary Computation
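
    A hedged sketch of one generation of a clustered low-order (univariate) EDA of the kind discussed above; the information-theoretic combination operator proposed in the paper is not reproduced here, and the clustering, truncation, and array conventions are illustrative.

        # Sketch: select the best individuals, cluster them (niching), fit a
        # univariate marginal model per cluster, and sample offspring from it.
        import numpy as np
        from sklearn.cluster import KMeans

        def clustered_umda_step(pop, fitness, n_clusters=4, truncation=0.5, rng=None):
            rng = rng or np.random.default_rng()
            scores = np.array([fitness(x) for x in pop])
            sel = pop[np.argsort(scores)[::-1][: int(truncation * len(pop))]]  # best individuals
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(sel)
            offspring = []
            for c in range(n_clusters):                     # one low-order model per niche
                members = sel[labels == c]
                if len(members) == 0:
                    continue
                p = members.mean(axis=0)                    # per-gene marginal frequencies
                n_new = len(pop) * len(members) // len(sel)
                offspring.append((rng.random((n_new, pop.shape[1])) < p).astype(int))
            return np.vstack(offspring)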

    Let's Get Ready to Rumble: Crossover Versus Mutation Head to Head

    This paper analyzes the relative advantages of crossover and mutation on a class of deterministic and stochastic additively separable problems. The study assumes that the recombination and mutation operators have knowledge of the building blocks (BBs) and effectively exchange or search among competing BBs. Facetwise models of convergence time and population sizing have been used to determine the scalability of each algorithm. The analysis shows that for additively separable deterministic problems, BB-wise mutation is more efficient than crossover, while crossover outperforms mutation on additively separable problems perturbed with additive Gaussian noise. The results show that the speed-up of using BB-wise mutation on deterministic problems is O(k^{0.5} log m), where k is the BB size and m is the number of BBs. Likewise, the speed-up of using crossover on stochastic problems with fixed noise variance is O(m k^{0.5} log m). Comment: Genetic and Evolutionary Computation Conference (GECCO-2004)
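
    Written as speed-up ratios (notation assumed here for readability; the speed-up is the ratio of the number of function evaluations n_{fe} each algorithm needs to reach the target solution quality), the two results quoted above read:

        \eta_{\mathrm{det}} = \frac{n_{fe}(\text{crossover})}{n_{fe}(\text{BB-wise mutation})} = O\!\left(\sqrt{k}\,\log m\right),
        \qquad
        \eta_{\mathrm{noisy}} = \frac{n_{fe}(\text{BB-wise mutation})}{n_{fe}(\text{crossover})} = O\!\left(m\sqrt{k}\,\log m\right).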

    Efficiency Enhancement of Probabilistic Model Building Genetic Algorithms

    This paper presents two different efficiency-enhancement techniques for probabilistic model building genetic algorithms. The first technique proposes the use of a mutation operator that performs local search in the sub-solution neighborhood identified through the probabilistic model. The second technique proposes building and using an internal probabilistic model of the fitness along with the probabilistic model of variable interactions. The fitness values of some offspring are estimated using the probabilistic model, thereby avoiding computationally expensive function evaluations. The scalability of the aforementioned techniques is analyzed using facetwise models for convergence time and population sizing. The speed-up obtained by each of the methods is predicted and verified with empirical results. The results show that for additively separable problems the competent mutation operator requires O(k^{0.5} log m)--where k is the building-block size and m is the number of building blocks--fewer function evaluations than its selectorecombinative counterpart. The results also show that the use of an internal probabilistic fitness model reduces the required number of function evaluations to as low as 1--10% and yields a speed-up of 2--50. Comment: Optimization by Building and Using Probabilistic Models, Workshop at the 2004 Genetic and Evolutionary Computation Conference
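
    A hedged sketch of the evaluation-relaxation idea: only a fraction of the offspring receive true (expensive) evaluations, and a surrogate fitted on those is used to score the rest. A plain regression model stands in here for the paper's probabilistic fitness model, and all names and fractions are illustrative.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def evaluate_with_surrogate(offspring, true_fitness, eval_fraction=0.1, rng=None):
            """offspring: 2D array (population x genes); true_fitness: expensive callable."""
            rng = rng or np.random.default_rng()
            n = len(offspring)
            idx = rng.choice(n, size=max(1, int(eval_fraction * n)), replace=False)
            y_true = np.array([true_fitness(offspring[i]) for i in idx])  # expensive calls
            surrogate = LinearRegression().fit(offspring[idx], y_true)    # cheap fitness model
            scores = surrogate.predict(offspring)                         # estimated fitness
            scores[idx] = y_true                                          # keep exact values
            return scores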

    A Fast Propagation Method for the Helmholtz equation

    A fast method is proposed for solving the high-frequency Helmholtz equation. The building block of the new fast method is an overlapping source transfer domain decomposition method for layered media, which is an extension of the source transfer domain decomposition method proposed by Chen and Xiang \cite{Chen2013a,Chen2013b}. The new fast method contains a setup phase and a solving phase. In the setup phase, the computational domain is decomposed hierarchically into many subdomains of different levels, and the mappings from incident traces to field traces on all the subdomains are set up bottom-up. In the solving phase, the local problems on the subdomains with restricted sources are first solved on the bottom level, then the wave propagates on the boundaries of all the subdomains bottom-up, and finally the local solutions on all the subdomains are summed up top-down. The total computational cost of the new fast method is O(n^{\frac{3}{2}} \log n) for the 2D problem. Numerical experiments show that with the new fast method, Helmholtz equations with half a billion unknowns can be solved efficiently on massively parallel machines. Comment: 20 pages, 11 figures

    On the honeycomb conjecture and the Kepler problem

    This paper views the honeycomb conjecture and the Kepler problem essentially as extreme value problems and solves them by partitioning 2-space and 3-space into building blocks and determining those blocks that have the universal extreme values that one needs. More precisely, we prove two results. First, we prove that the regular hexagons are the only 2-dim blocks that have unit area and the least perimeter (or contain a unit circle and have the least area) and tile the plane. Second, we prove that the rhombic dodecahedron and the rhombus-isosceles trapezoidal dodecahedron are the only two 3-dim blocks that contain a unit sphere and have the least volume that can fill 3-space without either overlapping or leaving gaps. Finally, the Kepler conjecture can also be proved to be true by introducing the concept of the minimum 2-dim and 3-dim Kepler building blocks. Comment: 20 pages, 14 figures. Note: the title has been changed
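
    For reference, the standard extreme values at stake (the honeycomb bound for a unit-area cell and the Kepler density obtained from the rhombic dodecahedral cell circumscribing a unit sphere) are:

        \text{perimeter of a unit-area regular hexagon} = 2\sqrt[4]{12} \approx 3.7224,
        \qquad
        \text{volume of a rhombic dodecahedron circumscribing a unit sphere} = 4\sqrt{2} \approx 5.6569,

    so the corresponding sphere-packing density is \frac{4\pi/3}{4\sqrt{2}} = \frac{\pi}{\sqrt{18}} \approx 0.7405.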

    A Random Sample Partition Data Model for Big Data Analysis

    Big data sets must be carefully partitioned into statistically similar data subsets that can be used as representative samples for big data analysis tasks. In this paper, we propose the random sample partition (RSP) data model to represent a big data set as a set of non-overlapping data subsets, called RSP data blocks, where each RSP data block has a probability distribution similar to that of the whole big data set. Under this data model, efficient block-level sampling is used to randomly select RSP data blocks, replacing expensive record-level sampling to select sample data from a big distributed data set on a computing cluster. We show how RSP data blocks can be employed to estimate statistics of a big data set and to build models which are equivalent to those built from the whole big data set. In this approach, analysis of a big data set becomes analysis of a few RSP data blocks which have been generated in advance on the computing cluster. Therefore, the new method for data analysis based on RSP data blocks is scalable to big data. Comment: 9 pages, 7 figures
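
    A minimal sketch of the RSP idea under stated assumptions (a one-dimensional in-memory data set and illustrative block counts): shuffle the records once, cut them into equal-size blocks, then estimate a statistic from a few randomly chosen blocks instead of scanning everything.

        import numpy as np

        def make_rsp_blocks(data, n_blocks, rng=None):
            rng = rng or np.random.default_rng()
            perm = rng.permutation(len(data))            # global shuffle makes every block
            return np.array_split(data[perm], n_blocks)  # a random sample of the data set

        def estimate_mean(blocks, n_sample_blocks, rng=None):
            rng = rng or np.random.default_rng()
            chosen = rng.choice(len(blocks), size=n_sample_blocks, replace=False)
            return np.mean([blocks[i].mean() for i in chosen])  # block-level sampling

        data = np.random.default_rng(0).normal(loc=3.0, size=1_000_000)
        blocks = make_rsp_blocks(data, n_blocks=100)
        print(estimate_mean(blocks, n_sample_blocks=5), data.mean())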

    Randomized Block Coordinate Descent for Online and Stochastic Optimization

    Two types of low cost-per-iteration gradient descent methods have been extensively studied in parallel. One is online or stochastic gradient descent (OGD/SGD), and the other is randomized block coordinate descent (RBCD). In this paper, we combine the two types of methods and propose online randomized block coordinate descent (ORBCD). At each iteration, ORBCD only computes the partial gradient of one block coordinate on one mini-batch of samples. ORBCD is well suited for the composite minimization problem where one function is the average of the losses of a large number of samples and the other is a simple regularizer defined on high-dimensional variables. We show that the iteration complexity of ORBCD has the same order as that of OGD or SGD. For strongly convex functions, by reducing the variance of stochastic gradients, we show that ORBCD can converge at a geometric rate in expectation, matching the convergence rate of SGD with variance reduction and of RBCD. Comment: The errors in the proof of ORBCD with variance reduction have been corrected
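
    A hedged sketch of one ORBCD step on an l2-regularized least-squares problem; the problem, step size, and block layout are illustrative, and the regularizer is handled through its gradient here rather than a proximal step.

        import numpy as np

        def orbcd_step(x, A, b, blocks, step, batch_size, lam, rng):
            i = rng.choice(len(A), size=batch_size, replace=False)   # one mini-batch
            j = blocks[rng.integers(len(blocks))]                    # one coordinate block
            residual = A[i] @ x - b[i]
            grad_j = A[i][:, j].T @ residual / batch_size            # partial gradient only
            x = x.copy()
            x[j] -= step * (grad_j + lam * x[j])                     # update one block
            return x

        rng = np.random.default_rng(0)
        A = rng.normal(size=(1000, 20)); x_true = rng.normal(size=20)
        b = A @ x_true + 0.01 * rng.normal(size=1000)
        blocks = np.array_split(np.arange(20), 5)
        x = np.zeros(20)
        for _ in range(5000):
            x = orbcd_step(x, A, b, blocks, step=0.01, batch_size=32, lam=1e-3, rng=rng)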

    Purely algebraic domain decomposition methods for the incompressible Navier-Stokes equations

    In the context of non-overlapping domain decomposition methods, several algebraic approximations of the Dirichlet-to-Neumann (DtN) map are proposed in [F. X. Roux, et al., Algebraic approximation of Dirichlet-to-Neumann maps for the equations of linear elasticity, Comput. Methods Appl. Mech. Engrg., 195, 2006, 3742-3759]. For the case of non-overlapping domains, approximations of the DtN map are analogous to the approximations of the Schur complements in an incomplete multilevel block factorization. In this work, several original and purely algebraic (based on the graph of the matrix) domain decomposition techniques are investigated for the steady-state incompressible Navier-Stokes equations defined on uniform and stretched grids at low viscosity. Moreover, the proposed methods are highly parallel during both the setup and the application phase. Spectral and numerical analyses of the methods are also presented. Comment: Introduction rewritten; comparison with state-of-the-art methods added; figure on the overlapping case added; complete algorithms added to build and solve with the preconditioners; tests with Reynolds number 3000 added; some observations on the block Jacobi method added in the analysis section
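
    A hedged sketch of the simplest purely algebraic, non-overlapping block preconditioner (block Jacobi): partition the unknowns using nothing but the matrix, factor the diagonal blocks, and apply the local solves inside GMRES. It stands in for the baseline mentioned in the comment above, not for the algebraic DtN/Schur-complement approximations, and the Poisson-like test matrix is a placeholder for the Navier-Stokes systems.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def block_jacobi_preconditioner(A, n_blocks):
            n = A.shape[0]
            blocks = np.array_split(np.arange(n), n_blocks)              # algebraic partition
            factors = [spla.splu(sp.csc_matrix(A[b, :][:, b])) for b in blocks]
            def apply(r):
                z = np.zeros_like(r)
                for b, lu in zip(blocks, factors):
                    z[b] = lu.solve(r[b])                                # local subdomain solves
                return z
            return spla.LinearOperator(A.shape, matvec=apply)

        n = 50                                                           # small 2D Laplacian test
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
        A = sp.kronsum(T, T).tocsr()
        rhs = np.ones(A.shape[0])
        M = block_jacobi_preconditioner(A, n_blocks=8)
        x, info = spla.gmres(A, rhs, M=M)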

    A Massively Parallel Algebraic Multigrid Preconditioner based on Aggregation for Elliptic Problems with Heterogeneous Coefficients

    This paper describes a massively parallel algebraic multigrid method based on non-smoothed aggregation. It is especially suited for solving heterogeneous elliptic problems, as it uses a greedy heuristic algorithm for the aggregation that detects changes in the coefficients and prevents aggregation across them. Using decoupled aggregation on each process, with data agglomeration onto fewer processes on the coarse level, it weakly scales well in terms of both total time to solution and time per iteration to nearly 300,000 cores. Because of the simple piecewise constant interpolation between the levels, its memory consumption is low and allows solving problems with more than 100,000,000,000 degrees of freedom. Comment: 22 pages, 1 figure
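
    A hedged sketch of one plain (non-smoothed) aggregation pass of the kind described above: each unseeded unknown is grouped with its strongly coupled neighbours so that aggregates do not cross large coefficient jumps, and the piecewise-constant prolongator P gives the coarse operator P^T A P. The strength threshold and the simple seeding order are illustrative, not the paper's heuristic.

        import numpy as np
        import scipy.sparse as sp

        def plain_aggregation(A, theta=0.25):
            A = A.tocsr()
            n = A.shape[0]
            agg = -np.ones(n, dtype=int)                 # aggregate id per unknown (-1 = free)
            n_agg = 0
            for i in range(n):
                if agg[i] >= 0:
                    continue
                row = A.getrow(i)
                diag = abs(A[i, i])
                # strong couplings: |a_ij| >= theta*|a_ii| (blocks aggregation across jumps)
                strong = [j for j, v in zip(row.indices, row.data)
                          if j != i and abs(v) >= theta * diag and agg[j] < 0]
                agg[[i] + strong] = n_agg
                n_agg += 1
            P = sp.csr_matrix((np.ones(n), (np.arange(n), agg)), shape=(n, n_agg))
            return P, (P.T @ A @ P).tocsr()              # piecewise-constant coarse operator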