
    Cycles in the burnt pancake graphs

    The pancake graph $P_n$ is the Cayley graph of the symmetric group $S_n$ on $n$ elements generated by prefix reversals. $P_n$ has been shown to have properties that make it a useful network scheme for parallel processors. For example, it is $(n-1)$-regular, vertex-transitive, and one can embed cycles in it of length $\ell$ with $6 \leq \ell \leq n!$. The burnt pancake graph $BP_n$, which is the Cayley graph of the group of signed permutations $B_n$ using prefix reversals as generators, has similar properties. Indeed, $BP_n$ is $n$-regular and vertex-transitive. In this paper, we show that $BP_n$ has every cycle of length $\ell$ with $8 \leq \ell \leq 2^n n!$. The proof given is a constructive one that utilizes the recursive structure of $BP_n$. We also present a complete characterization of all the $8$-cycles in $BP_n$ for $n \geq 2$, which are the smallest cycles embeddable in $BP_n$, by presenting their canonical forms as products of the prefix reversal generators. Comment: Added a reference, clarified some definitions, fixed some typos. 42 pages, 9 figures, 20 pages of appendices.
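
    To make the generator set concrete, here is a minimal Python sketch (illustrative only, not taken from the paper) of a signed prefix reversal acting on a signed permutation: the first $k$ entries are reversed and their signs are flipped. Listing the $n$ images of a single vertex illustrates why $BP_n$ is $n$-regular.

```python
def signed_prefix_reversal(perm, k):
    """Reverse the first k entries of a signed permutation and flip their signs.

    perm is a tuple of nonzero integers, e.g. (1, -3, 2) represents an element
    of B_3; k ranges over 1..n, giving the n prefix reversal generators of BP_n.
    """
    prefix = tuple(-x for x in reversed(perm[:k]))
    return prefix + perm[k:]

# The n neighbours of the identity vertex in BP_3.
identity = (1, 2, 3)
neighbours = [signed_prefix_reversal(identity, k) for k in range(1, 4)]
print(neighbours)  # [(-1, 2, 3), (-2, -1, 3), (-3, -2, -1)]
```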

    Average number of flips in pancake sorting

    We are given a stack of pancakes of different sizes and the only allowed operation is to take several pancakes from the top and flip them. The unburnt version requires the pancakes to be sorted by their sizes at the end, while in the burnt version they additionally need to be oriented burnt-side down. We present an algorithm whose average number of flips needed to sort a stack of $n$ burnt pancakes is $7n/4 + O(1)$, and a randomized algorithm for the unburnt version with at most $17n/12 + O(1)$ flips on average. In addition, we show that in the burnt version, the average number of flips of any algorithm is at least $n + \Omega(n/\log n)$, and conjecture that some algorithm can reach $n + \Theta(n/\log n)$. We also slightly increase the lower bound on $g(n)$, the minimum number of flips needed to sort the worst stack of $n$ burnt pancakes. This bound, together with the upper bound found by Heydari and Sudborough in 1997, gives the exact number of flips to sort the previously conjectured worst stack $-I_n$ for $n \equiv 3 \pmod{4}$ and $n \geq 15$. Finally, we present exact values of $f(n)$ up to $n = 19$ and of $g(n)$ up to $n = 17$, and disprove a conjecture of Cohen and Blum by showing that the burnt stack $-I_{15}$ is not the worst one for $n = 15$. Comment: 21 pages, new computational results for unburnt pancakes (up to n=19).
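
    For intuition about what a flip does and how flips are counted, the sketch below implements the classic bring-the-largest-to-the-top strategy for burnt pancakes. It uses at most 3 flips per pancake (so at most $3n$ in total) and is emphatically not the $7n/4$-average algorithm of the paper; the names and representation are illustrative only.

```python
def flip(stack, k):
    """Reverse the top k pancakes and toggle their burnt sides."""
    return [-x for x in reversed(stack[:k])] + stack[k:]

def naive_burnt_sort(stack):
    """Sort a burnt stack (top = index 0, positive = burnt side down).

    Classic strategy: place pancake m at position m-1 with at most 3 flips,
    for m = n, n-1, ..., 1. This gives at most 3n flips in total, far from
    the 7n/4 + O(1) average of the paper, but it shows the flip accounting.
    """
    stack = list(stack)
    flips = 0
    for m in range(len(stack), 0, -1):      # size of the still-unsorted prefix
        if stack[m - 1] == m:               # already in place, burnt side down
            continue
        i = next(j for j in range(m) if abs(stack[j]) == m)
        if i != 0:                          # bring pancake m to the top
            stack = flip(stack, i + 1); flips += 1
        if stack[0] > 0:                    # ensure it lands burnt side down
            stack = flip(stack, 1); flips += 1
        stack = flip(stack, m); flips += 1  # send it to the bottom of the prefix
    return stack, flips

print(naive_burnt_sort([2, -1, 3]))  # ([1, 2, 3], 2)
```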

    Algorithmic and Statistical Perspectives on Large-Scale Data Analysis

    In recent years, ideas from statistics and scientific computing have begun to interact in increasingly sophisticated and fruitful ways with ideas from computer science and the theory of algorithms to aid in the development of improved worst-case algorithms that are useful for large-scale scientific and Internet data analysis problems. In this chapter, I will describe two recent examples: one having to do with selecting good columns or features from a (DNA Single Nucleotide Polymorphism) data matrix, and the other having to do with selecting good clusters or communities from a data graph (representing a social or information network). Both drew on ideas from both areas and may serve as a model for exploiting complementary algorithmic and statistical perspectives in order to solve applied large-scale data analysis problems. Comment: 33 pages. To appear in Uwe Naumann and Olaf Schenk, editors, "Combinatorial Scientific Computing," Chapman and Hall/CRC Press, 201

    Simulation of the evolution of large scale structure elements with adaptive multigrid method

    http://www.ester.ee/record=b1053241~S1*es

    Doctor of Philosophy

    Sea ice can be viewed as a composite material over multiple scales. On the smallest scale, sea ice is viewed as a two-phase composite of ice and brine. On the mesoscale, one may consider pancake ice and slush as a viscoelastic composite. On the larger scale, one may consider the mix of ice floes and water. With this view, a multitude of mathematical tools may be applied to develop novel models of physical sea ice processes. We model fluid and electrical transport by viewing sea ice as a two-phase composite of ice and brine. We may then apply continuum percolation models to study critical behavior, which we have experimentally confirmed. These percolation models suggest that the electrical conductivity and fluid permeability follow universal power-law behavior as a function of brine volume fraction. We apply these results for the electrical conductivity of sea ice to develop an inversion algorithm for surface impedance DC tomography. The algorithm retrieves both sea ice thickness and a layered stratigraphy of the sea ice resistivity, which is useful because resistivity carries information about the internal microstructure of the ice. We also apply network models to the conductivity of sea ice and use similar ideas to quantify the horizontal connectivity of melt ponds. On the larger scale, we study the problem of ocean wave dynamics in the marginal ice zone of the Arctic and Antarctic. We treat the ice and slush as a viscoelastic layer atop an inviscid ocean. Models like these produce dispersion relations which describe wave propagation and attenuation into the ice pack. These dispersion relations depend on knowledge of the effective viscoelasticity of the ice/slush mix, which is a difficult parameter to measure in practice. To get around this, we apply homogenization theory to derive bounds on these parameters in the low-frequency limit. This is accomplished through the derivation of a Stieltjes integral representation, involving a positive measure of a self-adjoint operator, for the effective elasticity tensor of the ice-water composite. We have also developed a simplified wave equation for waves in the ice-water composite.
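
    The universal power-law behaviour mentioned above has, in standard continuum percolation theory, the schematic form below. The symbols and the existence of a sharp threshold are generic percolation assumptions, not notation or values taken from the dissertation.

```latex
% Schematic percolation scaling of a transport coefficient of sea ice:
% k(\phi) is the fluid permeability (or electrical conductivity) as a function
% of the brine volume fraction \phi, \phi_c the critical threshold, and t the
% universal exponent. All symbols are illustrative.
\[
  k(\phi) \;\sim\; k_0\,(\phi - \phi_c)^{t} \quad \text{for } \phi > \phi_c,
  \qquad
  k(\phi) \;=\; 0 \quad \text{for } \phi \le \phi_c .
\]
```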

    Application of HPC in eddy current electromagnetic problem solution

    As engineering problems become more and more advanced, the size of an average model solved by partial differential equations is rapidly growing and, in order to keep simulation times within reasonable bounds, both faster computers and more efficient software implementations are needed. In the first part of this thesis, the full potential of simulation software has been exploited through high-performance parallel computing techniques. In particular, the simulation of induction heating processes is accomplished within reasonable solution times by implementing different parallel direct solvers for large sparse linear systems in the solution process of a commercial software package. The performance of one such solver on shared-memory systems has been remarkably improved by implementing a multithreaded version of the MUMPS (MUltifrontal Massively Parallel Solver) library, which has been tested on benchmark matrices arising from typical induction heating process simulations. A new multithreading approach and a low-rank approximation technique have been implemented and developed by the MUMPS team in Lyon and Toulouse. In the context of a collaboration between the MUMPS team and DII-University of Padova, a preliminary version of these functionalities was tested on induction heating benchmark problems, and a substantial reduction of the computational cost and memory requirements was achieved. In the second part of this thesis, some examples of design methodology by virtual prototyping are described. Complex multiphysics simulations involving electromagnetic, circuit, thermal, and mechanical problems have been performed by exploiting parallel solvers, as developed in the first part of this thesis. Finally, multiobjective stochastic optimization algorithms have been applied to multiphysics 3D model simulations in search of a set of improved induction heating device configurations.
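
    To make the role of a sparse direct solver concrete, here is a minimal sketch using SciPy's built-in sparse LU factorization as a stand-in for MUMPS (the thesis itself works with MUMPS inside a commercial FEM code; the matrix below is an illustrative 1-D stencil, not an induction heating model).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 100_000                                   # illustrative system size
# A tridiagonal Laplacian-like stencil as a stand-in for an FEM system matrix.
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
b = np.ones(n)

lu = splu(A)     # sparse LU factorization (the multifrontal step MUMPS parallelizes)
x = lu.solve(b)  # forward/backward substitution
print(np.linalg.norm(A @ x - b))              # small residual confirms the solve
```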