
    Improved bounds and algorithms for graph cuts and network reliability

    Karger (SIAM Journal on Computing, 1999) developed the first fully polynomial approximation scheme to estimate the probability that a graph $G$ becomes disconnected, given that its edges are removed independently with probability $p$. This algorithm runs in $n^{5+o(1)} \epsilon^{-3}$ time to obtain an estimate within relative error $\epsilon$. We improve this run-time through algorithmic and graph-theoretic advances. First, there is a certain key sub-problem encountered by Karger for which a generic estimation procedure is employed; we show that this sub-problem has a special structure for which a much more efficient algorithm can be used. Second, we show better bounds on the number of edge cuts which are likely to fail. Here, Karger's analysis uses a variety of bounds for various graph parameters; we show that these bounds cannot be simultaneously tight. We describe a new graph parameter, which simultaneously influences all the bounds used by Karger, and obtain much tighter estimates of the cut structure of $G$. These techniques allow us to improve the runtime to $n^{3+o(1)} \epsilon^{-2}$; our results also rigorously prove certain experimental observations of Karger & Tai (Proc. ACM-SIAM Symposium on Discrete Algorithms, 1997). Our rigorous proofs are motivated by certain non-rigorous differential-equation approximations which, however, provably track the worst-case trajectories of the relevant parameters. A key driver of Karger's approach (and other cut-related results) is a bound on the number of small cuts: we improve these estimates when the min-cut size is "small" and odd, augmenting, in part, a result of Bixby (Bulletin of the AMS, 1974).
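    The quantity being approximated can be illustrated with a naive Monte Carlo sketch (this is not Karger's FPRAS, and the function names and graph are illustrative only; direct sampling of this kind breaks down precisely in the rare-failure regime that the FPRAS is designed to handle):

    ```python
    import random

    def is_connected(n, edges):
        """Union-find connectivity check on the surviving edges."""
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        components = n
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
        return components == 1

    def naive_unreliability(n, edges, p, trials=100_000, seed=0):
        """Estimate Pr[G disconnects] when each edge fails independently
        with probability p, by direct sampling of edge-failure patterns."""
        rng = random.Random(seed)
        failures = 0
        for _ in range(trials):
            surviving = [e for e in edges if rng.random() >= p]
            if not is_connected(n, surviving):
                failures += 1
        return failures / trials
    ```

    When the disconnection probability is tiny, this estimator needs prohibitively many trials to see even one failure, which is why the FPRAS instead enumerates and handles the small cuts that dominate the failure probability.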

    Analysis of pivot sampling in dual-pivot Quicksort: A holistic analysis of Yaroslavskiy's partitioning scheme

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00453-015-0041-7

    The new dual-pivot Quicksort by Vladimir Yaroslavskiy, used in Oracle's Java runtime library since version 7, features intriguing asymmetries. They make a basic variant of this algorithm use fewer comparisons than classic single-pivot Quicksort. In this paper, we extend the analysis to the case where the two pivots are chosen as fixed order statistics of a random sample. Surprisingly, dual-pivot Quicksort then needs more comparisons than a corresponding version of classic Quicksort, so it is clear that counting comparisons is not sufficient to explain the running-time advantages observed for Yaroslavskiy's algorithm in practice. Consequently, we take a more holistic approach and also give the precise leading term of the average number of swaps, the number of executed Java Bytecode instructions, and the number of scanned elements, a new simple cost measure that approximates I/O costs in the memory hierarchy. We determine optimal order statistics for each of the cost measures. It turns out that the asymmetries in Yaroslavskiy's algorithm render pivots with a systematic skew more efficient than the symmetric choice. Moreover, we finally have a convincing explanation for the success of Yaroslavskiy's algorithm in practice: compared with corresponding versions of classic single-pivot Quicksort, dual-pivot Quicksort needs significantly fewer I/Os, both with and without pivot sampling.

    Peer Reviewed. Postprint (author's final draft).
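    A minimal sketch of the dual-pivot scheme under discussion, transcribed to Python for readability (the production version in the Java runtime is in Java and includes additional engineering such as insertion-sort cutoffs and pivot sampling, which are omitted here):

    ```python
    def dual_pivot_quicksort(a, lo=0, hi=None):
        """Sort a[lo..hi] in place using two pivots p <= q, partitioning
        the remaining elements into < p, between p and q, and > q."""
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        if a[lo] > a[hi]:
            a[lo], a[hi] = a[hi], a[lo]
        p, q = a[lo], a[hi]          # the two pivots, p <= q
        l = lo + 1                   # a[lo+1 .. l-1]  are < p
        g = hi - 1                   # a[g+1 .. hi-1]  are > q
        k = l                        # a[l .. k-1]     are in [p, q]
        while k <= g:
            if a[k] < p:
                a[k], a[l] = a[l], a[k]
                l += 1
            elif a[k] > q:
                while a[g] > q and k < g:
                    g -= 1
                a[k], a[g] = a[g], a[k]
                g -= 1
                if a[k] < p:
                    a[k], a[l] = a[l], a[k]
                    l += 1
            k += 1
        l -= 1
        g += 1
        a[lo], a[l] = a[l], a[lo]    # move pivots into final position
        a[hi], a[g] = a[g], a[hi]
        dual_pivot_quicksort(a, lo, l - 1)
        dual_pivot_quicksort(a, l + 1, g - 1)
        dual_pivot_quicksort(a, g + 1, hi)
    ```

    The asymmetry analyzed in the paper is visible here: elements larger than q trigger an extra inner scan and a possible second swap, while elements smaller than p are handled with a single swap.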

    Desorption of alkali atoms from 4He nanodroplets

    The dynamics following the photoexcitation of Na and Li atoms located on the surface of helium nanodroplets has been investigated in a joint experimental and theoretical study. Photoelectron spectroscopy has revealed that excitation of the alkali atoms via the ns → (n+1)s transition leads to the desorption of these atoms. The mean kinetic energy of the desorbed atoms, as determined by ion imaging, shows a linear dependence on excitation frequency. These experimental findings are analyzed within a three-dimensional, time-dependent density functional approach for the helium droplet combined with a Bohmian dynamics description of the desorbing atom. This hybrid method reproduces well the key experimental observables. The dependence of the observables on the impurity mass is discussed by comparing the results obtained for the 6Li and 7Li isotopes. The calculations show that the desorption of the excited alkali atom is accompanied by the creation of highly non-linear density waves in the helium droplet that propagate at supersonic velocities.

    Space-Efficient DFS and Applications: Simpler, Leaner, Faster

    The problem of space-efficient depth-first search (DFS) is reconsidered. A particularly simple and fast algorithm is presented that, on a directed or undirected input graph $G=(V,E)$ with $n$ vertices and $m$ edges, carries out a DFS in $O(n+m)$ time with $n+\sum_{v\in V_{\ge 3}}\lceil\log_2(d_v-1)\rceil+O(\log n)\le n+m+O(\log n)$ bits of working memory, where $d_v$ is the (total) degree of $v$, for each $v\in V$, and $V_{\ge 3}=\{v\in V\mid d_v\ge 3\}$. A slightly more complicated variant of the algorithm works in the same time with at most $n+(4/5)m+O(\log n)$ bits. It is also shown that a DFS can be carried out in a graph with $n$ vertices and $m$ edges in $O(n+m\log^*\! n)$ time with $O(n)$ bits, or in $O(n+m)$ time with either $O(n\log\log(4+m/n))$ bits or, for arbitrary integer $k\ge 1$, $O(n\log^{(k)}\! n)$ bits. These results among them subsume or improve most earlier results on space-efficient DFS. Some of the new time and space bounds are shown to extend to applications of DFS such as the computation of cut vertices, bridges, biconnected components and 2-edge-connected components in undirected graphs.
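    For orientation, the baseline these bounds improve on is the textbook iterative DFS, whose explicit stack of vertex indices can cost $\Theta(n\log n)$ bits in the worst case (a sketch, assuming an adjacency-list representation; this is the standard algorithm, not the paper's space-efficient one):

    ```python
    def dfs_order(n, adj, root=0):
        """Textbook iterative DFS returning vertices in visit order.
        The explicit stack of vertex indices is what costs Theta(n log n)
        bits in the worst case; the paper's algorithms replace it with
        roughly n + m bits (or O(n) bits at a log* time penalty)."""
        visited = [False] * n
        order = []
        stack = [root]
        while stack:
            v = stack.pop()
            if visited[v]:
                continue
            visited[v] = True
            order.append(v)
            # push neighbors in reverse so they are visited in adjacency order
            for w in reversed(adj[v]):
                if not visited[w]:
                    stack.append(w)
        return order
    ```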

    Fluctuations of fragment observables

    This contribution presents a review of our present theoretical as well as experimental knowledge of different fluctuation observables relevant to nuclear multifragmentation. The possible connection between the presence of a fluctuation peak and the occurrence of a phase transition or a critical phenomenon is critically analyzed. Many different phenomena can lead both to the creation and to the suppression of a fluctuation peak. In particular, the role of constraints due to conservation laws and to data sorting is shown to be essential. From the experimental point of view, a comparison of the available fragmentation data reveals that there is a good agreement between different data sets of basic fluctuation observables, if the fragmenting source is of comparable size. This compatibility suggests that the fragmentation process is largely independent of the reaction mechanism (central versus peripheral collisions, symmetric versus asymmetric systems, light-ion versus heavy-ion induced reactions). Configurational energy fluctuations, which may give important information on the heat capacity of the fragmenting system at the freeze-out stage, are not fully compatible among different data sets and require further analysis to properly account for Coulomb effects and secondary decays. Some basic theoretical questions, concerning the interplay between the dynamics of the collision and the fragmentation process, and the cluster definition in dense and hot media, are still open and are addressed at the end of the paper. A comparison with realistic models and/or a quantitative analysis of the fluctuation properties will be needed to clarify in the near future the nature of the transition observed from compound nucleus evaporation to multi-fragment production.

    Comment: Contribution to WCI (World Consensus Initiative) Book "Dynamics and Thermodynamics with Nuclear Degrees of Freedom", to appear in European Physics Journal A as part of the Topical Volume. 9 pages, 12 figures.