
    Fringe trees, Crump-Mode-Jagers branching processes and $m$-ary search trees

    This survey studies asymptotics of random fringe trees and extended fringe trees in random trees that can be constructed as family trees of a Crump-Mode-Jagers branching process, stopped at a suitable time. This includes random recursive trees, preferential attachment trees, fragmentation trees, binary search trees and (more generally) $m$-ary search trees, as well as some other classes of random trees. We begin with general results, mainly due to Aldous (1991) and Jagers and Nerman (1984). The general results are applied to fringe trees and extended fringe trees for several particular types of random trees, where the theory is developed in detail. In particular, we consider fringe trees of $m$-ary search trees in detail; this seems to be new. Various applications are given, including degree distribution, protected nodes and maximal clades for various types of random trees. Again, we emphasise results for $m$-ary search trees, and give for example new results on protected nodes in $m$-ary search trees. A separate section surveys results on height, saturation level, typical depth and total path length, due to Devroye (1986), Biggins (1995, 1997) and others. This survey contains well-known basic results together with some additional general results as well as many new examples and applications for various classes of random trees.
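
    To make the fringe-tree notion concrete in the binary search tree case ($m = 2$), the sketch below (illustrative only, not taken from the survey) builds a random BST from a random permutation and compares the empirical distribution of subtree sizes, i.e. fringe-tree sizes, with the classical limit $P(\text{size} = k) = 2/((k+1)(k+2))$ due to Aldous (1991).

```python
import random
from collections import Counter

def bst_subtree_sizes(n):
    """Build a binary search tree from a uniformly random permutation of
    0..n-1 and return the subtree size at every node (one fringe tree per node)."""
    left, right = {}, {}
    keys = random.sample(range(n), n)
    root = keys[0]
    for x in keys[1:]:
        cur = root
        while True:
            branch = left if x < cur else right
            if cur in branch:
                cur = branch[cur]
            else:
                branch[cur] = x
                break
    sizes = {}
    stack = [(root, False)]            # iterative post-order traversal
    while stack:
        v, expanded = stack.pop()
        if expanded:
            sizes[v] = 1 + sizes.get(left.get(v), 0) + sizes.get(right.get(v), 0)
        else:
            stack.append((v, True))
            for c in (left.get(v), right.get(v)):
                if c is not None:
                    stack.append((c, False))
    return sizes.values()

n = 200_000
counts = Counter(bst_subtree_sizes(n))
for k in range(1, 6):
    # empirical fraction of nodes with fringe tree of size k vs. 2/((k+1)(k+2))
    print(k, counts[k] / n, 2 / ((k + 1) * (k + 2)))
```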

    All-Pairs Minimum Cuts in Near-Linear Time for Surface-Embedded Graphs

    For an undirected $n$-vertex graph $G$ with non-negative edge weights, we consider the following type of query: given two vertices $s$ and $t$ in $G$, what is the weight of a minimum $st$-cut in $G$? We solve this problem in preprocessing time $O(n \log^3 n)$ for graphs of bounded genus, giving the first sub-quadratic time algorithm for this class of graphs. Our result also improves by a logarithmic factor a previous algorithm by Borradaile, Sankowski and Wulff-Nilsen (FOCS 2010) that applied only to planar graphs. Our algorithm constructs a Gomory-Hu tree for the given graph, providing a data structure with space $O(n)$ that can answer minimum-cut queries in constant time. The dependence on the genus of the input graph in our preprocessing time is $2^{O(g^2)}$.
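
    The query mechanism is easy to illustrate with a generic Gomory-Hu tree. The sketch below is not the paper's near-linear, genus-aware construction: it uses networkx's generic gomory_hu_tree (which performs $n-1$ max-flow computations) on a made-up toy graph and answers a min-cut query as the lightest edge on the tree path, whereas the paper additionally preprocesses the tree so that each query takes constant time.

```python
import networkx as nx

# Toy weighted graph (capacities are illustrative).
G = nx.Graph()
G.add_weighted_edges_from(
    [(0, 1, 3), (0, 2, 2), (1, 2, 4), (1, 3, 1), (2, 3, 5)], weight="capacity"
)

# Generic Gomory-Hu construction, not the paper's bounded-genus algorithm.
T = nx.gomory_hu_tree(G, capacity="capacity")

def min_st_cut(u, v):
    # The weight of a minimum u-v cut in G equals the minimum edge weight
    # on the unique u-v path in the Gomory-Hu tree T.
    path = nx.shortest_path(T, u, v)
    return min(T[a][b]["weight"] for a, b in zip(path, path[1:]))

print(min_st_cut(0, 3))   # 5: the cheapest cut isolates vertex 0 (3 + 2)
```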

    A Multi Hidden Recurrent Neural Network with a Modified Grey Wolf Optimizer

    Identifying university students' weaknesses results in better learning and can function as an early warning system that enables students to improve. However, the satisfaction level of existing systems is not promising, and new, dynamic hybrid systems are needed to imitate this mechanism. A hybrid system (a modified Recurrent Neural Network with an adapted Grey Wolf Optimizer) is used to forecast students' outcomes. The proposed system would improve instruction by the faculty and enhance the students' learning experiences. The results show that the modified recurrent neural network with an adapted Grey Wolf Optimizer achieves the best accuracy when compared with other models. Comment: 34 pages, published in PLoS ONE.
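
    For readers unfamiliar with the optimizer, the sketch below is a minimal NumPy implementation of the standard Grey Wolf Optimizer; the paper's modified variant and its coupling to a multi-hidden-layer recurrent network are not reproduced here, and the objective, bounds and hyper-parameters are placeholders.

```python
import numpy as np

def gwo(f, dim, bounds=(-5.0, 5.0), wolves=20, iters=200, seed=0):
    """Standard Grey Wolf Optimizer: wolves are pulled toward the three best
    positions (alpha, beta, delta), with exploration decaying over time."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(wolves, dim))          # wolf positions
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]  # three best wolves
        a = 2.0 * (1.0 - t / iters)                      # decreases from 2 to 0
        X_new = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A = 2.0 * a * r1 - a
            C = 2.0 * r2
            D = np.abs(C * leader - X)
            X_new += (leader - A * D) / 3.0              # average of the three pulls
        X = np.clip(X_new, lo, hi)
    best = X[np.argmin(np.apply_along_axis(f, 1, X))]
    return best, f(best)

# Example: minimize the sphere function in 5 dimensions.
print(gwo(lambda x: float(np.sum(x ** 2)), dim=5))
```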

    Boosting with early stopping: Convergence and consistency

    Boosting is one of the most significant advances in machine learning for classification and regression. In its original and computationally flexible version, boosting seeks to empirically minimize a loss function in a greedy fashion. The resulting estimator takes an additive function form and is built iteratively by applying a base estimator (or learner) to updated samples depending on the previous iterations. An unusual regularization technique, early stopping, is employed, based on cross-validation or a test set. This paper studies numerical convergence, consistency and statistical rates of convergence of boosting with early stopping, when it is carried out over the linear span of a family of basis functions. For general loss functions, we prove the convergence of boosting's greedy optimization to the infimum of the loss function over the linear span. Using the numerical convergence result, we find early-stopping strategies under which boosting is shown to be consistent based on i.i.d. samples, and we obtain bounds on the rates of convergence for boosting estimators. Simulation studies are also presented to illustrate the relevance of our theoretical results for providing insights into practical aspects of boosting. As a side product, these results also reveal the importance of restricting the greedy search step sizes, as known in practice through the work of Friedman and others. Moreover, our results lead to a rigorous proof that, for a linearly separable problem, AdaBoost with step size $\epsilon \to 0$ becomes an $L^1$-margin maximizer when left to run to convergence. Comment: Published at http://dx.doi.org/10.1214/009053605000000255 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
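
    As a concrete illustration of the regularization being analysed, the snippet below fits gradient boosting with a deliberately small step size and stops once a held-out validation set stops improving. It uses scikit-learn rather than anything from the paper, and the data set and hyper-parameters are arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier(
    n_estimators=2000,          # upper bound on boosting iterations
    learning_rate=0.05,         # restricted greedy step size
    validation_fraction=0.1,    # held-out set used to decide when to stop
    n_iter_no_change=20,        # stop once validation loss stops improving
    random_state=0,
).fit(X_tr, y_tr)

# Number of iterations actually run before early stopping, and test accuracy.
print(clf.n_estimators_, clf.score(X_te, y_te))
```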

    Using Localised ‘Gossip’ to Structure Distributed Learning

    The idea of a “memetic” spread of solutions through a human culture, in parallel to their development, is applied as a distributed approach to learning. Local parts of a problem are associated with a set of overlapping localities in a space, and solutions are then evolved in those localities. Good solutions are not only crossed with others in the search for better solutions but also propagate across the areas of the problem space where they are relatively successful. Thus the whole population co-evolves solutions with the domains in which they are found to work. This approach is compared to the equivalent global evolutionary computation approach with respect to predicting the occurrence of heart disease in the Cleveland data set. It greatly outperforms the global approach, but the space of attributes within which this evolutionary process occurs can affect its efficiency.
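
    The mechanism can be sketched in a few lines. The following is a hypothetical toy rendering of the idea (overlapping localities evolve solutions locally and “gossip” good ones to neighbours where they score well), not the authors' implementation; the representation, rates and data structures are placeholders.

```python
import random

def gossip_ga(localities, fitness, bits=16, pop=10, generations=100):
    """Toy localised-'gossip' GA. localities: list of (data_subset, neighbour_ids);
    fitness(solution, data_subset) -> float; solutions are bit tuples."""
    pops = [[tuple(random.randint(0, 1) for _ in range(bits)) for _ in range(pop)]
            for _ in localities]
    for _ in range(generations):
        for i, (data, neighbours) in enumerate(localities):
            # Local evolution: cross the two best, mutate, replace the worst.
            pops[i].sort(key=lambda s: fitness(s, data), reverse=True)
            a, b = pops[i][0], pops[i][1]
            cut = random.randrange(1, bits)
            child = tuple(g if random.random() > 0.05 else 1 - g
                          for g in a[:cut] + b[cut:])
            pops[i][-1] = child
            # "Gossip": the local best spreads to neighbouring localities
            # where it beats their current worst solution.
            best = pops[i][0]
            for j in neighbours:
                data_j = localities[j][0]
                worst = min(range(pop), key=lambda k: fitness(pops[j][k], data_j))
                if fitness(best, data_j) > fitness(pops[j][worst], data_j):
                    pops[j][worst] = best
    return pops
```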