1,654 research outputs found

    Tight Bounds for the Cover Times of Random Walks with Heterogeneous Step Lengths

    Search patterns of randomly oriented steps of different lengths have been observed on all scales of the biological world, ranging from the microscopic to the ecological, including in protein motors, bacteria, T-cells, honeybees, marine predators, and more. Different models have demonstrated that adopting a variety of step lengths can greatly improve search efficiency. However, the precise connection between the search efficiency and the number of step lengths in the repertoire of the searcher has not been identified. Motivated by biological examples in one-dimensional terrains, a recent paper studied the best cover time on an n-node cycle that can be achieved by a random walk process that uses k step lengths. By tuning the lengths and corresponding probabilities, the authors showed that the best cover time is roughly n^{1+Θ(1/k)}. While this bound is useful for large values of k, it is hardly informative for small values of k, which are of interest in biology. In this paper, we provide a tight bound for the cover time of such a walk, for every integer k > 1. Specifically, up to lower-order polylogarithmic factors, the upper bound on the cover time is a polynomial in n of exponent 1 + 1/(2k−1). For k = 2, 3, 4, and 5 the exponent is thus 4/3, 6/5, 8/7, and 10/9, respectively. Informally, our result implies that, as long as the number of step lengths k is not too large, incorporating an additional step length into the repertoire of the process improves the cover time by a polynomial factor, but the extent of the improvement gradually decreases with k.
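    The cover-time setup described above is easy to probe empirically. The following is a minimal sketch, assuming ballistic steps that cost one time unit per node traversed (as in the intermittent-search models this line of work builds on); the step lengths and probabilities are illustrative choices, not the optimal tuning from the paper:

```python
import random

def cover_time(n, steps, probs, rng, max_ticks=10_000_000):
    """Empirical cover time of a random walk on an n-node cycle.

    Each move draws a step length from `steps` with the given
    probabilities, then travels that many nodes in a uniformly random
    direction, one node per time tick (a modeling assumption for this
    sketch). Returns the number of ticks until all n nodes are visited.
    """
    pos, visited, t = 0, {0}, 0
    while len(visited) < n and t < max_ticks:
        length = rng.choices(steps, weights=probs)[0]
        direction = rng.choice((1, -1))
        for _ in range(length):
            pos = (pos + direction) % n
            visited.add(pos)
            t += 1
    return t

rng = random.Random(0)
n, trials = 200, 5
# k = 1: plain +/-1 walk vs. k = 2: occasionally take a ~sqrt(n)-length jump.
t1 = sum(cover_time(n, [1], [1.0], rng) for _ in range(trials)) / trials
t2 = sum(cover_time(n, [1, 14], [0.9, 0.1], rng) for _ in range(trials)) / trials
print(t1, t2)
```

    Averaging over more trials and sweeping n makes the polynomial gap between the one-length and two-length walks visible in the growth rate of the two cover times.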

    Dissimilarity Clustering by Hierarchical Multi-Level Refinement

    We introduce in this paper a new way of optimizing the natural extension to dissimilarity data of the quantization error used in k-means clustering. The proposed method is based on hierarchical clustering analysis combined with multi-level heuristic refinement. The method is computationally efficient and achieves better quantization errors than the […]
    Comment: 20th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2012), Bruges, Belgium (2012)
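    The "natural extension of the quantization error to dissimilarity data" mentioned above can be written down concretely. A minimal sketch follows; the exact normalization used in the paper is an assumption here, chosen so that the formula coincides with the k-means within-cluster sum of squares when the dissimilarities are squared Euclidean distances:

```python
import numpy as np

def dissimilarity_quantization_error(D, labels):
    """Quantization error generalized to a dissimilarity matrix D.

    For squared Euclidean dissimilarities, sum_{i,j in C} D[i, j] / (2|C|)
    equals the within-cluster sum of squared distances to the centroid,
    so this formula is the natural dissimilarity-only analogue of the
    k-means objective (no centroids are ever computed).
    """
    err = 0.0
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        err += D[np.ix_(idx, idx)].sum() / (2 * len(idx))
    return err

# Toy check on 1-D points, where the analogy with k-means is exact:
X = np.array([[0.0], [1.0], [10.0], [11.0]])
D = (X - X.T) ** 2                     # squared Euclidean dissimilarities
labels = np.array([0, 0, 1, 1])
print(dissimilarity_quantization_error(D, labels))  # 0.5 + 0.5 = 1.0
```

    Because the error depends only on D, the same objective can be optimized for any dissimilarity (edit distance, graph distances, etc.), which is what makes the hierarchical multi-level refinement applicable beyond vector data.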

    Preparing Future Primary School Teachers for Bilingual Instruction of Younger Pupils as a Methodological Problem

    In this article, the problem of the professional training of future primary school teachers for bilingual instruction is analyzed in general terms. The foundations of bilingual education and the conditions for their implementation by future specialists are determined.

    A Predictive Approach to Bayesian Nonparametric Survival Analysis

    Bayesian nonparametric methods are a popular choice for analysing survival data due to their ability to flexibly model the distribution of survival times. These methods typically employ a nonparametric prior on the survival function that is conjugate with respect to right-censored data. Eliciting these priors, particularly in the presence of covariates, can be challenging, and inference typically relies on computationally intensive Markov chain Monte Carlo schemes. In this paper, we build on recent work that recasts Bayesian inference as assigning a predictive distribution on the unseen values of a population conditional on the observed samples, thus avoiding the need to specify a complex prior. We describe a copula-based predictive update which admits a scalable sequential importance sampling algorithm to perform inference that properly accounts for right-censoring. We provide theoretical justification through an extension of Doob’s consistency theorem and illustrate the method on a number of simulated and real data sets, including an example with covariates. Our approach enables analysts to perform Bayesian nonparametric inference through only the specification of a predictive distribution.
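    The "Bayesian inference as a predictive distribution on unseen values" idea can be illustrated in its simplest special case. This is a minimal sketch using the Pólya-urn predictive for uncensored data only; the copula-based update and the right-censoring correction from the paper are not reproduced, and all names are illustrative:

```python
import random
import statistics

def predictive_resample_mean(data, n_future=500, n_draws=200, seed=0):
    """Approximate posterior draws of the mean by forward-sampling
    unseen observations from a predictive distribution.

    Each unseen value is drawn uniformly from everything seen so far
    (the Polya-urn / Bayesian-bootstrap predictive) -- the simplest
    member of the predictive-inference framework. Variation across
    draws plays the role of posterior uncertainty, with no prior
    ever written down explicitly.
    """
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        seq = list(data)
        for _ in range(n_future):
            seq.append(rng.choice(seq))   # predictive draw, then update
        draws.append(statistics.fmean(seq))
    return draws

survival_times = [2.1, 3.4, 0.8, 5.6, 1.9, 4.2]  # toy uncensored data
draws = predictive_resample_mean(survival_times)
print(min(draws), max(draws))  # spread reflects posterior uncertainty
```

    The paper's contribution is, roughly, a more expressive copula-based predictive in place of the urn, together with importance weighting that keeps the scheme valid under right-censoring.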

    Manifestations of Local Supersolidity of ^4He around a Charged Molecular Impurity

    A frozen, solid helium core, dubbed a snowball, is typically observed around cations in liquid helium. Here we discover, using path integral simulations, that around a cationic molecular impurity, protonated methane, the ^4He atoms are indeed strongly localized akin to snowballs, yet still participate in vivid bosonic exchange induced by the ro-vibrational motion of the impurity. Such a combination of solid-like order with a pronounced superfluid response in the first helium shell indicates that manifestations of local supersolid behavior of ^4He can be induced -- and probed experimentally -- by charged molecules.

    PAC-learning gains of Turing machines over circuits and neural networks

    A caveat to many applications of the current Deep Learning approach is the need for large-scale data. One improvement suggested by Kolmogorov Complexity results is to apply the minimum description length (MDL) principle with computationally universal models. We study the potential gains in sample efficiency that this approach can bring in principle. We use polynomial-time Turing machines to represent computationally universal models and Boolean circuits to represent Artificial Neural Networks (ANNs) acting on finite-precision digits. Our analysis unravels direct links between our question and Computational Complexity results. We provide lower and upper bounds on the potential gains in sample efficiency when the MDL principle is applied with Turing machines instead of ANNs. Our bounds depend on the bit-size of the input of the Boolean function to be learned. Furthermore, we highlight close relationships between classical open problems in Circuit Complexity and the tightness of these bounds.
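    The kind of gap at stake is easiest to see through description lengths. A toy sketch follows, comparing the codelength of n-bit parity written as a depth-2 circuit (DNF) versus as a fixed program under a universal model; the 200-bit program size and the one-bit-per-literal accounting are illustrative assumptions, not figures from the paper:

```python
def dnf_bits(n):
    """Bits to write n-bit parity as a DNF: parity has 2**(n-1)
    satisfying assignments, hence 2**(n-1) minterms of n literals
    each (one bit per literal sign; indices ignored for simplicity)."""
    return 2 ** (n - 1) * n

def program_bits(n):
    """Bits for a fixed 'xor-reduce the input' program under a
    computationally universal model -- constant in n (200 is an
    assumed size for this sketch)."""
    return 200

def mdl_choice(n):
    """MDL prefers the hypothesis with the shorter description."""
    return "program" if program_bits(n) < dnf_bits(n) else "circuit"

for n in (2, 8, 16):
    print(n, dnf_bits(n), program_bits(n), mdl_choice(n))
```

    Shorter descriptions translate into fewer samples needed to single out the right hypothesis, which is the intuition behind the sample-efficiency gains the paper quantifies.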

    Efficiency Separation between RL Methods: Model-Free, Model-Based and Goal-Conditioned

    We prove a fundamental limitation on the efficiency of a wide class of Reinforcement Learning (RL) algorithms. This limitation applies to model-free RL methods as well as to a broad range of model-based methods, such as planning with tree search. Under an abstract definition of this class, we provide a family of RL problems for which these methods require a number of interactions with the environment that is exponential in the horizon in order to find an optimal behavior. However, there exists a method, not tailored to this specific family of problems, which can efficiently solve the problems in the family. In contrast, our limitation does not apply to several types of methods proposed in the literature, for instance, goal-conditioned methods or other algorithms that construct an inverse dynamics model.
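    A standard family exhibiting this kind of exponential-in-the-horizon barrier is the "combination lock". The following is a minimal sketch of that phenomenon, an illustrative stand-in for the paper's construction rather than its exact definition:

```python
import random

def episodes_to_solve(horizon, rng, max_episodes=1_000_000):
    """'Combination lock': reward is given only when all `horizon`
    binary actions match a hidden code, so undirected exploration
    needs ~2**horizon episodes in expectation. Model-free value
    updates receive no signal until the first success, whereas a
    method that records observed transitions (e.g. one building an
    inverse dynamics model) can shortcut the search."""
    code = [rng.randrange(2) for _ in range(horizon)]
    for episode in range(1, max_episodes + 1):
        actions = [rng.randrange(2) for _ in range(horizon)]
        if actions == code:
            return episode
    return max_episodes

rng = random.Random(0)
samples = [episodes_to_solve(10, rng) for _ in range(20)]
print(sum(samples) / len(samples))  # on the order of 2**10 = 1024
```

    Increasing `horizon` by one roughly doubles the expected episode count, which is the exponential dependence the lower bound formalizes.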