
    Competitive Online Search Trees on Trees

    We consider the design of adaptive data structures for searching elements of a tree-structured space. We use a natural generalization of the rotation-based online binary search tree model in which the underlying search space is the set of vertices of a tree. This model is based on a simple structure for decomposing graphs, previously known under several names including elimination trees, vertex rankings, and tubings. The model is equivalent to the classical binary search tree model exactly when the underlying tree is a path. We describe an online $O(\log \log n)$-competitive search tree data structure in this model, matching the best known competitive ratio of binary search trees. Our method is inspired by Tango trees, an online binary search tree algorithm, but critically relies on several new notions, including one we call Steiner-closed search trees, which may be of independent interest. Moreover, our technique is based on a novel use of two levels of decomposition: first from the search space to a set of Steiner-closed trees, and second from these trees into paths.
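
    The model above admits a simple static baseline: a balanced elimination tree of the search space can be built by centroid decomposition, giving $O(\log n)$ worst-case search, and on a path it reproduces a balanced binary search tree, matching the stated equivalence. The sketch below illustrates the model only, not the paper's $O(\log \log n)$-competitive construction; all names are ours.

```python
from collections import defaultdict

def centroid_elimination_tree(adj, n):
    """Build a balanced elimination tree of a tree via centroid decomposition.

    In the search-trees-on-trees model, querying a vertex reveals which of
    its subtrees contains the target, so an elimination tree of depth
    O(log n) answers any search with O(log n) queries.
    """
    removed = [False] * n
    parent = [-1] * n          # parent[v] in the elimination tree
    size = [0] * n

    def compute_sizes(u, p):
        size[u] = 1
        for v in adj[u]:
            if v != p and not removed[v]:
                compute_sizes(v, u)
                size[u] += size[v]

    def find_centroid(u, p, total):
        for v in adj[u]:
            if v != p and not removed[v] and size[v] > total // 2:
                return find_centroid(v, u, total)
        return u

    def decompose(u, elim_parent):
        compute_sizes(u, -1)
        c = find_centroid(u, -1, size[u])
        removed[c] = True
        parent[c] = elim_parent
        for v in adj[c]:
            if not removed[v]:
                decompose(v, c)   # recurse on each remaining component
        return c

    return decompose(0, -1), parent

# On a path the elimination tree is a balanced BST, matching the stated
# equivalence with the classical binary search tree model.
adj = defaultdict(list)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]:
    adj[u].append(v)
    adj[v].append(u)
print(centroid_elimination_tree(adj, 7))   # root 3, BST-shaped parent array
```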

    Neural Distributed Autoassociative Memories: A Survey

    Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors), where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of a search for approximate nearest neighbors that is sublinear in the number of stored items, even for vectors of high dimension. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey focuses mainly on the networks of Hopfield, Willshaw, and Potts, which have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. In conclusion, we discuss relations to similarity search, the advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for very high-dimensional vectors.
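
    As a concrete instance of the class of models surveyed, the sketch below implements the classic Hopfield network: Hebbian outer-product learning and iterative sign-threshold retrieval dynamics. It is a minimal illustration, not code from the survey; the parameters are chosen well below the network's storage capacity.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian (outer-product) learning rule; patterns is (m, n) in {-1, +1}."""
    _, n = patterns.shape
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W

def recall(W, probe, steps=50):
    """Iterative retrieval: threshold the local fields until a fixed point."""
    s = np.sign(probe).astype(float)
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1.0        # break ties deterministically
        if np.array_equal(s_new, s):   # converged to a fixed point
            break
        s = s_new
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 200))   # load far below capacity
W = train_hopfield(patterns)
noisy = patterns[0].copy()
noisy[:20] *= -1                                    # corrupt 10% of the bits
print(np.array_equal(recall(W, noisy), patterns[0]))  # True at this low load
```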

    Tabu search model selection for SVM

    A model selection method based on tabu search is proposed to build support vector machines (binary decision functions) of reduced complexity and efficient generalization. The aim is to build a fast and efficient support vector machine classifier. A criterion is defined to evaluate the quality of a decision function, blending its recognition rate with its complexity. The simplification level of the training set (by vector quantization), the feature subset, and the support vector machine hyperparameters are selected jointly by tabu search to optimize this quality criterion, finding a good sub-optimal model in tractable time.
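
    The tabu search ingredient is generic and easy to sketch: greedy moves through a neighborhood, with a short-term memory that forbids revisiting recent solutions. Below is a minimal, self-contained version with a toy objective standing in for the paper's criterion (recognition rate blended with decision-function complexity); all names and parameter ranges are ours.

```python
def tabu_search(evaluate, neighbors, start, iters=50, tabu_size=10):
    """Greedy neighborhood moves with a short-term memory (tabu list)
    that forbids revisiting recently explored solutions."""
    best = current = start
    best_score = evaluate(start)
    tabu = [start]
    for _ in range(iters):
        candidates = [s for s in neighbors(current) if s not in tabu]
        if not candidates:
            break
        current = max(candidates, key=evaluate)   # best admissible move
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)                           # expire the oldest entry
        if evaluate(current) > best_score:
            best, best_score = current, evaluate(current)
    return best, best_score

# Toy objective over a (log2 C, log2 gamma) grid; in the paper's setting
# evaluate() would train an SVM and blend recognition rate with a
# complexity penalty (e.g. on the number of support vectors).
def evaluate(sol):
    c, g = sol
    return -((c - 3) ** 2 + (g + 5) ** 2)

def neighbors(sol):
    c, g = sol
    return [(c + dc, g + dg)
            for dc in (-1, 0, 1) for dg in (-1, 0, 1)
            if (dc, dg) != (0, 0) and abs(c + dc) <= 10 and abs(g + dg) <= 10]

print(tabu_search(evaluate, neighbors, start=(0, 0)))   # -> ((3, -5), 0)
```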

    A method to search for long duration gravitational wave transients from isolated neutron stars using the generalized FrequencyHough

    We describe a method to detect gravitational waves lasting on the order of hours to days emitted by young, isolated neutron stars, such as those that could form after a supernova or a binary neutron star merger, using Advanced LIGO/Virgo data. The method is based on a generalization of the FrequencyHough (FH), a pipeline that performs hierarchical searches for continuous gravitational waves by mapping points in the time/frequency plane of the detector to lines in the frequency/spindown plane of the source. We show that signals whose spindowns are related to their frequencies by a power law can be transformed to coordinates where the behavior of these signals is always linear, and can therefore be searched for by the FH. We estimate the sensitivity of our search across different braking indices, and describe the portion of the parameter space we could explore in a search using varying fast Fourier Transform (FFT) lengths.
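
    The linearization at the heart of this generalization is easy to verify. Assuming a power-law spindown $\dot f = -k f^n$ with braking index $n \neq 1$, integration gives $f^{1-n}(t) = f_0^{1-n} + k(n-1)t$, so the coordinate $x = f^{1-n}$ is exactly linear in time. The sketch below checks this numerically; the parameter values are illustrative, not those of the search.

```python
import numpy as np

def spindown_frequency(f0, k, n, t):
    """Closed-form frequency for the power-law spindown df/dt = -k * f**n."""
    return (f0 ** (1.0 - n) + k * (n - 1.0) * t) ** (1.0 / (1.0 - n))

# Illustrative values only: f0 in Hz, braking index n = 5, one day of data.
n = 5.0
t = np.linspace(0.0, 86400.0, 5)
f = spindown_frequency(f0=100.0, k=1e-13, n=n, t=t)

# In the coordinate x = f**(1-n) the track is exactly linear in t, which is
# what lets a Hough transform built for linear tracks (the FH) search for it.
x = f ** (1.0 - n)
print(np.allclose(np.diff(x, 2), 0.0))   # second differences vanish: linear
```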

    From Specific to Generic Learned Sorted Set Dictionaries: A Theoretically Sound Paradigm Yielding Competitive Data Structural Boosters in Practice

    This research concerns Learned Data Structures, a recent area that has emerged at the crossroads of Machine Learning and Classic Data Structures. It is methodologically important and has high practical impact. We focus on Learned Indexes, i.e., Learned Sorted Set Dictionaries. The proposals available so far are specific, in the sense that they can boost, indeed impressively, the time performance of Table Search Procedures with a sorted layout only, e.g., Binary Search. We propose a novel paradigm that, complementing the known specialized ones, can produce Learned versions of any Sorted Set Dictionary, for instance, Balanced Binary Search Trees or Binary Search on layouts other than sorted, e.g., Eytzinger. Theoretically, based on it, we obtain several results of interest, such as (a) the first Learned Optimum Binary Search Forest, with mean access time bounded by the entropy of the probability distribution of the accesses to the Dictionary; and (b) the first Learned Sorted Set Dictionary that, in the dynamic case and in an amortized analysis setting, matches the time bounds known for Classic Dictionaries; the latter holds under widely accepted assumptions regarding the size of the Universe. The experimental part, somewhat complex in terms of software development, clearly indicates the nonobvious finding that the generalization we propose can yield effective and competitive Learned Data Structural Boosters, even with respect to specific benchmark models.
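
    The core idea of a Learned Index is easy to sketch for the simplest, sorted-layout case: fit a model from keys to ranks, record its maximum prediction error eps, and answer queries with a binary search restricted to a window of width 2*eps + 1 around the prediction. The single-segment linear sketch below is illustrative only; the paper's paradigm generalizes the idea to arbitrary Sorted Set Dictionaries.

```python
import bisect
import numpy as np

def fit_learned_index(keys):
    """Fit key -> rank with one linear model and record the max error."""
    xs = np.asarray(keys, dtype=float)
    ranks = np.arange(len(keys))
    slope, intercept = np.polyfit(xs, ranks, 1)
    eps = int(np.max(np.abs(np.round(slope * xs + intercept) - ranks)))
    return slope, intercept, eps

def learned_search(keys, slope, intercept, eps, q):
    """Binary search restricted to the model's error window around the
    predicted rank: O(log eps) comparisons instead of O(log n)."""
    pred = int(round(slope * q + intercept))
    lo = max(0, pred - eps)
    hi = min(len(keys), pred + eps + 1)
    i = bisect.bisect_left(keys, q, lo, hi)
    return i if i < len(keys) and keys[i] == q else -1

rng = np.random.default_rng(1)
keys = sorted(rng.choice(10**6, size=10**4, replace=False).tolist())
slope, intercept, eps = fit_learned_index(keys)
print(learned_search(keys, slope, intercept, eps, keys[1234]) == 1234)  # True
```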