
    Generalized Team Draft Interleaving

    Interleaving is an online evaluation method that compares two ranking functions by mixing their results and interpreting the users' click feedback. An important property of an interleaving method is its sensitivity, i.e. the ability to obtain reliable comparison outcomes with few user interactions. Several methods have been proposed so far to improve interleaving sensitivity, which can be roughly divided into two areas: (a) methods that optimize the credit assignment function (how the click feedback is interpreted), and (b) methods that achieve higher sensitivity by controlling the interleaving policy (how often a particular interleaved result page is shown). In this paper, we propose an interleaving framework that generalizes the previously studied interleaving methods in two aspects. First, it achieves a higher sensitivity by performing a joint data-driven optimization of the credit assignment function and the interleaving policy. Second, we formulate the framework to be general w.r.t. the search domain where the interleaving experiment is deployed, so that it can be applied in domains with grid-based presentation, such as image search. In order to simplify the optimization, we additionally introduce a stratified estimate of the experiment outcome. This stratification is also useful on its own, as it reduces the variance of the outcome and thus increases the interleaving sensitivity. We perform an extensive experimental study using large-scale document and image search datasets obtained from a commercial search engine. The experiments show that our proposed framework achieves marked improvements in sensitivity over effective baselines on both datasets.
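    For context, the classic team-draft scheme that this framework generalizes can be stated in a few lines. The Python sketch below (with illustrative names) hard-codes exactly the two choices the paper makes data-driven: a uniform coin-flip interleaving policy and a one-click-one-point credit assignment.

```python
import random

def team_draft_interleave(ranking_a, ranking_b):
    """Classic team-draft interleaving: each round, a coin flip decides
    which ranker picks first; each ranker then contributes its
    highest-ranked document not already on the interleaved list."""
    interleaved, owner = [], {}
    ia = ib = 0
    while ia < len(ranking_a) and ib < len(ranking_b):
        order = ('A', 'B') if random.random() < 0.5 else ('B', 'A')
        for team in order:
            ranking = ranking_a if team == 'A' else ranking_b
            i = ia if team == 'A' else ib
            while i < len(ranking) and ranking[i] in owner:
                i += 1
            if i < len(ranking):
                interleaved.append(ranking[i])
                owner[ranking[i]] = team
            if team == 'A':
                ia = i + 1
            else:
                ib = i + 1
    return interleaved, owner

def credit(owner, clicks):
    """Rule-based credit assignment (one click, one point); the paper's
    framework instead optimizes this function jointly with the policy."""
    score = {'A': 0, 'B': 0}
    for doc in clicks:
        if doc in owner:
            score[owner[doc]] += 1
    return score

# Example: 'B' gains credit if the user clicks only documents it contributed.
mixed, owner = team_draft_interleave(['d1', 'd2', 'd3'], ['d3', 'd1', 'd4'])
print(mixed, credit(owner, clicks=['d3']))
```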

    Lazy Model Expansion: Interleaving Grounding with Search

    Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach for bounded model generation for rich knowledge representation languages, like ASP, FO(.) and Zinc, is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory. An important bottleneck is the blowup of the size of the theory caused by the reduction phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(.) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it. Given a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part, the justifications provide a recipe to construct a complete assignment that satisfies the non-grounded part. When a justification for a particular formula becomes invalid during search, a new one is derived; if that fails, the formula is split into a part to be grounded and a part that can be justified. The theoretical framework captures existing approaches for tackling the grounding bottleneck such as lazy clause generation and grounding-on-the-fly, and presents a generalization of the 2-watched literal scheme. We present an algorithm for lazy model expansion and integrate it in a model generator for FO(ID), a language extending first-order logic with inductive definitions. The algorithm is implemented as part of the state-of-the-art FO(ID) Knowledge-Base System IDP. Experimental results illustrate the power and generality of the approach.
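    The full lazy model expansion loop is hard to condense, but lazy clause generation, one of the existing approaches the framework is said to capture, can be illustrated in miniature. In this toy sketch the brute-force enumerator stands in for a real solver and `at_most_one` stands in for the justification machinery; all names and the encoding are hypothetical.

```python
from itertools import product

def lazy_solve(n_vars, ground_clauses, check_delayed):
    """Toy lazy-grounding loop in the style of lazy clause generation.
    A clause is a list of (var, value) literals; `check_delayed` returns
    None if the non-grounded constraint accepts the candidate model, or a
    new ground clause forbidding the violated instance."""
    clauses = list(ground_clauses)
    while True:
        for assignment in product([False, True], repeat=n_vars):
            if all(any(assignment[v] == val for v, val in c) for c in clauses):
                new_clause = check_delayed(assignment)
                if new_clause is None:
                    return assignment, clauses      # model found
                clauses.append(new_clause)          # ground only this instance
                break
        else:
            return None, clauses                    # no model

# Delayed constraint: at most one variable true, grounded pair by pair.
def at_most_one(assignment):
    trues = [i for i, v in enumerate(assignment) if v]
    if len(trues) <= 1:
        return None
    i, j = trues[:2]
    return [(i, False), (j, False)]

# x0 must hold and (x1 or x2) must hold; the delayed constraint is grounded
# lazily, one violated pair at a time, until unsatisfiability is detected.
print(lazy_solve(3, [[(0, True)], [(1, True), (2, True)]], at_most_one))
```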

    Optical Flow Requires Multiple Strategies (but only one network)

    We show that the matching problem that underlies optical flow requires multiple strategies, depending on the amount of image motion and other factors. We then study the implications of this observation for training a deep neural network to represent image patches in the context of descriptor-based optical flow. We propose a metric learning method that selects suitable negative samples based on the nature of the true match. This type of training produces a network that displays multiple strategies depending on the input and leads to state-of-the-art results on the KITTI 2012 and KITTI 2015 optical flow benchmarks.
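    As a rough illustration of match-dependent negative sampling, the numpy sketch below draws hard negatives from a tight ring around the true match for small motions and a wider ring for large ones; the radii, the 8-pixel threshold, and the margin are assumptions for the sketch, not values from the paper.

```python
import numpy as np

def pick_negative(anchor_xy, candidates_xy, flow_magnitude,
                  small_radius=4.0, large_radius=16.0,
                  rng=np.random.default_rng(0)):
    """Choose a negative patch center conditioned on the nature of the
    true match: small motions get near-anchor negatives, large motions
    get a wider sampling ring (radii are assumed, not from the paper)."""
    radius = small_radius if flow_magnitude < 8.0 else large_radius
    d = np.linalg.norm(candidates_xy - anchor_xy, axis=1)
    eligible = np.flatnonzero((d > 1.0) & (d <= radius))  # exclude the match itself
    return int(rng.choice(eligible)) if eligible.size else None

def triplet_hinge(d_pos, d_neg, margin=0.2):
    """Hinge loss pushing the negative at least `margin` farther than the match."""
    return max(0.0, margin + d_pos - d_neg)

# Example: a 5x5 grid of candidate patch centers around a small-motion match.
grid = np.array([(x, y) for x in range(-2, 3) for y in range(-2, 3)], dtype=float)
print(pick_negative(np.zeros(2), grid, flow_magnitude=3.0))
```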

    A NeuroGenetic Approach for Multiprocessor Scheduling

    This chapter presents a NeuroGenetic approach for solving a family of multiprocessor scheduling problems. We address primarily the Job-Shop scheduling problem, one of the hardest of the various scheduling problems. We propose a new approach, the NeuroGenetic approach, which is a hybrid metaheuristic that combines augmented neural networks (AugNN) and genetic-algorithm-based search. The AugNN approach is a non-deterministic iterative local-search method that combines the benefits of heuristic search and iterative neural-network search. Genetic-algorithm-based search is particularly good at global search. An interleaved approach between AugNN and GA combines the advantages of local search and global search, thus providing improved solutions compared to AugNN or GA search alone. We discuss the encoding and decoding schemes for switching between the GA and AugNN approaches to allow interleaving. The purpose of this study is to empirically test the extent of improvement obtained by using the interleaved hybrid approach instead of a single approach on the Job-Shop scheduling problem. We also describe the AugNN formulation and a genetic-algorithm approach for the Job-Shop problem. We present the results of AugNN, GA, and the NeuroGenetic approach on some benchmark Job-Shop scheduling problems.
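    A toy version of the interleaving idea: alternate rounds of a permutation GA (global search) with swap-based hill climbing (local search) on a stand-in objective. The OX crossover, truncation selection, and the toy cost function are illustrative choices, not the chapter's AugNN operators or job-shop encoding.

```python
import random

def hybrid_ga_local(cost, n, pop_size=20, rounds=30, local_iters=50, seed=0):
    """Interleave a permutation GA (global) with swap hill climbing (local):
    each round the GA recombines the fitter half, then every child is
    locally improved before rejoining the population."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]

    def local_search(perm):
        best, best_c = perm[:], cost(perm)
        for _ in range(local_iters):
            i, j = rng.sample(range(n), 2)
            cand = best[:]
            cand[i], cand[j] = cand[j], cand[i]
            c = cost(cand)
            if c < best_c:
                best, best_c = cand, c
        return best

    def crossover(p1, p2):
        # Order crossover (OX): keep a slice of p1, fill the rest in p2's order.
        a, b = sorted(rng.sample(range(n), 2))
        hole = set(p1[a:b])
        filler = [g for g in p2 if g not in hole]
        return filler[:a] + p1[a:b] + filler[a:]

    for _ in range(rounds):
        pop.sort(key=cost)
        parents = pop[:pop_size // 2]               # global step: select and recombine
        children = [crossover(rng.choice(parents), rng.choice(parents))
                    for _ in range(pop_size - len(parents))]
        pop = parents + [local_search(c) for c in children]  # local step per child
    return min(pop, key=cost)

# Toy stand-in for a decoded schedule's quality: weighted completion order.
weights = [3, 1, 4, 1, 5, 9, 2, 6]
print(hybrid_ga_local(lambda p: sum(i * weights[j] for i, j in enumerate(p)), n=8))
```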

    Reordering Rows for Better Compression: Beyond the Lexicographic Order

    Sorting database tables before compressing them improves the compression rate. Can we do better than the lexicographic order? For minimizing the number of runs in a run-length encoding compression scheme, the best approaches to row ordering are derived from traveling-salesman heuristics, although there is a significant trade-off between running time and compression. A new heuristic, Multiple Lists, a variant of Nearest Neighbor that trades off compression for a major running-time speedup, is a good option for very large tables. However, for some compression schemes, it is more important to generate long runs than few runs. For this case, another novel heuristic, Vortex, is promising. We find that we can improve run-length encoding by up to a factor of 3, whereas we can improve prefix coding by up to 80%; these gains are on top of the gains from lexicographically sorting the table. We also prove that, in a few cases, the new row reordering is optimal (within 10%) at minimizing the runs of identical values within columns.
    Comment: to appear in ACM TOD
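    The greedy Nearest Neighbor baseline that Multiple Lists speeds up is easy to sketch: repeatedly append the remaining row closest (in Hamming distance over columns) to the last one, so identical values line up into longer runs. The snippet below is a minimal illustration with made-up data, not the paper's heuristics.

```python
def run_count(table):
    """Number of runs of identical values, summed over all columns."""
    cols = list(zip(*table))
    return sum(1 + sum(c[i] != c[i - 1] for i in range(1, len(c))) for c in cols)

def nearest_neighbor_order(table):
    """Greedy row ordering: always append the remaining row that differs
    from the last one in the fewest columns."""
    remaining = list(table)
    ordered = [remaining.pop(0)]
    while remaining:
        last = ordered[-1]
        nxt = min(remaining, key=lambda r: sum(a != b for a, b in zip(last, r)))
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered

rows = [("ca", "red", 1), ("ny", "red", 1), ("ca", "red", 2), ("ny", "blue", 1)]
# Runs under lexicographic order vs. greedy nearest-neighbor reordering.
print(run_count(sorted(rows)), run_count(nearest_neighbor_order(sorted(rows))))
```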

    Local Search Techniques for Constrained Portfolio Selection Problems

    We consider the problem of selecting a portfolio of assets that provides the investor with a suitable balance of expected return and risk. With respect to the seminal mean-variance model of Markowitz, we consider additional constraints on the cardinality of the portfolio and on the quantity of individual shares. Such constraints better capture real-world trading systems, but make the problem harder to solve with exact methods. We explore the use of local search techniques, mainly tabu search, for the portfolio selection problem. We compare and combine previous work on portfolio selection that uses the local search approach, and we propose new algorithms that combine different neighborhood relations. In addition, we show how the use of randomization and of a simple form of adaptiveness simplifies the setting of a large number of critical parameters. Finally, we show how our techniques perform on public benchmarks.
    Comment: 22 pages, 3 figures
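    A minimal sketch of tabu search under a cardinality constraint, assuming equal weights over exactly k assets; the paper's algorithms additionally handle quantity constraints, real-valued weights, and richer combined neighborhoods, so this only illustrates the move/tabu/aspiration skeleton.

```python
import random

def tabu_portfolio(mu, sigma, k, lam=0.5, iters=200, tenure=7, seed=0):
    """Tabu search over k-asset portfolios at equal weight, minimizing
    risk - lam * return; mu is the expected-return vector and sigma the
    covariance matrix (nested lists). A simplification of the paper's setting."""
    rng = random.Random(seed)
    n = len(mu)

    def objective(assets):
        w = 1.0 / k
        ret = sum(mu[i] for i in assets) * w
        risk = sum(sigma[i][j] for i in assets for j in assets) * w * w
        return risk - lam * ret

    current = set(rng.sample(range(n), k))
    best, best_val = set(current), objective(current)
    tabu = {}                                   # asset -> iteration until which re-adding is tabu
    for t in range(iters):
        best_move, best_move_val = None, float('inf')
        for i in current:                       # swap neighborhood: drop i, add j
            for j in range(n):
                if j in current:
                    continue
                val = objective((current - {i}) | {j})
                if tabu.get(j, -1) >= t and val >= best_val:
                    continue                    # tabu move without aspiration
                if val < best_move_val:
                    best_move, best_move_val = (i, j), val
        if best_move is None:
            continue
        i, j = best_move
        current = (current - {i}) | {j}
        tabu[i] = t + tenure                    # dropped asset may not return for a while
        if best_move_val < best_val:
            best, best_val = set(current), best_move_val
    return sorted(best), best_val

# Toy instance: 5 assets, equal variances, mild uniform correlation.
mu = [0.10, 0.12, 0.07, 0.15, 0.09]
sigma = [[0.05 if a == b else 0.01 for b in range(5)] for a in range(5)]
print(tabu_portfolio(mu, sigma, k=2))
```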

    Comparing metaheuristic algorithms for error detection in Java programs

    Chicano, F., Ferreira, M., & Alba, E. (2011). Comparing Metaheuristic Algorithms for Error Detection in Java Programs. In Proceedings of Search Based Software Engineering, Szeged, Hungary, September 10-12, 2011, pp. 82–96.
    Model checking is a fully automatic technique for checking concurrent software properties, in which the states of a concurrent system are explored in an explicit or implicit way. The main drawback of this technique is its high memory consumption, which limits the size of the programs that can be checked. In recent years, some researchers have focused on applying guided non-complete stochastic techniques to searching the state space of such concurrent programs. In this paper, we compare five metaheuristic algorithms for this problem: Simulated Annealing, Ant Colony Optimization, Particle Swarm Optimization, and two variants of Genetic Algorithm. To the best of our knowledge, this is the first time that Simulated Annealing has been applied to the problem. We use in the comparison a benchmark composed of 17 Java concurrent programs, and we also compare the results of these algorithms with those of deterministic algorithms.
    Funding: Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. This research has been partially funded by the Spanish Ministry of Science and Innovation and FEDER under contract TIN2008-06491-C04-01 (the M∗ project) and the Andalusian Government under contract P07-TIC-03044 (DIRICOM project).
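    To make the guided-search idea concrete, here is a schematic simulated annealing loop over scheduler choices, the newly applied algorithm in the comparison. The path encoding, the neighbor move, and the toy fitness rewarding a made-up racy pattern are all stand-ins, not the paper's formulation.

```python
import math
import random

def simulated_annealing(random_path, neighbor, fitness,
                        iters=2000, t0=1.0, cooling=0.995, seed=0):
    """Maximize a heuristic 'closeness to an error state' over execution
    paths: `random_path` samples scheduler choices, `neighbor` perturbs
    one, `fitness` scores the path. All three are problem-specific stand-ins."""
    rng = random.Random(seed)
    path = random_path(rng)
    score = fitness(path)
    best, best_score = path, score
    t = t0
    for _ in range(iters):
        cand = neighbor(path, rng)
        c = fitness(cand)
        # Accept improvements always, worsenings with temperature-dependent odds.
        if c > score or rng.random() < math.exp((c - score) / max(t, 1e-9)):
            path, score = cand, c
            if score > best_score:
                best, best_score = path, score
        t *= cooling
    return best, best_score

# Toy stand-in: a 'path' is 12 binary scheduler choices; the heuristic
# rewards matching the (made-up) racy pattern 1,1,0 somewhere in the path.
target = [1, 1, 0]
def random_path(rng):
    return [rng.randint(0, 1) for _ in range(12)]
def neighbor(p, rng):
    q = p[:]
    q[rng.randrange(len(q))] ^= 1
    return q
def fitness(p):
    return max(sum(a == b for a, b in zip(p[i:i + 3], target))
               for i in range(len(p) - 2))

print(simulated_annealing(random_path, neighbor, fitness))
```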