3,720 research outputs found

    A quadtree-based allocation method for a class of large discrete Euclidean location problems

    A special data compression approach using a quadtree-based method is proposed for allocating very large sets of demand points to their nearest facilities while eliminating aggregation error. This allocation procedure is shown to be extremely effective when solving very large facility location problems in Euclidean space. Our method aggregates demand points in a way that eliminates aggregation-based allocation error, and disaggregates them when necessary. The method is first assessed on allocation problems and then embedded into the search when solving a class of discrete facility location problems, namely the p-median and the vertex p-centre problems. We test our method on randomly generated and TSP datasets. The results of the experiments show that the quadtree-based approach is very effective in reducing the computing time for this class of location problems.
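    The sketch below illustrates one way the aggregate/disaggregate idea described in the abstract can work; it is a minimal illustration, not the authors' exact procedure, and the cell-splitting threshold and function names are assumptions made for the example. It relies on the fact that Euclidean Voronoi cells are convex: if all four corners of a square cell share the same nearest facility, every demand point inside the cell does too, so the whole cell can be allocated in aggregate without allocation error; otherwise the cell is subdivided.

```python
# Minimal sketch: quadtree-style allocation of demand points to nearest facilities.
import math

def nearest_facility(point, facilities):
    """Index of the facility closest to `point` in Euclidean distance."""
    return min(range(len(facilities)), key=lambda i: math.dist(point, facilities[i]))

def allocate(points, facilities, xmin, ymin, size, min_size=1e-3):
    """Allocate every point in the square cell [xmin, xmin+size) x [ymin, ymin+size)."""
    if not points:
        return {}
    corners = [(xmin, ymin), (xmin + size, ymin),
               (xmin, ymin + size), (xmin + size, ymin + size)]
    winners = {nearest_facility(c, facilities) for c in corners}
    if len(winners) == 1:
        # Aggregate allocation: the whole cell goes to a single facility,
        # with no aggregation error because Voronoi cells are convex.
        return {winners.pop(): list(points)}
    if size <= min_size:
        # Tiny ambiguous cell: fall back to allocating points one by one.
        result = {}
        for p in points:
            result.setdefault(nearest_facility(p, facilities), []).append(p)
        return result
    # Disaggregate: split the cell into four quadrants and recurse.
    half, result = size / 2.0, {}
    quadrants = {(qx, qy): [] for qx in (0, 1) for qy in (0, 1)}
    for p in points:
        quadrants[(int(p[0] >= xmin + half), int(p[1] >= ymin + half))].append(p)
    for (qx, qy), sub in quadrants.items():
        child = allocate(sub, facilities, xmin + qx * half, ymin + qy * half, half, min_size)
        for f, pts in child.items():
            result.setdefault(f, []).extend(pts)
    return result
```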

    Unsupervised Graph-based Rank Aggregation for Improved Retrieval

    This paper presents a robust and comprehensive graph-based rank aggregation approach, used to combine results of isolated ranker models in retrieval tasks. The method follows an unsupervised scheme, which is independent of how the isolated ranks are formulated. Our approach is able to combine arbitrary models, defined in terms of different ranking criteria, such as those based on textual, image or hybrid content representations. We reformulate the ad hoc retrieval problem as document retrieval based on fusion graphs, which we propose as a new unified representation model capable of merging multiple ranks and expressing inter-relationships of retrieval results automatically. By doing so, we claim that the retrieval system can benefit from learning the manifold structure of datasets, thus leading to more effective results. Another contribution is that our graph-based aggregation formulation, unlike existing approaches, allows for encapsulating contextual information encoded from multiple ranks, which can be used directly for ranking, without further computations or post-processing steps over the graphs. Based on the graphs, a novel similarity retrieval score is formulated using an efficient computation of minimum common subgraphs. Finally, another benefit over existing approaches is the absence of hyperparameters. A comprehensive experimental evaluation was conducted on diverse well-known public datasets composed of textual, image, and multimodal documents. The experiments demonstrate that our method reaches top performance, yielding better effectiveness scores than state-of-the-art baseline methods and promoting large gains over the rankers being fused, thus demonstrating the capability of the proposal to represent queries with a unified graph-based model of rank fusions.
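    The following sketch shows the general shape of such a fusion-graph approach under simplifying assumptions: edges between documents accumulate reciprocal-rank evidence from each ranker, and two graphs are compared by the weight of their common edges as a crude stand-in for the paper's minimum-common-subgraph score. The function names, the top_k cutoff, and the weighting scheme are choices made for the example, not the authors' formulation.

```python
# Simplified sketch: build a fusion graph from several ranked lists and
# compare fusion graphs by common-edge overlap.
from collections import defaultdict
from itertools import combinations

def fusion_graph(ranked_lists, top_k=10):
    """Return an edge-weight map {(doc_a, doc_b): weight} merging several ranks."""
    graph = defaultdict(float)
    for rank in ranked_lists:
        top = rank[:top_k]
        for (i, a), (j, b) in combinations(enumerate(top), 2):
            # Reciprocal-rank weighting: document pairs seen near the top count more.
            key = tuple(sorted((a, b)))
            graph[key] += 1.0 / (1 + i) + 1.0 / (1 + j)
    return graph

def graph_similarity(g1, g2):
    """Overlap of common edge weight, normalised by total edge weight."""
    common = sum(min(g1[e], g2[e]) for e in g1.keys() & g2.keys())
    total = sum(g1.values()) + sum(g2.values()) - common
    return common / total if total else 0.0

# Usage: build one fusion graph per query, then rank collection objects by the
# similarity between the query's graph and each object's graph.
rank_text = ["d3", "d1", "d7", "d2"]
rank_image = ["d1", "d3", "d2", "d9"]
query_graph = fusion_graph([rank_text, rank_image])
```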

    Prediction Markets: Alternative Mechanisms for Complex Environments with Few Traders

    Double auction prediction markets have proven successful in large-scale applications such as elections and sporting events. Consequently, several large corporations have adopted these markets for smaller-scale internal applications where information may be complex and the number of traders is small. Using laboratory experiments, we test the performance of the double auction in complex environments with few traders and compare it to three alternative mechanisms. When information is complex, we find that an iterated poll (or Delphi method) outperforms the double auction mechanism. We present five behavioral observations that may explain why the poll performs better in these settings.
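    A minimal illustration of the iterated-poll (Delphi-style) mechanism referred to above: in each round every participant submits an estimate, the group median is reported back, and participants may revise before the next round. The adjustment rule here (moving a fixed fraction toward the reported median) is a hypothetical stand-in for real participant behaviour, included only to show the mechanics of the mechanism.

```python
# Sketch of an iterated poll: repeated estimation rounds with median feedback.
import statistics

def iterated_poll(initial_estimates, rounds=3, adjustment=0.3):
    estimates = list(initial_estimates)
    for _ in range(rounds):
        consensus = statistics.median(estimates)       # feedback shown to all participants
        estimates = [e + adjustment * (consensus - e)  # each participant revises (assumed rule)
                     for e in estimates]
    return statistics.median(estimates)                # final aggregated forecast

print(iterated_poll([0.2, 0.5, 0.9, 0.4, 0.7]))
```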

    Geographically intelligent disclosure control for flexible aggregation of census data

    This paper describes a geographically intelligent approach to disclosure control for protecting flexibly aggregated census data. Increased analytical power has stimulated user demand for more detailed information for smaller geographical areas and customized boundaries. Consequently, it is vital that improved methods of statistical disclosure control are developed to protect against the increased disclosure risk. Traditionally, methods of statistical disclosure control have been aspatial in nature. Here we present a geographically intelligent approach that takes into account the spatial distribution of risk. We describe empirical work illustrating how the flexibility of this new method, called local density swapping, makes it an improved alternative to random record swapping in terms of the risk-utility trade-off.
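    The sketch below contrasts random record swapping with a geographically informed swap along the lines described above: records in sparsely populated (higher disclosure-risk) areas are targeted with higher probability, and each targeted record is swapped with a record from the geographically nearest other area. The field names, the risk weighting, and the partner-selection rule are assumptions made for the example, not the paper's local density swapping algorithm.

```python
# Illustrative sketch: risk-weighted, geography-aware record swapping.
import random

def local_density_swap(records, area_centroids, area_counts,
                       base_rate=0.05, rng=random.Random(0)):
    """records: list of dicts with an 'area' key;
    area_centroids: {area: (x, y)} used to find nearby swap partners;
    area_counts: {area: population}; sparse areas are treated as riskier."""
    out = [dict(r) for r in records]                   # work on a copy
    max_count = max(area_counts.values())
    for i, rec in enumerate(out):
        # Higher selection probability in low-population (high-risk) areas.
        risk_weight = max_count / area_counts[rec["area"]]
        if rng.random() >= min(1.0, base_rate * risk_weight):
            continue
        # Swap partner: a record from the geographically closest other area.
        cx, cy = area_centroids[rec["area"]]
        candidates = [j for j, s in enumerate(out) if s["area"] != rec["area"]]
        if not candidates:
            continue
        j = min(candidates,
                key=lambda k: (area_centroids[out[k]["area"]][0] - cx) ** 2
                            + (area_centroids[out[k]["area"]][1] - cy) ** 2)
        out[i]["area"], out[j]["area"] = out[j]["area"], out[i]["area"]
    return out
```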