
    Learning Convex Partitions and Computing Game-theoretic Equilibria from Best Response Queries

    Suppose that an $m$-simplex is partitioned into $n$ convex regions having disjoint interiors and distinct labels, and we may learn the label of any point by querying it. The learning objective is to know, for any point in the simplex, a label that occurs within some distance $\epsilon$ from that point. We present two algorithms for this task: Constant-Dimension Generalised Binary Search (CD-GBS), which for constant $m$ uses $\mathrm{poly}(n, \log \frac{1}{\epsilon})$ queries, and Constant-Region Generalised Binary Search (CR-GBS), which uses CD-GBS as a subroutine and for constant $n$ uses $\mathrm{poly}(m, \log \frac{1}{\epsilon})$ queries. We show via Kakutani's fixed-point theorem that these algorithms provide bounds on the best-response query complexity of computing approximate well-supported equilibria of bimatrix games in which one of the players has a constant number of pure strategies. We also partially extend our results to games with multiple players, establishing further query complexity bounds for computing approximate well-supported equilibria in this setting.
    Comment: 38 pages, 7 figures; second version strengthens the lower bound in Theorem 6, adds footnotes with additional comments and fixes typo
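
    As a concrete illustration of the query model, the minimal sketch below (not the paper's CD-GBS or CR-GBS algorithms) handles the one-dimensional case with two labelled regions: ordinary binary search locates the label boundary to within $\epsilon$ using $O(\log \frac{1}{\epsilon})$ label queries. The oracle toy_labels and its threshold 0.37 are made-up assumptions for illustration.

    import math

    def locate_boundary(query, eps):
        # Plain binary search on [0, 1]; assumes the label changes exactly once.
        lo, hi = 0.0, 1.0
        base_label = query(lo)
        for _ in range(math.ceil(math.log2(1.0 / eps))):
            mid = 0.5 * (lo + hi)
            if query(mid) == base_label:
                lo = mid      # boundary lies to the right of mid
            else:
                hi = mid      # boundary lies at or to the left of mid
        return lo, hi         # interval of width <= eps containing the boundary

    # Hypothetical label oracle: 'A' below 0.37, 'B' from 0.37 onwards.
    toy_labels = lambda x: 'A' if x < 0.37 else 'B'
    print(locate_boundary(toy_labels, 1e-6))

    The algorithms in the paper generalise this query-efficient narrowing to higher-dimensional simplices with constant $m$ or constant $n$.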

    Feature learning in feature-sample networks using multi-objective optimization

    Data and knowledge representation are fundamental concepts in machine learning. The quality of the representation directly impacts the performance of the learning model. Feature learning transforms or enhances raw data into structures that can be effectively exploited by those models. In recent years, several works have used complex networks for data representation and analysis; however, no feature learning method has been proposed for this category of techniques. Here, we present an unsupervised feature learning mechanism that works on datasets with binary features. First, the dataset is mapped into a feature-sample network. Then, a multi-objective optimization process selects a set of new vertices to produce an enhanced version of the network. Each new feature depends on a nonlinear function of a combination of preexisting features; effectively, the process projects the input data into a higher-dimensional space. To solve the optimization problem, we design two metaheuristics based on the lexicographic genetic algorithm and the improved strength Pareto evolutionary algorithm (SPEA2). We show that the enhanced network contains more information and can be exploited to improve the performance of machine learning methods. The advantages and disadvantages of each optimization strategy are discussed.
    Comment: 7 pages, 4 figures
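
    A minimal sketch of the representation being described, under assumed toy data: samples and features form the two vertex sets of a bipartite feature-sample network, and a candidate new feature is a nonlinear combination (here XOR) of two existing features appended as an extra vertex. The function names and the XOR choice are illustrative; the paper selects such combinations with multi-objective metaheuristics rather than by hand.

    import numpy as np

    # Toy binary dataset: rows are samples, columns are features.
    X = np.array([[1, 0, 1],
                  [0, 1, 1],
                  [1, 1, 0],
                  [0, 0, 1]], dtype=int)

    def to_feature_sample_edges(X):
        # Bipartite feature-sample network: edge (sample i, feature j) whenever X[i, j] == 1.
        return [(f"s{i}", f"f{j}") for i, j in zip(*np.nonzero(X))]

    def add_composite_feature(X, j, k, op=np.logical_xor):
        # Append a new feature vertex defined by a nonlinear combination of features j and k.
        new_col = op(X[:, j], X[:, k]).astype(int)
        return np.column_stack([X, new_col])

    X_aug = add_composite_feature(X, 0, 1)
    print(to_feature_sample_edges(X_aug))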

    In Defense of DEFECT or Cooperation does not Justify the Solution Concept

    The one-state machine that always defects is the only evolutionarily stable strategy in the machine game derived from the prisoners' dilemma when preferences are lexicographic in complexity. This machine is also the only stochastically stable strategy of the machine game when players are restricted to choosing machines of uniformly bounded complexity.
    Keywords: cooperation; prisoners' dilemma; automata; evolution
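
    The machine game can be made concrete with a small sketch: machines are Moore automata that output an action per state and transition on the opponent's last action, and lexicographic preferences compare average repeated-game payoff first and state count (complexity) second. The payoff values, the 200-round horizon, and the tit-for-tat opponent below are illustrative assumptions, not taken from the paper.

    # Standard prisoners' dilemma stage payoffs (assumed: T=5, R=3, P=1, S=0).
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    # A machine: output per state, transition indexed by the opponent's last action.
    ALL_D = {'start': 0, 'out': ['D'], 'next': [{'C': 0, 'D': 0}]}
    TIT_FOR_TAT = {'start': 0, 'out': ['C', 'D'],
                   'next': [{'C': 0, 'D': 1}, {'C': 0, 'D': 1}]}

    def play(m1, m2, rounds=200):
        # Average per-round payoffs when two machines play the repeated game.
        s1, s2, total1, total2 = m1['start'], m2['start'], 0, 0
        for _ in range(rounds):
            a1, a2 = m1['out'][s1], m2['out'][s2]
            p1, p2 = PAYOFF[(a1, a2)]
            total1, total2 = total1 + p1, total2 + p2
            s1, s2 = m1['next'][s1][a2], m2['next'][s2][a1]
        return total1 / rounds, total2 / rounds

    def lexicographic_score(machine, opponent):
        # Payoff first, then fewer states (lower complexity) as the tiebreaker.
        payoff, _ = play(machine, opponent)
        return (payoff, -len(machine['out']))

    print(lexicographic_score(ALL_D, ALL_D), lexicographic_score(TIT_FOR_TAT, ALL_D))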

    Designing IS service strategy: an information acceleration approach

    Information technology-based innovation involves considerable risk that requires insight and foresight. Yet our understanding of how managers develop the insight to support new breakthrough applications is limited and remains obscured by high levels of technical and market uncertainty. This paper applies a new experimental method based on “discrete choice analysis” and “information acceleration” to directly examine how decisions are made in a way that is behaviourally sound. The method is highly applicable for information systems researchers because it provides relative importance measures on a common scale, greater control over alternate explanations and stronger evidence of causality. The practical implications are that information acceleration reduces the levels of uncertainty and generates a more accurate rationale for IS service strategy decisions.
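
    Discrete choice analysis ultimately turns attribute weights into choice probabilities; the hedged sketch below shows the multinomial-logit form of that calculation for three hypothetical IS service options. The attribute values, weights, and option names are assumptions for illustration, not results from the study.

    import numpy as np

    def choice_probabilities(utilities):
        # Multinomial logit choice probabilities via a numerically stable softmax.
        u = np.asarray(utilities, dtype=float)
        u = u - u.max()
        expu = np.exp(u)
        return expu / expu.sum()

    # Hypothetical attributes (price, reliability, novelty) and assumed part-worth weights.
    weights = np.array([-0.8, 1.2, 0.5])
    options = np.array([[1.0, 0.6, 0.2],   # option A
                        [0.5, 0.9, 0.1],   # option B
                        [0.7, 0.4, 0.9]])  # option C
    print(choice_probabilities(options @ weights))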

    Histogram-Aware Sorting for Enhanced Word-Aligned Compression in Bitmap Indexes

    Bitmap indexes must be compressed to reduce input/output costs and minimize CPU usage. To accelerate logical operations (AND, OR, XOR) over bitmaps, we use techniques based on run-length encoding (RLE), such as Word-Aligned Hybrid (WAH) compression. These techniques are sensitive to the order of the rows: a simple lexicographical sort can divide the index size by 9 and make indexes several times faster. We investigate reordering heuristics based on computed attribute-value histograms. Simply permuting the columns of the table based on these histograms can increase the sorting efficiency by 40%.
    Comment: To appear in proceedings of DOLAP 200
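
    A minimal sketch, on assumed toy data, of why row order matters for RLE-style compression: it counts the number of runs per bitmap column before and after a lexicographic sort of the rows. Real WAH encoding is word-aligned and more involved; the run count is used here only as a proxy for compressed size.

    from itertools import groupby

    def run_count(bits):
        # Number of runs in a 0/1 sequence; fewer runs means better RLE/WAH compression.
        return sum(1 for _ in groupby(bits))

    # Toy bitmap index: each row is a record, each column a bitmap for one attribute value.
    rows = [(1, 0, 1), (0, 1, 0), (1, 0, 0), (0, 1, 1), (1, 0, 1), (0, 1, 0)]

    def total_runs(rows):
        return sum(run_count(col) for col in zip(*rows))

    print("unsorted:", total_runs(rows))
    print("lexicographically sorted:", total_runs(sorted(rows)))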