1,047 research outputs found

    Hypercubes and Compromise Values for Cooperative Fuzzy Games

    AMS classification: 90D12; 03E72
    Keywords: cooperative games; compromise values; core; fuzzy coalitions; fuzzy games; hypercubes; path solutions; Weber set

    Harmonious Hilbert curves and other extradimensional space-filling curves

    This paper introduces a new way of generalizing Hilbert's two-dimensional space-filling curve to arbitrary dimensions. The new curves, called harmonious Hilbert curves, have the unique property that for any d' < d, the d-dimensional curve is compatible with the d'-dimensional curve with respect to the order in which the curves visit the points of any d'-dimensional axis-parallel space that contains the origin. Similar generalizations to arbitrary dimensions are described for several variants of Peano's curve (the original Peano curve, the coil curve, the half-coil curve, and the Meurthe curve). The d-dimensional harmonious Hilbert curves and the Meurthe curves have neutral orientation: compared to the curve as a whole, arbitrary pieces of the curve have each of d! possible rotations with equal probability. Thus one could say these curves are 'statistically invariant' under rotation, unlike the Peano curves, the coil curves, the half-coil curves, and the familiar generalization of Hilbert curves by Butz and Moore. In addition, prompted by an application in the construction of R-trees, this paper shows how to construct a 2d-dimensional generalized Hilbert or Peano curve that traverses the points of a certain d-dimensional diagonally placed subspace in the order of a given d-dimensional generalized Hilbert or Peano curve. Pseudocode is provided for comparison operators based on the curves presented in this paper.
    Comment: 40 pages, 10 figures, pseudocode included
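    As a concrete illustration of the kind of comparison operator the paper provides pseudocode for, the sketch below computes the index of a point along the classic two-dimensional Hilbert curve (the familiar Butz/Moore-style construction, not the harmonious curves introduced here) and uses it to sort points into curve order; the function name, grid order, and sample points are illustrative assumptions.

```python
def hilbert_index(order, x, y):
    """Position of grid point (x, y) along a classic 2-D Hilbert curve
    covering a 2**order x 2**order grid (not the paper's harmonious curves)."""
    n = 1 << order
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0   # which half of the current cell in x
        ry = 1 if (y & s) > 0 else 0   # which half of the current cell in y
        d += s * s * ((3 * rx) ^ ry)   # offset contributed by this quadrant
        # rotate/reflect so the sub-quadrant is traversed in standard orientation
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Sorting by the curve index yields the order in which the curve visits the points.
points = [(3, 0), (0, 0), (1, 2), (2, 3), (0, 3)]
print(sorted(points, key=lambda p: hilbert_index(2, *p)))
```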

    Inference Based on Conditional Moment Inequalities

    In this paper, we propose an instrumental variable approach to constructing confidence sets (CS's) for the true parameter in models defined by conditional moment inequalities/equalities. We show that by properly choosing instrument functions, one can transform conditional moment inequalities/equalities into unconditional ones without losing identification power. Based on the unconditional moment inequalities/equalities, we construct CS's by inverting Cramer-von Mises-type or Kolmogorov-Smirnov-type tests. Critical values are obtained using generalized moment selection (GMS) procedures. We show that the proposed CS's have correct uniform asymptotic coverage probabilities. New methods are required to establish these results because an infinite-dimensional nuisance parameter affects the asymptotic distributions. We show that the tests considered are consistent against all fixed alternatives and have power against some n^{-1/2}-local alternatives, though not all such alternatives. Monte Carlo simulations for three different models show that the methods perform well in finite samples.
    Keywords: asymptotic size, asymptotic power, conditional moment inequalities, confidence set, Cramer-von Mises, generalized moment selection, Kolmogorov-Smirnov, moment inequalities
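    A minimal sketch of the approach under simplifying assumptions: for the one-sided model E[Y - theta | Z] >= 0, indicator ("box") instrument functions in Z turn the conditional inequality into finitely many unconditional ones, a Cramer-von Mises-type statistic aggregates the studentized violations, and the test is inverted over a grid of theta values. The box instruments, the plain multiplier bootstrap used for critical values (in place of the paper's GMS procedure), and the data-generating process are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

def cvm_stat(y, z, theta, boxes):
    """CvM-type statistic: sum over boxes of squared negative t-ratios."""
    stat, n = 0.0, len(y)
    for lo, hi in boxes:
        w = (z >= lo) & (z < hi)                # instrument 1{Z in box}
        m = (y - theta) * w                     # unconditional moment function
        t = np.sqrt(n) * m.mean() / (m.std(ddof=1) + 1e-12)
        stat += min(t, 0.0) ** 2                # only violations contribute
    return stat

def bootstrap_crit(y, z, theta, boxes, level=0.95, B=200):
    """Multiplier-bootstrap critical value (a simplification of GMS)."""
    n, draws = len(y), []
    for _ in range(B):
        xi = rng.standard_normal(n)             # Gaussian multipliers
        s = 0.0
        for lo, hi in boxes:
            w = (z >= lo) & (z < hi)
            m = (y - theta) * w
            t = np.sqrt(n) * ((m - m.mean()) * xi).mean() / (m.std(ddof=1) + 1e-12)
            s += min(t, 0.0) ** 2
        draws.append(s)
    return np.quantile(draws, level)

# Simulated data satisfying E[Y - theta | Z] >= 0 for every theta <= 1.
n = 500
z = rng.uniform(0, 1, n)
y = 1.0 + z + 0.3 * rng.standard_normal(n)      # E[Y | Z] = 1 + Z >= 1

boxes = [(k / 5, (k + 1) / 5) for k in range(5)]        # simple box instruments in Z
grid = np.linspace(0.0, 2.0, 41)
cs = [th for th in grid if cvm_stat(y, z, th, boxes) <= bootstrap_crit(y, z, th, boxes)]
print("confidence set approx. [%.2f, %.2f]" % (min(cs), max(cs)))
```

    In this toy design the identified set is every theta at or below 1, so the reported confidence set should extend from the bottom of the grid up to a value slightly above 1.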

    Distributed Online Learning via Cooperative Contextual Bandits

    In this paper we propose a novel framework for decentralized, online learning by many learners. At each moment of time, an instance characterized by a certain context may arrive at each learner; based on the context, the learner can select one of its own actions (which yields a reward and provides information) or request assistance from another learner. In the latter case, the requester pays a cost and receives the reward, while the provider learns the information. In our framework, learners are modeled as cooperative contextual bandits. Each learner seeks to maximize the expected reward from its arrivals, which involves trading off the reward received from its own actions, the information learned from its own actions, the reward received from actions requested of others, and the cost paid for those actions, taking into account what it has learned about the value of assistance from each other learner. We develop distributed online learning algorithms and provide analytic bounds that compare their efficiency with a complete-knowledge (oracle) benchmark, in which the expected reward of every action in every context is known by every learner. Our bounds show that the regret, the loss incurred by the algorithm relative to the oracle, is sublinear in time. Our theoretical framework can be used in many practical applications, including Big Data mining, event detection in surveillance sensor networks, and distributed online recommendation systems.
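    A minimal sketch, not the paper's algorithm: each learner keeps per-context estimates of the net reward of pulling its own arms versus requesting help from a peer (which costs a fixed fee), and chooses among these options with a simple UCB rule. The contexts, the request cost, the reward model, and the class interface are all assumptions made for illustration.

```python
import math
import random

class CooperativeLearner:
    """One learner that can pull its own arms or pay to ask a peer for help."""
    def __init__(self, own_arms, peers, ask_cost=0.2):
        self.own_arms = own_arms          # ids of this learner's own actions
        self.peers = peers                # peers that can be asked for assistance
        self.ask_cost = ask_cost          # price paid when requesting a peer
        self.counts = {}                  # (context, action) -> number of pulls
        self.means = {}                   # (context, action) -> mean net reward
        self.t = 0

    def _ucb(self, context, action):
        n = self.counts.get((context, action), 0)
        if n == 0:
            return float("inf")           # force exploration of untried actions
        return self.means[(context, action)] + math.sqrt(2 * math.log(self.t) / n)

    def act(self, context):
        self.t += 1
        actions = [("own", a) for a in self.own_arms]
        actions += [("ask", p) for p in range(len(self.peers))]
        return max(actions, key=lambda a: self._ucb(context, a))

    def update(self, context, action, reward):
        net = reward - (self.ask_cost if action[0] == "ask" else 0.0)
        key, n = (context, action), self.counts.get((context, action), 0)
        self.counts[key] = n + 1
        self.means[key] = (self.means.get(key, 0.0) * n + net) / (n + 1)

# Toy simulation: the learner's own arms pay well in context 0, the peer in context 1.
def own_reward(context, arm):
    return random.gauss(0.6 if context == 0 else 0.3, 0.1)

def peer_reward(context):
    return random.gauss(0.4 if context == 0 else 0.9, 0.1)

learner = CooperativeLearner(own_arms=[0, 1], peers=["peer0"])
for step in range(2000):
    ctx = step % 2
    kind, idx = learner.act(ctx)
    r = own_reward(ctx, idx) if kind == "own" else peer_reward(ctx)
    learner.update(ctx, (kind, idx), r)

for ctx in (0, 1):
    best = max((k for k in learner.means if k[0] == ctx), key=lambda k: learner.means[k])
    print(f"context {ctx}: best action {best[1]}, est. net reward {learner.means[best]:.2f}")
```

    With these parameters the learner should learn to use its own arms in context 0 and to request the peer's help in context 1, since an expected reward of 0.9 minus the 0.2 request cost still beats 0.3.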