
    Shaping Social Activity by Incentivizing Users

    Events in an online social network can be categorized roughly into endogenous events, where users respond to the actions of their neighbors within the network, and exogenous events, where users take actions due to drives external to the network. How much external drive should be provided to each user so that the network activity can be steered towards a target state? In this paper, we model social events using multivariate Hawkes processes, which can capture both endogenous and exogenous event intensities, and derive a time-dependent linear relation between the intensity of exogenous events and the overall network activity. Exploiting this connection, we develop a convex optimization framework for determining the level of external drive required for the network to reach a desired activity level. We conduct experiments with event data gathered from Twitter and show that our method can steer the activity of the network more accurately than alternatives.
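
    A minimal sketch of the steering idea in its steady-state form, assuming a stationary multivariate Hawkes process: if B is the branching (influence) matrix with spectral radius below one, the mean activity satisfies eta = (I - B)^{-1} mu, so the exogenous rate mu needed to hit a target activity eta_star is a nonnegative least-squares problem. The matrices and targets below are illustrative, not the paper's data; the paper's relation is time-dependent and its optimization more general.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        n = 5                                  # number of users
        B = 0.1 * rng.random((n, n))           # illustrative influence (branching) matrix
        eta_star = np.full(n, 2.0)             # desired steady-state activity per user

        # Steady state: eta = (I - B)^{-1} mu, so pick mu >= 0 minimizing
        # ||(I - B)^{-1} mu - eta_star||_2 (a nonnegative least-squares problem).
        M = np.linalg.inv(np.eye(n) - B)       # map from exogenous rate to activity
        mu, _ = nnls(M, eta_star)

        print("required exogenous intensity per user:", mu)
        print("achieved steady-state activity:", M @ mu)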

    Maximin Safety: When Failing to Lose is Preferable to Trying to Win

    We present a new decision rule, "maximin safety", that seeks to maintain a large margin from the worst outcome, in much the same way that minimax regret seeks to minimize the distance from the best. We argue that maximin safety is valuable both descriptively and normatively. Descriptively, maximin safety explains the well-known "decoy effect", in which the introduction of a dominated option changes preferences among the other options. Normatively, we provide an axiomatization that characterizes preferences induced by maximin safety, and show that maximin safety shares much of the same behavioral basis with minimax regret.
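
    A toy sketch contrasting the two rules on a payoff matrix (rows are acts, columns are states), under the usual definitions: the safety of an act in a state is its payoff margin above the worst act in that state, and its regret is its shortfall from the best. The numbers are illustrative only.

        import numpy as np

        U = np.array([[5.0, 2.0],              # payoffs: U[act, state]
                      [4.0, 4.0],
                      [1.0, 6.0]])

        safety = U - U.min(axis=0)             # margin above the worst act, per state
        regret = U.max(axis=0) - U             # shortfall from the best act, per state

        maximin_safety_act = int(np.argmax(safety.min(axis=1)))
        minimax_regret_act = int(np.argmin(regret.max(axis=1)))

        print("maximin safety chooses act", maximin_safety_act)
        print("minimax regret chooses act", minimax_regret_act)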

    Multicriteria ranking using weights which minimize the score range

    Various schemes have been proposed for generating a set of non-subjective weights when aggregating multiple criteria for the purposes of ranking or selecting alternatives. The maximin approach chooses the weights which maximize the lowest score (assuming there is an upper bound to scores). This is equivalent to finding the weights which minimize the maximum deviation, or range, between the worst and best scores (minimax). At first glance this seems to be an equitable way of apportioning weight, and the Rawlsian theory of justice has been cited in its support. We draw a distinction between using the maximin rule for the purpose of assessing performance and using it for allocating resources amongst the alternatives. We demonstrate that it has a number of drawbacks which make it inappropriate for the assessment of performance. Specifically, it is tantamount to allowing the worst performers to decide the worth of the criteria so as to maximize their overall score. Furthermore, when making a selection from a list of alternatives, the final choice is highly sensitive to the removal or inclusion of alternatives whose performance is so poor that they are clearly irrelevant to the choice at hand.
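
    The maximin weighting scheme is a small linear program: choose nonnegative weights summing to one that maximize the lowest aggregate score. A minimal sketch, with an illustrative score matrix (rows are alternatives, columns are criteria):

        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[0.9, 0.2, 0.5],         # rows: alternatives, cols: criteria
                      [0.4, 0.8, 0.3],
                      [0.6, 0.5, 0.7]])
        n_alt, n_crit = X.shape

        # Variables (w_1, ..., w_k, t): maximize t subject to X @ w >= t,
        # sum(w) = 1, w >= 0, i.e. minimize -t in linprog's convention.
        c = np.concatenate([np.zeros(n_crit), [-1.0]])
        A_ub = np.hstack([-X, np.ones((n_alt, 1))])      # t - X @ w <= 0
        b_ub = np.zeros(n_alt)
        A_eq = np.concatenate([np.ones(n_crit), [0.0]]).reshape(1, -1)
        b_eq = [1.0]
        bounds = [(0, None)] * n_crit + [(None, None)]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        w, worst = res.x[:n_crit], res.x[n_crit]
        print("maximin weights:", w, "lowest score:", worst)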

    Testing the isotropy of high energy cosmic rays using spherical needlets

    For many decades, ultrahigh energy charged particles of unknown origin that can be observed from the ground have been a puzzle for particle physicists and astrophysicists. As an attempt to discriminate among several possible production scenarios, astrophysicists try to test the statistical isotropy of the directions of arrival of these cosmic rays. At the highest energies, they are supposed to point toward their sources with good accuracy. However, the observations are so rare that testing the distribution of such samples of directional data on the sphere is nontrivial. In this paper, we choose a nonparametric framework that makes weak hypotheses on the alternative distributions and in turn allows the detection of various and possibly unexpected forms of anisotropy. We explore two particular procedures. Both are derived from fitting the empirical distribution with wavelet expansions of densities. We use the wavelet frame introduced by Narcowich, Petrushev and Ward [SIAM J. Math. Anal. 38 (2006) 574-594], the so-called needlets. The expansions are truncated at scale indices no larger than some J^*, and the L^p distances between those estimates and the null density are computed. One family of tests (called Multiple) is based on the idea of testing the distance from the null for each choice of J = 1, ..., J^*, whereas the so-called PlugIn approach is based on the single full J^* expansion, but with thresholded wavelet coefficients. We describe the practical implementation of these two procedures and compare them to other methods in the literature. As alternatives to isotropy, we consider both very simple toy models and more realistic nonisotropic models based on physics-inspired simulations. The Monte Carlo study shows good performance of the Multiple test, even at moderate sample size, for a wide sample of alternative hypotheses and for different choices of the parameter J^*. On the 69 most energetic events published by the Pierre Auger Collaboration, the needlet-based procedures suggest statistical evidence for anisotropy. Using several values for the parameters of the methods, our procedures yield p-values below 1%, but with uncontrolled multiplicity issues. The flexibility of this method and the possibility to modify it to take into account a large variety of extensions of the problem make it an interesting option for future investigation of the origin of ultrahigh energy cosmic rays.
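
    A heavily simplified sketch of the Multiple idea: measure the distance of the empirical expansion from the uniform density at several scales and calibrate each scale by Monte Carlo under isotropy. For brevity the sketch bands ordinary spherical harmonics in place of needlets, which changes the frame but keeps the multiscale logic; the sample, bands, and Monte Carlo size are all toy choices.

        import numpy as np
        from scipy.special import sph_harm     # Y_lm(m, l, azimuth, colatitude)

        def uniform_sphere(n, rng):
            """Sample n directions uniformly: colatitude theta, longitude phi."""
            theta = np.arccos(rng.uniform(-1.0, 1.0, n))
            phi = rng.uniform(0.0, 2.0 * np.pi, n)
            return theta, phi

        def band_stats(theta, phi, bands):
            """Squared harmonic mass per band: an L2-type distance from uniformity."""
            l_max = max(hi for _, hi in bands)
            a = {(l, m): np.mean(np.conj(sph_harm(m, l, phi, theta)))
                 for l in range(1, l_max + 1) for m in range(-l, l + 1)}
            return np.array([sum(abs(a[l, m]) ** 2
                                 for l in range(lo, hi + 1)
                                 for m in range(-l, l + 1))
                             for lo, hi in bands])

        rng = np.random.default_rng(1)
        bands = [(1, 2), (3, 4), (5, 8)]       # toy analogue of scales J = 1, 2, 3
        theta_obs, phi_obs = uniform_sphere(69, rng)   # stand-in for observed events

        obs = band_stats(theta_obs, phi_obs, bands)
        null = np.array([band_stats(*uniform_sphere(69, rng), bands)
                         for _ in range(500)])
        p_scale = (null >= obs).mean(axis=0)   # Monte Carlo p-value per scale
        print("per-scale p-values:", p_scale)
        print("Bonferroni combination:", min(1.0, len(bands) * p_scale.min()))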

    A Note on Minimax Testing and Confidence Intervals in Moment Inequality Models

    This note uses a simple example to show how moment inequality models used in the empirical economics literature lead to general minimax relative efficiency comparisons. The main point is that such models involve inference on a low-dimensional parameter, which leads naturally to a definition of "distance" that, in full generality, would be arbitrary in minimax testing problems. This definition of distance is justified by the fact that it leads to a duality between minimaxity of confidence intervals and tests, which does not hold for other definitions of distance. Thus, the use of moment inequalities for inference in a low-dimensional parametric model places additional structure on the testing problem, which leads to stronger conclusions regarding minimax relative efficiency than would otherwise be possible.
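
    A toy illustration of the duality being exploited, in the simplest one-sided Gaussian model E[X] >= theta: inverting the level-alpha one-sided test of each candidate theta_0 yields the confidence interval, so minimaxity statements about one transfer to the other. The model and numbers are illustrative, not the note's example.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        x = rng.normal(loc=1.0, scale=1.0, size=200)   # data with E[X] = 1
        alpha = 0.05
        se = x.std(ddof=1) / np.sqrt(len(x))

        def rejects(theta0):
            """One-sided level-alpha test of the inequality E[X] >= theta0."""
            return (x.mean() - theta0) / se < norm.ppf(alpha)

        # Duality: the CI is exactly the set of theta0 the test does not reject,
        # here the one-sided interval (-inf, x_bar - z_alpha * se].
        upper = x.mean() - norm.ppf(alpha) * se
        print("95% CI for feasible theta: (-inf, %.3f]" % upper)
        print("test rejects theta0 = 1.2:", rejects(1.2))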

    Adaptive goodness-of-fit tests in a density model

    Given an i.i.d. sample drawn from a density f, we propose to test that f equals some prescribed density f_0 or that f belongs to some translation/scale family. We introduce a multiple testing procedure based on an estimation of the L_2-distance between f and f_0, or between f and the parametric family that we consider. For each sample size n, our test has level of significance α. In the case of simple hypotheses, we prove that our test is adaptive: it achieves the optimal rates of testing established by Ingster [J. Math. Sci. 99 (2000) 1110-1119] over various classes of smooth functions simultaneously. As for composite hypotheses, we obtain similar results up to a logarithmic factor. We carry out a simulation study to compare our procedures with the Kolmogorov-Smirnov tests, and with goodness-of-fit tests proposed by Bickel and Ritov [in Nonparametric Statistics and Related Topics (1992) 51-57] and by Kallenberg and Ledwina [Ann. Statist. 23 (1995) 1594-1608].
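
    A hedged sketch of the multiple-testing idea: estimate the squared L_2 distance between f and f_0 through empirical coefficients in an orthonormal basis at several truncation levels, and reject if any level exceeds its Monte Carlo critical value at a Bonferroni-corrected level. Here f_0 is the uniform density on [0, 1] and a cosine basis stands in for the paper's collection of estimators; all tuning choices are illustrative.

        import numpy as np

        def coeffs(x, j_max):
            """Empirical cosine coefficients c_j = mean of sqrt(2) cos(pi j x)."""
            j = np.arange(1, j_max + 1)
            return (np.sqrt(2.0) * np.cos(np.pi * np.outer(x, j))).mean(axis=0)

        def stats(x, dims):
            """Estimated squared L2 distance from the uniform at each level."""
            c = coeffs(x, max(dims))
            return np.array([np.sum(c[:d] ** 2) for d in dims])

        rng = np.random.default_rng(3)
        n, dims, alpha, n_mc = 200, [2, 4, 8, 16], 0.05, 1000

        # Per-level critical values under f0 at the Bonferroni-corrected level.
        null = np.array([stats(rng.uniform(size=n), dims) for _ in range(n_mc)])
        crit = np.quantile(null, 1.0 - alpha / len(dims), axis=0)

        x = rng.beta(2.0, 2.0, size=n)         # a non-uniform alternative
        print("reject uniformity:", bool(np.any(stats(x, dims) > crit)))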

    Adaptive Test of Conditional Moment Inequalities

    In this paper, I construct a new test of conditional moment inequalities, which is based on studentized kernel estimates of moment functions with many different values of the bandwidth parameter. The test automatically adapts to the unknown smoothness of the moment functions and has uniformly correct asymptotic size. The test has high power in a large class of models with conditional moment inequalities. Some existing tests have nontrivial power against n^{-1/2}-local alternatives in a certain class of these models, whereas my method only allows for nontrivial testing against (n/log n)^{-1/2}-local alternatives in this class. There exist, however, other classes of models with conditional moment inequalities where those tests have much lower power than the test developed in this paper.
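
    A toy sketch of the adaptive construction: studentized kernel estimates of E[Y | X = x] over a grid of points and many bandwidths, combined in a sup statistic and calibrated by a Gaussian multiplier bootstrap. The kernel, grids, recentring, and bootstrap below are simplifications for illustration, not the paper's exact construction.

        import numpy as np

        def sup_stat(y, x, grid, bandwidths, multipliers=None):
            """Largest studentized violation of E[Y | X = x] >= 0 over (x0, h)."""
            t_max = -np.inf
            for h in bandwidths:
                for x0 in grid:
                    yk = y * np.exp(-0.5 * ((x - x0) / h) ** 2)  # Gaussian kernel
                    if multipliers is not None:    # bootstrap draw: recentre and
                        yk = (yk - yk.mean()) * multipliers      # perturb each term
                    m = yk.mean()
                    s = yk.std(ddof=1) / np.sqrt(len(y))
                    t_max = max(t_max, -m / s)     # violations make t large
            return t_max

        rng = np.random.default_rng(4)
        n = 400
        x = rng.uniform(-1.0, 1.0, n)
        y = 0.2 + x ** 2 + rng.normal(0.0, 0.5, n)   # satisfies E[Y | X] >= 0
        grid = np.linspace(-1.0, 1.0, 10)
        bandwidths = [0.1, 0.2, 0.4]                 # the adaptive ingredient
        alpha = 0.05

        t_obs = sup_stat(y, x, grid, bandwidths)
        boot = [sup_stat(y, x, grid, bandwidths, rng.normal(size=n))
                for _ in range(300)]
        print("reject H0:", t_obs > np.quantile(boot, 1.0 - alpha))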