
    Multicriteria ranking using weights which minimize the score range

    Various schemes have been proposed for generating a set of non-subjective weights when aggregating multiple criteria for the purposes of ranking or selecting alternatives. The maximin approach chooses the weights which maximise the lowest score (assuming there is an upper bound to scores). This is equivalent to finding the weights which minimise the maximum deviation, or range, between the worst and best scores (minimax). At first glance this seems to be an equitable way of apportioning weight, and the Rawlsian theory of justice has been cited in its support. We draw a distinction between using the maximin rule for the purpose of assessing performance, and using it for allocating resources amongst the alternatives. We demonstrate that it has a number of drawbacks which make it inappropriate for the assessment of performance. Specifically, it is tantamount to allowing the worst performers to decide the worth of the criteria so as to maximise their overall score. Furthermore, when making a selection from a list of alternatives, the final choice is highly sensitive to the removal or inclusion of alternatives whose performance is so poor that they are clearly irrelevant to the choice at hand.
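
    The maximin weighting scheme just described is a small linear program: maximise a scalar t subject to every alternative's weighted score being at least t, with the weights summing to one. The sketch below is our own illustration using SciPy; the score matrix and the observation in the final comment are hypothetical, not data or results from the paper.

        # Maximin weight selection as a linear program (illustrative sketch).
        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[0.9, 0.2],    # alternative A: scores on two criteria
                      [0.3, 0.8],    # alternative B
                      [0.1, 0.1]])   # alternative C: a clearly dominated performer
        n_alt, n_crit = X.shape

        # Decision variables z = (w_1, ..., w_J, t); maximise t, the lowest weighted score.
        c = np.zeros(n_crit + 1); c[-1] = -1.0        # linprog minimises, so use -t
        A_ub = np.hstack([-X, np.ones((n_alt, 1))])   # t - X @ w <= 0 for every alternative
        b_ub = np.zeros(n_alt)
        A_eq = np.array([[1.0] * n_crit + [0.0]])     # weights sum to one
        b_eq = np.array([1.0])
        bounds = [(0, None)] * n_crit + [(None, None)]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        w, t = res.x[:-1], res.x[-1]
        print("maximin weights:", w, "worst score:", t)
        # Adding or removing the poor performer C can change the optimal weights and
        # hence the ranking; this is the kind of sensitivity the abstract criticises.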

    Shaping Social Activity by Incentivizing Users

    Events in an online social network can be categorized roughly into endogenous events, where users just respond to the actions of their neighbors within the network, or exogenous events, where users take actions due to drives external to the network. How much external drive should be provided to each user, such that the network activity can be steered towards a target state? In this paper, we model social events using multivariate Hawkes processes, which can capture both endogenous and exogenous event intensities, and derive a time-dependent linear relation between the intensity of exogenous events and the overall network activity. Exploiting this connection, we develop a convex optimization framework for determining the required level of external drive in order for the network to reach a desired activity level. We experiment with event data gathered from Twitter and show that our method can steer the activity of the network more accurately than alternatives.
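
    For intuition, the stationary special case of such a linear relation is easy to state: with a branching matrix B of spectral radius below one, the mean activity is (I - B)^{-1} times the exogenous intensity, so the drive needed for a target activity level is obtained by applying (I - B) to the target. The sketch below is our own simplification with made-up numbers; it is not the paper's time-dependent formulation or its optimization framework.

        # Stationary Hawkes intuition: mean activity = (I - B)^{-1} mu, so the
        # exogenous drive for a target activity level is mu = (I - B) @ target.
        import numpy as np

        B = np.array([[0.2, 0.1],
                      [0.3, 0.1]])        # expected events user j triggers in user i
        target = np.array([5.0, 3.0])     # desired average event rates per user

        mu = (np.eye(2) - B) @ target     # required exogenous intensity
        assert np.all(mu >= 0), "target not reachable with nonnegative drive"
        print("exogenous drive:", mu)
        print("implied activity:", np.linalg.solve(np.eye(2) - B, mu))  # recovers target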

    Detection of a sparse submatrix of a high-dimensional noisy matrix

    We observe an $N\times M$ matrix $Y_{ij}=s_{ij}+\xi_{ij}$ with $\xi_{ij}\sim\mathcal{N}(0,1)$ i.i.d. in $i,j$, and $s_{ij}\in\mathbb{R}$. We test the null hypothesis $s_{ij}=0$ for all $i,j$ against the alternative that there exists some submatrix of size $n\times m$ with significant elements in the sense that $s_{ij}\ge a>0$. We propose a test procedure and compute the asymptotic detection boundary $a$ so that the maximal testing risk tends to 0 as $M\to\infty$, $N\to\infty$, $p=n/N\to 0$, $q=m/M\to 0$. We prove that this boundary is asymptotically sharp minimax under some additional constraints. Relations with other testing problems are discussed. We propose a testing procedure which adapts to unknown $(n,m)$ within some given set and compute the adaptive sharp rates. The implementation of our test procedure on synthetic data shows excellent behavior for sparse, not necessarily square matrices. We extend our sharp minimax results in different directions: first, to Gaussian matrices with unknown variance; next, to matrices of random variables having a distribution from an exponential family (non-Gaussian); and, finally, to a two-sided alternative for matrices with Gaussian elements.
    Comment: Published at http://dx.doi.org/10.3150/12-BEJ470 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
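
    The kind of scan statistic involved can be illustrated by brute force on a small matrix: take the largest standardized entry sum over all $n\times m$ submatrices and reject when it is unusually large. The sketch below is our own toy simplification with synthetic data; the paper's procedure, its calibration and its adaptation to unknown $(n,m)$ are more refined.

        # Toy brute-force scan statistic for a planted n-by-m submatrix (illustration only).
        import numpy as np
        from itertools import combinations

        def scan_statistic(Y, n, m):
            """Max over all n-by-m submatrices of their entry sum, divided by sqrt(n*m)."""
            N, M = Y.shape
            best = -np.inf
            for rows in combinations(range(N), n):
                col_sums = Y[list(rows), :].sum(axis=0)
                top = np.sort(col_sums)[-m:].sum()   # best m columns for this row set
                best = max(best, top / np.sqrt(n * m))
            return best

        rng = np.random.default_rng(0)
        Y = rng.normal(size=(12, 12))
        Y[:3, :3] += 1.5                             # plant a 3x3 block of elevated means
        print(scan_statistic(Y, 3, 3))               # larger than typical values under pure noise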

    Simple versus optimal rules as guides to policy

    This paper contributes to the policy evaluation literature by developing new strategies to study alternative policy rules. We compare optimal rules to simple rules within canonical monetary policy models. In our context, an optimal rule represents the solution to an intertemporal optimization problem in which a loss function for the policymaker and an explicit model of the macroeconomy are specified. We define a simple rule to be a summary of the intuition policymakers and economists have about how a central bank should react to aggregate disturbances. The policy rules are evaluated under minimax and minimax regret criteria. These criteria force the policymaker to guard against a worst-case scenario, but in different ways. Minimax makes the worst possible model the benchmark for the policymaker, while minimax regret confronts the policymaker with uncertainty about the true model. Our results indicate that the case for a model-specific optimal rule can break down when uncertainty exists about which of several models is true. Further, we show that the assumption that the policymaker’s loss function is known can obscure policy trade-offs that exist in the short, medium, and long run. Thus, policy evaluation is more difficult once it is recognized that model and preference uncertainty can interact.
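
    The distinction between the two criteria can be made concrete with a two-rule, two-model loss table; the numbers below are our own illustration, not outputs from the paper's models.

        # Minimax vs. minimax regret on a toy loss table (illustrative numbers).
        import numpy as np

        # Rows: candidate policy rules; columns: possible models; entries: losses.
        loss = np.array([[2.0, 10.0],   # rule 0: optimal rule tailored to model 0
                         [6.0,  7.0]])  # rule 1: simple, robust rule

        # Minimax: choose the rule whose worst-case loss across models is smallest.
        worst = loss.max(axis=1)                        # [10., 7.]
        print("minimax choice:", worst.argmin())        # -> rule 1

        # Minimax regret: compare each rule with the best achievable loss in each model.
        regret = loss - loss.min(axis=0)                # [[0., 3.], [4., 0.]]
        print("minimax-regret choice:", regret.max(axis=1).argmin())  # -> rule 0

        # The two criteria can disagree, which is why the choice of criterion matters.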

    Minimax testing of a composite null hypothesis defined via a quadratic functional in the model of regression

    We consider the problem of testing a particular type of composite null hypothesis under a nonparametric multivariate regression model. For a given quadratic functional $Q$, the null hypothesis states that the regression function $f$ satisfies the constraint $Q[f]=0$, while the alternative corresponds to the functions for which $Q[f]$ is bounded away from zero. On the one hand, we provide minimax rates of testing and the exact separation constants, along with a sharp-optimal testing procedure, for diagonal and nonnegative quadratic functionals. We consider smoothness classes of ellipsoidal form and check that our conditions are fulfilled in the particular case of ellipsoids corresponding to anisotropic Sobolev classes. In this case, we present a closed form of the minimax rate and the separation constant. On the other hand, minimax rates for quadratic functionals which are neither positive nor negative reveal two different regimes: "regular" and "irregular". In the "regular" case, the minimax rate is equal to $n^{-1/4}$, while in the "irregular" case, the rate depends on the smoothness class and is slower than in the "regular" case. We apply this to the issue of testing the equality of norms of two functions observed in noisy environments.
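
    A concrete instance of a quadratic functional that is neither nonnegative nor nonpositive, consistent with the equality-of-norms application mentioned in the last sentence, is the difference of squared norms of two functions observed in noise; the display below is our own illustration of that case, not a result from the paper.

        % Our own illustrative example: an indefinite quadratic functional for the
        % equality-of-norms problem, with f = (f_1, f_2) the pair of regression functions.
        Q[f] \;=\; \|f_1\|_2^2 - \|f_2\|_2^2, \qquad
        H_0:\ Q[f] = 0 \quad\text{(equal norms)},
        % while the alternative keeps Q[f] bounded away from zero.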

    From Wald to Savage: homo economicus becomes a Bayesian statistician

    Bayesian rationality is the paradigm of rational behavior in neoclassical economics. A rational agent in an economic model is one who maximizes her subjective expected utility and consistently revises her beliefs according to Bayes’s rule. The paper raises the question of how, when and why this characterization of rationality came to be endorsed by mainstream economists. Though no definitive answer is provided, it is argued that the question is far from trivial and of great historiographic importance. The story begins with Abraham Wald’s behaviorist approach to statistics and culminates with Leonard J. Savage’s elaboration of subjective expected utility theory in his 1954 classic The Foundations of Statistics. It is the latter’s acknowledged failure to achieve its planned goal, the reinterpretation of traditional inferential techniques along subjectivist and behaviorist lines, which raises the puzzle of how a failed project in statistics could turn into such a tremendous hit in economics. A couple of tentative answers are also offered, involving the role of the consistency requirement in neoclassical analysis and the impact of the postwar transformation of US business schools.
    Keywords: Savage, Wald, rational behavior, Bayesian decision theory, subjective probability, minimax rule, statistical decision functions, neoclassical economics

    Nonparametric Feature Extraction from Dendrograms

    We propose feature extraction from dendrograms in a nonparametric way. Minimax distance measures correspond to building a dendrogram with the single-linkage criterion and defining specific forms of a level function and a distance function over it. We therefore extend this method to arbitrary dendrograms: we develop a generalized framework wherein different distance measures can be inferred from different types of dendrograms, level functions and distance functions. Via an appropriate embedding, we compute a vector-based representation of the inferred distances, in order to enable many numerical machine learning algorithms to employ such distances. Then, to address the model selection problem, we study the aggregation of different dendrogram-based distances in solution space and in representation space, in the spirit of deep representations. In the first approach, for example for the clustering problem, we build a graph with positive and negative edge weights according to the consistency of the clustering labels of different objects among different solutions, in the context of ensemble methods; we then use an efficient variant of correlation clustering to produce the final clusters. In the second approach, we investigate the sequential combination of different distances and features, in the spirit of multi-layered architectures, to obtain the final features. Finally, we demonstrate the effectiveness of our approach via several numerical studies.
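
    As a concrete special case of the second sentence above, the minimax distance between two points equals the height at which they first merge in a single-linkage dendrogram, i.e. their cophenetic distance in that dendrogram. The sketch below is our own illustration with synthetic data and SciPy; it is not the authors' pipeline.

        # Minimax distances read off a single-linkage dendrogram (illustrative sketch).
        import numpy as np
        from scipy.cluster.hierarchy import linkage, cophenet
        from scipy.spatial.distance import pdist, squareform

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 0.3, (10, 2)),   # two synthetic clusters
                       rng.normal(3, 0.3, (10, 2))])

        D = pdist(X)                                  # pairwise Euclidean distances
        Z = linkage(D, method="single")               # single-linkage dendrogram
        minimax_dist = squareform(cophenet(Z, D)[1])  # merge heights = minimax distances

        # An embedding of this distance matrix (e.g. classical MDS) would give the
        # vector-based representation that downstream learners can consume.
        print(minimax_dist.shape)                     # (20, 20)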

    Sensitivity Analysis for Multiple Comparisons in Matched Observational Studies through Quadratically Constrained Linear Programming

    A sensitivity analysis in an observational study assesses the robustness of significant findings to unmeasured confounding. While sensitivity analyses in matched observational studies have been well addressed when there is a single outcome variable, accounting for multiple comparisons through the existing methods yields overly conservative results when there are multiple outcome variables of interest. This stems from the fact that unmeasured confounding cannot affect the probability of assignment to treatment differently depending on the outcome being analyzed. Existing methods allow this to occur by combining the results of individual sensitivity analyses to assess whether at least one hypothesis is significant, which in turn results in an overly pessimistic assessment of a study's sensitivity to unobserved biases. By solving a quadratically constrained linear program, we are able to perform a sensitivity analysis while enforcing that unmeasured confounding must have the same impact on the treatment assignment probabilities across outcomes for each individual in the study. We show that this allows for uniform improvements in the power of a sensitivity analysis not only for testing the overall null of no effect, but also for null hypotheses on specific outcome variables while strongly controlling the familywise error rate. We illustrate our method through an observational study on the effect of smoking on naphthalene exposure.
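
    The generic shape of the optimization problem named in the title, a linear objective with linear and quadratic constraints, can be sketched in a few lines. The model below is a structural illustration only, a convex instance written with CVXPY under made-up data; it is not the authors' sensitivity-analysis formulation.

        # Structural sketch of a convex quadratically constrained linear program (QCLP).
        import cvxpy as cp
        import numpy as np

        n = 5
        c = np.arange(1.0, n + 1)          # linear objective coefficients (made up)
        Q = np.eye(n)                      # PSD matrix in the quadratic constraint

        x = cp.Variable(n, nonneg=True)
        objective = cp.Maximize(c @ x)     # e.g. a worst-case test statistic
        constraints = [
            cp.sum(x) == 1,                # linear constraint (e.g. probabilities sum to one)
            cp.quad_form(x, Q) <= 0.5,     # quadratic constraint coupling the components
        ]
        prob = cp.Problem(objective, constraints)
        prob.solve()
        print(x.value, prob.value)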
