Multicriteria ranking using weights which minimize the score range
Various schemes have been proposed for generating a set of non-subjective weights when aggregating multiple criteria for the purposes of ranking or selecting alternatives. The maximin approach chooses the weights which maximise the lowest score (assuming there is an upper bound to scores). This is equivalent to finding the weights which minimize the maximum deviation, or range, between the worst and best scores (minimax). At first glance this seems to be an equitable way of apportioning weight, and the Rawlsian theory of justice has been cited in its support. We draw a distinction between using the maximin rule for the purpose of assessing performance and using it for allocating resources amongst the alternatives. We demonstrate that it has a number of drawbacks which make it inappropriate for the assessment of performance. Specifically, it is tantamount to allowing the worst performers to decide the worth of the criteria so as to maximise their overall score. Furthermore, when making a selection from a list of alternatives, the final choice is highly sensitive to the removal or inclusion of alternatives whose performance is so poor that they are clearly irrelevant to the choice at hand.
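For concreteness, here is a minimal sketch (not taken from the paper) of the maximin weighting rule the abstract describes: given a score matrix S in which S[i, j] is alternative i's score on criterion j, the weights are chosen to maximise the lowest aggregate score, which is a small linear program. The example data and the use of scipy.optimize.linprog are illustrative assumptions.

```python
# Maximin weighting as a linear program: maximise the worst aggregate score.
# Hypothetical score data; scipy is assumed to be available.
import numpy as np
from scipy.optimize import linprog

S = np.array([[0.7, 0.2, 0.9],     # rows: alternatives, columns: criteria
              [0.4, 0.8, 0.5],
              [0.6, 0.6, 0.3]])
n_alt, n_crit = S.shape

# Decision variables x = [w_1, ..., w_m, t]; maximise t  <=>  minimise -t.
c = np.concatenate([np.zeros(n_crit), [-1.0]])

# t <= S[i] @ w for every alternative i  ->  -S[i] @ w + t <= 0.
A_ub = np.hstack([-S, np.ones((n_alt, 1))])
b_ub = np.zeros(n_alt)

# Weights sum to one.
A_eq = np.concatenate([np.ones(n_crit), [0.0]]).reshape(1, -1)
b_eq = np.array([1.0])

bounds = [(0, None)] * n_crit + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

weights, worst_score = res.x[:n_crit], -res.fun
print("maximin weights:", weights.round(3), "worst aggregate score:", round(worst_score, 3))
```

The optimal weights are pinned down entirely by the alternatives whose scores bind at the maximin value, which is exactly the sensitivity to poor performers that the abstract criticises.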
Shaping Social Activity by Incentivizing Users
Events in an online social network can be categorized roughly into endogenous events, where users just respond to the actions of their neighbors within the network, or exogenous events, where users take actions due to drives external to the network. How much external drive should be provided to each user so that the network activity can be steered towards a target state? In this paper, we model social events using multivariate Hawkes processes, which can capture both endogenous and exogenous event intensities, and derive a time-dependent linear relation between the intensity of exogenous events and the overall network activity. Exploiting this connection, we develop a convex optimization framework for determining the required level of external drive in order for the network to reach a desired activity level. We experiment with event data gathered from Twitter and show that our method can steer the activity of the network more accurately than alternatives.
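A minimal sketch of the kind of convex program the abstract describes, under the assumption that the time-dependent linear map A from exogenous intensities to expected network activity has already been derived from fitted Hawkes kernels; the matrix A, the target vector, the budget, and the use of cvxpy are illustrative assumptions rather than the authors' implementation.

```python
# Steering network activity: choose non-negative exogenous intensities u so that
# the (assumed known) linear response A @ u is close to a target activity level,
# subject to a total-incentive budget. Illustrative sketch only.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_users = 50
A = rng.uniform(0.0, 1.0, size=(n_users, n_users))   # placeholder linear response
target = np.full(n_users, 5.0)                        # desired activity per user
budget = 20.0                                         # total exogenous drive allowed

u = cp.Variable(n_users, nonneg=True)
objective = cp.Minimize(cp.sum_squares(A @ u - target))
constraints = [cp.sum(u) <= budget]
problem = cp.Problem(objective, constraints)
problem.solve()

print("optimal exogenous drive (first 5 users):", np.round(u.value[:5], 3))
print("squared gap between achieved and target activity:", round(problem.value, 3))
```

Because the objective is a convex quadratic and the constraints are linear, the problem can be solved at scale, which is what makes the linear intensity-activity relation useful in practice.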
Simple versus optimal rules as guides to policy
This paper contributes to the policy evaluation literature by developing new strategies to study alternative policy rules. We compare optimal rules to simple rules within canonical monetary policy models. In our context, an optimal rule represents the solution to an intertemporal optimization problem in which a loss function for the policymaker and an explicit model of the macroeconomy are specified. We define a simple rule to be a summary of the intuition policymakers and economists have about how a central bank should react to aggregate disturbances. The policy rules are evaluated under minimax and minimax regret criteria. These criteria force the policymaker to guard against a worst-case scenario, but in different ways. Minimax makes the worst possible model the benchmark for the policymaker, while minimax regret confronts the policymaker with uncertainty about the true model. Our results indicate that the case for a model-specific optimal rule can break down when uncertainty exists about which of several models is true. Further, we show that the assumption that the policymaker's loss function is known can obscure policy trade-offs that exist in the short, medium, and long run. Thus, policy evaluation is more difficult once it is recognized that model and preference uncertainty can interact.
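A toy illustration of the two criteria (not taken from the paper): given a loss for each candidate rule under each candidate model, minimax picks the rule with the smallest worst-case loss, while minimax regret first subtracts the best achievable loss under each model and then minimizes the worst-case shortfall. The loss numbers below are invented.

```python
# Toy comparison of minimax and minimax-regret rule selection over candidate models.
import numpy as np

# loss[r, m]: loss of policy rule r when model m is the true model (invented numbers).
loss = np.array([[ 1.0, 100.0],   # rule 0: tailored to model 0, disastrous under model 1
                 [50.0,  51.0],   # rule 1: hedges directly against the worst case
                 [40.0,  70.0]])  # rule 2: a "simple" rule, reasonable everywhere

minimax_rule = np.argmin(loss.max(axis=1))            # -> rule 1 (worst-case loss 51)

regret = loss - loss.min(axis=0)                      # shortfall vs. best rule per model
minimax_regret_rule = np.argmin(regret.max(axis=1))   # -> rule 2 (worst-case regret 39)

print("minimax choice:        rule", minimax_rule)
print("minimax-regret choice: rule", minimax_regret_rule)
```

The two criteria can recommend different rules from the same loss table, which is the sense in which they guard against a worst case "in different ways."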
Detection of a sparse submatrix of a high-dimensional noisy matrix
We observe a high-dimensional matrix whose entries are corrupted by i.i.d. Gaussian noise. We test the null hypothesis that all entries of the signal are zero against the alternative that there exists some submatrix of prescribed size with significant elements, in the sense that they exceed a positive threshold. We propose a test procedure and compute the asymptotic detection boundary so that the maximal testing risk tends to 0 as the matrix dimensions grow and the relative size of the submatrix tends to zero. We prove that this boundary is asymptotically sharp minimax under some additional constraints. Relations with other testing problems are discussed. We propose a testing procedure which adapts to an unknown submatrix size within some given set and compute the adaptive sharp rates. The implementation of our test procedure on synthetic data shows excellent behavior for sparse, not necessarily square, matrices. We extend our sharp minimax results in different directions: first, to Gaussian matrices with unknown variance; next, to matrices of random variables having a distribution from an exponential family (non-Gaussian); and, finally, to a two-sided alternative for matrices with Gaussian elements.
Comment: Published in the Bernoulli (http://isi.cbs.nl/bernoulli/) at http://dx.doi.org/10.3150/12-BEJ470 by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
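A minimal illustration (not the authors' procedure) of the scan-type statistic typically used for such detection problems: for a small matrix with a known candidate submatrix size, compute the maximum standardized sum over all row/column subsets and compare it with a Monte Carlo threshold calibrated under the null. The brute-force enumeration, the Gaussian noise model, and the chosen sizes are assumptions of the sketch.

```python
# Brute-force scan statistic for detecting an elevated-mean n-by-m submatrix
# inside an N-by-M Gaussian noise matrix. Only feasible for tiny sizes.
from itertools import combinations
import numpy as np

def scan_statistic(Y, n, m):
    """Max standardized submatrix sum over all row/column subsets of size n x m."""
    N, M = Y.shape
    best = -np.inf
    for rows in combinations(range(N), n):
        row_block = Y[list(rows), :]
        for cols in combinations(range(M), m):
            s = row_block[:, list(cols)].sum() / np.sqrt(n * m)
            best = max(best, s)
    return best

rng = np.random.default_rng(1)
N, M, n, m, a = 8, 8, 3, 3, 1.5            # dimensions and signal strength (invented)

# Null threshold from Monte Carlo under pure noise.
null_stats = [scan_statistic(rng.standard_normal((N, M)), n, m) for _ in range(100)]
threshold = np.quantile(null_stats, 0.95)

# One draw from the alternative: an n x m block with mean a, plus noise.
Y = rng.standard_normal((N, M))
Y[:n, :m] += a
print("scan statistic:", round(scan_statistic(Y, n, m), 2), "threshold:", round(threshold, 2))
```

The detection boundary studied in the paper characterizes, asymptotically, how large the signal strength must be relative to the submatrix and matrix sizes for any such test to succeed.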
Minimax testing of a composite null hypothesis defined via a quadratic functional in the model of regression
We consider the problem of testing a particular type of composite null hypothesis under a nonparametric multivariate regression model. For a given quadratic functional, the null hypothesis states that the regression function satisfies the constraint that the functional vanishes, while the alternative corresponds to the functions at which the functional is bounded away from zero. On the one hand, we provide minimax rates of testing and the exact separation constants, along with a sharp-optimal testing procedure, for diagonal and nonnegative quadratic functionals. We consider smoothness classes of ellipsoidal form and check that our conditions are fulfilled in the particular case of ellipsoids corresponding to anisotropic Sobolev classes. In this case, we present a closed form of the minimax rate and the separation constant. On the other hand, minimax rates for quadratic functionals which are neither positive nor negative give rise to two different regimes: "regular" and "irregular". The minimax rate in the "regular" case is given explicitly, while in the "irregular" case the rate depends on the smoothness class and is slower than in the "regular" case. We apply these results to the problem of testing the equality of norms of two functions observed in noisy environments.
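As a loose illustration of the final application (testing equality of norms of two noisily observed functions), the sketch below uses the standard bias-corrected estimate of a squared norm from noisy grid samples, subtracting the noise variance from each squared observation. The design, noise level, and functions are invented, and this is not the paper's sharp-optimal procedure.

```python
# Compare the squared L2 norms of two functions observed with i.i.d. Gaussian noise
# on a regular grid, using bias-corrected squared observations.
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 2000, 0.5
x = np.linspace(0.0, 1.0, n)

f = np.sin(2 * np.pi * x)               # ||f||^2 = 0.5 on [0, 1]
g = np.cos(2 * np.pi * x)               # ||g||^2 = 0.5 as well
yf = f + sigma * rng.standard_normal(n)
yg = g + sigma * rng.standard_normal(n)

# E[y_i^2 - sigma^2] = f(x_i)^2, so the averages below estimate the squared norms.
norm2_f_hat = np.mean(yf**2 - sigma**2)
norm2_g_hat = np.mean(yg**2 - sigma**2)
diff = norm2_f_hat - norm2_g_hat

# Rough standard error of the difference from the sample variances of the summands.
se = np.sqrt((np.var(yf**2) + np.var(yg**2)) / n)
print("estimated ||f||^2 - ||g||^2:", round(diff, 4), "+/-", round(2 * se, 4))
```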
From Wald to Savage: homo economicus becomes a Bayesian statistician
Bayesian rationality is the paradigm of rational behavior in neoclassical economics. A rational agent in an economic model is one who maximizes her subjective expected utility and consistently revises her beliefs according to Bayes's rule. The paper raises the question of how, when and why this characterization of rationality came to be endorsed by mainstream economists. Though no definitive answer is provided, it is argued that the question is far from trivial and of great historiographic importance. The story begins with Abraham Wald's behaviorist approach to statistics and culminates with Leonard J. Savage's elaboration of subjective expected utility theory in his 1954 classic The Foundations of Statistics. It is the latter's acknowledged failure to achieve its planned goal, the reinterpretation of traditional inferential techniques along subjectivist and behaviorist lines, that raises the puzzle of how a failed project in statistics could turn into such a tremendous hit in economics. A couple of tentative answers are also offered, involving the role of the consistency requirement in neoclassical analysis and the impact of the postwar transformation of US business schools.
Keywords: Savage, Wald, rational behavior, Bayesian decision theory, subjective probability, minimax rule, statistical decision functions, neoclassical economics
Nonparametric Feature Extraction from Dendrograms
We propose feature extraction from dendrograms in a nonparametric way. Minimax distance measures correspond to building a dendrogram with the single-linkage criterion and defining specific forms of a level function and a distance function over it. We extend this method to arbitrary dendrograms: we develop a generalized framework wherein different distance measures can be inferred from different types of dendrograms, level functions and distance functions. Via an appropriate embedding, we compute a vector-based representation of the inferred distances, in order to enable many numerical machine learning algorithms to employ such distances. Then, to address the model selection problem, we study the aggregation of different dendrogram-based distances, respectively in solution space and in representation space, in the spirit of deep representations. In the first approach, for example for the clustering problem, we build a graph with positive and negative edge weights according to the consistency of the clustering labels of different objects among different solutions, in the context of ensemble methods; we then use an efficient variant of correlation clustering to produce the final clusters. In the second approach, we investigate the combination of different distances and features sequentially, in the spirit of multi-layered architectures, to obtain the final features. Finally, we demonstrate the effectiveness of our approach via several numerical studies.
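As a concrete starting point (an illustrative sketch, not the authors' generalized framework): for the single-linkage special case mentioned in the abstract, the minimax distance between two points equals the height at which they are first merged in the dendrogram, i.e. their cophenetic distance, and a vector-based representation can then be obtained by classical multidimensional scaling of those distances. The library choices (scipy, numpy) and the toy data are assumptions.

```python
# Minimax distances via a single-linkage dendrogram, then a Euclidean embedding
# of those distances by classical multidimensional scaling (MDS).
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])  # toy data

# Single-linkage dendrogram; its cophenetic distances are the minimax
# (path-wise maximum edge) distances between points.
Z = linkage(pdist(X), method="single")
D = squareform(cophenet(Z))          # dense matrix of minimax distances

# Classical MDS: double-center the squared distances and keep the top eigenvectors.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D**2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
top = np.argsort(eigvals)[::-1][:2]
embedding = eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))

print("embedded representation shape:", embedding.shape)   # (40, 2) feature vectors
```

Replacing the single-linkage tree and its merge heights with other dendrograms, level functions and distance functions is the generalization the paper pursues.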