Informative Data Projections: A Framework and Two Examples
Methods for Projection Pursuit aim to facilitate the visual exploration of
high-dimensional data by identifying interesting low-dimensional projections. A
major challenge is the design of a suitable quality metric of projections,
commonly referred to as the projection index, to be maximized by the Projection
Pursuit algorithm. In this paper, we introduce a new information-theoretic
strategy for tackling this problem, based on quantifying the amount of
information the projection conveys to a user given their prior beliefs about
the data. The resulting projection index is a subjective quantity, explicitly
dependent on the intended user. As a useful illustration, we develop this
idea for two particular kinds of prior beliefs. The first kind leads to PCA
(Principal Component Analysis), shedding new light on when PCA is (not)
appropriate. The second kind leads to a novel projection index, the
maximization of which can be regarded as a robust variant of PCA. We show how
this projection index, though non-convex, can be effectively maximized using a
modified power method as well as using a semidefinite programming relaxation.
The usefulness of this new projection index is demonstrated in comparative
empirical experiments against PCA and a popular Projection Pursuit method.
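The abstract's "modified power method" is not specified here, but the plain power method it builds on, applied to the variance index whose maximization recovers PCA, can be sketched as follows (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def power_method(C, iters=200, tol=1e-10):
    """Leading eigenvector of a symmetric PSD matrix C (e.g. a covariance)."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        w_new = C @ w
        w_new /= np.linalg.norm(w_new)
        if np.linalg.norm(w_new - w) < tol:
            w = w_new
            break
        w = w_new
    return w

# PCA viewed as Projection Pursuit: the projection index is the
# variance w^T C w, maximized over unit vectors w.
X = np.random.default_rng(1).standard_normal((500, 5))
C = np.cov(X, rowvar=False)
w = power_method(C)
```

The paper's robust variant replaces the variance index with a non-convex one; the iteration above only shows the baseline it modifies.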
A Model of Minimal Probabilistic Belief Revision
A probabilistic belief revision function assigns to every initial probabilistic belief and every observable event some revised probabilistic belief that attaches positive probability only to states in this event. We propose three axioms for belief revision functions: (1) linearity, meaning that if the decision maker observes that the true state is in {a,b}, and hence state c is impossible, then the proportions of c's initial probability that are shifted to a and b, respectively, should be independent of c's initial probability; (2) transitivity, stating that if the decision maker deems belief β equally similar to states a and b, and deems β equally similar to states b and c, then he should deem β equally similar to states a and c; (3) information-order independence, stating that the way in which information is received should not matter for the eventual revised belief. We show that a belief revision function satisfies the three axioms above if and only if there is some linear one-to-one function Ď, transforming the belief simplex into a polytope that is closed under orthogonal projections, such that the belief revision function satisfies minimal belief revision with respect to Ď. By the latter, we mean that the decision maker, when holding initial belief β₁ and observing the event E, always chooses the revised belief β₂ that attaches positive probability only to states in E and for which Ď(β₂) has minimal Euclidean distance to Ď(β₁).
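The minimal-revision rule can be sketched in the special case where the transformation Ď is the identity, so that revision is a plain Euclidean projection onto the beliefs supported on the observed event. The function names and the three-state example below are illustrative, not from the paper:

```python
def project_to_simplex(v):
    # Euclidean projection of v onto the probability simplex
    # (standard sort-and-threshold algorithm).
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(x - theta, 0.0) for x in v]

def revise(belief, event):
    """Closest (Euclidean) belief that puts probability only on `event`."""
    proj = project_to_simplex([belief[s] for s in event])
    revised = {s: 0.0 for s in belief}
    for s, p in zip(event, proj):
        revised[s] = p
    return revised

beta0 = {"a": 0.5, "b": 0.3, "c": 0.2}
beta1 = revise(beta0, ["a", "b"])  # state c ruled out
# → {'a': 0.6, 'b': 0.4, 'c': 0.0}
```

Note that c's excluded mass (0.2) is split equally between a and b, independently of how large that mass was, which is consistent with the linearity axiom.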
A quantum theoretical explanation for probability judgment errors
A quantum probability model is introduced and used to explain human probability judgment errors including the conjunction, disjunction, inverse, and conditional fallacies, as well as unpacking effects and partitioning effects. Quantum probability theory is a general and coherent theory based on a set of (von Neumann) axioms which relax some of the constraints underlying classic (Kolmogorov) probability theory. The quantum model is compared and contrasted with other competing explanations for these judgment errors, including the representativeness heuristic, the averaging model, and a memory retrieval model for probability judgments. The quantum model also provides ways to extend Bayesian, fuzzy set, and fuzzy trace theories. We conclude that quantum information processing principles provide a viable and promising new way to understand human judgment and reasoning.
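The quantum account of the conjunction fallacy can be sketched in two dimensions: events are rays, judged probabilities are squared projections, and a sequence of projections can exceed a single projection. The angles and event labels here are hypothetical choices for illustration, not values from the paper:

```python
import numpy as np

def unit(deg):
    t = np.deg2rad(deg)
    return np.array([np.cos(t), np.sin(t)])

psi = unit(85)  # belief state, nearly orthogonal to event B
a = unit(45)    # event A (an intermediate, "representative" event)
b = unit(0)     # event B (an unlikely event on its own)

# P(B) alone: squared projection of psi onto b.
p_b = (psi @ b) ** 2
# P(A then B): squared norm of P_b P_a psi, i.e. project onto a, then b.
p_a_then_b = ((psi @ a) * (a @ b)) ** 2
# The sequential conjunction exceeds the single-event probability,
# mirroring the conjunction fallacy.
```

Classically P(A and B) can never exceed P(B); the non-commuting projections are what make the reversal possible.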
Distributed Regression in Sensor Networks: Training Distributively with Alternating Projections
Wireless sensor networks (WSNs) have attracted considerable attention in
recent years and motivate a host of new challenges for distributed signal
processing. The problem of distributed or decentralized estimation has often
been considered in the context of parametric models. However, the success of
parametric methods is limited by the appropriateness of the strong statistical
assumptions made by the models. In this paper, a more flexible nonparametric
model for distributed regression is considered that is applicable in a variety
of WSN applications including field estimation. Here, starting with the
standard regularized kernel least-squares estimator, a message-passing
algorithm for distributed estimation in WSNs is derived. The algorithm can be
viewed as an instantiation of the successive orthogonal projection (SOP)
algorithm. Various practical aspects of the algorithm are discussed and several
numerical simulations validate the potential of the approach.
Comment: To appear in the Proceedings of the SPIE Conference on Advanced Signal Processing Algorithms, Architectures and Implementations XV, San Diego, CA, July 31 - August 4, 200
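The successive orthogonal projection (SOP) primitive that the abstract's message-passing algorithm instantiates can be sketched on a toy problem: alternately projecting onto two convex sets drives the iterate into their intersection. The sets here (a hyperplane and a box) are hypothetical stand-ins, not the paper's kernel least-squares constraint sets:

```python
import numpy as np

def proj_hyperplane(x, a, b):
    # Orthogonal projection onto the hyperplane {x : a @ x = b}.
    return x - (a @ x - b) / (a @ a) * a

def proj_box(x, lo, hi):
    # Orthogonal projection onto the box [lo, hi]^n.
    return np.clip(x, lo, hi)

a, b = np.array([1.0, 1.0]), 1.0
x = np.array([3.0, -2.5])  # arbitrary starting point
for _ in range(100):
    # One SOP sweep: project onto each set in turn.
    x = proj_box(proj_hyperplane(x, a, b), 0.0, 1.0)
# x now lies (to numerical tolerance) in the intersection of both sets.
```

In the WSN setting, each projection step is carried out locally at a sensor and the current iterate is the message passed to the next node.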