The Expectation Monad in Quantum Foundations
The expectation monad is introduced abstractly via two composable
adjunctions, but concretely captures measures. It turns out to sit in between
known monads: on the one hand the distribution and ultrafilter monad, and on
the other hand the continuation monad. This expectation monad is used in two
probabilistic analogues of fundamental results of Manes and Gelfand for the
ultrafilter monad: algebras of the expectation monad are convex compact
Hausdorff spaces, and are dually equivalent to so-called Banach effect
algebras. These structures capture states and effects in quantum foundations,
and also the duality between them. Moreover, the approach leads to a new
re-formulation of Gleason's theorem, expressing that effects on a Hilbert space
are free effect modules on projections, obtained via tensoring with the unit
interval.
Comment: In Proceedings QPL 2011, arXiv:1210.029
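The embedding of distributions into the continuation monad that the abstract alludes to can be sketched concretely. The following is an illustrative sketch, not code from the paper: finite distributions are dicts, and `expectation` sends a distribution to its expectation functional, a continuation with answer type [0, 1].

```python
def unit(x):
    """Dirac distribution: the distribution monad's unit."""
    return {x: 1.0}

def bind(dist, f):
    """Monadic bind: f maps each point to a distribution; probabilities multiply."""
    out = {}
    for x, p in dist.items():
        for y, q in f(x).items():
            out[y] = out.get(y, 0.0) + p * q
    return out

def expectation(dist):
    """Send a distribution to the functional g -> E[g], i.e. a continuation
    (g: X -> [0, 1]) -> [0, 1]. This is the direction of the embedding
    into the continuation monad."""
    return lambda g: sum(p * g(x) for x, p in dist.items())

# Example: a fair coin, and the expectation of an indicator function.
coin = bind(unit(0), lambda _: {0: 0.5, 1: 0.5})
E = expectation(coin)
print(E(lambda x: float(x == 1)))  # 0.5
```

Only the finitely supported case is shown; the expectation monad itself captures general measures, which this dict representation does not.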
Uncomputability and Undecidability in Economic Theory
Economic theory, game theory and mathematical statistics have all increasingly become algorithmic sciences. Computable Economics, Algorithmic Game Theory ([28]) and Algorithmic Statistics ([13]) are frontier research subjects. All of them, each in its own way, are underpinned by (classical) recursion theory - and its applied branches, say computational complexity theory or algorithmic information theory - and, occasionally, proof theory. These research paradigms have posed new mathematical and metamathematical questions and, inadvertently, undermined the traditional mathematical foundations of economic theory. A concise, but partial, pathway into these new frontiers is the subject matter of this paper. Interpreting the core of mathematical economic theory to be defined by General Equilibrium Theory and Game Theory, a general - but concise - analysis of the computable and decidable content of the implications of these two areas is given. Issues at the frontiers of macroeconomics, now dominated by Recursive Macroeconomic Theory, are also tackled, albeit briefly. The point of view adopted is that of classical recursion theory and varieties of constructive mathematics.
General Equilibrium Theory, Game Theory, Recursive Macro-economics, (Un)computability, (Un)decidability, Constructivity
Applications of fuzzy set theory and near vector spaces to functional analysis
We prove an original version of the Hahn-Banach theorem in the fuzzy setting. Convex
compact sets occur naturally in set-valued analysis. A question that has not been
satisfactorily dealt with in the literature is: What is the relationship between collections
of such sets and vector spaces? We thoroughly clarify this situation by making use of
Rådström's embedding theorem, leading up to the definition of a near vector space. We
then go on to successfully apply these results to provide an original method of proof of
Doob's decomposition of submartingales.
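The role of Rådström's embedding theorem can be illustrated in the simplest one-dimensional case. This is an illustrative sketch under assumptions not in the abstract: compact convex subsets of the real line are intervals [a, b], which under Minkowski addition form only a semigroup (no inverses), yet embed additively into a genuine vector space, here realised as (midpoint, radius) pairs.

```python
def minkowski_sum(I, J):
    """Minkowski sum of two closed intervals [a, b] + [c, d] = [a + c, b + d]."""
    (a, b), (c, d) = I, J
    return (a + c, b + d)

def scale(t, I):
    """Nonnegative scalar multiple t * [a, b] (t >= 0)."""
    a, b = I
    return (t * a, t * b)

def embed(I):
    """Radstrom-style embedding: interval -> (midpoint, radius) in R^2.
    Additive and positively homogeneous, so Minkowski sums become
    ordinary vector sums."""
    a, b = I
    return ((a + b) / 2, (b - a) / 2)

I, J = (0.0, 2.0), (1.0, 4.0)
m_sum = embed(minkowski_sum(I, J))
v_sum = (embed(I)[0] + embed(J)[0], embed(I)[1] + embed(J)[1])
assert m_sum == v_sum  # the embedding is additive
```

The general theorem handles convex compact subsets of arbitrary normed spaces via support functions; the (midpoint, radius) trick is specific to intervals.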
Asymptotic properties for a class of partially identified models
We propose inference procedures for partially identified population features for which the population identification region can be written as a transformation of the Aumann expectation of a properly defined set valued random variable (SVRV). An SVRV is a mapping that associates a set (rather than a real number) with each element of the sample space. Examples of population features in this class include sample means and best linear predictors with interval outcome data, and parameters of semiparametric binary models with interval regressor data. We extend the analogy principle to SVRVs, and show that the sample analog estimator of the population identification region is given by a transformation of a Minkowski average of SVRVs. Using the results of the mathematics literature on SVRVs, we show that this estimator converges in probability to the identification region of the model with respect to the Hausdorff distance. We then show that the Hausdorff distance between the estimator and the population identification region, when properly normalized by √n, converges in distribution to the supremum of a Gaussian process whose covariance kernel depends on parameters of the population identification region. We provide consistent bootstrap procedures to approximate this limiting distribution. Using similar arguments as those applied for vector valued random variables, we develop a methodology to test assumptions about the true identification region and to calculate the power of the test. We show that these results can be used to construct a confidence collection, that is, a collection of sets that, when specified as null hypothesis for the true value of the population identification region, cannot be rejected by our test.
Partial Identification, Confidence Collections, Set-Valued Random Variables.
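For the leading example in the abstract, the sample mean with interval outcome data, the sample analog estimator and the Hausdorff distance are simple to write down. The following is an assumed setup for illustration, not the paper's code: each observation is an interval [l_i, u_i], the Minkowski average of the intervals estimates the identification region, and estimation error is the Hausdorff distance between intervals.

```python
def minkowski_average(intervals):
    """Minkowski (elementwise) average of closed intervals [l_i, u_i]:
    the sample analog of the identification region for the mean."""
    n = len(intervals)
    lo = sum(l for l, _ in intervals) / n
    hi = sum(u for _, u in intervals) / n
    return (lo, hi)

def hausdorff(I, J):
    """Hausdorff distance between two closed intervals [a, b] and [c, d]:
    max(|a - c|, |b - d|)."""
    (a, b), (c, d) = I, J
    return max(abs(a - c), abs(b - d))

# Three interval-valued observations; the estimated region is the
# interval [mean of lower bounds, mean of upper bounds].
data = [(1.0, 2.0), (0.0, 3.0), (2.0, 4.0)]
est = minkowski_average(data)
print(est)  # (1.0, 3.0)
```

The paper's asymptotics concern the distribution of √n times this Hausdorff distance, which the sketch does not attempt; it only shows the objects the limit theorem is about.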