2,982 research outputs found

    Ambiguity, Information Quality and Asset Pricing

    When ambiguity-averse investors process news of uncertain quality, they act as if they take a worst-case assessment of quality. As a result, they react more strongly to bad news than to good news. They also dislike assets for which information quality is poor, especially when the underlying fundamentals are volatile. These effects induce skewness in asset returns and ambiguity premia that depend on idiosyncratic risk in fundamentals. Moreover, shocks to information quality can have persistent negative effects on prices even if fundamentals do not change. This helps to explain the reaction of markets to events like 9/11/2001.
    Keywords: ambiguity, information quality, asset pricing, idiosyncratic risk, negatively skewed returns
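The worst-case assessment of quality can be illustrated with a small numeric sketch (an illustration only, not the paper's model; the prior, the set of candidate signal variances, and the max-min rule below are assumptions): an investor who only knows the signal variance lies in an interval values each signal under whichever precision yields the lowest posterior mean, which makes the reaction to bad news stronger than the reaction to good news.

```python
import numpy as np

# Hedged sketch (not the paper's model): prior theta ~ N(mu0, tau2);
# signal s = theta + noise, with noise variance known only to lie in a set.
mu0, tau2 = 0.0, 1.0
sig2_grid = np.linspace(0.5, 2.0, 50)   # candidate signal variances (assumed)

def worst_case_posterior_mean(s):
    # Bayesian posterior mean of theta for each candidate signal variance,
    # then the minimum: the ambiguity-averse (max-min) valuation.
    weights = tau2 / (tau2 + sig2_grid)
    return np.min(weights * s + (1 - weights) * mu0)

good = worst_case_posterior_mean(1.0)    # good news: treated as low quality
bad = worst_case_posterior_mean(-1.0)    # bad news: treated as high quality
print(good, bad)
```

Here good news of size 1 moves the valuation by only 1/3 (the lowest weight), while bad news of the same size moves it by 2/3 (the highest weight), matching the asymmetric reaction described in the abstract.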

    Learning Under Ambiguity

    This paper considers learning when the distinction between risk and ambiguity matters. It first describes thought experiments, dynamic variants of those provided by Ellsberg, that highlight a sense in which the Bayesian learning model is extreme: it models agents who are implausibly ambitious about what they can learn in complicated environments. The paper then provides a generalization of the Bayesian model that accommodates the intuitive choices in the thought experiments. In particular, the model allows decision-makers' confidence about the environment to change, along with beliefs, as they learn. A calibrated portfolio choice application shows how this property induces a trend towards more stock market participation and investment.
    Keywords: ambiguity, learning, noisy signals, ambiguous signals, quality information, portfolio choice, portfolio diversification, Ellsberg Paradox

    Learning Under Ambiguity

    This paper considers learning when the distinction between risk and ambiguity (Knightian uncertainty) matters. Working within the framework of recursive multiple-priors utility, the paper formulates a counterpart of the Bayesian model of learning about an uncertain parameter from conditionally i.i.d. signals. Ambiguous signals capture responses to information that cannot be captured by noisy signals. They induce nonmonotonic changes in agent confidence and prevent ambiguity from vanishing in the limit. In a dynamic portfolio choice model, learning about ambiguous returns leads to endogenous stock market participation costs that depend on past market performance. Hedging of ambiguity provides a new reason why the investment horizon matters for portfolio choice.
    Keywords: ambiguity, learning, noisy signals, ambiguous signals, quality information, portfolio choice, portfolio diversification, Ellsberg Paradox
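The idea that ambiguity need not vanish with learning can be sketched in a few lines (an illustration under assumed parameters, not the paper's recursive multiple-priors model): track the lowest and highest posterior means attainable when the signal variance is chosen adversarially each period from a two-point set. The interval of valuations stays nondegenerate even as a standard single-prior Bayesian learner converges.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, tau2, T = 1.0, 1.0, 200
sig2_set = (0.5, 2.0)          # ambiguous signal variances (assumed two-point set)
signals = theta + rng.standard_normal(T)   # true noise variance is 1.0

m_lo = m_hi = 0.0              # bounds on attainable posterior means
bayes, prec = 0.0, 1.0 / tau2  # single-prior Bayesian comparison (assumes sig2 = 1)
gains = [tau2 / (tau2 + v) for v in sig2_set]
for s in signals:
    # recursive worst-/best-case updates over the variance set
    m_lo = min((1 - g) * m_lo + g * s for g in gains)
    m_hi = max((1 - g) * m_hi + g * s for g in gains)
    prec += 1.0
    bayes += (s - bayes) / prec   # running posterior mean under known variance

print(m_lo, m_hi, bayes)       # the interval [m_lo, m_hi] does not collapse
```

The Bayesian estimate settles near the true parameter, while the worst-case and best-case posterior means remain separated, a rough analogue of confidence that moves with the data without ever resolving the ambiguity.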

    Minimax Theory for High-dimensional Gaussian Mixtures with Sparse Mean Separation

    While several papers have investigated computationally and statistically efficient methods for learning Gaussian mixtures, precise minimax bounds for their statistical performance as well as fundamental limits in high-dimensional settings are not well understood. In this paper, we provide precise information-theoretic bounds on the clustering accuracy and sample complexity of learning a mixture of two isotropic Gaussians in high dimensions under small mean separation. If there is a sparse subset of relevant dimensions that determine the mean separation, then the sample complexity only depends on the number of relevant dimensions and mean separation, and can be achieved by a simple computationally efficient procedure. Our results provide the first step of a theoretical basis for recent methods that combine feature selection and clustering.
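A minimal sketch of the kind of procedure alluded to (the variance-screening and spectral steps here are illustrative assumptions, not the paper's exact method): when the mean separation lives on a sparse set of coordinates, those coordinates have inflated marginal variance, so one can screen by variance and then cluster on the selected coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 400, 200, 5          # k sparse relevant coordinates (assumed setup)
mu = np.zeros(d); mu[:k] = 1.5 # mean separation on the first k dimensions only
z = rng.integers(0, 2, n)      # latent cluster labels
X = rng.standard_normal((n, d)) + np.where(z[:, None] == 1, mu, -mu)

# Screening heuristic: a coordinate carrying mean separation mu_j has
# marginal variance about 1 + mu_j^2, so keep the top-variance coordinates.
var = X.var(axis=0)
selected = np.argsort(var)[-k:]

# Cluster on the selected coordinates by the sign of the leading principal
# direction (a standard spectral step for a two-component mixture).
Xs = X[:, selected] - X[:, selected].mean(axis=0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
labels = (Xs @ Vt[0] > 0).astype(int)

accuracy = max(np.mean(labels == z), np.mean(labels != z))
print(sorted(selected.tolist()), accuracy)
```

With the separation concentrated on 5 of 200 coordinates, the screening step recovers the relevant set and the low-dimensional spectral step clusters accurately, illustrating why the sample complexity can depend on the sparse set rather than the ambient dimension.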

    Density-sensitive semisupervised inference

    Semisupervised methods are techniques for using labeled data $(X_1,Y_1),\ldots,(X_n,Y_n)$ together with unlabeled data $X_{n+1},\ldots,X_N$ to make predictions. These methods invoke some assumptions that link the marginal distribution $P_X$ of $X$ to the regression function $f(x)$. For example, it is common to assume that $f$ is very smooth over high-density regions of $P_X$. Many of the methods are ad hoc and have been shown to work in specific examples but are lacking a theoretical foundation. We provide a minimax framework for analyzing semisupervised methods. In particular, we study methods based on metrics that are sensitive to the distribution $P_X$. Our model includes a parameter $\alpha$ that controls the strength of the semisupervised assumption. We then use the data to adapt to $\alpha$.
    Comment: Published at http://dx.doi.org/10.1214/13-AOS1092 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
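One way to make a metric "sensitive to $P_X$" is to stretch distances through low-density regions, so labels propagate within clusters but not across density gaps. The sketch below is an assumption-laden illustration of that idea (the toy data, the kernel density estimate at edge midpoints, and the strength parameter `alpha` are all constructions for this example, not the paper's estimator).

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(1)
# Toy data (assumed): two dense 1-D clusters separated by a low-density gap.
X = np.concatenate([rng.normal(-2, 0.3, 30), rng.normal(2, 0.3, 30)])[:, None]
y = np.r_[np.zeros(30), np.ones(30)]

# Density-sensitive graph metric: connect near neighbors and inflate the
# length of edges that pass through low-density regions; alpha controls
# the strength of the semisupervised assumption.
alpha, h, eps = 1.0, 0.4, 1.0
D = cdist(X, X)
mids = (X[:, None, :] + X[None, :, :]) / 2.0           # edge midpoints
dens = np.exp(-cdist(mids.reshape(-1, 1), X)**2 / (2 * h * h)).mean(axis=1)
dens = dens.reshape(len(X), len(X))
W = np.where(D < eps, D / np.maximum(dens, 1e-12)**alpha, 0.0)  # 0 = no edge
G = shortest_path(W, directed=False)

# Semisupervised 1-NN in the new metric: one labeled point per cluster.
labeled = np.array([0, 30])
pred = y[labeled][np.argmin(G[:, labeled], axis=1)]
acc = float(np.mean(pred == y))
print(acc)
```

Because no short edges cross the gap, each unlabeled point is closer (in the graph metric) to the labeled point inside its own cluster, so two labels suffice to classify all sixty points.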

    Feature Selection For High-Dimensional Clustering

    We present a nonparametric method for selecting informative features in high-dimensional clustering problems. We start with a screening step that uses a test for multimodality. Then we apply kernel density estimation and mode clustering to the selected features. The output of the method consists of a list of relevant features and cluster assignments. We provide explicit bounds on the error rate of the resulting clustering. In addition, we provide the first error bounds on mode-based clustering.
    Comment: 11 pages, 2 figures
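The screen-then-mode-cluster pipeline can be sketched as follows (an illustration, not the paper's method: the bimodality coefficient stands in for a formal multimodality test, and the split-at-the-antimode step stands in for full mode clustering).

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(2)
n = 300
# One bimodal (informative) feature plus unimodal noise features (toy setup).
z = rng.integers(0, 2, n)
X = rng.standard_normal((n, 5))
X[:, 0] += np.where(z == 1, 3.0, -3.0)

# Screening step: bimodality coefficient (skew^2 + 1) / kurtosis; values
# above 5/9 suggest multimodality. A stand-in for a formal test.
def bimodality_coeff(x):
    g, k = skew(x), kurtosis(x)           # kurtosis here is excess kurtosis
    return (g**2 + 1) / (k + 3)

bc = np.array([bimodality_coeff(X[:, j]) for j in range(X.shape[1])])
selected = np.where(bc > 5 / 9)[0]

# Mode clustering on the selected feature: kernel density estimate on a
# grid, split at the interior antimode (density valley between the modes).
x = X[:, selected[0]]
grid = np.linspace(x.min(), x.max(), 200)
dens = np.exp(-(grid[:, None] - x[None, :])**2 / (2 * 0.5**2)).mean(axis=1)
valley = grid[np.argmin(dens[50:150]) + 50]   # search the interior only
labels = (x > valley).astype(int)

accuracy = max(np.mean(labels == z), np.mean(labels != z))
print(selected.tolist(), accuracy)
```

The screen rejects the four unimodal noise features (their coefficient sits near 1/3) and keeps the bimodal one, after which splitting at the density valley recovers the two clusters.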

    New Spontaneous Model of Fibrodysplasia Ossificans Progressiva

    We report the first known example of spontaneous, naturally occurring fibrodysplasia ossificans progressiva (FOP) in a mammal. The Southeast Asian mouse deer of the genus _Tragulus_ (Artiodactyla: Tragulidae) have an osseous sheath covering the lower back and upper thigh region consistent with the clinical definition of FOP. This heterotopic bone deposition is sex-related, apparently with a genetic basis: it occurs only in males and is lacking in females, and it is present in all adult males, including both wild-obtained and zoo-bred animals. _Tragulus_ may offer the opportunity to examine many of the disease's most significant attributes experimentally.

    Efficient Sparse Clustering of High-Dimensional Non-spherical Gaussian Mixtures

    We consider the problem of clustering data points in high dimensions, i.e., when the number of data points may be much smaller than the number of dimensions. Specifically, we consider a Gaussian mixture model (GMM) with non-spherical Gaussian components, where the clusters are distinguished by only a few relevant dimensions. The method we propose is a combination of a recent approach for learning parameters of a Gaussian mixture model and sparse linear discriminant analysis (LDA). In addition to cluster assignments, the method returns an estimate of the set of features relevant for clustering. Our results indicate that the sample complexity of clustering depends on the sparsity of the relevant feature set, while only scaling logarithmically with the ambient dimension. Additionally, we require much milder assumptions than existing work on clustering in high dimensions. In particular, we do not require spherical clusters, nor do we necessitate mean separation along relevant dimensions.
    Comment: 11 pages, 1 figure
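The combination of rough parameter estimation with a sparse LDA step can be sketched as below (an illustration under assumed data and a simple initializer, not the paper's algorithm): initialize cluster labels crudely, then alternate between estimating per-class means and variances and soft-thresholding the diagonal LDA direction so that only a few features drive the assignment.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 300, 100
relevant = np.arange(4)                 # sparse informative coordinates (assumed)
z = rng.integers(0, 2, n)
scales = np.ones(d); scales[relevant] = 2.0   # non-spherical: unequal variances
X = rng.standard_normal((n, d)) * scales
X[:, relevant] += np.where(z[:, None] == 1, 2.5, -2.5)

# Step 1 (a simple initializer, not the paper's estimator): rough labels
# from the sign of the leading principal component.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
labels = (Xc @ Vt[0] > 0).astype(int)

# Step 2: sparse diagonal-LDA refits. Estimate per-class means and pooled
# per-coordinate variance, soft-threshold the discriminant direction so
# only a few coordinates survive, and reassign labels.
for _ in range(5):
    m0, m1 = X[labels == 0].mean(axis=0), X[labels == 1].mean(axis=0)
    var = 0.5 * (X[labels == 0].var(axis=0) + X[labels == 1].var(axis=0))
    w = (m1 - m0) / var
    thr = np.quantile(np.abs(w), 0.95)  # keep roughly the top 5% of coordinates
    w = np.sign(w) * np.maximum(np.abs(w) - thr, 0.0)
    labels = (Xc @ w > 0).astype(int)

features = np.where(w != 0)[0]
accuracy = max(np.mean(labels == z), np.mean(labels != z))
print(features.tolist(), accuracy)
```

The surviving coordinates of the thresholded direction serve as the estimated relevant feature set, and the final assignment depends only on those few coordinates, mirroring the sample-complexity story in the abstract.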