365 research outputs found

    A simple scheme for allocating capital in a foreign exchange proprietary trading firm

    We present a model of capital allocation in a foreign exchange proprietary trading firm. The owner allocates capital to individual traders, who operate within strict risk limits. Traders specialize in individual currencies but are given discretion over their choice of trading rule. The owner provides a simple formula that determines position sizes – a formula that does not require estimation of the firm-level covariance matrix. We provide supporting empirical evidence of excess risk-adjusted returns to the firm-level portfolio, and we discuss a modification of the model in which the owner dictates the choice of trading rule.
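
    The abstract does not reproduce the sizing formula itself. As a rough illustration of how a covariance-free rule can work, the sketch below gives every trader the same dollar risk budget and scales positions by the inverse volatility of that trader's currency; the function name, the equal-budget assumption and all numbers are mine, not the authors'.

```python
import numpy as np

def position_sizes(signals, vols, firm_capital, per_trader_risk=0.01):
    """Toy covariance-free sizing rule (illustrative assumption, not the paper's
    formula): each trader receives the same dollar risk budget and scales it by
    the inverse volatility of his or her currency.

    signals : +1/0/-1 trade directions, one per trader
    vols    : daily return volatility of each trader's currency
    """
    signals = np.asarray(signals, dtype=float)
    vols = np.asarray(vols, dtype=float)
    risk_budget = per_trader_risk * firm_capital   # dollars of risk per trader
    notional = risk_budget / vols                  # inverse-volatility scaling
    return signals * notional                      # signed position per trader

# Example: three traders (long, flat, short) in currencies of differing volatility
print(position_sizes([1, 0, -1], [0.007, 0.010, 0.012], firm_capital=10_000_000))
```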

    Detection of brain functional-connectivity difference in post-stroke patients using group-level covariance modeling

    Functional brain connectivity, as revealed through distant correlations in the signals measured by functional Magnetic Resonance Imaging (fMRI), is a promising source of biomarkers of brain pathologies. However, establishing and using diagnostic markers requires probabilistic inter-subject comparisons. Principled comparison of functional-connectivity structures is still a challenging issue. We give a new matrix-variate probabilistic model suitable for inter-subject comparison of functional connectivity matrices on the manifold of Symmetric Positive Definite (SPD) matrices. We show that this model leads to a new algorithm for principled comparison of connectivity coefficients between pairs of regions. We apply this model to separately comparing post-stroke patients to a group of healthy controls. We find neurologically relevant connection differences and show that our model is more sensitive than the standard procedure. To the best of our knowledge, these results are the first report of functional connectivity differences between a single patient and a group and thus establish an important step toward using functional connectivity as a diagnostic tool.
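
    The abstract does not detail the matrix-variate model, but a common building block for comparing covariance matrices on the SPD manifold is to whiten each matrix by a reference (e.g. the control-group mean) and take a matrix logarithm, so that per-connection coefficients can be compared with ordinary statistics. A minimal sketch under that assumption (all names mine):

```python
import numpy as np

def _spd_power(m, p):
    """Matrix power of a symmetric positive-definite matrix via eigendecomposition."""
    w, v = np.linalg.eigh((m + m.T) / 2)
    return v @ np.diag(w ** p) @ v.T

def tangent_embedding(cov, ref):
    """Whiten `cov` by the reference SPD matrix `ref` and take the matrix log;
    the resulting coefficients live in a vector space where z-scores between a
    single patient and a control group are meaningful."""
    isqrt = _spd_power(ref, -0.5)
    w, v = np.linalg.eigh(isqrt @ cov @ isqrt)
    return v @ np.diag(np.log(w)) @ v.T

# Illustrative comparison of one "patient" against 20 "controls" (random data)
rng = np.random.default_rng(0)
controls = [np.cov(rng.standard_normal((5, 200))) for _ in range(20)]
patient = np.cov(rng.standard_normal((5, 200)))
ref = np.mean(controls, axis=0)                      # simple reference mean
ctrl = np.array([tangent_embedding(c, ref) for c in controls])
z = (tangent_embedding(patient, ref) - ctrl.mean(0)) / ctrl.std(0)
print(np.round(z, 2))                                # per-connection z-scores
```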

    Direct Nonlinear Shrinkage Estimation of Large-Dimensional Covariance Matrices

    This paper introduces a nonlinear shrinkage estimator of the covariance matrix that does not require recovering the population eigenvalues first. We estimate the sample spectral density and its Hilbert transform directly by smoothing the sample eigenvalues with a variable-bandwidth kernel. Relative to numerically inverting the so-called QuEST function, the main advantages of direct kernel estimation are: (1) it is much easier to comprehend because it is analogous to kernel density estimation; (2) it is only twenty lines of code in Matlab - as opposed to thousands - which makes it more verifiable and customizable; (3) it is 200 times faster without significant loss of accuracy; and (4) it can handle matrices of a dimension larger by a factor of ten. Even for dimension 10,000, the code runs in less than two minutes on a desktop computer; this makes the power of nonlinear shrinkage as accessible to applied statisticians as that of linear shrinkage.
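
    The full estimator also needs the Hilbert transform of the density and the final shrinkage step, which are omitted here; the sketch below only illustrates the step the abstract describes, smoothing the sample eigenvalues with a variable-bandwidth (Epanechnikov) kernel. The bandwidth constant is an assumption of mine, not the paper's choice.

```python
import numpy as np

def spectral_density_estimate(eigvals, n, grid):
    """Variable-bandwidth Epanechnikov kernel estimate of the sample spectral
    density: each eigenvalue gets its own bandwidth proportional to its size."""
    lam = np.asarray(eigvals, dtype=float)
    grid = np.asarray(grid, dtype=float)
    h = n ** (-1 / 3)                        # global bandwidth rate (assumption)
    hj = lam * h                             # variable bandwidth per eigenvalue
    u = (grid[:, None] - lam[None, :]) / hj[None, :]
    k = 0.75 * np.maximum(1 - u ** 2, 0)     # Epanechnikov kernel
    return (k / hj[None, :]).mean(axis=1)    # average of per-eigenvalue kernels

# Example on a p x n Gaussian data matrix
rng = np.random.default_rng(1)
p, n = 200, 600
X = rng.standard_normal((p, n))
lam = np.linalg.eigvalsh(X @ X.T / n)
grid = np.linspace(lam.min() * 0.5, lam.max() * 1.5, 400)
print(spectral_density_estimate(lam, n, grid)[:5])
```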

    Adaptive Evolutionary Clustering

    In many practical applications of clustering, the objects to be clustered evolve over time, and a clustering result is desired at each time step. In such applications, evolutionary clustering typically outperforms traditional static clustering by producing clustering results that reflect long-term trends while being robust to short-term variations. Several evolutionary clustering algorithms have recently been proposed, often by adding a temporal smoothness penalty to the cost function of a static clustering method. In this paper, we introduce a different approach to evolutionary clustering by accurately tracking the time-varying proximities between objects followed by static clustering. We present an evolutionary clustering framework that adaptively estimates the optimal smoothing parameter using shrinkage estimation, a statistical approach that improves a naive estimate using additional information. The proposed framework can be used to extend a variety of static clustering algorithms, including hierarchical, k-means, and spectral clustering, into evolutionary clustering algorithms. Experiments on synthetic and real data sets indicate that the proposed framework outperforms static clustering and existing evolutionary clustering algorithms in many scenarios.
    Comment: To appear in Data Mining and Knowledge Discovery. MATLAB toolbox available at http://tbayes.eecs.umich.edu/xukevin/affec
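
    As a toy rendition of the general recipe (track smoothed proximities, then cluster statically), the sketch below exponentially smooths successive distance matrices and feeds the result to ordinary hierarchical clustering. The paper's contribution is choosing the smoothing weight adaptively by shrinkage; here it is a fixed constant, and all names are mine.

```python
import numpy as np
from scipy.spatial.distance import cdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

def evolutionary_clusters(snapshots, alpha=0.5, k=3):
    """Smooth the time-varying proximity (here, Euclidean distance) matrices
    with an exponential filter, then run a static clusterer on the result."""
    smoothed, labels = None, []
    for X in snapshots:                       # X: objects x features at one time step
        D = cdist(X, X)
        smoothed = D if smoothed is None else alpha * smoothed + (1 - alpha) * D
        Z = linkage(squareform(smoothed, checks=False), method="average")
        labels.append(fcluster(Z, t=k, criterion="maxclust"))
    return labels

# Three drifting groups of ten objects observed over five time steps
rng = np.random.default_rng(2)
centers = np.repeat(np.array([[0.0], [5.0], [10.0]]), 10, axis=0)
snaps = [centers + rng.standard_normal((30, 1)) for _ in range(5)]
print(evolutionary_clusters(snaps, alpha=0.5, k=3)[-1])
```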

    Testing linear hypotheses in high-dimensional regressions

    For a multivariate linear model, Wilks' likelihood ratio test (LRT) constitutes one of the cornerstone tools. However, the computation of its quantiles under the null or the alternative requires complex analytic approximations and, more importantly, these distributional approximations are feasible only for a moderate dimension of the dependent variable, say p ≤ 20. On the other hand, assuming that the data dimension p as well as the number q of regression variables are fixed while the sample size n grows, several asymptotic approximations have been proposed in the literature for Wilks' Λ, including the widely used chi-square approximation. In this paper, we consider necessary modifications to Wilks' test in a high-dimensional context, specifically assuming a high data dimension p and a large sample size n. Based on recent random matrix theory, the correction we propose to Wilks' test is asymptotically Gaussian under the null, and simulations demonstrate that the corrected LRT has very satisfactory size and power, not only in the large-p, large-n context but also for moderately large data dimensions such as p = 30 or p = 50. As a byproduct, we give a reason explaining why the standard chi-square approximation fails for high-dimensional data. We also introduce a new procedure for the classical multiple-sample significance test in MANOVA which is valid for high-dimensional data.
    Comment: Accepted 02/2012 for publication in "Statistics". 20 pages, 2 figures and 2 tables
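
    For reference, the classical quantities behind this discussion (standard textbook definitions, not the paper's corrected statistic, which the abstract does not give) are Wilks' Λ and its Bartlett-type chi-square approximation, with E and H the error and hypothesis sums-of-squares-and-products matrices and ν_E, ν_H the corresponding degrees of freedom:

```latex
% Wilks' Lambda and the classical fixed-dimension approximation it relies on
\Lambda \;=\; \frac{\det(E)}{\det(E + H)},
\qquad
-\Bigl(\nu_E - \tfrac{p - \nu_H + 1}{2}\Bigr)\,\ln\Lambda \;\approx\; \chi^2_{\,p\,\nu_H}
\quad \text{(valid for fixed } p \text{ as } n \to \infty\text{)}.
```

    The paper's point is precisely that this fixed-dimension approximation degrades once p is allowed to grow with n, which is what motivates the random-matrix-theory correction.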

    Accounting for risk of non linear portfolios: a novel Fourier approach

    The presence of non-linear instruments is responsible for the emergence of non-Gaussian features in the distribution of price changes of realistic portfolios, even for Normally distributed risk factors. This is especially true for the benchmark Delta-Gamma-Normal model, which in general exhibits exponentially damped power-law tails. We show how knowledge of the model characteristic function leads to Fourier representations for two standard risk measures, the Value at Risk and the Expected Shortfall, and for their sensitivities with respect to the model parameters. We detail the numerical implementation of our formulae and we emphasize the reliability and efficiency of our results in comparison with Monte Carlo simulation.
    Comment: 10 pages, 12 figures. Final version accepted for publication in Eur. Phys. J.
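
    The Fourier formulae themselves are not given in the abstract. The sketch below only implements the Monte Carlo benchmark that such formulae are typically checked against: simulate Gaussian risk factors, apply the Delta-Gamma P&L expansion, and read off empirical VaR and Expected Shortfall. Parameter values are illustrative.

```python
import numpy as np

def delta_gamma_var_es(delta, gamma, cov, alpha=0.99, n_sims=200_000, seed=0):
    """Monte Carlo estimate of VaR and Expected Shortfall under the
    Delta-Gamma-Normal model: dP = delta' dS + 0.5 * dS' gamma dS with
    Gaussian risk factors dS ~ N(0, cov)."""
    rng = np.random.default_rng(seed)
    dS = rng.multivariate_normal(np.zeros(len(delta)), cov, size=n_sims)
    pnl = dS @ delta + 0.5 * np.einsum("ij,jk,ik->i", dS, gamma, dS)
    losses = -pnl
    var = np.quantile(losses, alpha)       # Value at Risk at level alpha
    es = losses[losses >= var].mean()      # Expected Shortfall beyond VaR
    return var, es

# Two risk factors, long gamma in one and short in the other (toy numbers)
cov = np.array([[0.04, 0.01], [0.01, 0.09]])
delta = np.array([1.0, -0.5])
gamma = np.array([[0.2, 0.0], [0.0, -0.3]])
print(delta_gamma_var_es(delta, gamma, cov))
```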

    The merit of high-frequency data in portfolio allocation

    This paper addresses the open debate about the usefulness of high-frequency (HF) data in large-scale portfolio allocation. Daily covariances are estimated based on HF data of the S&P 500 universe employing a blocked realized kernel estimator. We propose forecasting covariance matrices using a multi-scale spectral decomposition in which volatilities, correlation eigenvalues and eigenvectors evolve at different frequencies. In an extensive out-of-sample forecasting study, we show that the proposed approach yields less risky and more diversified portfolio allocations than prevailing methods employing daily data. These performance gains hold over longer horizons than previous studies have shown.
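
    A drastically simplified version of the multi-frequency idea (not the paper's estimator): split each realized covariance into volatilities and correlations, smooth the two components with different EWMA parameters, and recombine. The blocked realized kernel and the separate eigenvalue/eigenvector dynamics are omitted, and the smoothing constants are assumptions.

```python
import numpy as np

def forecast_covariance(realized_covs, lam_vol=0.8, lam_corr=0.97):
    """Smooth fast-moving volatilities and slow-moving correlations with
    different EWMA weights, then recombine into a covariance forecast."""
    vol_f, corr_f = None, None
    for S in realized_covs:
        vol = np.sqrt(np.diag(S))
        corr = S / np.outer(vol, vol)
        vol_f = vol if vol_f is None else lam_vol * vol_f + (1 - lam_vol) * vol
        corr_f = corr if corr_f is None else lam_corr * corr_f + (1 - lam_corr) * corr
    return np.outer(vol_f, vol_f) * corr_f

# Toy example: three assets, 20 days of noisy realized covariance matrices
rng = np.random.default_rng(3)
base = np.array([[1.0, 0.3, 0.1], [0.3, 1.0, 0.2], [0.1, 0.2, 1.0]]) * 0.02
covs = [base + 1e-3 * np.diag(rng.uniform(size=3)) for _ in range(20)]
print(forecast_covariance(covs))
```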

    Towards a Cure for BCI Illiteracy

    Brain–Computer Interfaces (BCIs) allow a user to control a computer application by brain activity as acquired, e.g., by EEG. One of the biggest challenges in BCI research is to understand and solve the problem of “BCI Illiteracy”, namely that BCI control does not work for a non-negligible portion of users (an estimated 15 to 30%). Here, we investigate the illiteracy problem in BCI systems which are based on the modulation of sensorimotor rhythms. In this paper, a sophisticated adaptation scheme is presented which guides the user from an initial subject-independent classifier that operates on simple features to a subject-optimized state-of-the-art classifier within one session, while the user interacts with the same feedback application the whole time. While initial runs use supervised adaptation methods for robust co-adaptive learning of user and machine, final runs use unsupervised adaptation and therefore provide an unbiased measure of BCI performance. Using this approach, which does not involve any offline calibration measurement, good performance was obtained by good BCI participants (including one novice) after 3–6 min of adaptation. More importantly, the use of machine learning techniques allowed users who were unable to achieve successful feedback before to gain significant control over the BCI system. In particular, one participant had no peak of the sensorimotor idle rhythm at the beginning of the experiment, but developed such a peak during the course of the session (and used voluntary modulation of its amplitude to control the feedback application).
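
    One simple form of label-free classifier adaptation of the kind used in the unsupervised runs is to re-centre a fixed linear classifier on a running mean of the incoming features; the sketch below illustrates that idea only and is not the system described above.

```python
import numpy as np

class AdaptiveLinearClassifier:
    """Fixed weight vector with unsupervised bias adaptation: the decision
    boundary is re-centred on an exponentially weighted running mean of the
    incoming feature vectors, which requires no labels (illustrative sketch)."""

    def __init__(self, w, eta=0.05):
        self.w = np.asarray(w, dtype=float)
        self.eta = eta
        self.mu = np.zeros_like(self.w)      # running feature mean

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.mu = (1 - self.eta) * self.mu + self.eta * x

    def decide(self, x):
        return np.sign(self.w @ (np.asarray(x, dtype=float) - self.mu))

# Feed a slowly drifting feature stream: the boundary follows the drift unsupervised
clf = AdaptiveLinearClassifier(w=[1.0, -1.0])
rng = np.random.default_rng(4)
for t in range(200):
    x = rng.standard_normal(2) + 0.01 * t    # slow non-stationarity
    clf.update(x)
    _ = clf.decide(x)
print(clf.mu)
```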