    CVXR: An R Package for Disciplined Convex Optimization

    CVXR is an R package that provides an object-oriented modeling language for convex optimization, similar to CVX, CVXPY, YALMIP, and Convex.jl. It allows the user to formulate convex optimization problems in a natural mathematical syntax rather than the restrictive form required by most solvers. The user specifies an objective and a set of constraints by combining constants, variables, and parameters using a library of functions with known mathematical properties. CVXR then applies signed disciplined convex programming (DCP) to verify the problem's convexity. Once verified, the problem is converted into standard conic form using graph implementations and passed to a cone solver such as ECOS or SCS. We demonstrate CVXR's modeling framework with several applications. (Comment: 34 pages, 9 figures.)
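    The abstract contains no code, but a minimal sketch of the same disciplined-convex-programming workflow in CVXPY (the Python analogue named above) may help illustrate it; the least-squares problem, data, and solver choice below are illustrative assumptions, not taken from the paper.

```python
import cvxpy as cp
import numpy as np

# Illustrative problem: nonnegative least squares on random data.
np.random.seed(0)
A = np.random.randn(20, 5)
b = np.random.randn(20)

x = cp.Variable(5)                                   # decision variable
objective = cp.Minimize(cp.sum_squares(A @ x - b))   # DCP-verifiable convex objective
constraints = [x >= 0]                               # constraint set

prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.SCS)    # reduced to conic form and handed to a cone solver
print(prob.status, x.value)
```

    CVXR follows the same pattern in R: compose the objective and constraints from atoms of known curvature, let the DCP analysis verify convexity, and let the package handle the conversion to conic form.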

    Non-stationary data-driven computational portfolio theory and algorithms

    The aim of the dissertation is the development of a data-driven portfolio optimization framework beyond standard assumptions. Investment decisions are either based on the opinion of a human expert, who evaluates information about companies, or on statistical models. The most famous statistical methods are the Markowitz portfolio model and utility maximization. All statistical methods assume certain knowledge of the underlying distribution of the returns, either by imposing Gaussianity, by expecting complete knowledge of the distribution, or by inferring sufficiently good parameter estimators. Yet in practice, all methods suffer from incomplete knowledge, small sample sizes, and parameters that may vary over time. A new, model-free approach to the portfolio optimization problem that allows for time-varying dynamics in the price processes is presented. The methods proposed in this work are designed to solve the problem with fewer a priori assumptions than standard methods, such as assumptions on the distribution of the price processes or on time-invariant statistical properties. The new approach introduces two new parameters and a method to choose them based on principles of information theory. An analysis of different approaches to incorporating additional information is performed before a straightforward approach to the out-of-sample application is introduced. The structure of the numerical problem is obtained directly from the portfolio optimization problem, resulting in an objective function and constraints known from non-stationary time series analysis. The incorporation of transaction costs naturally yields a regularization term that is otherwise included only for numerical reasons. The applicability and numerical feasibility of the method are demonstrated on a low-dimensional example in-sample and on a high-dimensional example in- and out-of-sample in an environment with mixed transaction costs. The performance of both examples is measured and compared to standard methods, such as the Markowitz approach, and to methods based on techniques to analyse non-stationary data, such as Hidden Markov Models.
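    As a rough point of reference for the transaction-cost regularization described above, here is a minimal Markowitz-style sketch in which proportional costs enter as an L1 turnover penalty; the functional form, parameter values, and CVXPY formulation are assumptions for illustration, not the dissertation's actual model.

```python
import cvxpy as cp
import numpy as np

def rebalance(mu, Sigma, w_prev, risk_aversion=5.0, cost=0.002):
    """Mean-variance rebalancing with proportional transaction costs.

    The L1 turnover term plays the role of the regularizer that, per the
    abstract, arises naturally from transaction costs (assumed form).
    """
    n = len(mu)
    w = cp.Variable(n)
    ret = mu @ w                                  # expected portfolio return
    risk = cp.quad_form(w, Sigma)                 # portfolio variance
    turnover = cp.norm1(w - w_prev)               # proportional trading cost
    objective = cp.Maximize(ret - risk_aversion * risk - cost * turnover)
    constraints = [cp.sum(w) == 1, w >= 0]        # fully invested, long-only
    cp.Problem(objective, constraints).solve()
    return w.value
```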

    Dynamic portfolio optimization with inverse covariance clustering

    Market conditions change continuously. However, in portfolio investment strategies, it is hard to account for this intrinsic non-stationarity. In this paper, we propose to address this issue by using the Inverse Covariance Clustering (ICC) method to identify inherent market states and then integrate such states into a dynamic portfolio optimization process. Extensive experiments across three different markets, NASDAQ, FTSE and HS300, over a period of ten years, demonstrate the advantages of our proposed algorithm, termed Inverse Covariance Clustering-Portfolio Optimization (ICC-PO). The core of the ICC-PO methodology concerns the identification and clustering of market states from the analytics of past data and the forecasting of the future market state. It is therefore agnostic to the specific portfolio optimization method of choice. By applying the same portfolio optimization technique on an ICC temporal cluster, instead of the whole training period, we show that one can generate portfolios with substantially higher Sharpe Ratios, which are statistically more robust and resilient, with great reductions in the maximum loss in extreme situations. This is shown to be consistent across markets, periods, optimization methods and selections of portfolio assets.
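    A minimal sketch of the cluster-then-optimize idea follows; it uses a Gaussian mixture as a stand-in for the paper's Inverse Covariance Clustering, a naive "persist the last state" forecast, and minimum-variance weights as the portfolio optimizer, all of which are assumptions for illustration rather than the paper's procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def state_conditional_weights(returns, n_states=3):
    """Cluster past returns into market states, then optimize per state.

    GaussianMixture is a stand-in for Inverse Covariance Clustering, and
    minimum-variance weights are one possible "portfolio optimization
    method of choice"; both are illustrative assumptions.
    """
    gm = GaussianMixture(n_components=n_states, covariance_type="full").fit(returns)
    labels = gm.predict(returns)
    current_state = labels[-1]                 # naive forecast: persist the last state
    cluster = returns[labels == current_state] # optimize on this temporal cluster only

    Sigma = np.cov(cluster, rowvar=False)
    inv = np.linalg.pinv(Sigma)
    ones = np.ones(returns.shape[1])
    w = inv @ ones / (ones @ inv @ ones)       # minimum-variance weights
    return w
```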

    Affinity-Based Reinforcement Learning : A New Paradigm for Agent Interpretability

    The steady increase in complexity of reinforcement learning (RL) algorithms is accompanied by a corresponding increase in opacity that obfuscates insights into their devised strategies. Methods in explainable artificial intelligence seek to mitigate this opacity by either creating transparent algorithms or extracting explanations post hoc. A third category exists that allows the developer to affect what agents learn: constrained RL has been used in safety-critical applications and prohibits agents from visiting certain states; preference-based RL agents have been used in robotics applications and learn state-action preferences instead of traditional reward functions. We propose a new affinity-based RL paradigm in which agents learn strategies that are partially decoupled from reward functions. Unlike entropy regularisation, we regularise the objective function with a distinct action distribution that represents a desired behaviour; we encourage the agent to act according to a prior while learning to maximise rewards. The result is an inherently interpretable agent that solves problems with an intrinsic affinity for certain actions. We demonstrate the utility of our method in a financial application: we learn continuous time-variant compositions of prototypical policies, each interpretable by its action affinities, that are globally interpretable according to customers’ financial personalities. Our method combines advantages from both constrained RL and preference-based RL: it retains the reward function but generalises the policy to match a defined behaviour, thus avoiding problems such as reward shaping and hacking. Unlike Boolean task composition, our method is a fuzzy superposition of different prototypical strategies to arrive at a more complex, yet interpretable, strategy.
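    One plausible form of such an affinity regulariser, for a discrete action space, is a KL penalty that pulls the policy toward a prior action distribution while the expected return is maximised; the sketch below is an assumed illustration, not the paper's exact objective.

```python
import numpy as np

def affinity_regularised_loss(q_values, policy_probs, affinity_prior, lam=0.1):
    """Illustrative affinity-regularised objective (assumed form): maximise
    expected return while keeping the policy close to a prior action
    distribution that encodes the desired behaviour.
    """
    expected_return = np.sum(policy_probs * q_values)
    kl_to_prior = np.sum(policy_probs * np.log(policy_probs / affinity_prior))
    # Loss to minimise: negative return plus affinity penalty.
    return -expected_return + lam * kl_to_prior

# Toy example: three actions, prior expressing an affinity for action 0.
q = np.array([1.0, 1.2, 0.8])
pi = np.array([0.5, 0.3, 0.2])
prior = np.array([0.7, 0.2, 0.1])
print(affinity_regularised_loss(q, pi, prior))
```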

    Optimal portfolio allocation with uncertain covariance matrix

    In this paper, we explore the portfolio allocation problem involving an uncertain covariance matrix. We calculate the expected value of the Constant Absolute Risk Aversion (CARA) utility function, marginalized over a distribution of covariance matrices. We show that marginalization introduces a logarithmic dependence on risk, as opposed to the linear dependence assumed in the mean-variance approach. Additionally, it leads to a decrease in the allocation level for higher uncertainties. Our proposed method extends the mean-variance approach by considering the uncertainty associated with future covariance matrices and expected returns, which is important for practical applications.
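    A simple way to reproduce the flavour of this marginalization numerically is to average the closed-form Gaussian CARA utility over sampled covariance matrices; the Wishart uncertainty model and the parameter values below are assumptions for illustration, not the paper's specification.

```python
import numpy as np
from scipy.stats import wishart

def marginal_cara_utility(w, mu, Sigma_hat, gamma=3.0, dof=30, n_samples=2000, seed=0):
    """Expected CARA utility marginalised over covariance uncertainty.

    Covariance matrices are drawn from a Wishart distribution centred on the
    point estimate Sigma_hat; this particular uncertainty model is an assumed
    stand-in for the distribution used in the paper.
    """
    scale = Sigma_hat / dof
    samples = wishart.rvs(df=dof, scale=scale, size=n_samples, random_state=seed)
    utils = []
    for Sigma in samples:
        # Closed-form E[-exp(-gamma * w'r)] for Gaussian returns r ~ N(mu, Sigma).
        utils.append(-np.exp(-gamma * w @ mu + 0.5 * gamma**2 * w @ Sigma @ w))
    return np.mean(utils)
```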

    Covariance Estimation: The GLM and Regularization Perspectives

    Finding an unconstrained and statistically interpretable reparameterization of a covariance matrix is still an open problem in statistics. Its solution is of central importance in covariance estimation, particularly in the recent high-dimensional data environment where enforcing the positive-definiteness constraint could be computationally expensive. We provide a survey of the progress made in modeling covariance matrices from two relatively complementary perspectives: (1) generalized linear models (GLM), or parsimony and use of covariates, in low dimensions, and (2) regularization, or sparsity, for high-dimensional data. An emerging, unifying and powerful trend in both perspectives is that of reducing a covariance estimation problem to that of estimating a sequence of regression problems. We point out several instances of the regression-based formulation. A notable case is the sparse estimation of a precision matrix, or Gaussian graphical model, leading to the fast graphical LASSO algorithm. Some advantages and limitations of the regression-based Cholesky decomposition relative to the classical spectral (eigenvalue) and variance-correlation decompositions are highlighted. The former provides an unconstrained and statistically interpretable reparameterization, and guarantees the positive-definiteness of the estimated covariance matrix. It reduces the unintuitive task of covariance estimation to that of modeling a sequence of regressions, at the cost of imposing an a priori order among the variables. Elementwise regularization of the sample covariance matrix, such as banding, tapering and thresholding, has desirable asymptotic properties, and the sparse estimated covariance matrix is positive definite with probability tending to one for large samples and dimensions. (Published in Statistical Science, http://dx.doi.org/10.1214/11-STS358, http://www.imstat.org/sts/, by the Institute of Mathematical Statistics, http://www.imstat.org.)
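    The two regularization routes mentioned for high dimensions (elementwise thresholding of the sample covariance and sparse precision estimation via the graphical lasso) can be sketched with standard tools; the tuning parameters below are illustrative assumptions, not values from the survey.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def soft_threshold_covariance(X, tau=0.1):
    """Elementwise soft-thresholding of the sample covariance, one of the
    regularisation schemes surveyed above; tau is an illustrative tuning
    parameter and variances are left unshrunk.
    """
    S = np.cov(X, rowvar=False)
    shrunk = np.sign(S) * np.maximum(np.abs(S) - tau, 0.0)
    np.fill_diagonal(shrunk, np.diag(S))   # keep the diagonal (variances) intact
    return shrunk

def sparse_precision(X, alpha=0.05):
    """Sparse precision matrix via scikit-learn's graphical lasso, an
    implementation of the fast algorithm mentioned in the abstract."""
    return GraphicalLasso(alpha=alpha).fit(X).precision_
```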