
    Learning the Efficient Frontier

    Full text link
    The efficient frontier (EF) is a fundamental resource allocation problem where one has to find an optimal portfolio maximizing a reward at a given level of risk. This optimal solution is traditionally found by solving a convex optimization problem. In this paper, we introduce NeuralEF: a fast neural approximation framework that robustly forecasts the result of the EF convex optimization problem with respect to heterogeneous linear constraints and a variable number of optimization inputs. By reformulating an optimization problem as a sequence-to-sequence problem, we show that NeuralEF is a viable solution to accelerate large-scale simulation while handling discontinuous behavior. Comment: Accepted at the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023)
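    The convex program that NeuralEF learns to approximate is the classic mean-variance efficient frontier. As a point of reference only, here is a minimal sketch of that optimization using the equivalent risk-aversion parametrization (sweeping a risk-aversion weight traces the same frontier as maximizing return at each fixed risk level). The synthetic inputs and the generic long-only, fully-invested constraint set are illustrative assumptions; the paper's heterogeneous constraints are not reproduced.

```python
# Minimal efficient-frontier sketch with cvxpy; all inputs synthetic.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 10                                     # number of assets (arbitrary)
mu = rng.normal(0.05, 0.02, n)             # synthetic expected returns
A = rng.normal(size=(n, n))
sigma = A @ A.T / n + 1e-3 * np.eye(n)     # synthetic PSD covariance

def frontier_point(gamma: float) -> np.ndarray:
    """One efficient portfolio via the risk-aversion form of the EF
    problem (equivalent to maximizing return at a fixed risk level)."""
    w = cp.Variable(n)
    objective = cp.Maximize(mu @ w - gamma * cp.quad_form(w, sigma))
    constraints = [cp.sum(w) == 1, w >= 0]  # fully invested, long only
    cp.Problem(objective, constraints).solve()
    return w.value

# Sweep the risk-aversion weight to trace the frontier (the mapping a
# sequence-to-sequence network would be trained to imitate).
for gamma in (0.5, 2.0, 10.0):
    w = frontier_point(gamma)
    ret, vol = mu @ w, np.sqrt(w @ sigma @ w)
    print(f"gamma={gamma:5.1f}  return={ret:.4f}  vol={vol:.4f}")
```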

    Discovering the hidden structure of financial markets through Bayesian modelling

    Get PDF
    Understanding what drives the price of a financial asset is a question that remains largely unanswered. In this work we go beyond classic one-step-ahead prediction and instead construct models that create new information on the behaviour of these time series. Our aim is to get a better understanding of the hidden structures that drive the moves of each financial time series and thus the market as a whole. We propose a tool to decompose multiple time series into economically meaningful variables that explain the endogenous and exogenous factors driving their underlying variability. The methodology we introduce goes beyond the direct model forecast: since our model continuously adapts its variables and coefficients, we can study the time series of coefficients and selected variables. We also present a model to construct the causal graph of relations between these time series and include them in the exogenous factors. Hence, we obtain a model able to explain what is driving the move of both each specific time series and the market as a whole. In addition, the obtained graph of the time series provides new information on the underlying risk structure of this environment. With this deeper understanding of the hidden structure we propose novel ways to detect and forecast risks in the market. We evaluate our results with inferences up to one month into the future using stocks, FX futures and ETF futures, demonstrating superior performance in terms of accuracy on large moves, longer-term prediction and consistency over time. We also go into more detail on the economic interpretation of the new variables and discuss the resulting graph structure of the market.
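    The abstract does not disclose the model's internals. As a hedged illustration of one standard way to obtain "continuously adapting" coefficients, the sketch below fits a dynamic linear regression whose coefficients follow a random walk, estimated with a Kalman filter; the function name `dynamic_regression` and the noise scales are assumptions, not the authors' specification.

```python
# Dynamic regression sketch: coefficients evolve as a random walk and
# are tracked online, so their time series can itself be studied.
import numpy as np

def dynamic_regression(X, y, q=1e-4, r=1e-2):
    """Track time-varying coefficients beta_t in y_t = x_t' beta_t + noise.

    q: coefficient random-walk variance, r: observation noise variance.
    Returns the (T, d) path of filtered coefficients.
    """
    T, d = X.shape
    beta = np.zeros(d)                       # filtered coefficient mean
    P = np.eye(d)                            # filtered covariance
    path = np.empty((T, d))
    for t in range(T):
        x = X[t]
        P = P + q * np.eye(d)                # predict: random-walk drift
        s = x @ P @ x + r                    # innovation variance
        k = P @ x / s                        # Kalman gain
        beta = beta + k * (y[t] - x @ beta)  # update with new observation
        P = P - np.outer(k, x @ P)
        path[t] = beta
    return path

# Synthetic example: a coefficient that switches regime halfway through.
rng = np.random.default_rng(1)
T = 400
X = rng.normal(size=(T, 2))
true_beta = np.where(np.arange(T)[:, None] < 200, [1.0, 0.0], [0.0, 1.0])
y = (X * true_beta).sum(axis=1) + 0.1 * rng.normal(size=T)
print(dynamic_regression(X, y)[[100, 300]].round(2))  # tracks the switch
```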

    Heterogeneous Agent Model With Real Business Cycle With Application In Optimal Tax Policy And Social Welfare Reform

    Get PDF
    In this paper, we develop a dynamic stochastic general equilibrium (DSGE) model with financial frictions and incomplete risk-sharing among overlapping-generations (OLG) heterogeneous households. The economy is embedded with a taxation system and a social security system calibrated to the current U.S. economy and tax policy, as well as elastic labor supply. Our baseline model can match the wealth-income disparity and moment conditions in the financial market as well as macroeconomic variables. In the baseline setting, the mean risk-free rate is 1.36% per year, the unlevered equity premium is 4.08%, and the Gini coefficients for labor earnings and total income are 0.65 and 0.51, respectively. The equity risk premium is driven by incomplete risk sharing among households and participation barriers to the equity market. Furthermore, our model can act as a workhorse model for policy experiments, including debt policy, wealth tax reform, capital income tax reform and social security reform. This paper may help policymakers understand the impact of policy changes on the macroeconomy and on household-level behavior.
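    The abstract quotes Gini coefficients for labor earnings and total income. For readers unfamiliar with the statistic, here is a minimal sketch of how a Gini coefficient is computed from a simulated cross-section of households; the lognormal distribution below is an arbitrary stand-in, not the model's output.

```python
import numpy as np

def gini(x: np.ndarray) -> float:
    """Gini coefficient via the sorted mean-difference formula:
    G = sum_i (2i - n - 1) x_i / (n * sum(x)), x ascending, i = 1..n."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return float(((2 * i - n - 1) * x).sum() / (n * x.sum()))

# Heavy-tailed synthetic earnings give a high inequality reading.
earnings = np.random.default_rng(2).lognormal(mean=0.0, sigma=1.0, size=10_000)
print(round(gini(earnings), 2))
```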

    A variable-order template method for financial time series prediction

    Get PDF
    Predicting time series that exhibit changes in behaviour over time is a fundamental problem in signal processing and pattern recognition. In most financial time-series prediction applications, properly tuning the parametrization of a model or an ensemble of models is a notoriously difficult problem. When regime changes occur, i.e. unexpected changes in the statistical properties of these series over time, current models are unable to adapt their parametrization and the quality of their predictions degrades. This thesis proposes a formal approach to address these behavioural changes by automating the ability of existing models to vary their graphical structures dynamically and to model several graphical structures simultaneously. When this approach is applied at scale, models that can change their graphical structures dynamically tend to be more robust and reduce the computation time needed to produce ensemble models without compromising their accuracy.
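    The thesis abstract describes models that vary their structure dynamically as regimes change. As a loose, generic illustration of that idea (not the thesis's template method), the sketch below re-selects the order of an autoregressive model on a rolling window by AIC, so the fitted structure can change after a regime break.

```python
# Rolling variable-order AR selection: a generic stand-in for models
# whose structure adapts through time.
import numpy as np

def ar_aic(window: np.ndarray, p: int) -> float:
    """AIC of a least-squares AR(p) fit on one window."""
    y = window[p:]
    X = np.column_stack(
        [window[p - k - 1 : len(window) - k - 1] for k in range(p)]
    )
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ coef) ** 2).sum()
    n = len(y)
    return n * np.log(rss / n) + 2 * p

def rolling_order(series: np.ndarray, window: int = 100, max_p: int = 5):
    """Best AR order on each rolling window, chosen by AIC."""
    orders = []
    for t in range(window, len(series)):
        w = series[t - window : t]
        orders.append(min(range(1, max_p + 1), key=lambda p: ar_aic(w, p)))
    return orders

# Synthetic regime change: AR(1) dynamics, then AR(3).
rng = np.random.default_rng(3)
x = []
for t in range(600):
    if t < 300:
        nxt = 0.8 * (x[-1] if x else 0.0) + rng.normal()
    else:
        nxt = 0.2 * x[-1] - 0.3 * x[-2] + 0.4 * x[-3] + rng.normal()
    x.append(nxt)
orders = rolling_order(np.array(x))
print(orders[50], orders[-1])  # selected order tends to rise after the break
```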

    Probabilistic forecasting and interpretability in power load applications

    Get PDF
    Power load forecasting is a fundamental tool in the modern electric power generation and distribution industry. The ability to accurately predict future behaviours of the grid, both in the short and long term, is vital in order to adequately meet demand and scaling requirements. Over the past few decades Machine Learning (ML) has taken center stage in this context, with an emphasis on short-term forecasting using both traditional ML and Deep Learning (DL) models. In this dissertation, we approach forecasting not only from the angle of improving predictive accuracy, but also with the goal of interpreting the behavior of the electric load through models that can offer deeper insight and extract useful information. For this reason, we focus on probabilistic models, which can shed light on the underlying structure of the data through the interpretation of their parameters. Furthermore, probabilistic models intrinsically provide a way of measuring the confidence in our predictions through the predictive variance. Throughout the dissertation we focus on two specific ideas within the greater field of power load forecasting, which comprise our main contributions.

    The first contribution addresses the notion of power load profiling, in which ML is used to identify profiles that represent distinct behaviours in the power load data. These profiles have two fundamental uses: first, they can be valuable interpretability tools, as they offer simple yet powerful descriptions of the underlying patterns hidden in the time series data; second, they can improve forecasting accuracy by allowing us to train specialized predictive models tailored to each individual profile. In most of the literature, however, profiling and prediction are performed sequentially, with an initial clustering algorithm identifying profiles in the input data and a subsequent prediction stage where independent regressors are trained on each profile. In this dissertation we propose a novel probabilistic approach that couples the profiling and predictive stages by jointly fitting a clustering model and multiple linear regressors. In training, both the clustering of the input data and the fitting of the regressors to the output data influence each other through a joint likelihood function, resulting in a set of clusters that is much better suited to the prediction task and is therefore much more relevant and informative. The model is tested on two real-world power load databases, provided by the regional transmission organizations ISO New England and PJM Interconnection LLC, in a 24-hour-ahead prediction scenario. We achieve better performance than other state-of-the-art approaches while arriving at more consistent and informative profiles of the power load data.
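    A minimal sketch of the coupling idea in this first contribution, under assumptions: a mixture of linear regressors fitted by EM, in which cluster responsibilities and per-cluster regression weights are updated against one joint likelihood. The dissertation's model (including how it clusters the input space) is richer; function names and hyperparameters here are illustrative.

```python
# EM for a mixture of linear regressions: clustering and regression
# influence each other through the joint likelihood.
import numpy as np

def em_mixture_of_regressions(X, y, K=2, iters=50, noise=1.0):
    n, d = X.shape
    rng = np.random.default_rng(0)
    W = rng.normal(size=(K, d))               # per-cluster regression weights
    pi = np.full(K, 1.0 / K)                  # cluster priors
    for _ in range(iters):
        # E-step: responsibilities from the joint likelihood.
        err = y[None, :] - W @ X.T                         # (K, n) residuals
        logp = -0.5 * err**2 / noise + np.log(pi)[:, None]
        logp -= logp.max(axis=0, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=0, keepdims=True)                  # (K, n)
        # M-step: weighted least squares per cluster, prior update.
        for k in range(K):
            Xw = X * R[k][:, None]
            W[k] = np.linalg.solve(X.T @ Xw + 1e-6 * np.eye(d), Xw.T @ y)
        pi = R.mean(axis=1)
    return W, pi, R

# Two hidden regimes with different linear responses.
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))
z = rng.integers(0, 2, 500)
true_W = np.array([[1.0, 0.0, 2.0], [-1.0, 1.5, 0.0]])
y = (X * true_W[z]).sum(axis=1) + 0.1 * rng.normal(size=500)
W, pi, _ = em_mixture_of_regressions(X, y)
print(W.round(1), pi.round(2))  # recovers the two regimes (up to relabeling)
```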
    Our second contribution applies the idea of multi-task prediction to the context of 24-hour-ahead forecasting. In a multi-task prediction problem there are multiple outputs that are assumed to be correlated in some way, and identifying and exploiting these relationships can result in much better performance as well as a better understanding of the problem. Even though the load forecasting literature is scarce on this subject, it seems natural to assume that there exist important correlations between the outputs in a 24-hour prediction scenario. To tackle this, we develop a multi-task Gaussian process model that addresses the relationships between the outputs by assuming the existence of, and subsequently estimating, both an inter-task covariance matrix and a multi-task noise covariance matrix that capture these important interactions. Our model improves on other multi-task Gaussian process approaches in that it greatly reduces the number of parameters to be inferred while maintaining the interpretability provided by the estimation and visualization of the multi-task covariance matrices. We first test our model on a wide selection of synthetic and real-world multi-task problems with excellent results. We then apply it to a 24-hour-ahead power load forecasting scenario using the ISO New England database, outperforming other standard multi-task Gaussian processes and providing very useful visual information through the estimation of the covariance matrices.
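    A hedged sketch of the covariance structure this second contribution describes: a multi-task GP whose joint covariance is a Kronecker product of an inter-task covariance B and an input kernel, plus a multi-task noise covariance on the blocks. The values of B, the noise, and the kernel choice below are assumptions, not the dissertation's estimates.

```python
# Multi-task GP prediction with an inter-task covariance B and a
# multi-task noise covariance; tasks are stacked task-major.
import numpy as np

def rbf(X1, X2, ell=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def multitask_gp_predict(X, Y, Xs, B, Sigma_noise, ell=1.0):
    """Predict all T tasks at test inputs Xs.

    X: (n, d) train inputs, Y: (n, T) train outputs (one column per task),
    B: (T, T) inter-task covariance, Sigma_noise: (T, T) noise covariance.
    """
    n, T = Y.shape
    Kx = rbf(X, X, ell)                          # (n, n) input kernel
    K = np.kron(B, Kx) + np.kron(Sigma_noise, np.eye(n))
    Ks = np.kron(B, rbf(Xs, X, ell))             # test/train cross-covariance
    alpha = np.linalg.solve(K, Y.T.reshape(-1))  # task-major stacked targets
    return (Ks @ alpha).reshape(T, -1).T         # (m, T) predictions

# Two correlated tasks observed on the same inputs.
rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(40, 1))
Y = np.column_stack([np.sin(X[:, 0]), 0.8 * np.sin(X[:, 0])])
Y += 0.05 * rng.normal(size=(40, 2))
B = np.array([[1.0, 0.8], [0.8, 1.0]])           # assumed inter-task covariance
noise = 0.05**2 * np.eye(2)                      # assumed multi-task noise
Xs = np.linspace(-3, 3, 5)[:, None]
print(multitask_gp_predict(X, Y, Xs, B, noise).round(2))
```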

    Essays on spillover effects of economic and geopolitical uncertainty : a thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Finance at Massey University, Auckland (Albany), New Zealand

    Get PDF
    Listed in 2020 Dean's List of Exceptional Theses.

    We are living in an age of uncertainty. While uncertainty can originate from multiple sources, the most prominent ones include economic policies and geopolitical conditions. Over the past two decades, geopolitical and economic policy uncertainties have risen dramatically around the globe, raising concerns among policymakers and financial market participants about the cross-country and cross-market transmission effects of these uncertainties. Consequently, a growing body of literature has emerged around the measurement of uncertainty, the cross-country transmission of uncertainty, and the spillover effects of a given uncertainty for financial markets. Owing to several advantages over other measures of uncertainty, news-based uncertainty indicators have become increasingly popular since the seminal work by Baker, Bloom, and Davis (2016). As the transmission of geopolitical uncertainty across countries and that of economic policy uncertainty to financial markets carry important implications for risk-management and policy-making decisions, it is crucial to understand and explain the behavior of these transmission mechanisms. By relying on news-based indicators of geopolitical and economic policy uncertainty, this thesis contributes to the literature by exploring the potential determinants of uncertainty transmission to stock markets as well as across countries.

    The first essay estimates and explains the cross-country transmission of geopolitical uncertainty (GPU). Using news-based GPU indices for a sample of emerging economies along with the United States, spillover models are employed to measure the pairwise and system-wide transmission of GPU. A substantial amount of GPU transmission is found across the sample countries, with some countries and geographical clusters being more prominent than others. A cross-sectional analysis, motivated by a gravity model framework, is further utilized to explain the pairwise transmission of GPU; it reveals that bilateral linkages and country-specific factors play an essential role in driving the transmission of GPU. The overall findings continue to hold over both short- and long-term time horizons. The findings of this essay may help predict the trajectory of GPU from one country to another, which is an essential input for the assessment of cross-border investment appraisals as well as international stability initiatives.

    Much of the literature has examined the impact of US uncertainty on international stock markets without paying much attention to the correlation between the US and the other stock markets. Motivated by this void in the extant literature, the second essay examines the role of US uncertainty in driving the US stock market's spillovers to global stock markets, after controlling for the stock market correlation. To this end, I consider a wide range of stock markets around the world, as well as three news-based uncertainties from the US, namely economic policy uncertainty (EPU), equity market uncertainty, and equity market volatility. I find that the US uncertainties significantly drive the spillovers from the US to global stock markets. This causality from US uncertainties depends upon certain country characteristics. Specifically, the US uncertainties better explain the spillovers between the US and target countries when those countries have a higher degree of financial openness, trade linkage with the US, and a vulnerable fiscal position.
    Improved levels of stock market development in the target countries, however, mitigate their stock markets' vulnerability to the US uncertainty shocks. The essay offers potential insights and implications for investors and policymakers.

    Inspired by the concern that small open economies may be more vulnerable to foreign uncertainty than to local uncertainty, the third essay focuses on New Zealand, a small open economy. This essay introduces a weekly EPU index for New Zealand and examines the return and volatility spillovers from NZ EPU and US EPU on the aggregate (NZSE) and sectoral indices of the New Zealand stock market. Overall, the findings suggest that NZ equity sectors and the NZSE receive much stronger and more pronounced spillover effects from US EPU than from the local counterpart. While the return spillovers from both EPUs are somewhat similar yet limited to just a few sectors, the volatility spillovers from US EPU on NZ sectors outstrip those from NZ EPU. For volatility spillovers, the domestically oriented sectors are relatively more vulnerable to NZ EPU, while those with export/import concentration with the US are mainly susceptible to US EPU. The findings of this essay may be useful to investors seeking sectoral diversification opportunities across New Zealand and the US.
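    The "spillover models" invoked in these essays are not specified in this abstract; in this literature the workhorse is the Diebold-Yilmaz connectedness framework built on generalized forecast-error variance decompositions of a VAR. The sketch below is therefore an assumption-laden illustration (a VAR(1) on synthetic indices), not the thesis's exact specification.

```python
# Generalized FEVD spillover table in the Diebold-Yilmaz style.
import numpy as np

def spillover_table(Y, H=10):
    """Spillover table (rows: affected series, cols: shock source).

    Y: (T, N) array of uncertainty indices; a VAR(1) is fitted by OLS,
    then the H-step generalized FEVD is computed and row-normalized.
    """
    T, N = Y.shape
    X, Z = Y[:-1], Y[1:]
    A = np.linalg.lstsq(X, Z, rcond=None)[0].T         # VAR(1) coefficient
    E = Z - X @ A.T
    Sigma = E.T @ E / (T - 1 - N)                      # residual covariance
    Phi = [np.linalg.matrix_power(A, h) for h in range(H)]
    theta = np.empty((N, N))
    for i in range(N):
        denom = sum(Phi[h][i] @ Sigma @ Phi[h][i] for h in range(H))
        for j in range(N):
            num = sum((Phi[h][i] @ Sigma[:, j]) ** 2 for h in range(H))
            theta[i, j] = num / (Sigma[j, j] * denom)
    theta /= theta.sum(axis=1, keepdims=True)          # rows sum to one
    return theta

# Synthetic indices where series 0 drives series 1 and 2.
rng = np.random.default_rng(6)
Y = rng.normal(size=(500, 3))
Y[1:, 1] += 0.5 * Y[:-1, 0]
Y[1:, 2] += 0.4 * Y[:-1, 0]
theta = spillover_table(Y)
total = 100 * (theta.sum() - np.trace(theta)) / theta.shape[0]
print(f"total spillover index: {total:.1f}%")
```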

    A memory-based method to select the number of relevant components in Principal Component Analysis

    Get PDF
    We propose a new data-driven method to select the optimal number of relevant components in Principal Component Analysis (PCA). This method applies to correlation matrices whose time autocorrelation function decays more slowly than an exponential, giving rise to long-memory effects. In comparison with other methods available in the literature, our procedure does not rely on subjective evaluations and is computationally inexpensive. The underlying idea is to use a suitable factor model to analyse the residual memory after sequentially removing more and more components, stopping the process when the maximum amount of memory has been accounted for by the retained components. We validate our methodology on both synthetic and real financial data, and find in all cases a clear and computationally superior answer entirely compatible with available heuristic criteria, such as cumulative variance and cross-validation. Comment: 29 pages, published
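    A loose illustration of the idea described above, under assumptions: strip the top-k principal components from a multivariate series and track how much autocorrelation ("memory") remains in the residuals, stopping once additional components stop absorbing memory. The paper's factor-model criterion is more refined than this mean-absolute-autocorrelation proxy.

```python
# Residual memory after removing k principal components.
import numpy as np

def residual_memory(X, k, max_lag=20):
    """Mean absolute autocorrelation of residuals after removing k PCs."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    resid = Xc - Xc @ Vt[:k].T @ Vt[:k]           # project out top-k PCs
    r = resid / resid.std(axis=0)
    acfs = [np.mean(r[:-lag] * r[lag:]) for lag in range(1, max_lag + 1)]
    return float(np.mean(np.abs(acfs)))

# Synthetic data: two persistent (AR) factors plus white noise.
rng = np.random.default_rng(7)
T, N = 2000, 30
F = np.zeros((T, 2))
for t in range(1, T):
    F[t] = 0.95 * F[t - 1] + rng.normal(size=2)   # slowly decaying factors
L = rng.normal(size=(2, N))
X = F @ L + rng.normal(size=(T, N))
for k in range(5):
    print(k, round(residual_memory(X, k), 3))     # memory drops until k = 2
```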