154 research outputs found

    Cross-Border Lending Contagion in Multinational Banks

    Get PDF
    We study the interdependence of lending decisions in different country branches of a multinational bank, both theoretically and empirically. First, we formulate a model of a bank that delegates the management of its foreign unit to a local manager with non-transferable skills. The bank differs from other international investors due to a liquidity threshold which, if reached, triggers a depositor run and regulatory action. Lending decisions are therefore influenced by delegation and precautionary motives. We then show that these two phenomena create a separate channel of shock propagation, a function of bank shareholder and manager incentives. This channel can lead either to "contagion", meaning parallel reactions of loan volumes in both countries to a disturbance in the parent bank's home country, or to standard "diversification", when the reactions of a standard international portfolio optimizer in the two country units go in opposite directions. In particular, the impact of an exogenous shock on credit can have a different sign in the "relationship" as opposed to the "arm's-length" banking environment. Second, we construct a large sample of multinational banks and their branches/subsidiaries and look for the presence of lending contagion using panel regression methods. We obtain mixed results concerning contagion, depending on the parent bank's home country and the host economy of cross-border penetration. While the majority of multinational banks behave in line with the contagion effect, more than one-third do not. In addition, the presence of contagion seems to be related to the geographical location of subsidiaries.
    Keywords: delegation, diversification, lending contagion, multinational bank, panel regression.
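    A hedged sketch of the kind of panel regression the abstract mentions, on synthetic data with bank fixed effects and clustered standard errors; the variable names and data-generating process are illustrative assumptions, not the authors' specification.

```python
# Minimal sketch: regress subsidiary loan growth on a parent home-country
# shock with bank fixed effects. Synthetic data; illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_banks, n_periods = 40, 20
panel = pd.DataFrame({
    "bank": np.repeat(np.arange(n_banks), n_periods),
    "period": np.tile(np.arange(n_periods), n_banks),
})
panel["home_shock"] = rng.normal(size=len(panel))        # parent home-country disturbance
panel["host_gdp_growth"] = rng.normal(size=len(panel))   # host-economy control
# "Contagion" would show up as a positive loading on the home shock.
panel["loan_growth"] = (0.4 * panel["home_shock"]
                        + 0.2 * panel["host_gdp_growth"]
                        + rng.normal(scale=0.5, size=len(panel)))

# Bank fixed effects via dummies; standard errors clustered by bank.
model = smf.ols("loan_growth ~ home_shock + host_gdp_growth + C(bank)", data=panel)
result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["bank"]})
print(result.params["home_shock"])  # positive and significant -> consistent with contagion
```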

    MIPaaL: Mixed Integer Program as a Layer

    Full text link
    Machine learning components commonly appear in larger decision-making pipelines; however, the model training process typically focuses only on a loss that measures accuracy between predicted values and ground-truth values. Decision-focused learning explicitly integrates the downstream decision problem when training the predictive model, in order to optimize the quality of the decisions induced by the predictions. It has been successfully applied to several limited combinatorial problem classes, such as those that can be expressed as linear programs (LPs) and submodular optimization. However, these previous applications have uniformly focused on problems from specific classes with simple constraints. Here, we enable decision-focused learning for the broad class of problems that can be encoded as a Mixed Integer Linear Program (MIP), hence supporting arbitrary linear constraints over discrete and continuous variables. We show how to differentiate through a MIP by employing a cutting-plane solution approach, an exact algorithm that iteratively adds constraints to a continuous relaxation of the problem until an integral solution is found. We evaluate our new end-to-end approach on several real-world domains and show that it outperforms the standard two-phase approaches that treat prediction and prescription separately, as well as a baseline approach of simply applying decision-focused learning to the LP relaxation of the MIP.
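    The quantity that decision-focused learning optimizes is the quality of the decisions induced by the predictions. The toy sketch below illustrates that objective as decision regret on a small 0-1 knapsack solved by brute force; it is not the paper's cutting-plane differentiation through a MIP solver, and all data are synthetic.

```python
# Toy illustration of the decision-quality (regret) objective behind
# decision-focused learning, on a small 0-1 knapsack solved by brute force.
from itertools import product
import numpy as np

def solve_knapsack(values, weights, capacity):
    """Return the best 0-1 selection for the given item values (exact, brute force)."""
    best_x, best_val = None, -np.inf
    for x in product([0, 1], repeat=len(values)):
        x = np.array(x)
        if weights @ x <= capacity and values @ x > best_val:
            best_x, best_val = x, values @ x
    return best_x

rng = np.random.default_rng(1)
true_values = rng.uniform(1, 10, size=8)
predicted = true_values + rng.normal(scale=3.0, size=8)  # imperfect predictions
weights = rng.uniform(1, 5, size=8)
capacity = 10.0

x_star = solve_knapsack(true_values, weights, capacity)  # decision under the truth
x_hat = solve_knapsack(predicted, weights, capacity)     # decision induced by predictions
regret = true_values @ x_star - true_values @ x_hat      # what decision-focused learning drives down
print(f"decision regret: {regret:.3f}")
```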

    Aggregation of Information and Beliefs in Prediction Markets

    Get PDF

    The Theory of Money and Financial Institutions

    Get PDF
    A sketch of a game theoretic approach to the Theory of Money and Financial Institutions is presented in a nontechnical, nonmathematical manner. The detailed argument and specifics are presented in previous articles and in a forthcoming book.

    Survey of quantitative investment strategies in the Russian stock market : Special interest in tactical asset allocation

    Get PDF
    Russia's financial markets have been an uncharted area when it comes to exploring the performance of investment strategies based on modern portfolio theory. In this thesis, we focus on the country's stock market and study whether profitable investments can be made while at the same time taking uncertainties, risks, and dependencies into account. We also pay particular attention to tactical asset allocation. The benefit of this approach is that, in addition to optimization methods, we can utilize time-series forecasting methods to produce trading signals. We use two datasets in our empirical applications. The first consists of nine sectoral indices covering the period from 2008 to 2017, and the other includes 42 stocks listed on the Moscow Exchange covering the years 2011–2017. The strategies considered are divided into five parts. In the first part, we study classical and robust mean-risk portfolios and the modeling of transaction costs. We find that the expected return should be maximized per unit of expected shortfall while simultaneously requiring that each asset contribute equally to the portfolio's tail risk. Secondly, we show that using robust covariance estimators can improve the risk-adjusted returns of minimum-variance portfolios. Thirdly, we note that robust optimization techniques are best suited for conservative investors due to the low-volatility allocations they produce. In the second part, we employ statistical factor models to estimate higher-order comoments and demonstrate the benefit of the proposed method in constructing risk-optimal and expected-utility-maximizing portfolios. In the third part, we utilize the Almgren–Chriss framework and sort the expected returns according to the assumed momentum anomaly. We find that this method produces stable allocations that perform exceptionally well in market upturns. In the fourth part, we show that forecasts produced by VECM and GARCH models can be used profitably in optimizations based on the Black–Litterman, copula opinion pooling, and entropy pooling models. In the final part, we develop a wealth-protection strategy capable of timing market changes thanks to return predictions based on an ARIMA model. It can therefore be stated that it has been possible to make safe and profitable investments in the Russian stock market, even when reasonable transaction costs are taken into account. We also argue that market inefficiencies could have been exploited by structuring statistical arbitrage and other tactical asset allocation strategies.
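    One finding above is that robust covariance estimators can improve minimum-variance portfolios. Below is a minimal sketch of that comparison on synthetic returns, using scikit-learn's Minimum Covariance Determinant estimator; the data and estimator choice are illustrative assumptions, not the thesis's exact setup.

```python
# Minimal sketch: minimum-variance weights from a sample covariance vs. a
# robust (Minimum Covariance Determinant) covariance estimate. Synthetic data.
import numpy as np
from sklearn.covariance import MinCovDet

def min_variance_weights(cov):
    """Closed-form minimum-variance weights: w = S^-1 1 / (1' S^-1 1)."""
    inv = np.linalg.inv(cov)
    ones = np.ones(cov.shape[0])
    w = inv @ ones
    return w / w.sum()

rng = np.random.default_rng(42)
n_days, n_assets = 500, 9                             # e.g. nine sectoral indices
returns = rng.normal(0.0005, 0.01, size=(n_days, n_assets))
returns[::50] += rng.normal(0, 0.05, size=n_assets)   # occasional outlier days

w_sample = min_variance_weights(np.cov(returns, rowvar=False))
w_robust = min_variance_weights(MinCovDet(random_state=0).fit(returns).covariance_)
print("sample-cov weights:", np.round(w_sample, 3))
print("robust-cov weights:", np.round(w_robust, 3))
```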

    A Mechanism Design Approach to Bandwidth Allocation in Tactical Data Networks

    Get PDF
    The defense sector is undergoing a phase of rapid technological advancement in pursuit of its goal of information superiority. This goal depends on a large network of complex interconnected systems - sensors, weapons, soldiers - linked through a maze of heterogeneous networks. The sheer scale and size of these networks prompt behaviors that go beyond conglomerations of systems or 'systems-of-systems'. The lack of a central locus and the disjointed, competing interests among large clusters of systems make this characteristic of an Ultra Large Scale (ULS) system. These traits of ULS systems challenge and undermine the fundamental assumptions of today's software and system engineering approaches. In the absence of a centralized controller, it is likely that system users will behave opportunistically to meet their local mission requirements rather than the objectives of the system as a whole. In these settings, methods and tools based on economics and game theory (such as mechanism design) are likely to play an important role in achieving globally optimal behavior when the participants behave selfishly. Against this background, this thesis explores the potential of using computational mechanisms to govern the behavior of ultra-large-scale systems and achieve an optimal allocation of constrained computational resources. Our research focuses on improving the quality and accuracy of the common operating picture through the efficient allocation of bandwidth in tactical data networks among self-interested actors, who may resort to strategic behavior dictated by self-interest. This research problem presents the kind of challenges we anticipate in ULS systems and, by addressing it, we hope to develop a methodology applicable to ULS systems of the future. We build upon previous work investigating the application of auction-based mechanism design to dynamic, performance-critical, and resource-constrained systems of interest to the defense community. In this thesis, we consider a scenario where a number of military platforms have been tasked with detecting and tracking targets. The sensors onboard a military platform have a partial and inaccurate view of the operating picture and need to make use of data transmitted from neighboring sensors in order to improve the accuracy of their own measurements. The communication takes place over tactical data networks with scarce bandwidth. The problem is compounded by the possibility that the local goals of military platforms might not be aligned with the global system goal. Such a scenario might occur in multi-flag, multi-platform military exercises, where the military commanders of each platform are more concerned with the well-being of their own platform than with others. Therefore, there is a need to design a mechanism that efficiently allocates the flow of data within the network to ensure that the resulting global performance maximizes the information gain of the entire system, despite the self-interested actions of the individual actors. We propose a two-stage mechanism based on modified strictly proper scoring rules with unknown costs, whereby multiple sensor platforms can provide estimates of limited precision and the center does not have to rely on knowledge of the actual outcome when calculating payments. In particular, our work emphasizes the importance of applying robust optimization techniques to deal with uncertainty in the operating environment.
    We apply our robust-optimization-based scoring-rules algorithm to an agent-based model of the combat tactical data network and analyze the results obtained. Through this work we hope to demonstrate how mechanism design, perched at the intersection of game theory and microeconomics, is aptly suited to address one set of challenges of the ULS paradigm: challenges not amenable to traditional system engineering approaches.
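    The mechanism rests on strictly proper scoring rules, under which truthful reporting maximizes a platform's expected payment. The sketch below only illustrates that property for the standard quadratic (Brier) rule on a binary event; the thesis's modified rules with unknown costs and limited-precision estimates are more involved.

```python
# Strict properness of the quadratic (Brier) scoring rule: the expected score
# of a report is maximized when the report equals the true belief.
import numpy as np

def quadratic_score(report, outcome):
    """Quadratic scoring rule for a binary event: S(r, o) = 1 - (o - r)^2."""
    return 1.0 - (outcome - report) ** 2

def expected_score(report, true_belief):
    """Expected score when the event occurs with probability true_belief."""
    return (true_belief * quadratic_score(report, 1)
            + (1 - true_belief) * quadratic_score(report, 0))

true_belief = 0.7
reports = np.linspace(0, 1, 101)
best_report = reports[np.argmax([expected_score(r, true_belief) for r in reports])]
print(f"expected score is maximized at report = {best_report:.2f} (true belief {true_belief})")
```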

    Automatic machine learning: methods, systems, challenges

    Get PDF
    This open access book presents the first comprehensive overview of general methods in Automatic Machine Learning (AutoML), collects descriptions of existing systems based on these methods, and discusses the first international challenge of AutoML systems. The book serves as a point of entry into this quickly developing field for researchers and advanced students alike, as well as providing a reference for practitioners aiming to use AutoML in their work. The recent success of commercial ML applications and the rapid growth of the field have created a high demand for off-the-shelf ML methods that can be used easily and without expert knowledge. Many of the recent machine learning successes crucially rely on human experts, who select appropriate ML architectures (deep learning architectures or more traditional ML workflows) and their hyperparameters; the field of AutoML, however, targets a progressive automation of machine learning, based on principles from optimization and machine learning itself.
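    As a minimal, hedged illustration of the kind of automation AutoML generalizes (hyperparameter search over a single model family), the sketch below uses scikit-learn's RandomizedSearchCV; it is not one of the systems described in the book.

```python
# Minimal AutoML-flavoured sketch: automated hyperparameter search over one
# model family. Full AutoML systems also search over model types,
# preprocessing pipelines, and architectures.
from scipy.stats import randint
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 300),
        "max_depth": randint(3, 20),
        "min_samples_split": randint(2, 10),
    },
    n_iter=20, cv=3, random_state=0, n_jobs=-1,
)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
```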

    Input significance analysis: feature ranking through synaptic weights manipulation for ANNs-based classifiers

    Get PDF
    Due to the ANNs' architecture, the ISA methods selected that can manipulate synaptic weights are Connection Weights (CW) and Garson's Algorithm (GA). The ANNs-based classifiers that can provide such manipulation are the Multi-Layer Perceptron (MLP) and Evolving Fuzzy Neural Networks (EFuNNs). The goals of this work are, firstly, to identify which of the two classifiers works best with the filtered/ranked data; secondly, to test the FR method using a selected dataset taken from the UCI Machine Learning Repository in an online environment; and lastly, to validate the FR results using another selected dataset taken from the same source in the same environment. Three groups of experiments were conducted to accomplish these goals. The results are promising when FR is applied: some gains in efficiency and accuracy are noticeable compared to the original data.
    Keywords: artificial neural networks; input significance analysis; feature selection; feature ranking; connection weights; Garson's algorithm; multi-layer perceptron; evolving fuzzy neural networks
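    Both ranking methods named in the abstract operate directly on a trained network's weight matrices. A minimal sketch for a single-hidden-layer MLP follows, with random matrices standing in for trained synaptic weights.

```python
# Minimal sketch of the two weight-based feature-ranking methods the abstract
# names, for a single-hidden-layer MLP: Connection Weights (Olden) and
# Garson's Algorithm. Random matrices stand in for trained synaptic weights.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden = 5, 8
W_ih = rng.normal(size=(n_inputs, n_hidden))   # input -> hidden weights
w_ho = rng.normal(size=n_hidden)               # hidden -> output weights

# Connection Weights: signed sum of input-hidden * hidden-output products.
cw_importance = W_ih @ w_ho

# Garson's Algorithm: each input's share of the absolute weight flowing
# through every hidden unit, normalized to sum to one.
contrib = np.abs(W_ih) * np.abs(w_ho)          # shape (inputs, hidden)
contrib /= contrib.sum(axis=0, keepdims=True)  # per-hidden-unit shares
garson_importance = contrib.sum(axis=1)
garson_importance /= garson_importance.sum()

print("Connection Weights ranking:", np.argsort(-np.abs(cw_importance)))
print("Garson ranking:            ", np.argsort(-garson_importance))
```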