
    Designing capital-ratio triggers for Contingent Convertibles

    Contingent Convertible (CoCo) bonds are a novel category of debt instrument, recently introduced into the financial landscape. Their primary role is to bolster financial stability by maintaining healthy capital levels for the issuing entity: once minimum capital ratios are breached, the bond principal is converted into equity or written down. CoCos aim to recapitalize the bank before it reaches the brink of collapse, so as to avoid a state bailout at a huge cost to the taxpayer. Under normal circumstances, CoCo bonds operate as ordinary coupon-paying bonds and are converted into the issuer's equity only when its capital ratios become insufficient. However, the CoCo market has struggled to expand over the years, and the recent tumult involving Credit Suisse and the enforced write-off of its CoCos has underscored these challenges. The focus of this research is, on the one hand, to understand the reasons for this failure and, on the other, to modify the instrument's underlying design in order to restore its intended purpose: to act as a liquidity buffer that strengthens the capital structure of the issuing firm. The cornerstone of the proposed work is the design of a self-adaptive model for leverage. This model features an automatic conversion that does not hinge on the judgment of regulatory authorities. Notably, it keeps the issuer's debt-to-assets ratio within predetermined boundaries, where the likelihood of default on outstanding liabilities remains minimal. Pricing the proposed instruments is difficult because the conversion is dynamic. We view CoCos essentially as a portfolio of different financial instruments; this treatment makes it easier to analyze their response to different market events that may or may not trigger their conversion to equity. We provide evidence of the model's effectiveness and discuss the implications of its implementation in light of the regulatory environment and best market practices. Funding: RU Research Fund; Icelandic Research Fund.
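
    As a rough illustration of the kind of trigger logic described above, the following Python sketch checks an issuer's debt-to-assets ratio against a predetermined boundary and computes the principal to convert so that the ratio returns inside the band. The threshold, function names and conversion rule are illustrative assumptions, not the self-adaptive leverage model developed in the thesis.

```python
# Hypothetical sketch of a capital-ratio trigger for a CoCo-style instrument.
# The upper boundary and the conversion rule are illustrative assumptions.

def debt_to_assets(debt: float, assets: float) -> float:
    """Leverage ratio used as the trigger variable."""
    return debt / assets

def conversion_amount(debt: float, assets: float, upper: float = 0.85) -> float:
    """Principal that must be converted to equity (i.e. removed from debt)
    to bring the debt-to-assets ratio back to the upper boundary.

    Solving (debt - x) / assets = upper gives x = debt - upper * assets.
    """
    if debt_to_assets(debt, assets) <= upper:
        return 0.0          # inside the band: the CoCo keeps paying coupons
    return debt - upper * assets

# Example: 92 of debt against 100 of assets breaches a 0.85 boundary,
# so 7 units of CoCo principal would convert into equity.
print(conversion_amount(debt=92.0, assets=100.0))   # -> 7.0
```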

    Essays on Panel Data Prediction Models

    Forward-looking analysis is valuable for policymakers, who need effective strategies to mitigate imminent risks and potential challenges. Panel data sets contain time series information over a number of cross-sectional units and are known to have superior predictive ability compared with time-series-only models. This PhD thesis develops novel panel data methods to contribute to the advancement of short-term forecasting and nowcasting of macroeconomic and environmental variables. The two most important highlights of this thesis are the use of cross-sectional dependence in panel data forecasting and the provision of timely predictions and 'nowcasts'. Although panel data models have been found to provide better predictions in many empirical scenarios, forecasting applications so far have not included cross-sectional dependence. On the other hand, cross-sectional dependence is well recognised in large panels and has been explicitly modelled in previous causal studies. A substantial portion of this thesis is devoted to developing cross-sectional dependence in panel models suited to diverse empirical scenarios. The second important aspect of this work is to integrate the asynchronous release schedules of data within and across panel units into the panel models. Most of the thesis emphasises pseudo-real-time predictions, estimating each model only on the data that had been released at the time of prediction, thus replicating the realistic circumstances of delayed data releases. Linear, quantile and non-linear panel models are developed to predict a range of targets, both in terms of their meaning and their method of measurement. Linear models include panel mixed-frequency vector-autoregression and bridge-equation set-ups, which predict GDP growth, inflation and CO2 emissions. Panel quantile regressions and latent-variable discrete choice models predict growth-at-risk and extreme episodes of cross-border capital flows, respectively. The datasets include both international cross-country panels and regional subnational panels. Depending on the nature of the model and the prediction targets, different precision criteria evaluate the accuracy of the models in out-of-sample settings. The generated predictions beat the respective standard benchmarks in a more timely fashion.
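
    A minimal sketch of the pseudo-real-time idea described above: before estimating anything, the panel is cut down to the observations whose release dates precede the forecast origin, and a simple pooled AR(1) is then fitted on that vintage. The column names, toy data and the AR(1) choice are illustrative assumptions, not the mixed-frequency or bridge-equation models of the thesis.

```python
import numpy as np
import pandas as pd

# Hypothetical panel: one row per (country, quarter), with the date on which
# that observation was actually published.
df = pd.DataFrame({
    "country":      ["DE", "DE", "DE", "FR", "FR", "FR"],
    "date":         pd.to_datetime(["2022-03-31", "2022-06-30", "2022-09-30"] * 2),
    "release_date": pd.to_datetime(["2022-05-15", "2022-08-15", "2022-11-15",
                                    "2022-05-30", "2022-08-30", "2022-11-30"]),
    "gdp_growth":   [0.8, 0.1, 0.4, 0.6, 0.5, 0.2],
})

def pooled_ar1_vintage(df: pd.DataFrame, as_of: str):
    """Fit y_it = a + b * y_{i,t-1} using only data released by `as_of`."""
    vintage = df[df["release_date"] <= pd.Timestamp(as_of)].copy()
    vintage = vintage.sort_values(["country", "date"])
    vintage["lag"] = vintage.groupby("country")["gdp_growth"].shift(1)
    vintage = vintage.dropna(subset=["lag"])
    X = np.column_stack([np.ones(len(vintage)), vintage["lag"].to_numpy()])
    a, b = np.linalg.lstsq(X, vintage["gdp_growth"].to_numpy(), rcond=None)[0]
    return a, b

a, b = pooled_ar1_vintage(df, as_of="2022-09-01")   # only releases up to Sep 2022
print(f"nowcast for DE given last observed growth 0.1: {a + b * 0.1:.2f}")
```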

    Strategy Tripod Perspective on the Determinants of Airline Efficiency in A Global Context: An Application of DEA and Tobit Analysis

    The airline industry is vital to contemporary civilization since it is a key player in the globalization process: linking regions, fostering global commerce, promoting tourism and aiding economic and social progress. However, there has been little study of the link between the operational environment and airline efficiency. Investigating the interplay of institutions, organisations and strategic decisions is critical to understanding how airlines operate efficiently. This research employs the strategy tripod perspective to investigate the efficiency of a global airline sample using a non-parametric linear programming method (data envelopment analysis [DEA]). Using a Tobit regression, the bootstrapped DEA efficiency change scores are then regressed on candidate drivers to determine what drives efficiency. The strategy tripod is employed to assess the impact of institutions, industry and resources on airline efficiency. Institutions are measured by global indices of destination attractiveness; industry by factors including competition, jet fuel and business model; and resources by factors such as the number of full-time employees, alliances, ownership and connectivity. The first part of the study uses panel data from 35 major airlines, collected from their annual reports for the period 2011 to 2018, and country attractiveness indices from global indicators. The second part of the research involves a qualitative data collection approach and semi-structured interviews with experts in the field to evaluate the impact of COVID-19 on the first part's significant findings. The main findings reveal that airlines operate at a highly competitive level regardless of their competition intensity or origin. Furthermore, the unpredictability of the environment complicates airline operations. The efficiency drivers of an airline are partially determined by its type of business model, its degree of cooperation and how fuel cost is managed. Trade openness has a negative influence on airline efficiency. COVID-19 has upended the airline industry, forcing airlines to reconsider their business models and continuously increase cooperation. Human resources, sustainability and alternative fuel sources are critical to airline survival. Finally, this study provides some evidence for the practicality of the strategy tripod and hints at the need for a broader approach in the study of international strategies.
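
    For readers unfamiliar with DEA, the sketch below solves the standard input-oriented CCR envelopment problem for one decision-making unit with scipy. The airline inputs and outputs are made-up numbers, and the bootstrapping and second-stage Tobit regression used in the study are omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows = DMUs (airlines), columns = inputs / outputs.
X = np.array([[100.0, 50.0],    # inputs, e.g. employees, fuel cost
              [120.0, 40.0],
              [ 90.0, 60.0]])
Y = np.array([[80.0],           # output, e.g. revenue passenger km
              [75.0],
              [85.0]])

def ccr_efficiency(o: int) -> float:
    """Input-oriented CCR efficiency of DMU `o` (envelopment form):
    min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
                     sum_j lam_j * y_j >= y_o,  lam >= 0.
    Decision vector: [theta, lam_1, ..., lam_n].
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]              # minimise theta
    A_in = np.c_[-X[o], X.T]                 # inputs:  sum_j lam_j x_ij - theta x_io <= 0
    A_out = np.c_[np.zeros(s), -Y.T]         # outputs: -sum_j lam_j y_rj <= -y_ro
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
    return float(res.fun)

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```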

    Quantifying Equity Risk Premia: Financial Economic Theory and High-Dimensional Statistical Methods

    The overarching question of this dissertation is how to quantify the unobservable risk premium of a stock when its return distribution varies over time. The first chapter, titled “Theory-based versus machine learning-implied stock risk premia”, starts with a comparison of two competing strands of the literature. The approach advocated by Martin and Wagner (2019) relies on financial economic theory to derive a closed-form approximation of conditional risk premia using information embedded in the prices of European options. The other approach, exemplified by the study of Gu et al. (2020), draws on the flexibility of machine learning methods and vast amounts of historical data to determine the unknown functional form. The goal of this study is to determine which of the two approaches produces more accurate measurements of stock risk premia. In addition, we present a novel hybrid approach that employs machine learning to overcome the approximation errors induced by the theory-based approach. We find that our hybrid approach is competitive, especially at longer investment horizons. The second chapter, titled “The uncertainty principle in asset pricing”, introduces a representation of the conditional capital asset pricing model (CAPM) in which the betas and the equity premium are jointly characterized by the information embedded in option prices. A unique feature of our model is that its implied components represent valid measurements of their physical counterparts without the need for any further risk adjustment. Moreover, because the model’s time-varying parameters are directly observable, the model can be tested without any of the complications that typically arise from statistical estimation. One of the main empirical findings is that the well-known flat relationship between average predicted and realized excess returns of beta-sorted portfolios can be explained by the uncertainty governing market excess returns. In the third chapter, titled “Multi-task learning in cross-sectional regressions”, we challenge the way in which cross-sectional regressions are used to test factor models with time-varying loadings. More specifically, we extend the procedure by Fama and MacBeth (1973) by systematically selecting stock characteristics using a combination of l1- and l2-regularization, known as the multi-task Lasso, and addressing the bias that is induced by selection via repeated sample splitting. In the empirical part of this chapter, we apply our testing procedure to the option-implied CAPM from chapter two, and find that, while variants of the momentum effect lead to a rejection of the model, the implied beta is by far the most important predictor of cross-sectional return variation.
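
    To make the testing framework in the third chapter concrete, here is a bare-bones Fama and MacBeth (1973) two-pass sketch on simulated data: cross-sectional regressions period by period, then averages and t-statistics of the slope estimates. The characteristic selection via the multi-task Lasso and the sample-splitting bias correction described in the abstract are deliberately left out, and all inputs are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, K = 120, 200, 3                 # months, stocks, characteristics

# Simulated characteristics (e.g. implied beta, size, momentum) and excess returns.
X = rng.normal(size=(T, N, K))
true_premia = np.array([0.5, 0.0, 0.2])                      # % per month
R = (X @ true_premia) + rng.normal(scale=2.0, size=(T, N))

# Fama-MacBeth second pass: one cross-sectional OLS per period.
lambdas = np.empty((T, K + 1))
for t in range(T):
    Z = np.column_stack([np.ones(N), X[t]])                  # intercept + characteristics
    lambdas[t] = np.linalg.lstsq(Z, R[t], rcond=None)[0]

mean_lambda = lambdas.mean(axis=0)
t_stats = mean_lambda / (lambdas.std(axis=0, ddof=1) / np.sqrt(T))
for name, lam, tval in zip(["const", "x1", "x2", "x3"], mean_lambda, t_stats):
    print(f"{name:>5}: premium = {lam:6.3f}, t = {tval:5.2f}")
```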

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume.

    Machine Learning-powered Course Allocation

    We introduce a machine learning-powered course allocation mechanism. Concretely, we extend the state-of-the-art Course Match mechanism with a machine learning-based preference elicitation module. In an iterative, asynchronous manner, this module generates pairwise comparison queries that are tailored to each individual student. Regarding incentives, our machine learning-powered course match (MLCM) mechanism retains Course Match's attractive strategyproofness-in-the-large property. Regarding welfare, we perform computational experiments using a simulator that was fitted to real-world data. Our results show that, compared to Course Match, MLCM increases average student utility by 4%-9% and minimum student utility by 10%-21%, even with only ten comparison queries. Finally, we highlight the practicability of MLCM and the ease of piloting it for universities currently using Course Match.
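
    As a purely illustrative example, and not the actual MLCM elicitation logic, the snippet below picks the pairwise comparison a surrogate utility model is least sure about: the pair of candidate schedules whose predicted utilities are closest, a common generic heuristic for informative pairwise queries. The schedule names and utilities are hypothetical.

```python
from itertools import combinations

# Hypothetical predicted utilities of candidate course schedules for one student,
# produced by some surrogate model; the selection rule below is a generic
# active-learning heuristic, not the mechanism used in the MLCM paper.
predicted_utility = {
    "schedule_A": 7.2,
    "schedule_B": 7.1,
    "schedule_C": 4.8,
    "schedule_D": 6.0,
}

def next_pairwise_query(utilities: dict) -> tuple:
    """Return the pair of schedules with the smallest predicted-utility gap."""
    return min(combinations(utilities, 2),
               key=lambda pair: abs(utilities[pair[0]] - utilities[pair[1]]))

print(next_pairwise_query(predicted_utility))   # ('schedule_A', 'schedule_B')
```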

    EQUI-VOCAL: Synthesizing Queries for Compositional Video Events from Limited User Interactions [Technical Report]

    We introduce EQUI-VOCAL: a new system that automatically synthesizes queries over videos from limited user interactions. The user only provides a handful of positive and negative examples of what they are looking for. EQUI-VOCAL utilizes these initial examples and additional ones collected through active learning to efficiently synthesize complex user queries. Our approach enables users to find events without database expertise, with limited labeling effort, and without declarative specifications or sketches. Core to EQUI-VOCAL's design is the use of spatio-temporal scene graphs in its data model and query language and a novel query synthesis approach that works on large and noisy video data. Our system outperforms two baseline systems -- in terms of F1 score, synthesis time, and robustness to noise -- and can flexibly synthesize complex queries that the baselines do not support. Comment: This is an extended technical report for the following paper: "Enhao Zhang, Maureen Daum, Dong He, Brandon Haynes, Ranjay Krishna, and Magdalena Balazinska. EQUI-VOCAL: Synthesizing Queries for Compositional Video Events from Limited User Interactions. PVLDB, 16(11): 2714-2727, 2023. doi:10.14778/3611479.3611482".
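
    To give a feel for the spatio-temporal scene-graph data model mentioned above, here is a tiny hypothetical sketch: each frame carries (subject, predicate, object) triples, and a query is a predicate over a sequence of frames. This illustrates the general idea only and is not EQUI-VOCAL's actual query language; all names and relations are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Relation:
    subject: str      # e.g. a tracked object id or class
    predicate: str    # spatial relationship in this frame
    obj: str

# One scene graph per video frame, indexed by frame number (toy data).
video = {
    0: {Relation("car_1", "left_of", "person_2")},
    1: {Relation("car_1", "near", "person_2")},
    2: {Relation("car_1", "near", "person_2")},
}

def holds_for(video: dict, predicate: str, min_frames: int) -> bool:
    """True if some relation with `predicate` holds in at least `min_frames`
    consecutive frames (a toy 'compositional event' query)."""
    run = 0
    for frame in sorted(video):
        if any(r.predicate == predicate for r in video[frame]):
            run += 1
            if run >= min_frames:
                return True
        else:
            run = 0
    return False

print(holds_for(video, "near", min_frames=2))   # True
```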

    Subgroup discovery for structured target concepts

    The main object of study in this thesis is subgroup discovery, a theoretical framework for finding subgroups in data, i.e., named sub-populations, whose behaviour with respect to a specified target concept is exceptional when compared to the rest of the dataset. This is a powerful tool that conveys crucial information to a human audience, but despite past advances it has been limited to simple target concepts. In this work we propose algorithms that bring this framework to novel application domains. We introduce the concept of representative subgroups, which we use not only to ensure the fairness of a sub-population with regard to a sensitive trait, such as race or gender, but also to go beyond known trends in the data. For entities with additional relational information that can be encoded as a graph, we introduce a novel measure of robust connectedness which improves on established alternative measures of density; we then provide a method that uses this measure to discover which named sub-populations are better connected. Our contributions within subgroup discovery culminate in the introduction of kernelised subgroup discovery: a novel framework that enables the discovery of subgroups on i.i.d. target concepts with virtually any kind of structure. Importantly, our framework additionally provides a concrete and efficient tool that works out of the box without any modification, apart from specifying the Gramian of a positive definite kernel. For use within kernelised subgroup discovery, but also in any other kind of kernel method, we additionally introduce a novel random walk graph kernel. Our kernel allows fine-tuning of the alignment between the vertices of the two compared graphs during the counting of random walks, and we also propose meaningful structure-aware vertex labels to utilise this new capability. With these contributions we thoroughly extend the applicability of subgroup discovery and ultimately redefine it as a kernel method.
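
    For readers new to subgroup discovery, the snippet below scores a candidate subgroup with weighted relative accuracy (WRAcc), a standard quality function for a simple binary target. This is background illustration only and is not one of the structured target concepts or kernelised quality measures contributed by the thesis; the toy data is an assumption.

```python
import numpy as np

def wracc(in_subgroup: np.ndarray, target: np.ndarray) -> float:
    """Weighted relative accuracy of a subgroup for a binary target:
    WRAcc = coverage * (positive rate inside subgroup - overall positive rate).
    """
    coverage = in_subgroup.mean()
    if coverage == 0:
        return 0.0
    return coverage * (target[in_subgroup].mean() - target.mean())

# Toy data: the subgroup "x > 0.7" is enriched in positives.
rng = np.random.default_rng(1)
x = rng.uniform(size=1000)
y = rng.uniform(size=1000) < np.where(x > 0.7, 0.8, 0.3)   # boolean target
subgroup = x > 0.7

print(f"WRAcc of 'x > 0.7': {wracc(subgroup, y):.3f}")
```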

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected here have either been published or presented in international conferences, seminars, workshops and journals after the fourth volume was disseminated in 2015, or they are new. The contributions of each part of this volume are chronologically ordered. The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes. Because more applications of DSmT have emerged since the appearance of the fourth book in 2015, the second part of this volume is about selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.

    Uncertainty Quantification in Machine Learning for Engineering Design and Health Prognostics: A Tutorial

    On top of machine learning models, uncertainty quantification (UQ) functions as an essential layer of safety assurance that could lead to more principled decision making by enabling sound risk assessment and management. The safety and reliability improvements of ML models empowered by UQ have the potential to significantly facilitate the broad adoption of ML solutions in high-stakes decision settings, such as healthcare, manufacturing, and aviation, to name a few. In this tutorial, we aim to provide a holistic lens on emerging UQ methods for ML models, with a particular focus on neural networks and on the applications of these UQ methods in tackling engineering design as well as prognostics and health management problems. Toward this goal, we start with a comprehensive classification of uncertainty types, sources, and causes pertaining to UQ of ML models. Next, we provide a tutorial-style description of several state-of-the-art UQ methods: Gaussian process regression, Bayesian neural networks, neural network ensembles, and deterministic UQ methods focusing on the spectral-normalized neural Gaussian process. Building on the mathematical formulations, we subsequently examine the soundness of these UQ methods quantitatively and qualitatively (via a toy regression example) to assess their strengths and shortcomings from different dimensions. Then, we review quantitative metrics commonly used to assess the quality of predictive uncertainty in classification and regression problems. Afterward, we discuss the increasingly important role of UQ of ML models in solving challenging problems in engineering design and health prognostics. Two case studies with source code available on GitHub are used to demonstrate these UQ methods and compare their performance in the early-stage life prediction of lithium-ion batteries and the remaining useful life prediction of turbofan engines.
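
    As a minimal taste of one of the UQ methods listed above, the snippet below builds a small neural network ensemble with scikit-learn and reads the spread of the member predictions as a rough uncertainty estimate. The data and network sizes are arbitrary assumptions, and the tutorial's Gaussian process, Bayesian neural network and spectral-normalized methods are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)   # toy regression data

# Neural network ensemble: same architecture, different random initialisations.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=seed).fit(X, y)
    for seed in range(5)
]

X_test = np.linspace(-4, 4, 9).reshape(-1, 1)             # extends beyond the training range
preds = np.stack([m.predict(X_test) for m in ensemble])   # shape (members, points)

mean = preds.mean(axis=0)
std = preds.std(axis=0)   # disagreement across members ~ epistemic uncertainty
for x_val, mu, sigma in zip(X_test[:, 0], mean, std):
    print(f"x = {x_val:5.2f}: prediction = {mu:6.3f} +/- {sigma:.3f}")
```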