38 research outputs found

    An HJB Approach to a General Continuous-Time Mean-Variance Stochastic Control Problem

    A general continuous-time mean-variance problem is considered for a controlled diffusion process, where the reward functional has an integral and a terminal-time component. The problem is transformed into a superposition of a static and a dynamic optimization problem. Under suitable assumptions, the value function of the latter can be regarded as the solution of a degenerate HJB equation, either in the viscosity sense or, after a regularization, in the Sobolev sense, with implications for the optimality of strategies. There is a useful interplay between the viscosity and Sobolev approaches.
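    For orientation, the dynamic subproblem in such a setting has the following generic shape (a textbook-style sketch, not the paper's exact formulation): for a controlled diffusion dX_t = b(X_t, u_t) dt + sigma(X_t, u_t) dW_t and reward E[ \int_0^T f(X_t, u_t) dt + g(X_T) ], the value function v formally satisfies

        v_t(t,x) + \sup_{u \in U} \Big\{ b(x,u)\, v_x(t,x) + \tfrac{1}{2}\, \sigma^2(x,u)\, v_{xx}(t,x) + f(x,u) \Big\} = 0,
        \qquad v(T,x) = g(x),

    which is degenerate wherever sigma can vanish; this is exactly why viscosity solutions, or Sobolev solutions of a regularized equation, are the natural frameworks.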

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in overview and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Numerical Solutions of Two-factor Hamilton-Jacobi-Bellman Equations in Finance

    In this thesis, we focus on solving multidimensional HJB equations derived from optimal stochastic control problems in financial markets. We develop a fully implicit, unconditionally monotone finite difference scheme. Consequently, there are no time-step restrictions due to stability considerations, and the fully implicit method has essentially the same complexity per step as the explicit method. The main difficulty in designing a discretization scheme is the development of a monotonicity-preserving approximation of the cross-derivative terms in the PDE. We primarily use a wide stencil based on a local coordinate rotation. The analysis rigorously shows that our numerical scheme is ℓ∞-stable, consistent in the viscosity sense, and monotone; therefore, it guarantees convergence to the viscosity solution. First, our numerical schemes are applied to pricing two-factor options under an uncertain volatility model. For this application, a hybrid scheme that uses the fixed-point stencil as much as possible is developed to take advantage of its accuracy and computational efficiency. Second, using our numerical method, we study the problem of optimal asset allocation where the risky asset follows a stochastic volatility process. Finally, we utilize our numerical scheme to carry out an optimal static hedge in the case of an uncertain local volatility model.
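    As a toy illustration of the fully implicit, positive-coefficient idea (in one dimension and without the cross-derivative terms that make the two-factor case hard; all names are illustrative, not from the thesis), the step below upwinds the drift so the off-diagonals of the timestep matrix stay nonpositive, making it an M-matrix and hence monotone for any dt:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve

        def implicit_step(V, x, a, b, dt):
            """One fully implicit step for V_t = a(x) V_xx + b(x) V_x, a >= 0."""
            n, h = len(x), x[1] - x[0]
            diff = a / h**2
            up = np.maximum(b, 0.0) / h      # upwinded drift: keeps the
            dn = np.maximum(-b, 0.0) / h     # discretization positive-coefficient
            lower, upper = diff + dn, diff + up
            L = sp.diags([lower[1:], -(lower + upper), upper[:-1]], [-1, 0, 1])
            A = (sp.eye(n) - dt * L).tolil()
            A[0, :], A[-1, :] = 0, 0         # hold Dirichlet boundary values fixed
            A[0, 0] = A[-1, -1] = 1
            # A has positive diagonal, nonpositive off-diagonals, and is diagonally
            # dominant (an M-matrix), so the implicit step is monotone for any dt
            return spsolve(A.tocsr(), V)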

    Designing the Liver Allocation Hierarchy: Incorporating Equity and Uncertainty

    Liver transplantation is the only available therapy for any acute or chronic condition resulting in irreversible liver dysfunction. The liver allocation system in the U.S. is administered by the United Network for Organ Sharing (UNOS), a scientific and educational nonprofit organization. The main components of the organ procurement and transplant network are Organ Procurement Organizations (OPOs), which are collections of transplant centers responsible for maintaining local waiting lists, harvesting donated organs, and carrying out transplants. Currently in the U.S., OPOs are grouped into 11 regions to facilitate organ allocation, and a three-tier mechanism is used that aims to reduce organ preservation time and transport distance to maintain organ quality, while giving sicker patients higher priority. Livers are scarce and perishable resources that rapidly lose viability, which makes transport distance a crucial factor in transplant outcomes. When a liver becomes available, it is matched with patients on the waiting list according to a complex mechanism that gives priority to patients within the harvesting OPO and region. Transplants at the regional level have accounted for more than 50% of all transplants since 2000.

    This dissertation focuses on the design of regions for the liver allocation hierarchy, and includes optimization models that incorporate geographic equity as well as uncertainty throughout the analysis. We employ multi-objective optimization algorithms that involve solving parametric integer programs to balance two possibly conflicting objectives in the system: maximizing efficiency, as measured by the number of viability-adjusted transplants, and maximizing geographic equity, as measured by the minimum rate of organ flow into individual OPOs from outside their own local areas. Our results show that efficiency improvements of up to 6%, or equity gains of about 70%, can be achieved relative to the current performance of the system by redesigning the regional configuration of the national liver allocation hierarchy.

    We also introduce a stochastic programming framework to capture the uncertainty of the system by considering scenarios that correspond to different snapshots of the national waiting list, maximizing the expected benefit from liver transplants under this stochastic view of the system. We explore many algorithmic and computational strategies, including sampling methods, column generation strategies, and branching and integer-solution generation procedures, to aid the solution of the resulting large-scale integer programs. We also explore an OPO-based extension of our two-stage stochastic programming framework that lends itself to more extensive computational testing. The regional configurations obtained using these models are estimated to increase the expected lifetime gained per transplant operation by up to 7% compared with the current system.

    This dissertation also addresses the general question of designing efficient algorithms that combine column and cut generation to solve large-scale two-stage stochastic linear programs. We introduce a flexible method to combine column generation and the L-shaped method for two-stage stochastic linear programming. We explore the performance of various algorithm designs that employ stabilization subroutines for strengthening both column and cut generation to effectively avoid degeneracy. We study two-stage stochastic versions of the cutting stock and multi-commodity network flow problems to analyze the performance of the algorithms in this context.
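    For reference, the two-stage stochastic linear programs referred to here have the standard form (a generic sketch, not the dissertation's exact allocation model):

        \min_x \; c^\top x + \mathbb{E}_\omega\big[ Q(x,\omega) \big]
        \quad \text{s.t.} \quad Ax = b, \;\; x \ge 0,

        Q(x,\omega) \;=\; \min_y \big\{\, q(\omega)^\top y \;:\; W y = h(\omega) - T(\omega)\, x, \;\; y \ge 0 \,\big\},

    where the scenarios omega are, in this setting, snapshots of the national waiting list. The L-shaped method approximates E[Q(x, .)] from below by cutting planes built from second-stage dual solutions; in the combined scheme, columns and cuts are generated on the fly.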

    Uncertainty quantification on Pareto fronts and strategies for Bayesian optimization in high dimension, with applications in automotive design

    This dissertation deals with the optimization of expensive-to-evaluate black-box functions to obtain the set of all optimal compromise solutions, i.e. the Pareto front. In automotive design, the evaluation budget is severely limited by the numerical simulation times of the physical phenomena considered. In this context, it is common to resort to "metamodels" (models of models) of the numerical simulators, in particular Gaussian processes, which make it possible to add new observations sequentially while balancing local search and exploration. Complementing existing multi-objective Expected Improvement criteria, we propose to estimate the position of the whole Pareto front, along with a quantification of the associated uncertainty, from conditional simulations of Gaussian processes. A second contribution addresses this problem from a different angle, using copulas to model the multivariate cumulative distribution function. To cope with a possibly high number of input variables, we adopt the REMBO algorithm: using a random embedding defined by a matrix, it enables fast optimization when only a few of the variables are actually influential, although it is not known in advance which ones. Several improvements are proposed, including a dedicated covariance kernel, a selection procedure for the low-dimensional domain and for the random directions, and an extension to the multi-objective setup. Finally, an industrial application to car crashworthiness demonstrates significant gains in performance and in the number of simulations required; it was also used to test the R package GPareto developed during this thesis.
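    A minimal sketch of the REMBO idea referenced above (the toy objective and all names are illustrative, not the thesis code; in REMBO proper, the search over the low-dimensional point y is driven by Gaussian-process Bayesian optimization rather than the random search used here):

        import numpy as np

        rng = np.random.default_rng(0)
        D, d = 100, 4                            # ambient and embedding dimensions
        A = rng.standard_normal((D, d))          # random embedding matrix

        def embed(y, lo=-1.0, hi=1.0):
            # map a low-dimensional point into the original box,
            # clipping coordinates that leave the feasible domain
            return np.clip(A @ y, lo, hi)

        def f(x):
            # toy black box: only 3 of the 100 coordinates are influential
            return float(np.sum((x[:3] - 0.5) ** 2))

        # REMBO searches y in [-sqrt(d), sqrt(d)]^d; plain random search
        # stands in for the Gaussian-process-driven inner loop here
        best_y = min((rng.uniform(-np.sqrt(d), np.sqrt(d), d) for _ in range(200)),
                     key=lambda y: f(embed(y)))
        print(f(embed(best_y)))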

    Democratizing machine learning

    Machine learning artifacts are increasingly embedded in society, often in the form of automated decision-making processes. One major reason for this, along with methodological improvements, is the increasing accessibility of data, as well as of machine learning toolkits that open up machine learning methodology to non-experts. The core focus of this thesis is exactly this: democratizing access to machine learning in order to enable a wider audience to benefit from its potential.
    Contributions in this manuscript stem from several different areas within this broader field. A major part is dedicated to automated machine learning (AutoML), with the goal of abstracting away the tedious task of obtaining an optimal predictive model for a given dataset. This process mostly consists of finding said optimal model, often through hyperparameter optimization, while the user in turn only selects the appropriate performance metric(s) and validates the resulting models. The process can be improved or sped up by learning from previous experiments. Three such methods are presented in this thesis: one aims to obtain a fixed set of hyperparameter configurations that likely contains good solutions for any new dataset, and two use dataset characteristics to propose new configurations. The thesis furthermore presents a collection of the required experiment metadata and shows how such metadata can be used for the development of, and as a test bed for, new hyperparameter optimization methods. The pervasion of ML-derived models in many aspects of society simultaneously calls for increased scrutiny of how such models shape society and of the biases they may exhibit. This thesis therefore presents an AutoML tool that allows fairness considerations to be incorporated into the search for an optimal model. This requirement for fairness simultaneously poses the question of whether a model's fairness can be reliably estimated, which is studied in a further contribution. Since access to machine learning methods also heavily depends on access to software and toolboxes, several contributions in the form of software are part of this thesis. The mlr3pipelines R package allows models to be embedded in so-called machine learning pipelines that include the pre- and postprocessing steps often required in machine learning and AutoML. The mlr3fairness R package, on the other hand, enables users to audit models for potential biases and to reduce those biases through different debiasing techniques. One such technique, multi-calibration, is published as a separate software package, mcboost.
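    A minimal sketch of the "fixed portfolio" idea described above (the configurations below are illustrative placeholders, not a portfolio learned from experiment metadata): instead of running a full hyperparameter search on a new dataset, a small precomputed set of configurations is evaluated directly and the best performer is kept.

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # a tiny stand-in portfolio of hyperparameter configurations
        PORTFOLIO = [
            {"n_estimators": 100, "max_depth": None, "max_features": "sqrt"},
            {"n_estimators": 300, "max_depth": 10, "max_features": 0.5},
            {"n_estimators": 50, "max_depth": 5, "max_features": 1.0},
        ]

        X, y = load_breast_cancer(return_X_y=True)
        scores = [cross_val_score(RandomForestClassifier(**cfg, random_state=0),
                                  X, y, cv=5).mean() for cfg in PORTFOLIO]
        best = PORTFOLIO[int(np.argmax(scores))]
        print(best, max(scores))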

    Stochastic Algorithms in Riemannian Manifolds and Adaptive Networks

    The combination of adaptive network algorithms and stochastic geometric dynamics has the potential to make a large impact in distributed control and signal processing applications. However, both literatures contain fundamental unsolved problems, and the thesis is accordingly in two main parts. In Part I, we consider stochastic differential equations (SDEs) evolving in a matrix Lie group. Undertaking any kind of statistical signal processing or control task in this setting requires the simulation of such geometric SDEs, a foundational issue that has barely been addressed previously. Chapter 1 contains background and motivation. Chapter 2 develops numerical schemes for simulating SDEs that evolve in SO(n) and SE(n). We propose novel, reliable, efficient schemes based on diagonal Padé approximants, in which each trajectory lies in the respective manifold. We prove first-order convergence in mean uniform squared error using a new proof technique. Simulations for SDEs in SO(50) are provided. In Part II, we study adaptive networks: collections of individual agents (nodes) that cooperate to solve estimation, detection, learning, and adaptation problems in real time from streaming data, without a fusion center. We study general diffusion LMS algorithms, including real-time consensus, for distributed MMSE parameter estimation. This choice is motivated by two major flaws in the literature. First, all existing analyses assume the regressors are white noise, whereas in practice serial correlation is pervasive and much harder to handle than the white-noise case. Second, since the algorithms operate in real time, realization-wise behavior must be considered, and no such results exist. To remedy these flaws, we uncover the mixed-time-scale structure of the algorithms and perform a novel mixed-time-scale stochastic averaging analysis. Chapter 3 contains background and motivation. Realization-wise stability (Chapter 4) and performance, including network MSD, EMSE, and realization-wise fluctuations (Chapter 5), are then studied. We develop results in the difficult but realistic case of serial correlation, and observe that the popular ATC, CTA, and real-time consensus algorithms are remarkably similar in terms of stability and performance for small constant step sizes. Parts III and IV contain conclusions and future work.
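    As a rough illustration of the Part I simulation problem, the sketch below takes Euler-type steps for an SDE on SO(n): increments are drawn in the Lie algebra so(n) (skew-symmetric matrices) and mapped through the matrix exponential, so every iterate stays exactly on the manifold. Names are illustrative; the thesis's schemes use diagonal Padé approximants of the exponential rather than a generic expm call.

        import numpy as np
        from scipy.linalg import expm

        def so_n_step(R, B, sigma, dt, rng):
            """One step for dR = R (B dt + sigma dW) on SO(n), B skew-symmetric."""
            n = R.shape[0]
            dW = rng.standard_normal((n, n)) * np.sqrt(dt)
            xi = (dW - dW.T) / 2.0        # project the noise onto so(n)
            # expm of a skew-symmetric matrix is a rotation, so R stays in SO(n)
            return R @ expm(B * dt + sigma * xi)

        rng = np.random.default_rng(0)
        n = 5
        G = rng.standard_normal((n, n))
        B = (G - G.T) / 2.0               # skew-symmetric drift
        R = np.eye(n)
        for _ in range(100):
            R = so_n_step(R, B, sigma=0.1, dt=1e-2, rng=rng)
        print(np.linalg.norm(R.T @ R - np.eye(n)))   # ~1e-14: still orthogonal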