
    A discussion of data enhancement and optimization techniques for a fund of hedge funds portfolio

    The aim of this thesis is to provide an overview and discussion, illustrated with experiments, of data enhancement and optimization techniques for a fund of hedge funds portfolio. Special emphasis is placed on the interaction of the different data enhancement and optimization techniques. First building blocks for a computer-based asset allocation tool are provided and documented, and ideas for future development and research are presented. Two main points distinguish this thesis from work on similar themes: it operates at the individual fund level, and it covers the whole process, from data enhancement and parameter estimation through optimization to a proper evaluation of the outcomes.

    The first chapter places the topic in the broader context of finance, defines the term “hedge fund” and motivates the relevance of the problem. Besides the rapid growth of the hedge fund industry, the increasing participation of institutional investors is an important reason to provide decision-support methods based on quantitative models and scientific findings. The second chapter deals with data enhancement. The proverb “garbage in, garbage out” holds for every optimization algorithm, but it applies especially to hedge funds, where the data situation is difficult: only monthly return data are available, and information about risk exposures is hard to obtain. After a short literature overview of hedge-fund-specific data problems and biases, descriptive statistics are provided for the two databases used in this thesis. In the data enhancement, special emphasis is put on the high autocorrelation of hedge fund returns and on filling up the short track records of young funds: the former because high autocorrelation contradicts fundamental principles of modern finance, the latter because it leads to a better understanding of a fund's risk profile. For the purpose of filling up track records, factor model approaches and the use of cluster analysis are proposed. After a short overview of the risk factors proposed in the literature, the modeling of non-linear dependencies, for example via option structures, is discussed in more depth, as this topic is central to the thesis. Important original contributions in this context are the economic motivation and interpretation of the favored option structure model as well as first experiments on automatic model selection and on integrating qualitative data via cluster analysis.

    The third chapter is devoted to optimization. The main challenge is that hedge fund returns are generally not normally distributed; since the traditional concepts of finance rest exactly on the assumption of normally distributed returns, alternative concepts have to be used. After a short overview of classical mean-variance optimization and ways to obtain more robust results, essentially two alternative concepts are introduced: parametric approaches, which take higher moments (skewness and kurtosis) into account, and non-parametric approaches, which work with historical or simulated scenarios and the discrete distributions resulting from them. With the second approach, the investor's preferences can be captured via a dispersion measure, a quantile measure, or a combination of both. Then different ways of using simple linear and more complex logical constraints are considered, and it is discussed how the presented concepts can be integrated, in particular which data enhancement techniques fit which optimization techniques. In the last part of the third chapter, extensive optimization experiments are conducted and the outcomes are interpreted. The central findings are that the choice of risk measure has little impact on the ultimate evaluation criterion, the risk-adjusted out-of-sample performance, whereas filling up short track records significantly improves out-of-sample risk. Finally, the findings are summarized and an outlook on future research is given.
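
    As a concrete illustration of the non-parametric, scenario-based approach with a quantile risk measure, the sketch below minimizes portfolio CVaR over monthly return scenarios using the standard Rockafellar-Uryasev linear program. It is a minimal stand-in, not the thesis' implementation; the function name, the confidence level and the simulated scenario data are assumptions for illustration.

```python
# A minimal sketch (not the thesis' implementation) of the scenario-based approach
# with a quantile risk measure: minimising portfolio CVaR over monthly return
# scenarios via the standard Rockafellar-Uryasev linear program.
import numpy as np
from scipy.optimize import linprog

def min_cvar_weights(scenarios, beta=0.95, target_return=None):
    """scenarios: (S, n) array of monthly fund returns (here: hypothetical data)."""
    S, n = scenarios.shape
    # decision vector x = [w_1..w_n, alpha, u_1..u_S]
    c = np.concatenate([np.zeros(n), [1.0], np.ones(S) / ((1 - beta) * S)])
    # u_s >= -r_s.w - alpha   <=>   -r_s.w - alpha - u_s <= 0
    A_ub = np.hstack([-scenarios, -np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    A_eq = [np.concatenate([np.ones(n), [0.0], np.zeros(S)])]   # weights sum to one
    b_eq = [1.0]
    if target_return is not None:                               # optional expected-return floor
        A_ub = np.vstack([A_ub, np.concatenate([-scenarios.mean(axis=0), [0.0], np.zeros(S)])])
        b_ub = np.append(b_ub, -target_return)
    bounds = [(0, 1)] * n + [(None, None)] + [(0, None)] * S    # long-only fund weights
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n]

# toy usage with simulated scenarios for five funds over ten years of monthly data
rng = np.random.default_rng(0)
weights = min_cvar_weights(rng.normal(0.005, 0.02, size=(120, 5)), beta=0.95)
```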

    Fast Non-Rigid Radiance Fields from Monocularized Data

    The reconstruction and novel view synthesis of dynamic scenes have recently gained increased attention. As reconstruction from large-scale multi-view data involves immense memory and computational requirements, recent benchmark datasets provide collections of single monocular views per timestamp sampled from multiple (virtual) cameras. We refer to this form of input as "monocularized" data. Existing work shows impressive results for synthetic setups and forward-facing real-world data, but is often limited in training speed and in the angular range for generating novel views. This paper addresses these limitations and proposes a new method for full 360° inward-facing novel view synthesis of non-rigidly deforming scenes. At the core of our method are: 1) an efficient deformation module that decouples the processing of spatial and temporal information for accelerated training and inference; and 2) a static module representing the canonical scene as a fast hash-encoded neural radiance field. In addition to existing synthetic monocularized data, we systematically analyze the performance on real-world inward-facing scenes using a newly recorded challenging dataset sampled from a synchronized large-scale multi-view rig. In both cases, our method is significantly faster than previous methods, converging in less than 7 minutes and achieving real-time framerates at 1K resolution, while obtaining a higher visual accuracy for generated novel views. Our source code and data are available at our project page: https://graphics.tu-bs.de/publications/kappel2022fast. Comment: 18 pages, 14 figures.
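
    The abstract describes two modules: a deformation module that decouples spatial and temporal processing, and a static hash-encoded canonical radiance field. The sketch below illustrates only the first idea in a minimal form; the layer sizes, the per-frame latent codes and the class name are assumptions, not the authors' architecture.

```python
# A minimal sketch (layer sizes, names and latent codes are assumptions, not the
# authors' architecture) of a deformation module that decouples spatial and temporal
# processing: a per-timestamp latent code (temporal branch) is combined with features
# of the sample point (spatial branch) to predict an offset into the canonical scene,
# which a static hash-encoded radiance field (not shown) would then evaluate.
import torch
import torch.nn as nn

class DecoupledDeformation(nn.Module):
    def __init__(self, num_frames, latent_dim=32, pos_dim=3, hidden=64):
        super().__init__()
        self.frame_codes = nn.Embedding(num_frames, latent_dim)    # temporal branch
        self.spatial_mlp = nn.Sequential(                          # spatial branch
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.offset_head = nn.Linear(hidden + latent_dim, 3)       # fused -> 3D offset

    def forward(self, xyz, frame_idx):
        z = self.frame_codes(frame_idx)                  # (B, latent_dim)
        h = self.spatial_mlp(xyz)                        # (B, hidden)
        offset = self.offset_head(torch.cat([h, z], dim=-1))
        return xyz + offset                              # sample point in canonical space

# toy usage: deform 1024 sample points taken at frame 5
deform = DecoupledDeformation(num_frames=100)
canonical_xyz = deform(torch.rand(1024, 3), torch.full((1024,), 5, dtype=torch.long))
```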

    Decision Making by a Neuromorphic Network of Volatile Resistive Switching Memories

    The need for electronic devices that operate on biologically relevant time scales with a small footprint has boosted research into a new class of emerging memories. Ag-based volatile resistive switching memories (RRAMs) feature a spontaneous change of device conductance that resembles biological mechanisms. They rely on the formation and self-disruption of a metallic conductive filament through an oxide layer, with a retention time ranging from a few milliseconds to several seconds, widely tunable via the maximum current flowing through the device. Here we demonstrate a neuromorphic system based on volatile RRAMs that mimics the principles of biological decision-making behavior and tackles the two-alternative forced choice problem, in which a subject is asked to choose between two possible alternatives relying not on precise knowledge of the problem but on noisy perceptions.
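
    As a minimal illustration of the two-alternative forced choice setting the network addresses, the sketch below accumulates two noisy evidence streams until one crosses a threshold, in the spirit of race and drift-diffusion models of decision making. It models the abstract task only; drift rates, noise level and threshold are assumed values, and it says nothing about the RRAM circuit itself.

```python
# A minimal sketch of the two-alternative forced choice task itself (drift rates,
# noise level and threshold are assumed values; this is not the RRAM circuit model):
# two noisy evidence streams are accumulated until one crosses a decision threshold,
# in the spirit of race / drift-diffusion models of decision making.
import numpy as np

def two_afc_trial(drift_a=0.02, drift_b=0.01, noise=0.1, threshold=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    evidence = np.zeros(2)
    steps = 0
    while np.all(evidence < threshold):
        evidence[0] += drift_a + noise * rng.standard_normal()   # alternative A
        evidence[1] += drift_b + noise * rng.standard_normal()   # alternative B
        steps += 1
    return int(np.argmax(evidence)), steps   # chosen alternative, decision time

choice, decision_time = two_afc_trial()
```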

    Block-local learning with probabilistic latent representations

    The ubiquitous backpropagation algorithm requires sequential updates across blocks of a network, introducing a locking problem. Moreover, backpropagation relies on the transpose of weight matrices to calculate updates, introducing a weight transport problem across blocks. Both these issues prevent efficient parallelisation and horizontal scaling of models across devices. We propose a new method that introduces a twin network that propagates information backwards from the targets to the input to provide auxiliary local losses. Forward and backward propagation can work in parallel and with different sets of weights, addressing the problems of weight transport and locking. Our approach derives from a statistical interpretation of end-to-end training which treats activations of network layers as parameters of probability distributions. The resulting learning framework uses these parameters locally to assess the matching between forward and backward information. Error backpropagation is then performed locally within each block, leading to "block-local" learning. Several previously proposed alternatives to error backpropagation emerge as special cases of our model. We present results on various tasks and architectures, including transformers, demonstrating state-of-the-art performance using block-local learning. These results provide a new principled framework to train very large networks in a distributed setting and can also be applied in neuromorphic systems.
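
    A rough sketch of the general scheme described above: a forward network split into blocks, a backward twin network that carries the target towards the input, and purely local losses so that no gradient or transposed weight crosses block boundaries. The MSE matching and the target-propagation-style twin loss below are simplifying assumptions standing in for the probabilistic matching the abstract describes; none of the names or sizes come from the paper.

```python
# A rough sketch of block-local learning with a backward twin network (a simplification:
# MSE matching and a target-propagation style twin loss stand in for the probabilistic
# matching described above; names and sizes are assumptions). Each block's input is
# detached, so no gradient and no transposed weight matrix ever crosses a block boundary.
import torch
import torch.nn as nn

blocks = nn.ModuleList([nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(3)])
twins  = nn.ModuleList([nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(3)])
opt = torch.optim.Adam(list(blocks.parameters()) + list(twins.parameters()), lr=1e-3)

def train_step(x, y_embed):
    # forward pass: keep each block's output, detaching between blocks
    acts = [x]
    for blk in blocks:
        acts.append(blk(acts[-1].detach()))
    # backward pass of the twin network: carry the target towards the input
    targets = [y_embed]
    for twin in reversed(twins):
        targets.insert(0, twin(targets[0]).detach())
    # local losses: (1) each block output is pulled towards the twin's backward target,
    # (2) each twin learns to map the next depth's activation back to the previous one
    block_loss = sum(nn.functional.mse_loss(a, t)
                     for a, t in zip(acts[1:], targets[1:]))
    twin_loss = sum(nn.functional.mse_loss(twins[i](acts[i + 1].detach()), acts[i].detach())
                    for i in range(len(twins)))
    loss = block_loss + twin_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# toy usage: inputs and target embeddings share the block width (an assumption)
loss = train_step(torch.randn(8, 32), torch.randn(8, 32))
```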

    Depth Augmented Omnidirectional Stereo for 6-DoF VR Photography

    We present an end-to-end pipeline that enables head-motion parallax for omnidirectional stereo (ODS) panoramas. Based on an ODS panorama containing a left and right eye view, our method estimates dense horizontal disparity fields between the stereo image pair. From this, we calculate a depth augmented stereo panorama (DASP) by explicitly reconstructing the scene geometry from the viewing circle corresponding to the ODS representation. The generated DASP representation supports motion parallax within the ODS viewing circle. Our approach operates directly on existing ODS panoramas. The experiments indicate the robustness and versatility of our approach on multiple real-world ODS panoramas. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 66599.
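
    Under the idealized ODS camera model, where left- and right-eye rays are tangent to a viewing circle of radius r, a horizontal angular disparity delta corresponds to a depth of r / sin(delta / 2). The sketch below applies this relation; the function name, the 32 mm radius and the example disparity are assumptions for illustration, not the paper's pipeline.

```python
# A minimal sketch under the idealized ODS camera model (the function name, the 32 mm
# viewing-circle radius and the example disparity are assumptions, not the paper's
# pipeline): left- and right-eye rays are tangent to a viewing circle of radius r,
# so a horizontal angular disparity delta corresponds to depth r / sin(delta / 2).
import numpy as np

def ods_disparity_to_depth(disparity_rad, viewing_circle_radius=0.032):
    """disparity_rad: horizontal angular disparity in radians (scalar or array)."""
    d = np.clip(np.asarray(disparity_rad, dtype=float), 1e-6, None)   # guard points at infinity
    return viewing_circle_radius / np.sin(d / 2.0)

# example: a disparity of 0.5 degrees corresponds to roughly 7.3 m under these assumptions
depth = ods_disparity_to_depth(np.deg2rad(0.5))
```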

    Association between sexually transmitted disease and church membership: a retrospective cohort study of two Danish religious minorities

    OBJECTIVES: Studies comprising Danish Seventh-day Adventists (SDAs) and Danish Baptists found that members have a lower risk of chronic diseases including cancer. Explanations have pointed to differences in lifestyle, but the detailed aetiology has only been sparsely examined. Our objective was to investigate the incidence of sexually transmitted diseases (STDs) among Danish SDAs and Baptists as a proxy for cancers related to sexual behaviour. METHODS: We followed the Danish Cohort of Religious Societies from 1977 to 2009, and linked it with national registers of all inpatient and outpatient care contacts using the National Patient Register. We compared the incidence of syphilis, gonorrhoea and chlamydia among members of the cohort with that of the general population. RESULTS: The cohort comprised 3119 SDA females, 1856 SDA males, 2056 Baptist females and 1467 Baptist males. For the entire cohort, we expected a total of 32.4 STD events and observed only 9. Female SDAs and Baptists aged 20–39 years had a significantly lower incidence of chlamydia (both p<0.001). Male SDAs and Baptists aged 20–39 years also had a significantly lower incidence of chlamydia (p<0.01 and p<0.05, respectively). No SDA members were diagnosed with gonorrhoea, although 3.4 events were expected, which, according to Hanley's ‘rule of three’, is a significant difference. No SDA or Baptist was diagnosed with syphilis. CONCLUSIONS: The cohort shows a significantly lower incidence of STDs, most likely including human papillomavirus infection, which may partly explain the lower incidence of cancers of the cervix, rectum, anus, head and neck.
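
    A small numerical check of the gonorrhoea result quoted above: with 3.4 events expected and zero observed, the Poisson probability of seeing no events is exp(-3.4) ≈ 0.033, below the conventional 5% level, which is the logic behind Hanley's "rule of three".

```python
# A small numerical check using the figures quoted above (illustrative only): with 3.4
# gonorrhoea events expected and zero observed, the Poisson probability of seeing no
# events is exp(-3.4) ~ 0.033 < 0.05, which is the logic behind Hanley's "rule of three"
# (zero observed events is inconsistent, at roughly the 95% level, with an expectation of 3 or more).
import math

expected_events = 3.4
p_zero = math.exp(-expected_events)          # P(observe 0 | Poisson mean 3.4)
print(f"P(0 events) = {p_zero:.3f}")         # ~ 0.033
```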

    In-situ quantification of manufacturing-induced strains in fiber metal laminates with strain gages

    The predominant use of FBG sensors to characterize the residual stress state in composite materials has to date not permitted absolute strain measurements. The reason for this is the loss of the connection between the sensor and the laminate during phase transitions of the resin. Thus, points of significant change in the measurement signal (e.g. the bonding temperature) need to be used for the residual stress evaluation. For fiber metal laminates (FML), however, strain gages applied to the metal layer allow absolute strain measurements, since the metal behaves purely elastically over the entire manufacturing process. Hence, residual stresses in the metal layer of an FML can be quantified directly. Although the sensors are applied to the metal layer, it is shown that the cure state of the resin can still be analyzed via changes in the coefficient of thermal expansion. On this basis, the effects of different modifications to the cure cycle are assessed in terms of residual stress reduction. It is shown that assuming the bonding temperature to be equal to the stress-free temperature results in a conservative estimate of the residual stress state. The strain gage signal is shown to be in good agreement with FBG sensor data in a combined experiment.
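
    Because the metal layer stays linear elastic over the whole cure cycle, the residual stress can be estimated directly from the gage reading by removing the free thermal strain of the metal and applying Hooke's law. The sketch below shows this textbook relation in its simplest uniaxial form; the aluminium material values and the example numbers are assumptions, not data from the paper.

```python
# A minimal sketch (textbook relations; the aluminium values and example numbers are
# assumptions, not data from the paper): since the metal layer stays linear elastic,
# the residual stress follows from the gage reading by removing the free thermal
# strain of the metal and applying Hooke's law in its simplest uniaxial form.
E_AL = 72e9        # Young's modulus of an aluminium layer [Pa] (assumed)
ALPHA_AL = 23e-6   # coefficient of thermal expansion of aluminium [1/K] (assumed)

def residual_stress(measured_strain, delta_T, E=E_AL, alpha=ALPHA_AL):
    """measured_strain: total strain from the gage [-]; delta_T: temperature change [K]."""
    mechanical_strain = measured_strain - alpha * delta_T   # strip free thermal expansion
    return E * mechanical_strain                            # uniaxial Hooke's law [Pa]

# example: total strain of -2500 microstrain after a 90 K cool-down
sigma = residual_stress(measured_strain=-2.5e-3, delta_T=-90.0)   # about -31 MPa
```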