
    Risikofaktoren und Multifaktormodelle für den Deutschen Aktienmarkt (Risk Factors and Multi-Factor Models for the German Stock Market)

    For the last 15 years, the German stock market has faced substantial changes, resulting among other things in increasing internationalization and a markedly higher free float. In this paper, we investigate to what extent these changes influenced the risk factors known from standard multi-factor models. Based on the returns of all stocks in the German composite index CDAX (all domestic companies of the Prime and General Standard of the Frankfurt Stock Exchange) from July 1996 to June 2011, we document four major results. First, we find an insignificantly positive market risk premium, a significantly negative size premium, a significantly positive value premium and a significantly positive momentum premium. Second, the four factors are only weakly or even negatively correlated with one another, and in part only weakly correlated with their international counterparts. Third, returns of stock portfolios sorted by market capitalization and book-to-market equity are explained substantially better by the Fama/French (1993) three-factor model than by a one-factor model based on the standard Capital Asset Pricing Model, while the additional explanatory contribution of the momentum factor in the spirit of Carhart (1997) is marginal. Finally, against the background of the existing literature and our results, we argue for a country-specific extension of the Capital Asset Pricing Model.
    Keywords: CAPM, multi-factor models, asset pricing, asset pricing anomalies, anomalies, Fama French, Carhart, risk factors, value, size, momentum, Germany
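The time-series regression behind the three-factor comparison described in this abstract can be sketched as follows. All numbers here are simulated placeholders, not the paper's CDAX data; the factor names (MKT, SMB, HML) follow Fama/French (1993).

```python
import numpy as np

# Hypothetical monthly excess returns for one test portfolio and the three
# Fama/French (1993) factors: market excess return (MKT), size (SMB) and
# value (HML). Real inputs would be CDAX portfolio and factor returns.
rng = np.random.default_rng(0)
n = 180  # July 1996 - June 2011: 15 years of monthly observations
mkt = rng.normal(0.004, 0.05, n)
smb = rng.normal(-0.002, 0.03, n)  # the paper reports a negative size premium
hml = rng.normal(0.004, 0.03, n)
portfolio = 0.001 + 1.0 * mkt + 0.5 * smb + 0.3 * hml + rng.normal(0, 0.01, n)

# OLS of the time-series regression: r_p - r_f = alpha + b*MKT + s*SMB + h*HML + e
X = np.column_stack([np.ones(n), mkt, smb, hml])
coef, *_ = np.linalg.lstsq(X, portfolio, rcond=None)
alpha, b, s, h = coef
resid = portfolio - X @ coef
r2 = 1 - resid.var() / portfolio.var()
print(f"alpha={alpha:.4f} b={b:.2f} s={s:.2f} h={h:.2f} R2={r2:.2f}")
```

A one-factor CAPM regression is the same fit with only the intercept and MKT columns; the paper's comparison amounts to asking how much the added SMB and HML columns raise the explained variance.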

    Developing an instrument to assess the endoscopic severity of ulcerative colitis: The Ulcerative Colitis Endoscopic Index of Severity (UCEIS)

    Full list of Investigators is given at the end of the article. Background: Variability in endoscopic assessment necessitates rigorous investigation of descriptors for scoring severity of ulcerative colitis (UC). Objective: To evaluate variation in the overall endoscopic assessment of severity and the intra- and interindividual variation of descriptive terms, and to create an Ulcerative Colitis Endoscopic Index of Severity which could be validated. Design: A two-phase study used a library of 670 video sigmoidoscopies from patients with Mayo Clinic scores 0-11, supplemented by 10 videos from five people without UC and five hospitalised patients with acute severe UC. In phase 1, each of 10 investigators viewed 16/24 videos to assess agreement on the Baron score with a central reader and agreed definitions of 10 endoscopic descriptors. In phase 2, each of 30 different investigators rated 25/60 different videos for the descriptors and assessed overall severity on a 0-100 visual analogue scale. κ statistics tested inter- and intraobserver variability for each descriptor. A general linear mixed regression model based on a logit link and β distribution of variance was used to predict overall endoscopic severity from descriptors. Results: There was 76% agreement for 'severe', but only 27% agreement for 'normal' appearances between phase 1 investigators and the central reader. In phase 2, weighted κ values ranged from 0.34 to 0.65 within observers and from 0.30 to 0.45 between observers for the 10 descriptors. The final model incorporated vascular pattern (normal/patchy/complete obliteration), bleeding (none/mucosal/luminal mild/luminal moderate or severe) and erosions and ulcers (none/erosions/superficial/deep), each with precise definitions, which explained 90% of the variance (pR2, Akaike Information Criterion) in the overall assessment of endoscopic severity, with predictions varying from 4 to 93 on a 100-point scale (from normal to worst endoscopic severity).
Conclusion: The Ulcerative Colitis Endoscopic Index of Severity accurately predicts the overall assessment of endoscopic severity of UC. Validity and responsiveness need further testing before it can be applied as an outcome measure in clinical trials or clinical practice.

    The ELIXIR Human Copy Number Variations Community: building bioinformatics infrastructure for research

    Copy number variations (CNVs) are major causative contributors to both genetic diseases and human neoplasias. While high-throughput sequencing technologies are increasingly becoming the primary choice for genomic screening analysis, their ability to detect CNVs efficiently is still heterogeneous and needs further development. The aim of this white paper is to provide a guiding framework for the future contributions of ELIXIR's recently established human CNV Community, with implications beyond human disease diagnostics and population genomics. This white paper is the direct result of a strategy meeting that took place in September 2018 in Hinxton (UK) and involved representatives of 11 ELIXIR Nodes. The meeting led to the definition of priority objectives and tasks, addressing a wide range of CNV-related challenges from detection and interpretation to sharing and training. Here, we provide suggestions on how to align these tasks within the ELIXIR Platforms strategy, and on how to frame the activities of this new ELIXIR Community in the international context.

    Expansion of the Human Phenotype Ontology (HPO) knowledge base and resources.

    The Human Phenotype Ontology (HPO)—a standardized vocabulary of phenotypic abnormalities associated with 7000+ diseases—is used by thousands of researchers, clinicians, informaticians and electronic health record systems around the world. Its detailed descriptions of clinical abnormalities and computable disease definitions have made the HPO the de facto standard for deep phenotyping in the field of rare disease. The HPO's interoperability with other ontologies has enabled it to be used to improve diagnostic accuracy by incorporating model organism data. It also plays a key role in the popular Exomiser tool, which identifies potential disease-causing variants from whole-exome or whole-genome sequencing data. Since the HPO was first introduced in 2008, its users have become both more numerous and more diverse. To meet these emerging needs, the project has added new content, language translations, mappings and computational tooling, as well as integrations with external community data. The HPO continues to collaborate with clinical adopters to improve specific areas of the ontology and extend standardized disease descriptions. The newly redesigned HPO website (www.human-phenotype-ontology.org) simplifies browsing terms and exploring clinical features, diseases and human genes.

    The Human Phenotype Ontology in 2024: phenotypes around the world.

    The Human Phenotype Ontology (HPO) is a widely used resource that comprehensively organizes and defines the phenotypic features of human disease, enabling computational inference and supporting genomic and phenotypic analyses through semantic similarity and machine learning algorithms. The HPO has widespread applications in clinical diagnostics and translational research, including genomic diagnostics, gene-disease discovery and cohort analytics. In recent years, groups around the world have developed translations of the HPO from English into other languages, and the HPO browser has been internationalized, allowing users to view HPO term labels—and in many cases synonyms and definitions—in ten languages in addition to English. Since our last report, a total of 2,239 new HPO terms and 49,235 new HPO annotations were developed, many in collaboration with external groups in the fields of psychiatry, arthrogryposis, immunology and cardiology. The Medical Action Ontology (MAxO) is a new effort to model treatments and other measures taken for clinical management. Finally, the HPO consortium is contributing to efforts to integrate the HPO and the GA4GH Phenopacket Schema into electronic health records (EHRs), with the goal of more standardized and computable integration of rare disease data in EHRs.
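The semantic-similarity analyses mentioned in this abstract typically score one set of ontology terms against another via their shared ancestors. A toy sketch over an invented miniature ontology (the term names and hierarchy below are illustrative placeholders, not real HPO terms or IDs):

```python
# Toy semantic-similarity sketch over a miniature ontology.
# The hierarchy is invented for illustration; real analyses would load
# the HPO and use measures such as Resnik or Lin similarity.
parents = {
    "short_femur": {"limb_anomaly"},
    "polydactyly": {"limb_anomaly"},
    "limb_anomaly": {"phenotypic_abnormality"},
    "seizure": {"neuro_anomaly"},
    "neuro_anomaly": {"phenotypic_abnormality"},
    "phenotypic_abnormality": set(),
}

def ancestors(term):
    """A term's ancestor set, including the term itself."""
    seen, stack = {term}, [term]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def jaccard(terms_a, terms_b):
    """Jaccard similarity of the ancestor closures of two term sets."""
    a = set().union(*(ancestors(t) for t in terms_a))
    b = set().union(*(ancestors(t) for t in terms_b))
    return len(a & b) / len(a | b)

print(jaccard({"short_femur"}, {"polydactyly"}))  # related limb terms score higher
print(jaccard({"short_femur"}, {"seizure"}))      # unrelated terms score lower
```

Two terms sharing a nearby ancestor (here, the invented "limb_anomaly") score higher than terms whose only common ancestor is the root, which is the intuition behind phenotype-driven matching of patients to diseases.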

    The seeds of divergence: the economy of French North America, 1688 to 1760

    Generally, Canada has been ignored in the literature on the colonial origins of divergence, with most of the attention going to the United States. Late nineteenth century estimates of income per capita show that Canada was relatively poorer than the United States and that within Canada, the French and Catholic population of Quebec was considerably poorer. Was this gap long-standing? Some evidence has been advanced for earlier periods, but it is quite limited and not well-suited for comparison with other societies. This thesis aims to contribute both to Canadian economic history and to comparative work on inequality across nations during the early modern period. With the use of novel prices and wages from Quebec—which was then the largest settlement in Canada and under French rule—a price index, a series of real wages and a measurement of Gross Domestic Product (GDP) are constructed. They are used to shed light both on the course of economic development until the French were defeated by the British in 1760 and on standards of living in that colony relative to the mother country, France, as well as the American colonies. The work is divided into three components. The first component relates to the construction of a price index. The absence of such an index has been a thorn in the side of Canadian historians, limiting their ability to obtain real values of wages, output and living standards. This index shows that prices did not follow any trend and remained at a stable level. However, there were episodes of wide swings—mostly due to wars and the monetary experiment of playing card money. The creation of this index lays the foundation of the next component. The second component constructs a standardized real wage series in the form of welfare ratios (annual earnings—the nominal wage rate multiplied by the length of the work year—divided by the cost of a consumption basket) to compare Canada with France, England and Colonial America. Two measures are derived.
The first relies on a “bare bones” definition of consumption with a large share of land-intensive goods. This measure indicates that Canada was poorer than England and Colonial America and not appreciably richer than France. However, this measure overestimates the relative position of Canada to the Old World because of the strong presence of land-intensive goods. A second measure is created using a “respectable” definition of consumption in which the basket includes a larger share of manufactured goods and capital-intensive goods. This second basket better reflects differences in living standards since the abundance of land in Canada (and Colonial America) made it easy to achieve bare subsistence, but the scarcity of capital and skilled labor made the consumption of luxuries and manufactured goods (clothing, lighting, imported goods) highly expensive. With this measure, the advantage of New France over France evaporates and turns slightly negative. In comparison with Britain and Colonial America, the gap widens appreciably. This element is the most important for future research. By showing a reversal because of a shift to a different type of basket, it shows that Old World and New World comparisons are very sensitive to how we measure the cost of living. Furthermore, there are no sustained improvements in living standards over the period regardless of the measure used. Gaps in living standards observed later in the nineteenth century existed as far back as the seventeenth century. In a wider American perspective that includes the Spanish colonies, Canada fares better. The third component computes a new series for Gross Domestic Product (GDP). This is to avoid problems associated with using real wages in the form of welfare ratios which assume a constant labor supply. This assumption is hard to defend in the case of Colonial Canada as there were many signs of increasing industriousness during the eighteenth and nineteenth centuries. 
The GDP series suggests no long-run trend in living standards (from 1688 to circa 1765). The long peace era of 1713 to 1740 was marked by modest economic growth which offset a steady decline that had started in 1688, but by 1760 (as a result of constant warfare) living standards had sunk below their 1688 levels. These developments are accompanied by observations that suggest that other indicators of living standards also declined. The flat-lining of incomes is accompanied by substantial increases in the amount of time worked, rising mortality and rising infant mortality. In addition, comparisons of incomes with the American colonies confirm the results obtained with wages—Canada was considerably poorer. At the end, a long conclusion provides an exploratory discussion of why Canada would have diverged early on. In structural terms, it is argued that the French colony was plagued by the problem of a small population which prohibited the existence of scale effects. In combination with the fact that it was dispersed throughout the territory, the small population of New France limited the scope for specialization and economies of scale. However, this problem was in part created, and in part aggravated, by institutional factors like seigneurial tenure. The colonial origins of French America's divergence from the rest of North America are thus partly institutional.
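The welfare-ratio measure used in the thesis's second component can be illustrated with a minimal sketch; all figures below are hypothetical placeholders, not the thesis's estimates.

```python
# Welfare ratio in the Allen tradition: annual earnings divided by the
# annual cost of a reference consumption basket. All numbers here are
# invented for illustration.

def welfare_ratio(daily_wage, days_worked_per_year, annual_basket_cost):
    """Annual income relative to the annual cost of the basket.

    A ratio of 1.0 means a worker earns exactly enough to buy the
    reference basket; higher is better.
    """
    return (daily_wage * days_worked_per_year) / annual_basket_cost

# A 'bare bones' basket (land-intensive goods, cheap where land is
# abundant) versus a 'respectable' basket (more manufactured and
# capital-intensive goods, expensive in a capital-scarce colony).
wage, days = 1.0, 250  # hypothetical daily wage and work year
bare_bones = welfare_ratio(wage, days, 120.0)    # hypothetical basket cost
respectable = welfare_ratio(wage, days, 320.0)   # hypothetical basket cost
print(f"bare bones: {bare_bones:.2f}, respectable: {respectable:.2f}")
```

Because the two baskets price different bundles, the ranking of colonies can flip when the basket changes, which is the thesis's point about the sensitivity of Old World vs. New World comparisons to how the cost of living is measured.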
