
    A Complete Enumeration and Classification of Two-Locus Disease Models

    There are 512 two-locus, two-allele, two-phenotype, fully penetrant disease models. Under permutations of the two alleles at a locus, of the two loci, and of the affected and unaffected phenotypes, one model can be equivalent to another; these permutations greatly reduce the number of distinct two-locus models in the analysis of complex diseases. This paper determines the number of non-redundant two-locus models, which can be 102, 100, 96, 51, 50, or 48, depending on which permutations are used and on whether zero-locus and single-locus models are excluded. Whenever possible, these non-redundant models are classified by their properties. Besides the familiar multiplicative models (logical AND), heterogeneity models (logical OR), and threshold models, new classifications are added or expanded: modifying-effect models, logical XOR models, interference and negative-interference models (neither dominant nor recessive), conditionally dominant/recessive models, missing-lethal-genotype models, and highly symmetric models. The following aspects of two-locus models are studied: the marginal penetrance tables at both loci, the expected joint identity-by-descent (IBD) probabilities, and the correlation between the marginal IBD probabilities at the two loci. These results are useful for linkage analyses that apply single-locus models when the underlying disease model is two-locus, and for correlating linkage signals obtained at different locations with a single-locus model. (To appear in Human Heredity.)
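    The quoted class counts can be checked by brute force: encode a model as a 3×3 binary penetrance table (genotypes at one locus by genotypes at the other) and count orbits under the stated permutations. The sketch below is our illustration of that computation, not the paper's code; the table encoding and function names are our own. It reproduces the 102 (allele and locus permutations) and 51 (adding the phenotype permutation) counts.

```python
# Brute-force orbit count over all 512 fully penetrant two-locus models.
# A model is a 3x3 tuple-of-tuples of 0/1 penetrances; this encoding is an
# illustrative assumption, not the paper's notation.
from itertools import product

def row_rev(m):    return m[::-1]                        # swap alleles at locus A
def col_rev(m):    return tuple(r[::-1] for r in m)      # swap alleles at locus B
def transpose(m):  return tuple(zip(*m))                 # swap the two loci
def complement(m): return tuple(tuple(1 - x for x in r) for r in m)  # swap phenotypes

def orbit(m, gens):
    """All models reachable from m by composing the given permutations."""
    seen, stack = {m}, [m]
    while stack:
        x = stack.pop()
        for g in gens:
            y = g(x)
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

models = [tuple(tuple(bits[3 * i + j] for j in range(3)) for i in range(3))
          for bits in product((0, 1), repeat=9)]         # all 2**9 = 512 tables

for label, gens in [("alleles + loci", (row_rev, col_rev, transpose)),
                    ("alleles + loci + phenotypes",
                     (row_rev, col_rev, transpose, complement))]:
    n_classes = len({min(orbit(m, gens)) for m in models})
    print(f"{label}: {n_classes} non-redundant models")  # prints 102, then 51
```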

    Can GNSS reflectometry detect precipitation over oceans?

    For the first time, a rain signature in Global Navigation Satellite System Reflectometry (GNSS-R) observations is demonstrated. Because forward quasi-specular scattering relies on surface gravity waves with lengths larger than several wavelengths of the reflected signal, it is commonly concluded that scatterometric GNSS-R measurements are not sensitive to the small-scale surface roughness generated by raindrops impinging on the ocean surface. On the contrary, this study presents evidence that the bistatic radar cross section σ0 derived from TechDemoSat-1 data is reduced by rain at weak winds, below ≈ 6 m/s. The decrease is as large as ≈ 0.7 dB at a wind speed of 3 m/s for precipitation of 0–2 mm/hr. Simulations based on recently published scattering theory provide a plausible explanation for this phenomenon, which potentially enables the GNSS-R technique to detect precipitation over oceans at low winds.
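    As a unit check on the quoted figure (our arithmetic, not part of the study): a 0.7 dB reduction corresponds to a linear factor of

$$ \frac{\sigma_0^{\mathrm{rain}}}{\sigma_0} = 10^{-0.7/10} \approx 0.85, $$

    i.e. roughly a 15% drop in the linear bistatic radar cross section.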

    Gap Filling in the Plant Kingdom—Trait Prediction Using Hierarchical Probabilistic Matrix Factorization

    Plant traits are key to understanding and predicting the adaptation of ecosystems to environmental changes, which motivates the TRY project: an effort to construct a global database of plant traits and make it a standard resource for the ecological community. Despite its unprecedented coverage, a large percentage of missing data substantially constrains joint trait analysis. Meanwhile, the trait data are characterized by the hierarchical phylogenetic structure of the plant kingdom. While factorization-based matrix completion techniques have been widely used to address missing data, traditional matrix factorization methods cannot leverage this phylogenetic structure. We propose hierarchical probabilistic matrix factorization (HPMF), which effectively uses hierarchical phylogenetic information for trait prediction. Through experiments we demonstrate HPMF's high accuracy, the effectiveness of incorporating hierarchical structure, and its ability to capture trait correlation. (Appears in Proceedings of the 29th International Conference on Machine Learning, ICML 2012.)
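    As a rough illustration of the idea, the sketch below fits a trait matrix with missing entries while shrinking each plant's latent factor toward a genus-level factor. It is a minimal sketch under assumptions we introduce (a single taxonomy level, squared loss, made-up sizes and hyperparameters), not the authors' fully probabilistic, multi-level method.

```python
# Toy hierarchical matrix factorization: plant factors U are regularized
# toward their genus factor G[genus], mimicking phylogenetic shrinkage.
import numpy as np

rng = np.random.default_rng(0)
n_plants, n_traits, n_genera, k = 200, 12, 15, 4
genus = rng.integers(0, n_genera, size=n_plants)      # toy taxonomy assignment

# Simulate a trait matrix and hide 60% of the entries.
G_true = rng.normal(size=(n_genera, k))
U_true = G_true[genus] + 0.3 * rng.normal(size=(n_plants, k))
V_true = rng.normal(size=(n_traits, k))
X = U_true @ V_true.T
mask = rng.random(X.shape) < 0.4                      # True where observed

U = 0.1 * rng.normal(size=(n_plants, k))              # plant factors
V = 0.1 * rng.normal(size=(n_traits, k))              # trait factors
G = np.zeros((n_genera, k))                           # genus-level factors
lam_u, lam_v, lr = 0.1, 0.1, 0.01

for epoch in range(500):
    R = np.where(mask, U @ V.T - X, 0.0)              # residual on observed cells
    U -= lr * (R @ V + lam_u * (U - G[genus]))        # shrink toward genus factor
    V -= lr * (R.T @ U + lam_v * V)
    for g in range(n_genera):                         # genus factor = member mean
        members = genus == g
        if members.any():
            G[g] = U[members].mean(axis=0)

rmse = np.sqrt(((U @ V.T - X) ** 2)[~mask].mean())    # error on held-out cells
print(f"held-out RMSE: {rmse:.3f}")
```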

    Evaluating Impact of Rain Attenuation on Space-borne GNSS Reflectometry Wind Speeds

    The novel space-borne Global Navigation Satellite System Reflectometry (GNSS-R) technique has recently shown promise for monitoring the ocean state and surface wind speed with high spatial coverage and an unprecedented sampling rate. The L-band GNSS signals can provide higher-quality observations from areas covered by dense clouds and under intense precipitation than the higher-frequency signals of conventional ocean scatterometers. As a result, studying the inner core of cyclones and improving severe-weather forecasting and cyclone tracking have become the main objectives of GNSS-R satellite missions such as the Cyclone Global Navigation Satellite System (CYGNSS). Nevertheless, the impact of rain attenuation on GNSS-R wind speed products is not yet well documented. Evaluating this effect matters because a small change in the GNSS-R observable can cause a considerable bias in the resulting wind products at intense wind speeds. Based on both empirical evidence and theory, wind speed is related to the derived bistatic radar cross section through an inverse natural-logarithmic relation, which introduces high condition numbers (akin to ill-posedness) when inverting to high wind speeds. This paper evaluates the impact of rain signal attenuation on the bistatic radar cross section and the derived wind speed. The study is conducted by simulating GNSS-R delay-Doppler maps at different rain rates and reflection geometries, since an empirical analysis at extreme wind intensities and rain rates is impossible given the insufficient number of observations of such severe conditions. The study demonstrates that at a wind speed of 30 m/s and an incidence angle of 30°, rain at rates of 10, 15, and 20 mm/h can cause overestimations as large as ≈ 0.65 m/s (2%), 1.00 m/s (3%), and 1.3 m/s (4%), respectively, which are still smaller than the CYGNSS uncertainty requirement. The simulations assume a pessimistic condition (severe continuous rainfall below the freezing height and over the entire glistening zone), so the bias is expected to be smaller in real environments.
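    The conditioning argument can be illustrated with a toy log-linear model function (our assumption; operational wind retrieval functions are more complex): if u = a + b·ln(σ0), then du/dσ0 = b/σ0, so a fixed absolute error in σ0 maps to a much larger wind error where σ0 is small, i.e. at high winds.

```python
# Sensitivity of retrieved wind to sigma0 errors under a toy log-linear GMF;
# the coefficients a, b are hypothetical, chosen only for a plausible shape.
import numpy as np

a, b = 20.0, -8.0                         # assumed (not from the paper)

def sigma0_from_wind(u):
    """Invert the toy model u = a + b*ln(sigma0)."""
    return np.exp((u - a) / b)

for u in (5.0, 15.0, 30.0):
    s0 = sigma0_from_wind(u)
    sensitivity = abs(b) / s0             # |du/dsigma0| grows as sigma0 shrinks
    print(f"u = {u:4.1f} m/s   sigma0 = {s0:6.3f}   |du/dsigma0| = {sensitivity:6.2f}")
```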

    A new theory of optimal inflation

    Central banks such as the Bank of England and the Bundesbank have recently highlighted that the supply of currency is achieved not by printing and spending but by means of credit. This clarification raises further issues. This article addresses seigniorage and optimal inflation. So far, approaches to seigniorage and optimal inflation have been based on the assumption of a currency that is printed and spent by a central authority. From this perspective, central banks’ inflation targets are at odds with the optimal rates suggested by economic theory. The so-called Friedman rule, the common core of optimal-inflation theory, determines optimal inflation via the (opportunity) cost of producing currency. This basic approach is amended by “external effects”, e.g. the impact of monetary non-neutrality or wage rigidities. Even when such external effects are taken into account, however, a significant gap remains between actual inflation targets and the optimal rates suggested by theory. Supplying currency by means of credit, by contrast, involves “costs of production” that do not appear in Friedman’s case: losses from borrower defaults. Incorporating these expected losses goes a long way toward aligning central banks’ optima with economic theory and yields a new theory of seigniorage for a credit currency.
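    One stylized way to formalize this argument (our notation, not the article's): with the Fisher relation i = r + π, the classical Friedman rule sets the nominal rate equal to the (near-zero) marginal cost of printing currency, whereas a credit-supplied currency carries a marginal cost equal to the expected default-loss rate ℓ, shifting the optimum:

$$ i^{*} = 0 \;\Rightarrow\; \pi^{*} = -r \quad \text{(printed currency)}, \qquad i^{*} = \ell \;\Rightarrow\; \pi^{*} = \ell - r \quad \text{(credit currency)}. $$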

    A new theory of seigniorage and optimal inflation

    Central banks such as the Bank of England and the Bundesbank have recently highlighted that the supply of currency is achieved not by printing and spending but by means of credit. This clarification raises further issues. This article addresses seigniorage and optimal inflation. So far, approaches to seigniorage and optimal inflation have been based on the assumption of a currency that is printed and spent by a central authority. From this perspective, central banks’ inflation targets are at odds with the optimal rates suggested by economic theory. The so-called Friedman rule, the common core of optimal-inflation theory, determines optimal inflation via the (opportunity) cost of producing currency. This basic approach is amended by “external effects”, e.g. the impact of monetary non-neutrality or wage rigidities. Even when such external effects are taken into account, however, a significant gap remains between actual inflation targets and the optimal rates suggested by theory. Supplying currency by means of credit, by contrast, involves “costs of production” that do not appear in Friedman’s case: losses from borrower defaults. Incorporating these expected losses goes a long way toward aligning central banks’ optima with economic theory and yields a new theory of seigniorage for a credit currency.

    The Friedman rule in today’s perspective

    Central banks such as the Bank of England and the Bundesbank have recently highlighted that the supply of currency is achieved not by printing and spending but by means of credit. This clarification raises further issues. This article addresses seigniorage and optimal inflation. So far, approaches to seigniorage and optimal inflation have been based on the assumption of a currency that is printed and spent by a central authority. From this perspective, central banks’ inflation targets are at odds with the optimal rates suggested by economic theory. The so-called Friedman rule, the common core of optimal-inflation theory, determines optimal inflation via the (opportunity) cost of producing currency. This basic approach is amended by “external effects”, e.g. the impact of monetary non-neutrality or wage rigidities. Even when such external effects are taken into account, however, a significant gap remains between actual inflation targets and the optimal rates suggested by theory. Supplying currency by means of credit, by contrast, involves “costs of production” that do not appear in Friedman’s case: losses from borrower defaults. Incorporating these expected losses goes a long way toward aligning central banks’ optima with economic theory and yields a new theory of seigniorage for a credit currency.

