339 research outputs found
A Complete Enumeration and Classification of Two-Locus Disease Models
There are 512 two-locus, two-allele, two-phenotype, fully-penetrant disease
models. Under permutations of the two alleles at a locus, of the two loci, and
of the affected and unaffected phenotypes, one model can be considered
equivalent to another. These
permutations greatly reduce the number of two-locus models in the analysis of
complex diseases. This paper determines the number of non-redundant two-locus
models (which can be 102, 100, 96, 51, 50, or 48, depending on which
permutations are used, and depending on whether zero-locus and single-locus
models are excluded). Whenever possible, these non-redundant two-locus models
are classified by their properties. Besides the familiar features of
multiplicative models (logical AND), heterogeneity models (logical OR), and
threshold models, new classifications are added or expanded: modifying-effect
models, logical XOR models, interference and negative interference models
(neither dominant nor recessive), conditionally dominant/recessive models,
missing lethal genotype models, and highly symmetric models. The following
aspects of two-locus models are studied: the marginal penetrance tables at both
loci, the expected joint identity-by-descent probabilities, and the correlation
between marginal identity-by-descent probabilities at the two loci. These
studies are useful for linkage analyses using single-locus models while the
underlying disease model is two-locus, and for correlation analyses using the
linkage signals at different locations obtained by a single-locus model. Comment: LaTeX, to be published in Human Heredity.
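The counts quoted in the abstract (512 fully-penetrant tables; 102 equivalence classes under allele and locus permutations; 51 once the affected/unaffected swap is added) can be reproduced by brute force. The sketch below is an illustrative reconstruction, not the paper's own code: a model is a 3×3 binary penetrance table flattened to a 9-tuple, and each orbit is identified by its lexicographically smallest image.

```python
from itertools import product

def transform(tab, row_rev, col_rev, transp):
    """Apply one symmetry of the 3x3 penetrance table (flattened 9-tuple)."""
    out = []
    for i in range(3):
        for j in range(3):
            a, b = (j, i) if transp else (i, j)   # locus swap = transpose
            if row_rev:
                a = 2 - a                         # allele swap at locus 1
            if col_rev:
                b = 2 - b                         # allele swap at locus 2
            out.append(tab[3 * a + b])
    return tuple(out)

def orbit_key(tab, with_phenotype_swap):
    """Canonical representative of tab's equivalence class."""
    imgs = {transform(tab, r, c, t) for r, c, t in product((0, 1), repeat=3)}
    if with_phenotype_swap:                       # affected <-> unaffected
        imgs |= {tuple(1 - x for x in s) for s in imgs}
    return min(imgs)

tables = list(product((0, 1), repeat=9))          # all 2^9 = 512 models
print(len({orbit_key(t, False) for t in tables})) # 102
print(len({orbit_key(t, True) for t in tables}))  # 51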
Can GNSS reflectometry detect precipitation over oceans?
For the first time, a rain signature in Global Navigation Satellite System Reflectometry (GNSS-R) observations is demonstrated. Based on the argument that forward quasi-specular scattering relies upon surface gravity waves with lengths larger than several wavelengths of the reflected signal, a commonly made conclusion is that scatterometric GNSS-R measurements are not sensitive to the small-scale surface roughness generated by raindrops impinging on the ocean surface. On the contrary, this study presents evidence that the bistatic radar cross section σ0 derived from TechDemoSat-1 data is reduced due to rain at weak winds below ~6 m/s. The decrease is as large as ~0.7 dB at a wind speed of 3 m/s for precipitation of 0-2 mm/hr. Simulations based on recently published scattering theory provide a plausible explanation for this phenomenon, which potentially enables the GNSS-R technique to detect precipitation over oceans at low winds.
Gap Filling in the Plant Kingdom: Trait Prediction Using Hierarchical Probabilistic Matrix Factorization
Plant traits are a key to understanding and predicting the adaptation of
ecosystems to environmental changes, which motivates the TRY project aiming at
constructing a global database for plant traits and becoming a standard
resource for the ecological community. Despite its unprecedented coverage, a
large percentage of missing data substantially constrains joint trait analysis.
Meanwhile, the trait data is characterized by the hierarchical phylogenetic
structure of the plant kingdom. While factorization-based matrix completion
techniques have been widely used to address the missing data problem,
traditional matrix factorization methods are unable to leverage the
phylogenetic structure. We propose hierarchical probabilistic matrix
factorization (HPMF), which effectively uses hierarchical phylogenetic
information for trait prediction. We demonstrate HPMF's high accuracy,
effectiveness of incorporating hierarchical structure and ability to capture
trait correlation through experiments. Comment: Appears in Proceedings of the 29th International Conference on Machine Learning (ICML 2012).
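The matrix-completion machinery underlying HPMF can be illustrated with a plain (non-hierarchical) probabilistic matrix factorization sketch: MAP estimation of two low-rank factors under Gaussian priors, fit only to the observed cells. The toy data, rank, and hyperparameters below are arbitrary illustrations, not the paper's setup or the TRY data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "trait matrix": 6 species x 4 traits, rank-2 ground truth, ~25% missing.
U_true = rng.normal(size=(6, 2))
V_true = rng.normal(size=(4, 2))
X = U_true @ V_true.T
observed = rng.random(X.shape) > 0.25            # True where a trait was measured

# MAP estimation of PMF: Gaussian likelihood on observed cells + Gaussian priors
U = 0.1 * rng.normal(size=(6, 2))
V = 0.1 * rng.normal(size=(4, 2))
lam, lr = 0.01, 0.05                             # prior strength, step size
for _ in range(2000):
    E = np.where(observed, X - U @ V.T, 0.0)     # residuals on observed cells only
    U += lr * (E @ V - lam * U)                  # gradient ascent on log-posterior
    V += lr * (E.T @ U - lam * V)

E = np.where(observed, X - U @ V.T, 0.0)
train_rmse = np.sqrt((E[observed] ** 2).mean())
print(f"fit on observed cells, RMSE = {train_rmse:.3f}")
```

HPMF's contribution is to tie the species factors together along the phylogenetic hierarchy (a factor's prior mean comes from its parent taxon) rather than using the flat zero-mean priors above.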
Evaluating Impact of Rain Attenuation on Space-borne GNSS Reflectometry Wind Speeds
The novel space-borne Global Navigation Satellite System Reflectometry (GNSS-R) technique has recently shown promise in monitoring the ocean state and surface wind speed with high spatial coverage and an unprecedented sampling rate. The L-band signals of GNSS are structurally able to provide a higher quality of observations from areas covered by dense clouds and under intense precipitation, compared to the higher-frequency signals of conventional ocean scatterometers. As a result, studying the inner core of cyclones and improving severe weather forecasting and cyclone tracking have become the main objectives of GNSS-R satellite missions such as the Cyclone Global Navigation Satellite System (CYGNSS). Nevertheless, the impact of rain attenuation on GNSS-R wind speed products is not yet well documented. Evaluating rain attenuation effects on this technique matters because a small change in the GNSS-R observable can cause a considerable bias in the resultant wind products at intense wind speeds: based on both empirical evidence and theory, wind speed is related to the derived bistatic radar cross section through an inverse natural-logarithmic relation, which introduces high condition numbers (similar to ill-posed conditions) when inverting to high wind speeds. This paper presents an evaluation of the impact of rain signal attenuation on the bistatic radar cross section and the derived wind speed. The study is conducted by simulating GNSS-R delay-Doppler maps at different rain rates and reflection geometries, since an empirical data analysis at extreme wind intensities and rain rates is impossible due to the insufficient number of observations from such severe conditions.
Finally, the study demonstrates that at a wind speed of 30 m/s and an incidence angle of 30°, rain at rates of 10, 15, and 20 mm/h might cause overestimation as large as ~0.65 m/s (2%), 1.00 m/s (3%), and 1.3 m/s (4%), respectively, which are still smaller than the CYGNSS required uncertainty threshold. The simulations assume a pessimistic condition (severe continuous rainfall below the freezing height and over the entire glistening zone), and the bias is expected to be smaller in real environments.
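The sensitivity argument above (a logarithmic σ0-wind relation makes the inversion ill-conditioned at high winds, so a fixed rain attenuation in dB produces a wind bias that grows with wind speed, and rain-lowered σ0 shows up as overestimated wind) can be sketched numerically. The model-function form and the constants `a`, `b`, and `atten_db` are hypothetical placeholders, not the CYGNSS retrieval algorithm.

```python
import math

# Hypothetical geophysical model function: sigma0 (dB) falls off
# logarithmically with wind speed u (m/s); a and b are made-up constants.
a, b = 10.0, 2.0

def sigma0_db(u):
    return a - b * math.log(u)

def wind_from_sigma0(s):
    """Invert the model function: retrieved wind from measured sigma0 (dB)."""
    return math.exp((a - s) / b)

atten_db = 0.2                                   # assumed rain attenuation of sigma0
for u in (5.0, 15.0, 30.0):
    s_rainy = sigma0_db(u) - atten_db            # rain lowers measured sigma0
    bias = wind_from_sigma0(s_rainy) - u         # retrieval overestimates wind
    print(f"u = {u:4.1f} m/s -> overestimation ~ {bias:.2f} m/s")
```

Under this form the bias is u·(exp(Δ/b) − 1), i.e. proportional to the true wind speed, which is the qualitative behavior the abstract describes.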
A new theory of optimal inflation
Central banks like the Bank of England or the Bundesbank have recently highlighted that the supply of currency is achieved not by means of printing and spending but by means of credit. This clarification raises further issues. This article addresses the issue of seigniorage and optimal inflation.
So far, approaches to seigniorage and optimal inflation have been based on the assumption of a currency which is printed and spent by a central authority. From this perspective, central banks' inflation targets and optimal inflation targets are at odds with those suggested by economic theory. The so-called Friedman rule, the common core of optimal inflation theory, determines optimal inflation via the (opportunity) cost of producing currency. This basic approach is amended by "external effects", e.g. the impact of monetary non-neutrality or wage rigidities. However, even under consideration of external effects, there remains a significant gap between actual inflation targets and the optimal rates suggested by theory.
The supply by means of credit, however, involves "costs of production" which do not appear in Friedman's case: losses from borrower defaults. Incorporating expected losses into economic theory contributes significantly to aligning central banks' optima with economic theory and provides a new theory of seigniorage for a credit currency.
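For reference, the Friedman rule invoked in the abstract can be stated compactly via the Fisher equation; this is the textbook formulation, not the article's amended version:

```latex
% Fisher equation: nominal rate i, real rate r, expected inflation \pi
i = r + \pi
% Friedman rule: the opportunity cost of holding (costlessly produced)
% currency should be driven to zero, i.e. i = 0, hence optimal inflation
\pi^{*} = -r
% i.e. deflation at the real rate of interest, well below the positive
% inflation targets actually adopted by central banks -- the gap the
% abstract refers to.
```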
- …