A Note on False Positives and Power in G × E Modelling of Twin Data
The variance components models for gene–environment interaction proposed by Purcell in 2002 are widely used. In both the bivariate and the univariate parameterization of these models, the variance decomposition of trait T is a function of moderator M. We show that if M and T are correlated, and moderator M is also correlated between twins, the univariate parameterization produces a considerable increase in false positive moderation effects. A simple extension of this univariate moderation model prevents this elevation of the false positive rate, provided the covariance between M and T is not itself subject to moderation. If the covariance between M and T varies as a function of M, then moderation effects observed in the univariate setting should be interpreted with care, as these can originate either in moderation of the covariance between M and T or in moderation of the unique paths of T. We conclude that researchers should use the full bivariate moderation model to test for moderation of the covariance between M and T. If such moderation can be ruled out, subsequent use of the extended univariate moderation model proposed in this paper is recommended, as this model is more powerful than the full bivariate moderation model.
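For context, the univariate moderation model at issue (in Purcell's standard parameterization; the symbols below follow the conventional ACE notation and are not taken from this paper) writes the trait of twin $i$ as

```latex
T_i = \mu + \beta_M M_i
      + (a + \beta_a M_i)\,A_i
      + (c + \beta_c M_i)\,C_i
      + (e + \beta_e M_i)\,E_i ,
```

so that the moderated variance decomposition is $\operatorname{Var}(T \mid M) = (a + \beta_a M)^2 + (c + \beta_c M)^2 + (e + \beta_e M)^2$. When M and T are correlated, part of this variance is shared with M, which is why the univariate model can mistake moderation of the M–T covariance for moderation of the unique paths of T.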
Strategies used as spectroscopy of financial markets reveal new stylized facts
We propose a new set of stylized facts quantifying the structure of financial
markets. The key idea is to study the combined structure of both investment
strategies and prices in order to open a qualitatively new level of
understanding of financial and economic markets. We study the detailed order
flow on the Shenzhen Stock Exchange of China for the whole year of 2003. This
enormous dataset allows us to compare (i) a closed national market (A-shares)
with an international market (B-shares), (ii) individuals and institutions, and
(iii) real investors with random strategies that match the real ones in every
respect except the timing of trades. We find that more trading results in
smaller net returns due to trading frictions. We unveil quantitative power laws
with non-trivial exponents that quantify how performance deteriorates with the
trading frequency and the holding period of the strategies used by investors.
Random strategies are found to perform much better than real ones, both for
winners and losers. Surprisingly large arbitrage opportunities exist, especially
when using zero-intelligence strategies. This is a diagnostic of possible
inefficiencies of these financial markets.
Semiparametric Multivariate Accelerated Failure Time Model with Generalized Estimating Equations
The semiparametric accelerated failure time model is not as widely used as
the Cox relative risk model mainly due to computational difficulties. Recent
developments in least squares estimation and induced smoothing estimating
equations provide promising tools to make the accelerated failure time models
more attractive in practice. For semiparametric multivariate accelerated
failure time models, we propose a generalized estimating equation approach to
account for the multivariate dependence through working correlation structures.
The marginal error distributions can be either identical as in sequential event
settings or different as in parallel event settings. Some regression
coefficients can be shared across margins as needed. The initial estimator is a
rank-based estimator with Gehan's weight, obtained via an induced smoothing
approach for computational ease. The resulting estimator is consistent
and asymptotically normal, with a variance estimated through a multiplier
resampling method. In a simulation study, our estimator was up to three times
as efficient as the initial estimator, especially under stronger multivariate
dependence and heavier censoring. Two real examples demonstrate the utility of
the proposed method.
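The Gehan-weighted rank criterion underlying the initial estimator can be sketched in its raw (non-smoothed) form. The scalar covariate, the simulated data, and the crude grid search below are illustrative assumptions only; they are not the authors' induced-smoothing procedure.

```python
import random

def gehan_loss(beta, logT, z, delta):
    """Gehan-type convex loss for a scalar-covariate AFT model:
    L(beta) = (1/n^2) * sum_{i,j} delta_i * max(0, e_j - e_i),
    where e_i = log T_i - beta * z_i are the model residuals and
    delta_i = 1 for an observed (uncensored) event."""
    n = len(logT)
    e = [logT[k] - beta * z[k] for k in range(n)]
    total = 0.0
    for i in range(n):
        if not delta[i]:
            continue
        for j in range(n):
            total += max(0.0, e[j] - e[i])
    return total / (n * n)

# Illustrative data: log T = 2.0 * z + noise, no censoring (all delta = 1).
rng = random.Random(42)
n = 200
z = [rng.uniform(-1.0, 1.0) for _ in range(n)]
logT = [2.0 * zi + rng.gauss(0.0, 0.3) for zi in z]
delta = [1] * n

# Crude grid search over beta; a real implementation solves the smoothed
# estimating equation instead of scanning a grid.
grid = [b / 100 for b in range(0, 401, 5)]
beta_hat = min(grid, key=lambda b: gehan_loss(b, logT, z, delta))
print(beta_hat)  # close to the true slope 2.0
```

Because the loss is piecewise linear in beta, it is non-smooth at the solution; the induced-smoothing device mentioned in the abstract replaces it with a differentiable approximation so that standard root-finding applies.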
Error, reproducibility and sensitivity : a pipeline for data processing of Agilent oligonucleotide expression arrays
Background
Expression microarrays are increasingly used to obtain large scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples.
Results
We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (~6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years by a variety of operators.
Conclusions
This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
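The kind of inter-array variability estimate quoted in the Results can be sketched as follows; the data layout, gene count and noise level are hypothetical and do not reproduce the paper's pipeline or its R function.

```python
import random
from statistics import mean, stdev

def replicate_sds(arrays):
    """Estimate inter-array variability from replicate hybridisations.

    `arrays` is a list of replicate arrays, each a list of log2 signals
    for the same genes in the same order (illustrative layout only).
    Returns the mean per-gene SD across the replicate arrays.
    """
    n_genes = len(arrays[0])
    per_gene = [stdev([a[g] for a in arrays]) for g in range(n_genes)]
    return mean(per_gene)

# Hypothetical replicates: 3 arrays of 1000 genes, true log2 signal
# plus 0.5 log2 units of simulated measurement noise.
rng = random.Random(0)
truth = [rng.uniform(4.0, 14.0) for _ in range(1000)]
arrays = [[t + rng.gauss(0.0, 0.5) for t in truth] for _ in range(3)]
print(round(replicate_sds(arrays), 2))  # of the same order as the 0.5 log2 units quoted above
```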
Does self-monitoring reduce blood pressure? Meta-analysis with meta-regression of randomized controlled trials
Introduction. Self-monitoring of blood pressure (BP) is an increasingly common part of hypertension management. The objectives of this systematic review were to evaluate the systolic and diastolic BP reduction, and achievement of target BP, associated with self-monitoring.
Methods. MEDLINE, Embase, Cochrane database of systematic reviews, database of abstracts of clinical effectiveness, the health technology assessment database, the NHS economic evaluation database, and the TRIP database were searched for studies where the intervention included self-monitoring of BP and the outcome was change in office/ambulatory BP or proportion with controlled BP. Two reviewers independently extracted data. Meta-analysis using a random effects model was combined with meta-regression to investigate heterogeneity in effect sizes.
Results. A total of 25 eligible randomized controlled trials (RCTs) (27 comparisons) were identified. Office systolic BP (20 RCTs, 21 comparisons, 5,898 patients) and diastolic BP (23 RCTs, 25 comparisons, 6,038 patients) were significantly reduced in those who self-monitored compared to usual care (weighted mean difference (WMD) systolic −3.82 mmHg (95% confidence interval −5.61 to −2.03), diastolic −1.45 mmHg (−1.95 to −0.94)). Self-monitoring increased the chance of meeting office BP targets (12 RCTs, 13 comparisons, 2,260 patients, relative risk = 1.09 (1.02 to 1.16)). There was significant heterogeneity between studies for all three comparisons, which could be partially accounted for by the use of additional co-interventions.
Conclusion. Self-monitoring reduces blood pressure by a small but significant amount. Meta-regression could account for only part of the observed heterogeneity.
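The random-effects pooling used in such a meta-analysis can be sketched with the standard DerSimonian-Laird estimator; the three study effects below are invented for illustration and are not the trials reviewed here.

```python
import math

def dersimonian_laird(effects, ses):
    """Random-effects meta-analysis via the DerSimonian-Laird estimator.

    effects: per-study mean differences (e.g. WMD in mmHg)
    ses:     their standard errors
    Returns (pooled effect, 95% CI, between-study variance tau^2, Q).
    """
    k = len(effects)
    w = [1.0 / s**2 for s in ses]                      # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    denom = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / denom)             # between-study variance
    w_star = [1.0 / (s**2 + tau2) for s in ses]        # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = 1.0 / math.sqrt(sum(w_star))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, ci, tau2, q

# Hypothetical systolic-BP mean differences (mmHg) from three trials.
pooled, ci, tau2, q = dersimonian_laird([-1.0, -6.0, -3.5], [1.0, 1.0, 1.0])
print(round(pooled, 2), tau2)  # -3.5 5.25
```

When Cochran's Q exceeds its degrees of freedom, tau^2 becomes positive and the random-effects interval widens relative to a fixed-effect pooling, which is how heterogeneity of the kind reported above is reflected in the summary estimate.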
Tactile acuity training for patients with chronic low back pain: a pilot randomised controlled trial
BACKGROUND: Chronic pain can disrupt the cortical representation of a painful body part. This disruption may play a role in maintaining the individual's pain. Tactile acuity training has been used to normalise cortical representation and reduce pain in certain pain conditions. However, there is little evidence for the effectiveness of this intervention for chronic low back pain (CLBP). The primary aim of this study was to inform the development of a fully powered randomised controlled trial (RCT) by providing preliminary data on the effect of tactile acuity training on pain and function in individuals with CLBP. The secondary aim was to obtain qualitative feedback about the intervention. METHODS: In this mixed-methods pilot RCT, 15 individuals were randomised to either an intervention (tactile acuity training) or a placebo group (sham tactile acuity training). All participants received 3 sessions of acuity training (intervention or sham) from a physiotherapist and were requested to undertake daily acuity home training facilitated by an informal carer (friend/relative). All participants also received usual care physiotherapy. The primary outcome measures were pain (0-100 visual analogue scale (VAS)) and function (Roland Morris Disability Questionnaire (RMDQ)). Participants and their informal carers were invited to a focus group to provide feedback on the intervention. RESULTS: The placebo group improved by the greater magnitude on both outcome measures, but there was no statistically significant difference (mean difference (95% CI), p-value) between groups for change in pain (25.6 (-0.7 to 51.9), p = 0.056) or function (2.2 (-1.6 to 6.0), p = 0.237). Comparing the number of individuals achieving a minimally clinically significant improvement, the placebo group had better outcomes for pain, with all participants achieving ≥30% improvement compared to only a third of the intervention group (6/6 vs. 3/9, p = 0.036).
Qualitatively, participants reported that needing an informal carer was a considerable barrier to the home training component of the study. CONCLUSIONS: This pilot RCT found tactile acuity training to be no more effective than sham tactile acuity training for function and less effective for pain in individuals with CLBP. That the intervention could not be self-applied was a considerable barrier to its use. TRIAL REGISTRATION: ISRCTN: ISRCTN9811808
Precursors to social and communication difficulties in infants at-risk for autism: gaze following and attentional engagement
Whilst joint attention (JA) impairments in autism have been widely studied, little is known about the early development of gaze following, a precursor to establishing JA. We employed eye-tracking to record gaze following longitudinally in infants with and without a family history of autism spectrum disorder (ASD) at 7 and 13 months. No group difference was found between at-risk and low-risk infants in gaze following behaviour at either age. However, despite following gaze successfully at 13 months, at-risk infants with later emerging socio-communication difficulties (both those with ASD and those with atypical development at 36 months of age) allocated less attention to the congruent object compared to typically developing at-risk siblings and low-risk controls. The findings suggest that the subtle emergence of difficulties in JA in infancy may be related to ASD and other atypical outcomes.
Dynamics of DNA replication loops reveal temporal control of lagging-strand synthesis
In all organisms, the protein machinery responsible for the replication of DNA, the replisome, is faced with a directionality problem. The antiparallel nature of duplex DNA permits the leading-strand polymerase to advance in a continuous fashion, but forces the lagging-strand polymerase to synthesize in the opposite direction. By extending RNA primers, the lagging-strand polymerase restarts at short intervals and produces Okazaki fragments. At least in prokaryotic systems, this directionality problem is solved by the formation of a loop in the lagging strand of the replication fork to reorient the lagging-strand DNA polymerase so that it advances in parallel with the leading-strand polymerase. The replication loop grows and shrinks during each cycle of Okazaki fragment synthesis. Here we use single-molecule techniques to visualize, in real time, the formation and release of replication loops by individual replisomes of bacteriophage T7 supporting coordinated DNA replication. Analysis of the distributions of loop sizes and lag times between loops reveals that initiation of primer synthesis and the completion of an Okazaki fragment each serve as a trigger for loop release. The presence of two triggers may represent a fail-safe mechanism ensuring the timely reset of the replisome after the synthesis of every Okazaki fragment.