    Nuclear receptor REVERBα is a state-dependent regulator of liver energy metabolism

    The nuclear receptor REVERBα is a core component of the circadian clock and is proposed to be a dominant regulator of hepatic lipid metabolism. Using antibody-independent ChIP-sequencing of REVERBα in mouse liver, we reveal a high-confidence cistrome and define direct target genes. REVERBα-binding sites are highly enriched for consensus RORE or RevDR2 motifs and overlap with corepressor complex binding. We find no evidence for transcription factor tethering or DNA-binding domain-independent action. Moreover, hepatocyte-specific deletion of Reverbα drives only modest physiological and transcriptional dysregulation, with derepressed target gene enrichment limited to circadian processes. Thus, contrary to previous reports, hepatic REVERBα does not repress lipogenesis under basal conditions. REVERBα control of a more extensive transcriptional program is only revealed under conditions of metabolic perturbation (including mistimed feeding, which is a feature of the global Reverbα−/− mouse). Repressive action of REVERBα in the liver therefore serves to buffer against metabolic challenge, rather than to drive basal rhythmicity in metabolic activity.
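
    A minimal sketch of the kind of motif scan that underlies the RORE/RevDR2 enrichment reported above. The simplified patterns (an A/T-rich 5' flank before the AGGTCA half-site for RORE; two AGGTCA half-sites separated by a 2-bp spacer for RevDR2) and the toy peak sequence are illustrative assumptions, not the paper's actual position weight matrices:

```python
import re

# Simplified, illustrative consensus patterns (assumptions, not the paper's PWMs):
# RORE: an A/T-rich 5' extension followed by the AGGTCA nuclear-receptor half-site.
# RevDR2: two AGGTCA half-sites separated by a 2-bp spacer (direct repeat).
RORE_RE = re.compile(r"[AT][AT][AT]..AGGTCA")
REVDR2_RE = re.compile(r"AGGTCA..AGGTCA")

def scan_motifs(seq: str) -> dict:
    """Return forward-strand match positions of the simplified motifs in seq."""
    seq = seq.upper()
    return {
        "RORE": [m.start() for m in RORE_RE.finditer(seq)],
        "RevDR2": [m.start() for m in REVDR2_RE.finditer(seq)],
    }

# Toy peak sequence (hypothetical) containing one instance of each element.
peak = "ccgATTGTAGGTCAtgcaAGGTCAgtAGGTCAcc"
print(scan_motifs(peak))  # {'RORE': [3], 'RevDR2': [18]}
```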

    A frequentist framework of inductive reasoning

    Reacting against the limitation of statistics to decision procedures, R. A. Fisher proposed for inductive reasoning the use of the fiducial distribution, a parameter-space distribution of epistemological probability transferred directly from limiting relative frequencies rather than computed according to the Bayes update rule. The proposal is developed as follows using the confidence measure of a scalar parameter of interest. (With the restriction to a one-dimensional parameter space, a confidence measure is essentially a fiducial probability distribution free of complications involving ancillary statistics.) A betting game establishes a sense in which confidence measures are the only reliable inferential probability distributions. The equality between the probabilities encoded in a confidence measure and the coverage rates of the corresponding confidence intervals ensures that the measure's rule for assigning confidence levels to hypotheses is uniquely minimax in the game. Although a confidence measure can be computed without any prior distribution, previous knowledge can be incorporated into confidence-based reasoning. To adjust a p-value or confidence interval for prior information, the confidence measure from the observed data can be combined with one or more independent confidence measures representing previous agent opinion. (The former confidence measure may correspond to a posterior distribution with frequentist matching of coverage probabilities.) The representation of subjective knowledge in terms of confidence measures rather than prior probability distributions preserves approximate frequentist validity.
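
    A minimal sketch of the coverage-matching property described above, in the simplest setting: a single observation X ~ N(θ, 1), for which the confidence measure of θ given x is N(x, 1). The parameter value and interval level are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

# Toy model (an assumption for illustration): one draw X ~ N(theta, 1), so the
# confidence measure (fiducial distribution) for theta given x is N(x, 1).
rng = np.random.default_rng(0)
theta_true = 2.0
x = rng.normal(theta_true, 1.0, size=100_000)

# Coverage matching: the central 90% region of each confidence measure is just
# the usual 90% confidence interval, so it covers theta_true ~90% of the time.
z = norm.ppf(0.95)
coverage = np.mean((x - z <= theta_true) & (theta_true <= x + z))
print(f"empirical coverage of the nominal 90% region: {coverage:.3f}")
```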

    Appointment time: Disability and neoliberal workfare temporalities

    My primary interest in this article is to reveal the complex effects of neoliberal temporalities on the lives of disabled people forced to participate in workfare regimes in order to maintain access to social security measures and programming. Drawing upon some of the contemporary debates arising within the social study of time, this article explicates what Jessop refers to as the sovereignty of time that has emerged with the global adoption of neoliberal workfare regimes. It is argued that the central role of temporality within the globalizing project of neoliberal workfare, and the positioning of disability within these global macro-structural processes, require the sociological imagination to return to both time as a theme and time as a methodology.

    Reproducibility of preclinical animal research improves with heterogeneity of study samples

    Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources on inconclusive research.
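
    A minimal simulation sketch of the mechanism described above: when lab-specific true effects vary, a single-laboratory confidence interval is overconfident about the population-average effect, and splitting the same total sample across even a few laboratories restores coverage. All effect sizes and variance components are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.5       # population-average treatment effect (assumed)
between_lab_sd = 0.3    # heterogeneity of lab-specific true effects (assumed)
within_sd = 1.0         # animal-to-animal variability (assumed)
n_total, n_reps = 24, 20_000

def coverage(n_labs: int) -> float:
    """Share of simulated studies whose naive 95% CI covers the true effect,
    with the same total sample size split evenly across n_labs laboratories."""
    n_per_lab = n_total // n_labs
    hits = 0
    for _ in range(n_reps):
        lab_effects = rng.normal(true_effect, between_lab_sd, n_labs)
        data = rng.normal(np.repeat(lab_effects, n_per_lab), within_sd)
        m, se = data.mean(), data.std(ddof=1) / np.sqrt(data.size)
        hits += (m - 1.96 * se <= true_effect <= m + 1.96 * se)
    return hits / n_reps

for k in (1, 2, 4):  # heterogenizing the sample, not enlarging it
    print(f"{k} lab(s): empirical coverage of the nominal 95% CI = {coverage(k):.2f}")
```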

    Overdiagnosis and overtreatment of breast cancer: Progression of ductal carcinoma in situ: the pathological perspective

    Ductal carcinoma in situ (DCIS) is encountered much more frequently in the screening population than in the symptomatic setting. The behaviour of DCIS is highly variable, and this presents difficulties in choosing appropriate treatment strategies for individual cases. This review discusses the current data on the frequency and rate of progression of DCIS and the value and limitations of clinicopathological and biological variables in predicting disease behaviour, and suggests strategies to develop more robust means of predicting progression of DCIS.

    What Do Computer Scientists Tweet? Analyzing the Link-Sharing Practice on Twitter

    Twitter communication has permeated every sphere of society. The ability to highlight and share small pieces of information, whether with vast audiences or with small circles of the interested, has value in almost every aspect of social life. But what exactly is that value for a scientific field? We perform a comprehensive study of computer scientists using Twitter and their tweeting behavior concerning the sharing of web links. Discerning the domains, hosts, and individual web pages being tweeted, and the differences between computer scientists and a Twitter sample, enables us to look in depth at the Twitter-based information sharing practices of a scientific community. Additionally, we aim to provide a deeper understanding of the role and impact of altmetrics in computer science and to highlight the publications mentioned on Twitter that are most relevant for the computer science community. Our results show a link sharing culture that concentrates more heavily on public and professional quality information than the Twitter sample does. The results also show a broad variety in linked sources and especially in linked publications: some publications are clearly related to the community-specific interests of computer scientists, while others relate strongly to attention mechanisms in social media. This reflects the observation that Twitter is a hybrid form of social media, situated between an information service and a social network service. Overall, the computer scientists' style of usage leans toward the information-oriented and, to some degree, the professional side. Therefore, altmetrics are of considerable use in analyzing computer science.
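
    A minimal sketch of the host-tallying step behind this kind of link-sharing analysis; the example URLs are hypothetical, and a real pipeline would first expand t.co-shortened links before counting:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical tweet-link records (placeholders, not real identifiers).
tweeted_urls = [
    "https://arxiv.org/abs/0000.00001",
    "https://dl.acm.org/doi/10.1145/0000000",
    "https://github.com/example/repo",
    "https://arxiv.org/abs/0000.00002",
]

# Tally the hosts being linked to, the coarsest view of sharing practice.
host_counts = Counter(urlparse(u).netloc for u in tweeted_urls)
for host, n in host_counts.most_common():
    print(f"{host}: {n}")
```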

    Imputation strategies for missing binary outcomes in cluster randomized trials

    Background: Attrition, which leads to missing data, is a common problem in cluster randomized trials (CRTs), where groups of patients rather than individuals are randomized. Standard multiple imputation (MI) strategies may not be appropriate for imputing missing data from CRTs since they assume independent data. In this paper, under the assumptions of missing completely at random and covariate-dependent missingness, we used a simulation study to compare six MI strategies which account for the intra-cluster correlation of missing binary outcomes in CRTs with standard imputation strategies and a complete case analysis approach. Methods: We considered three within-cluster and three across-cluster MI strategies for missing binary outcomes in CRTs. The three within-cluster MI strategies are the logistic regression method, the propensity score method, and the Markov chain Monte Carlo (MCMC) method, which apply standard MI strategies within each cluster. The three across-cluster MI strategies are the propensity score method, the random-effects (RE) logistic regression approach, and logistic regression with cluster as a fixed effect. Based on the community hypertension assessment trial (CHAT), which has complete data, we designed a simulation study to investigate the performance of the above MI strategies. Results: The estimated treatment effect and its 95% confidence interval (CI) from a generalized estimating equations (GEE) model based on the complete CHAT dataset are 1.14 (0.76, 1.70). When 30% of binary outcomes are missing completely at random, the simulation study shows that the estimated treatment effects and corresponding 95% CIs from the GEE model are 1.15 (0.76, 1.75) with complete case analysis, 1.12 (0.72, 1.73) with the within-cluster MCMC method, 1.21 (0.80, 1.81) with across-cluster RE logistic regression, and 1.16 (0.82, 1.64) with standard logistic regression, which does not account for clustering. Conclusion: When the percentage of missing data is low or the intra-cluster correlation coefficient is small, different approaches for handling missing binary outcome data generate quite similar results. When the percentage of missing data is large, standard MI strategies, which do not take the intra-cluster correlation into account, underestimate the variance of the treatment effect. Within-cluster and across-cluster MI strategies (except for the random-effects logistic regression MI strategy), which take the intra-cluster correlation into account, seem more appropriate for handling missing outcomes from CRTs. Under the same imputation strategy and percentage of missingness, the estimates of the treatment effect from the GEE and RE logistic regression models are similar.
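
    A minimal sketch of one of the strategies compared above, within-cluster logistic regression imputation for a missing binary outcome. The simulated dataset is invented, and a full MI procedure would repeat the imputation M times and also perturb the fitted coefficients between imputations:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Toy clustered dataset (all values invented): binary outcome y, one covariate
# x, a cluster id, and ~30% of outcomes missing completely at random.
n_clusters, n_per = 10, 50
df = pd.DataFrame({
    "cluster": np.repeat(np.arange(n_clusters), n_per),
    "x": rng.normal(size=n_clusters * n_per),
})
cluster_effect = rng.normal(0.0, 0.5, n_clusters)[df["cluster"]]
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * df["x"] + cluster_effect))))
df["y"] = df["y"].astype(float)
df.loc[rng.random(len(df)) < 0.3, "y"] = np.nan

# Within-cluster imputation: fit a logistic model per cluster on its observed
# rows, then draw the missing outcomes from the predicted probabilities.
for _, grp in df.groupby("cluster"):
    obs, mis = grp[grp["y"].notna()], grp[grp["y"].isna()]
    if mis.empty or obs["y"].nunique() < 2:
        continue  # nothing to impute, or the cluster lacks both outcome classes
    model = LogisticRegression().fit(obs[["x"]], obs["y"])
    p_mis = model.predict_proba(mis[["x"]])[:, 1]
    df.loc[mis.index, "y"] = rng.binomial(1, p_mis)

print(df["y"].value_counts(dropna=False))
```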

    Diacylglycerol regulates acute hypoxic pulmonary vasoconstriction via TRPC6

    Background: Hypoxic pulmonary vasoconstriction (HPV) is an essential mechanism of the lung that matches blood perfusion to alveolar ventilation to optimize gas exchange. Recently we demonstrated that acute, but not sustained, HPV is critically dependent on the classical transient receptor potential 6 (TRPC6) channel. However, the mechanism of TRPC6 activation during acute HPV remains elusive. We hypothesized that diacylglycerol (DAG)-dependent activation of TRPC6 regulates acute HPV. Methods: We investigated the effect of the DAG analog 1-oleoyl-2-acetyl-sn-glycerol (OAG) on normoxic vascular tone in isolated perfused and ventilated mouse lungs from TRPC6-deficient and wild-type mice. Moreover, the effects of OAG, the DAG kinase inhibitor R59949, and the phospholipase C inhibitor U73122 on the strength of HPV were compared to their effects on the non-hypoxia-induced vasoconstriction elicited by the thromboxane mimetic U46619. Results: OAG increased normoxic vascular tone in lungs from wild-type mice, but not in lungs from TRPC6-deficient mice. Under conditions of repetitive hypoxic ventilation, OAG as well as R59949 dose-dependently attenuated the strength of acute HPV, whereas U46619-induced vasoconstrictions were not reduced. Like OAG, R59949 mimicked HPV, since it induced a dose-dependent vasoconstriction during normoxic ventilation. In contrast, U73122, a blocker of DAG synthesis, inhibited acute HPV, whereas U73343, the inactive form of U73122, had no effect on HPV. Conclusion: These findings support the conclusion that the TRPC6 dependency of acute HPV is mediated via DAG.