Population variation and differences in serum leptin independent of adiposity: a comparison of Ache Amerindian men of Paraguay and lean American male distance runners
BACKGROUND: Serum leptin variation is commonly associated with fat percentage (%), body mass index (BMI), and activity. In this investigation, we report population differences in mean leptin levels in healthy men, as well as associations with fat % and BMI that are independent of these factors and likely reflect variation resulting from chronic environmental conditions. METHODS: Serum leptin levels, fat %, and BMI were compared between lean American distance runners and healthy Ache Native Americans of Paraguay. Mean levels were compared, as were the regressions between fat %, BMI, and leptin. Comparisons were performed between male American distance runners (n = 13, mean age 32.2 ± 9.2 SD) and a highly active male New World indigenous population (Ache of Paraguay, n = 20, mean age 32.8 ± 9.2) in order to determine whether significant population variation in leptin is evident in physically active populations living under different ecological circumstances, independent of adiposity and BMI. RESULTS: While the Ache were hypothesized to exhibit higher leptin due to significantly greater adiposity (fat %, Ache 17.9 ± 1.8 SD; runners 9.7 ± 3.2, p < 0.0001), leptin levels were nonetheless significantly higher in American runners (Ache 1.13 ng/ml ± 0.38 SD; runners 2.19 ± 1.15; p < 0.007). Significant differences in the association between leptin and fat % were also evident between Ache and runner men. Although fat % was significantly positively related to leptin in runners (r = 0.90, p < 0.0001), it was negatively related in Ache men (r = -0.50, p < 0.03). CONCLUSION: These results illustrate that chronic ecological conditions, in addition to activity, are likely factors contributing to population variation in leptin levels and physiology. Population variation independent of adiposity should be considered an important source of variation, especially in light of ethnic and population differences in the incidence and etiology of obesity, diabetes, and other metabolic conditions.
Analysis of the requirements for human Toll-like receptor 3 dominant negativity and signal transduction
Toll-like receptors are an important part of the innate immune system and mediate responses to infection via the recognition of pathogen-associated molecular patterns (PAMPs). Toll-like receptor 3 (TLR3) recognizes foreign-derived double-stranded RNA as its ligand, and is active as a homodimer. Previous research has indicated that specific residues in TLR3's extracellular domain (ECD) are responsible for dimer-dimer interactions between TLR3s, and the apparent specificity of this interaction has allowed for modulation of TLR3 signaling through the use of dominant negative mutants. Here we present a class of mutants which lack the inter-disulfide cap region of the ECD (Δ123-635, hereafter called TLR3N-CT), yet still exhibit dominant negative properties. The degree of dominant negative inhibition by TLR3N-CT is comparable to that of TLR3 ΔToll interleukin-1 receptor (TIR), the previously established standard for dominant negativity. Tyrosine mutants, such as Y759F, have been shown to dramatically reduce TLR3 signal induction by interfering with cytoplasmic signaling adapters. Our mutant, TLR3N-CT Y759F, retained the ability to act as a dominant negative inhibitor of TLR3, indicating that the observed reduction in induced/uninduced signal was not due to ligand-independent activation of the mutant. Furthermore, the mutant TLR3N-CTΔTIR was generated to investigate the role of the cytoplasmic TIR domain in dimer-dimer interactions. This mutant was not a dominant negative inhibitor of TLR3 activity, indicating a possible role of the TIR domain in the dominant negative interaction between TLR3N-CT and wild-type TLR3. It is possible that this TIR-TIR interaction is either in the incorrect conformation for signaling or that, contrary to previous reports that ligand binding and dimerization are necessary only to bring the TIR domains together, more than a simple TIR-TIR interaction is required for TLR3 signaling.
However, expression studies by western blot have been unable to confirm expression of any of the mutants discussed above. Several explanations are possible, but it is likely that expression levels are sufficient for cell-based activity assays yet too low for western blot detection.
The Kanyakla study: Randomized controlled trial of a microclinic social network intervention for promoting engagement and retention in HIV care in rural western Kenya
BACKGROUND: Existing social relationships are a potential source of social capital that can enhance support for sustained retention in HIV care. A previous pilot study of a social network-based 'microclinic' intervention, including group health education and facilitated HIV status disclosure, reduced disengagement from HIV care. We conducted a pragmatic randomized trial to evaluate microclinic effectiveness.
METHODS: In nine rural health facilities in western Kenya, we randomized HIV-positive adults with a recent missed clinic visit to either participation in a microclinic or usual care (NCT02474992). We collected visit data at all clinics where participants accessed care and evaluated intervention effect on disengagement from care (≥90-day absence from care after a missed visit) and the proportion of time patients were adherent to clinic visits ('time-in-care'). We also evaluated changes in social support, HIV status disclosure, and HIV-associated stigma.
RESULTS: Of 350 eligible patients, 304 (87%) enrolled, with 154 randomized to intervention and 150 to control. Over one year of follow-up, disengagement from care was similar in intervention and control (18% vs 17%, hazard ratio 1.03, 95% CI 0.61-1.75), as was time-in-care (risk difference -2.8%, 95% CI -10.0% to +4.5%). The intervention improved social support for attending clinic appointments (+0.4 units on 5-point scale, 95% CI 0.08-0.63), HIV status disclosure to close social supports (+0.3 persons, 95% CI 0.2-0.5), and reduced stigma (-0.3 units on 5-point scale, 95% CI -0.40 to -0.17).
CONCLUSIONS: The data from our pragmatic randomized trial in rural western Kenya are compatible with the null hypothesis of no difference in HIV care engagement between those who participated in a microclinic intervention and those who did not, despite improvements in proposed intervention mechanisms of action. However, some benefit or harm cannot be ruled out because the confidence intervals were wide. Results differ from a prior quasi-experimental pilot study, highlighting important implementation considerations when evaluating complex social interventions for HIV care.
TRIAL REGISTRATION: Clinical trial number: NCT02474992
Spatial finance: practical and theoretical contributions to financial analysis
We introduce and define a new concept, 'Spatial Finance', as the integration of geospatial data and analysis into financial theory and practice, and describe how developments in earth observation, particularly as the result of new satellite constellations, combined with new artificial intelligence methods and cloud computing, create a plethora of potential applications for Spatial Finance. We argue that Spatial Finance will become a core future competency for financial analysis, and that this will have significant implications for information markets, risk modelling and management, valuation modelling, and the identification of investment opportunities. The paper reviews the characteristics of geospatial data and related technology developments, some current and future applications of Spatial Finance, and its potential impact on financial theory and practice.
A Common Dataset for Genomic Analysis of Livestock Populations
Although common datasets are an important resource for the scientific community and can be used to address important questions, genomic datasets of a meaningful size have not generally been available in livestock species. We describe a pig dataset that PIC (a Genus company) has made available for comparing genomic prediction methods. We also describe genomic evaluation of the data using methods that PIC considers best practice for predicting and validating genomic breeding values, and we discuss the impact of data structure on accuracy. The dataset contains 3534 individuals with high-density genotypes, phenotypes, and estimated breeding values for five traits. Genomic breeding values were calculated using BayesB, with phenotypes and de-regressed breeding values, and using a single-step genomic BLUP approach that combines information from genotyped and un-genotyped animals. The genomic breeding value accuracy increased with increased trait heritability and with increased relationship between training and validation. In nearly all cases, BayesB using de-regressed breeding values outperformed the other approaches, but the single-step evaluation performed only slightly worse. This dataset was useful for comparing methods for genomic prediction using real data. Our results indicate that validation approaches accounting for relatedness between populations can correct for potential overestimation of genomic breeding value accuracies, with implications for genotyping strategies to carry out genomic selection programs.
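The validation step described in this abstract, correlating predicted genomic breeding values with de-regressed EBVs in a held-out set, can be sketched as follows. The data below are simulated purely for illustration and are not drawn from the PIC dataset:

```python
# Minimal sketch (simulated data, not the PIC dataset): genomic breeding value
# accuracy estimated as the correlation between predictions and de-regressed
# EBVs in a validation set.
import numpy as np

rng = np.random.default_rng(0)
true_bv = rng.normal(size=200)                     # simulated true breeding values
gebv = true_bv + rng.normal(scale=0.5, size=200)   # predicted genomic breeding values
drebv = true_bv + rng.normal(scale=0.8, size=200)  # noisy de-regressed EBVs

def accuracy(pred, target):
    """Pearson correlation between predictions and validation targets."""
    return np.corrcoef(pred, target)[0, 1]

acc = accuracy(gebv, drebv)
```

Because both the predictions and the validation targets are noisy proxies for the true breeding value, this raw correlation understates the accuracy against true breeding values, which is one reason validation design (and relatedness between training and validation) matters.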
Assessment of alternative genotyping strategies to maximize imputation accuracy at minimal cost
BACKGROUND: Commercial breeding programs seek to maximize the rate of genetic gain while minimizing the costs of attaining that gain. Genomic information offers great potential to increase rates of genetic gain but it is expensive to generate. Low-cost genotyping strategies combined with genotype imputation offer dramatically reduced costs. However, both the costs and imputation accuracy of these strategies are highly sensitive to several factors. The objective of this paper was to explore the cost and imputation accuracy of several alternative genotyping strategies in pedigreed populations. METHODS: Pedigree and genotype data from a commercial pig population were used. Several alternative genotyping strategies were explored. The strategies differed in the density of genotypes used for the ancestors and the individuals to be imputed. Parents, grandparents, and other relatives that were not descendants were genotyped at high density, low density, or extremely low density, and the associated costs and imputation accuracies were evaluated. RESULTS: Imputation accuracy and cost were influenced by the alternative genotyping strategies. Given the mating ratios and the numbers of offspring produced by males and females, an optimized low-cost genotyping strategy for a commercial pig population could involve genotyping male parents at high density, female parents at low density (e.g. 3000 SNP), and selection candidates at very low density (384 SNP). CONCLUSIONS: Among the selection candidates, 95.5 % and 93.5 % of the genotype variation contained in the high-density SNP panels were recovered using genotyping strategies that cost, respectively, 20.58 per candidate.
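To make the cost trade-off concrete, a back-of-envelope calculation for the tiered strategy described above might look as follows. The panel prices and population counts are hypothetical placeholders, not figures from the paper:

```python
# Illustrative cost model for a tiered genotyping strategy.
# All prices and counts are hypothetical, chosen only to show the arithmetic.
def strategy_cost(n_sires, n_dams, n_candidates,
                  price_hd=100.0, price_ld=30.0, price_vld=10.0):
    """Total genotyping cost when sires get high-density (HD) panels,
    dams low-density (e.g. 3000 SNP), and selection candidates
    very low-density (384 SNP) panels."""
    return n_sires * price_hd + n_dams * price_ld + n_candidates * price_vld

# e.g. 50 sires, 500 dams, 5000 selection candidates
total = strategy_cost(50, 500, 5000)
per_candidate = total / 5000
```

Because selection candidates vastly outnumber parents in a commercial pig population, the per-candidate cost is dominated by the cheapest panel, which is why pushing candidates to very low density (and relying on imputation) pays off.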
Improving Event Time Prediction by Learning to Partition the Event Time Space
Recently developed survival analysis methods improve upon existing approaches
by predicting the probability of event occurrence in each of a number
pre-specified (discrete) time intervals. By avoiding placing strong parametric
assumptions on the event density, this approach tends to improve prediction
performance, particularly when data are plentiful. However, in clinical
settings with limited available data, it is often preferable to judiciously
partition the event time space into a limited number of intervals well suited
to the prediction task at hand. In this work, we develop a method to learn from
data a set of cut points defining such a partition. We show that in two
simulated datasets, we are able to recover intervals that match the underlying
generative model. We then demonstrate improved prediction performance on three
real-world observational datasets, including a large, newly harmonized stroke
risk prediction dataset. Finally, we argue that our approach facilitates
clinical decision-making by suggesting time intervals that are most appropriate
for each task, in the sense that they facilitate more accurate risk prediction.Comment: 16 pages, 5 figures, 2 table
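As an illustration of the kind of partitioning at stake (a simple quantile baseline, not the learned cut points the paper proposes), one can place cut points so that each discrete interval contains roughly the same number of events:

```python
# Illustrative baseline (not the paper's method): partition the event-time axis
# with quantile-based cut points, then assign each event to its interval,
# as a discrete-time survival model would.
import numpy as np

def quantile_cut_points(event_times, n_intervals):
    """Place cut points at empirical quantiles so each interval holds
    roughly the same number of events."""
    qs = np.linspace(0, 1, n_intervals + 1)[1:-1]
    return np.quantile(event_times, qs)

def assign_intervals(event_times, cut_points):
    """Map each event time to the index of its discrete interval."""
    return np.searchsorted(cut_points, event_times, side="right")

times = np.array([0.5, 1.2, 2.0, 3.1, 4.8, 6.0, 7.5, 9.9])
cuts = quantile_cut_points(times, 4)       # 3 cut points -> 4 intervals
labels = assign_intervals(times, cuts)
```

The paper's contribution is to replace this fixed, data-agnostic rule with cut points learned jointly with the prediction task, which matters most when only a few intervals can be afforded.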
A review of statistical updating methods for clinical prediction models
A clinical prediction model (CPM) is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new CPM for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate existing CPMs already developed for use in similar contexts or populations. In addition, CPMs commonly become miscalibrated over time and need replacing or updating. In this paper we review a range of approaches for re-using and updating CPMs; these fall into three main categories: simple coefficient updating; combining multiple previous CPMs in a meta-model; and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the UK. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing CPMs to a new population or context, and these should be implemented, using a breadth of complementary statistical methods, rather than developing a new CPM from scratch.
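The simplest of the reviewed strategies, coefficient updating, can be illustrated with logistic recalibration, which refits only an intercept and a slope on the existing model's linear predictor. This sketch uses simulated data and plain gradient ascent rather than any particular CPM from the review:

```python
# Hedged sketch of logistic recalibration: keep the old model's linear
# predictor, refit only an intercept (a) and slope (b) on new data.
import numpy as np

def recalibrate(lp, y, n_iter=500, lr=0.2):
    """Fit y ~ sigmoid(a + b * lp) by gradient ascent on the mean log-likelihood."""
    a, b = 0.0, 1.0  # start from the old model's calibration
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a + b * lp)))
        a += lr * np.mean(y - p)          # intercept gradient
        b += lr * np.mean((y - p) * lp)   # slope gradient
    return a, b

# Example: an old CPM applied to a new population with lower baseline risk
rng = np.random.default_rng(1)
lp_old = rng.normal(size=500)                  # old model's linear predictor
true_p = 1 / (1 + np.exp(-(lp_old - 1.0)))     # new population: shifted intercept
y = rng.binomial(1, true_p)
a, b = recalibrate(lp_old, y)                  # a should move toward -1, b stay near 1
```

Recalibration preserves the old model's ranking of patients (discrimination) while correcting systematic over- or under-prediction, which is exactly the miscalibration-over-time problem the abstract describes.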