
    A National Concept Dictionary

    Overall objectives or goal
    Most of the organizations that use population administrative data for research purposes maintain an internal repository of their own validated definitions and algorithms. Many of these concepts and definitions are applicable, or at least adaptable, to other organizations and jurisdictions. A comprehensive National (and potentially International) Concept Dictionary could help investigators carry out methodologically sound work with consistent and validated algorithms drawn from a shared pool of knowledge and resources. The Institute for Clinical Evaluative Sciences (ICES) in Ontario, Canada has recently modernized its internal Concept Dictionary by adopting standard templates based on the Manitoba Centre for Health Policy (MCHP) Concept Dictionary, reviewing and updating existing content, tagging the concept entries with appropriate MeSH terms and data sources, and adding standard computer code (e.g., SAS code) where appropriate. A SharePoint® web-based application has been developed to provide advanced tagging, searching and browsing features. We envision a wiki-based Concept Dictionary hosted in a cloud-based environment with very granular access controls, giving each participating organization enough flexibility to control its own content. This means each organization will be able to decide how to share its own concepts (or part of them) with the public or with internal users. All content will be tagged with MeSH terms as well as with the name of the organization that initially posts each entry. Other organizations that find the same concept applicable to their own use can tag the same entry with their organization name, or refer to a secondary adapted entry if adaptation to fit their data and methodologies is required. The search feature will allow refining the search criteria by MeSH terms, data sources, and organization/jurisdiction name. Multiple layers of access controls will allow each organization to have its own groups of users with different standard privileges such as Local Administrators, Authors and Approvers (or Publishers). The Approver (Publisher) users within each organization can publish each entry for internal or public view. This way, for example, a definition/algorithm can be viewable only within the organization until the validation process is complete, and then the entry can be made publicly available, while some sections, such as computer code, can remain restricted to the organization. We will discuss challenges in developing and maintaining such a platform, including the costs, governance, intellectual property rights, copyrights and liabilities for the participating organizations.
    The intended output or outcome
    We aim to use this opportunity to form a working group from the interested organizations that are ready to participate and commit to developing this collaborative platform. After the conference, there will be follow-up sessions with the members of the working group to plan and develop the online application.
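    To make the entry model described above more concrete (MeSH and data-source tagging, organization ownership and adoption, and Approver-controlled publication with code kept organization-restricted), the following minimal Python sketch may help. All class, field and role names are hypothetical illustrations, not drawn from the actual SharePoint or wiki implementation.

```python
# Minimal sketch of a concept-dictionary entry with tagging, adoption and
# per-section visibility. All names here are hypothetical.
from dataclasses import dataclass, field
from enum import Enum


class Visibility(Enum):
    DRAFT = "draft"        # visible to authors only
    INTERNAL = "internal"  # visible within the owning organization
    PUBLIC = "public"      # published for all users


@dataclass
class ConceptEntry:
    title: str
    definition: str
    mesh_terms: list[str]
    data_sources: list[str]
    owner_org: str                                        # organization that posted the entry
    adopted_by: list[str] = field(default_factory=list)   # organizations tagging the same entry
    code_snippet: str | None = None                       # e.g., SAS code; may stay restricted
    visibility: Visibility = Visibility.DRAFT
    code_visibility: Visibility = Visibility.INTERNAL

    def publish(self, publisher_role: str) -> None:
        """Only an Approver (Publisher) may make the entry publicly viewable."""
        if publisher_role != "approver":
            raise PermissionError("only Approver (Publisher) users can publish")
        self.visibility = Visibility.PUBLIC


# Example: an ICES-owned entry later adopted by another organization.
entry = ConceptEntry(
    title="Diabetes case definition",
    definition="Validated administrative-data algorithm ...",
    mesh_terms=["Diabetes Mellitus"],
    data_sources=["hospital discharge abstracts", "physician claims"],
    owner_org="ICES",
)
entry.adopted_by.append("MCHP")
entry.publish(publisher_role="approver")
```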

    Improving traffic-related air pollution estimates by modelling minor road traffic volumes

    Accurately estimating annual average daily traffic (AADT) on minor roads is essential for assessing traffic-related air pollution (TRAP) exposure, particularly in areas where most people live. Our study assessed the direct and indirect external validity of three methods used to estimate AADT on minor roads in Melbourne, Australia. We estimated minor road AADT using a fixed-value approach (assuming 600 vehicles/day) and linear and negative binomial (NB) models. The models were generated using road type, road importance index, the AADT of and distance to the nearest major road, population density, workplace density, and weighted road density. External measurements of traffic counts, as well as black carbon (BC) and ultrafine particles (UFP), were conducted at 201 sites for direct and indirect validation, respectively. Statistical tests included the Akaike information criterion (AIC) to compare the models' performance, the concordance correlation coefficient (CCC) for direct validation, and Spearman's correlation coefficient for indirect validation. Results show that 88.5% of the roads in Melbourne are minor, yet only 18.9% have AADT data. The two models performed comparably (AIC of 1,023,686 vs. 1,058,502). In the direct validation with external traffic measurements, there was no difference between the three methods for minor roads overall. However, for minor roads within residential areas, CCC (95% confidence interval [CI]) values were -0.001 (-0.17; 0.18), 0.47 (0.32; 0.60), and 0.29 (0.18; 0.39) for the fixed-value approach, the linear model, and the NB model, respectively. In the indirect validation, we found differences only for UFP, where the Spearman's correlations (95% CI) for both models and for the fixed-value approach were 0.50 (0.37; 0.62) and 0.34 (0.19; 0.48), respectively. In conclusion, our linear model outperformed the fixed-value approach when compared against traffic and TRAP measurements. The methodology followed in this study is relevant to locations with incomplete minor road AADT data.
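    To make the modelling and validation steps concrete, the sketch below fits a negative binomial AADT model on a numeric subset of the covariates listed above and computes Lin's CCC against external traffic counts and Spearman's correlation against UFP. The file names, column names and model specification are assumptions for illustration, not the authors' code.

```python
# Sketch of a negative binomial AADT model with direct (CCC vs. counts) and
# indirect (Spearman vs. UFP) validation. Data files and columns are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr


def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between observed and predicted values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)


predictors = ["importance_index", "nearest_major_aadt", "dist_to_major",
              "pop_density", "work_density", "weighted_road_density"]

roads = pd.read_csv("minor_roads.csv")          # assumed training roads with observed AADT
X = sm.add_constant(roads[predictors])
nb_fit = sm.GLM(roads["aadt"], X, family=sm.families.NegativeBinomial()).fit()
print("AIC:", nb_fit.aic)

valid = pd.read_csv("validation_sites.csv")     # assumed external count / UFP sites
pred = nb_fit.predict(sm.add_constant(valid[predictors]))
print("CCC vs counted traffic:", lins_ccc(valid["counted_aadt"], pred))
print("Spearman vs UFP:", spearmanr(pred, valid["ufp"]).correlation)
```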

    Incomplete quality of life data in lung transplant research: comparing cross sectional, repeated measures ANOVA, and multi-level analysis

    BACKGROUND: In longitudinal studies on Health Related Quality of Life (HRQL), patients frequently have one or more missing forms, which may cause bias and reduce the sample size. The aims of the present study were to address the problem of missing data in the field of lung transplantation (LgTX) and HRQL, to compare results obtained with different methods of analysis, and to show the value of each type of statistical method used to summarize data. METHODS: Results from cross-sectional analysis, repeated measures ANOVA on complete cases, and multi-level analysis were compared. The scores on the dimension 'energy' of the Nottingham Health Profile (NHP) after transplantation were used to illustrate the differences between methods. RESULTS: Compared with repeated measures ANOVA, the cross-sectional and multi-level analyses included more patients and allowed for a longer period of follow-up. In contrast to the cross-sectional analysis, the complete case (repeated measures ANOVA) analysis and the multi-level analysis took the correlation between different time points into account. Patterns over time were comparable across the three methods. In general, results from repeated measures ANOVA showed the most favorable energy scores, and results from the multi-level analysis the least favorable. Because of the separate subgroups per time point in the cross-sectional analysis, and the relatively small number of patients in the repeated measures ANOVA, inclusion of predictors was only possible in the multi-level analysis. CONCLUSION: Results obtained with the various methods of analysis differed, indicating that some reduction of bias took place. Multi-level analysis is a useful approach to study changes over time in a data set with missing data, to reduce bias, to make efficient use of available data, and to include predictors in studies concerning the effects of LgTX on HRQL.
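    The multi-level approach can be illustrated with a random-intercept model in which patients with missing forms still contribute their available assessments. The sketch below is a minimal illustration with statsmodels; the variable names and data file are hypothetical, and the original analysis may have used different software and model specifications.

```python
# Minimal sketch of a multi-level (mixed-effects) model for longitudinal HRQL data.
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per patient per assessment; patients with missing
# forms simply contribute fewer rows instead of being dropped entirely.
hrql = pd.read_csv("nhp_energy_long.csv")   # assumed columns: patient_id, months_post_tx, energy, age

# A random intercept per patient accounts for the correlation between repeated
# measurements; fixed effects can include time and patient-level predictors.
model = smf.mixedlm("energy ~ months_post_tx + age",
                    data=hrql, groups=hrql["patient_id"])
result = model.fit()
print(result.summary())
```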

    Contrasting signals of positive selection in genes involved in human skin color variation from tests based on SNP scans and resequencing

    Background: Numerous genome-wide scans conducted by genotyping previously ascertained single-nucleotide polymorphisms (SNPs) have provided candidate signatures for positive selection in various regions of the human genome, including in genes involved in pigmentation traits. However, it is unclear how well the signatures discovered by such haplotype-based test statistics can be reproduced in tests based on full resequencing data. Four genes (oculocutaneous albinism II (OCA2), tyrosinase-related protein 1 (TYRP1), dopachrome tautomerase (DCT), and KIT ligand (KITLG)) implicated in human skin-color variation have shown evidence for positive selection in Europeans and East Asians in previous SNP-scan data. In the current study, we resequenced 4.7 to 6.7 kb of DNA from each of these genes in Africans, Europeans, East Asians, and South Asians. Results: Applying all commonly used neutrality-test statistics for the allele frequency distribution to the newly generated sequence data provided conflicting results regarding evidence for positive selection. Previous haplotype-based findings could not be clearly confirmed. Although some tests were marginally significant for some populations and genes, none of them were significant after multiple-testing correction. Combined P values for each gene-population pair did not improve these results. Applying Markov chain Monte Carlo-based Approximate Bayesian Computation to these sequence data using a simple forward simulator revealed broad posterior distributions of the selective parameters for all four genes, providing no support for positive selection. However, when we applied this approach to published sequence data on SLC45A2, another human pigmentation candidate gene, we could readily confirm evidence for positive selection, as previously detected with sequence-based and some haplotype-based tests. Conclusions: Overall, our data indicate that even genes that are strong biological candidates for positive selection and show reproducible signatures of positive selection in SNP scans do not always show the same replicability of selection signals in other tests, which should be considered in future studies on detecting positive selection in genetic data.
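    As an illustration of the class of allele-frequency-based neutrality tests referred to above, the sketch below computes Tajima's D from a matrix of segregating sites. It is a generic textbook implementation under simple assumptions, not the authors' analysis pipeline.

```python
# Tajima's D from a 0/1 matrix of alleles at segregating sites (Tajima 1989).
import numpy as np


def tajimas_d(genotypes: np.ndarray) -> float:
    """genotypes: (n_sequences, n_segregating_sites) matrix of 0/1 alleles."""
    n, S = genotypes.shape
    if S == 0 or n < 2:
        return float("nan")

    # Mean number of pairwise differences (pi).
    derived = genotypes.sum(axis=0)
    pairs = n * (n - 1) / 2
    pi = np.sum(derived * (n - derived)) / pairs

    # Watterson's estimator and the standard variance constants.
    a1 = np.sum(1.0 / np.arange(1, n))
    a2 = np.sum(1.0 / np.arange(1, n) ** 2)
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n ** 2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1 = c1 / a1
    e2 = c2 / (a1 ** 2 + a2)

    theta_w = S / a1
    variance = e1 * S + e2 * S * (S - 1)
    return (pi - theta_w) / np.sqrt(variance)


# Toy example: 4 sequences, 3 segregating sites.
toy = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0],
                [1, 1, 0]])
print(round(tajimas_d(toy), 3))
```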

    The Ultrashort Mental Health Screening Tool Is a Valid and Reliable Measure With Added Value to Support Decision-making

    BACKGROUND: Mental health influences symptoms, outcomes, and decision-making in musculoskeletal healthcare. Implementing measures of mental health in clinical practice can be challenging. An ultrashort screening tool for mental health with a low burden is currently unavailable but could be used as a conversation starter, expectation management tool, or decision support tool. QUESTIONS/PURPOSES: (1) Which items of the Pain Catastrophizing Scale (PCS), Patient Health Questionnaire (PHQ-4), and Brief Illness Perception Questionnaire (B-IPQ) are the most discriminative and yield a high correlation with the total scores of these questionnaires? (2) What is the construct validity and added clinical value (explained variance for pain and hand function) of an ultrashort four-item mental health screening tool? (3) What is the test-retest reliability of the screening tool? (4) What is the response time for the ultrashort screening tool? METHODS: This was a prospective cohort study. Data collection was part of usual care at Xpert Clinics, the Netherlands, but prospective measurements were added to this study. Between September 2017 and January 2022, we included 19,156 patients with hand and wrist conditions. We subdivided these into four samples: a test set to select the screener items (n = 18,034), a validation set to determine whether the selected items were solid (n = 1017), a sample to determine the added clinical value (explained variance for pain and hand function, n = 13,061), and a sample to assess the test-retest reliability (n = 105). Patients were eligible for either sample if they completed all relevant measurements of interest for that particular sample. To create an ultrashort screening tool that is valid, reliable, and has added value, we began by picking the most discriminatory items (that is, the items that were most influential for determining the total score) from the PCS, PHQ-4, and B-IPQ using chi-square automated interaction detection (a machine-learning algorithm). To assess construct validity (how well our screening tool assesses the constructs of interest), we correlated these items with the associated sum score of the full questionnaire in the test and validation sets. We compared the explained variance of linear models for pain and function using the screening tool items or the original sum scores of the PCS, PHQ-4, and B-IPQ to further assess the screening tool's construct validity and added value. We evaluated test-retest reliability by calculating weighted kappas, ICCs, and the standard error of measurement. RESULTS: We identified four items and used these in the screening tool. The screening tool items were highly correlated with the PCS (Pearson coefficient = 0.82; p < 0.001), PHQ-4 (0.87; p < 0.001), and B-IPQ (0.85; p < 0.001) sum scores, indicating high construct validity. The full questionnaires explained only slightly more variance in pain and function (10% to 22%) than the screening tool did (9% to 17%), again indicating high construct validity and much added clinical value of the screening tool. Test-retest reliability was high for the PCS (ICC 0.75, weighted kappa 0.75) and B-IPQ (ICC 0.70 to 0.75, standard error of measurement 1.3 to 1.4) items and moderate for the PHQ-4 item (ICC 0.54, weighted kappa 0.54). The median response time was 43 seconds, against more than 4 minutes for the full questionnaires. CONCLUSION: Our ultrashort, valid, and reliable screening tool for pain catastrophizing, psychologic distress, and illness perception can be used before clinician consultation and may serve as a conversation starter, an expectation management tool, or a decision support tool. The clinical utility of the screening tool is that it can indicate that further testing is warranted, guide a clinician when considering a consultation with a mental health specialist, or support a clinician in choosing between more invasive and less invasive treatments. Future studies could investigate how the tool can be used optimally and whether using the screening tool affects daily clinic decisions. LEVEL OF EVIDENCE: Level II, diagnostic study.
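    The validity and reliability statistics reported above can be illustrated with a short sketch that correlates a screener item with its full-questionnaire sum score and computes a weighted kappa for test-retest agreement. The data files, column names, and the choice of quadratic weighting are assumptions for illustration, not the authors' analysis code.

```python
# Sketch of the construct-validity and test-retest statistics described above.
import pandas as pd
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

df = pd.read_csv("screening_tool.csv")          # assumed item-level data

# Construct validity: correlate the selected PCS item with the PCS sum score.
r, p = pearsonr(df["pcs_item"], df["pcs_sum"])
print(f"PCS item vs PCS total: r = {r:.2f}, p = {p:.3g}")

# Test-retest reliability of an ordinal item, using a weighted kappa.
retest = pd.read_csv("screening_retest.csv")    # assumed test/retest item pairs
kappa = cohen_kappa_score(retest["item_t1"], retest["item_t2"], weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```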

    An open label, dose response study to determine the effect of a dietary supplement on dihydrotestosterone, testosterone and estradiol levels in healthy males

    Background: Maintaining endogenous testosterone (T) levels as men age may slow the symptoms of sarcopenia, andropause and decline in physical performance. Drugs inhibiting the enzyme 5α-reductase (5AR) produce increased blood levels of T and decreased levels of dihydrotestosterone (DHT). However, symptoms of gynecomastia have been reported due to the aromatase (AER) enzyme converting excess T to estradiol (ES). The carotenoid astaxanthin (AX) from Haematococcus pluvialis, Saw Palmetto berry lipid extract (SPLE) from Serenoa repens, and the precise combination of these dietary supplements, Alphastat® (Mytosterone™), have been reported to have inhibitory effects on both 5AR and AER in vitro. Concomitant regulation of both enzymes in vivo would cause DHT and ES blood levels to decrease and T levels to increase. The purpose of this clinical study was to determine whether patented Alphastat® (Mytosterone™) could produce these effects in a dose-dependent manner. Methods: To investigate this clinically, 42 healthy males ages 37 to 70 years were divided into two groups of twenty-one and dosed with either 800 mg/day or 2000 mg/day of Alphastat® (Mytosterone™) for fourteen days. Blood samples were collected on days 0, 3, 7 and 14 and assayed for T, DHT and ES. Body weight and blood pressure data were collected prior to blood collection. One-way, repeated measures analysis of variance (ANOVA-RM) was performed at a significance level of alpha = 0.05 to determine differences from baseline within each group. Two-way analysis of variance (ANOVA-2) was performed after baseline subtraction, at a significance level of alpha = 0.05, to determine differences between dose groups. Results are expressed as means ± SEM. Results: ANOVA-RM showed significant within-group increases in serum total T and significant decreases in serum DHT from baseline in both dose groups at a significance level of alpha = 0.05. Significant decreases in serum ES are reported for the 2000 mg/day dose group but not the 800 mg/day dose group. Significant within-group effects were confirmed using ANOVA-2 analyses after baseline subtraction. ANOVA-2 analyses also showed no significant difference between dose groups with regard to the increase of T or the decrease of DHT. They did show a significant dose-dependent decrease in serum ES levels. Conclusion: Both dose groups showed significant (p = 0.05) increases in T and decreases in DHT within three days of treatment with Alphastat® (Mytosterone™). Between-group statistical analysis showed no significant (p = 0.05) difference, indicating the effect was not dose dependent and that 800 mg/day is as effective as 2000 mg/day for increasing T and lowering DHT. Blood levels of ES, however, decreased significantly (p = 0.05) in the 2000 mg/day dose group but not in the 800 mg/day dose group, indicating a dose-dependent decrease in ES levels.
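    The within-group analysis can be illustrated with a repeated measures ANOVA on one dose group. The sketch below uses statsmodels with hypothetical file and column names; the original study may have used different software and handled baseline subtraction separately for the between-group comparison.

```python
# Sketch of a one-way repeated measures ANOVA across sampling days within a dose group.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per subject per sampling day within one dose group.
hormones = pd.read_csv("dose_group_800.csv")   # assumed columns: subject, day, testosterone

# Within-subject factor 'day' (0, 3, 7, 14) tests change from baseline over time.
result = AnovaRM(data=hormones, depvar="testosterone",
                 subject="subject", within=["day"]).fit()
print(result)
```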

    A systematic comparison of linear regression-based statistical methods to assess exposome-health associations

    BACKGROUND: The exposome constitutes a promising framework to better understand the effect of environmental exposures on health by explicitly considering multiple testing and avoiding selective reporting. However, exposome studies are challenged by the simultaneous consideration of many correlated exposures. OBJECTIVES: We compared the performances of linear regression-based statistical methods in assessing exposome-health associations. METHODS: In a simulation study, we generated 237 exposure covariates with a realistic correlation structure, and a health outcome linearly related to 0 to 25 of these covariates. Statistical methods were compared primarily in terms of false discovery proportion (FDP) and sensitivity. RESULTS: On average over all simulation settings, the elastic net and sparse partial least-squares regression showed a sensitivity of 76% and an FDP of 44%; Graphical Unit Evolutionary Stochastic Search (GUESS) and the deletion/substitution/addition (DSA) algorithm showed a sensitivity of 80% and an FDP of 33%. The environment-wide association study (EWAS) underperformed these methods in terms of FDP (average FDP, 86%), despite a higher sensitivity. Performance decreased considerably when assuming an exposome exposure matrix with high levels of correlation between covariates. CONCLUSIONS: Correlation between exposures is a challenge for exposome research, and the statistical methods investigated in this study are limited in their ability to efficiently differentiate true predictors from correlated covariates in a realistic exposome context. Although GUESS and DSA provided a marginally better balance between sensitivity and FDP, they did not outperform the other multivariate methods across all scenarios and properties examined, and computational complexity and flexibility should also be considered when choosing between these methods.
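    One of the penalized methods compared above, the elastic net, can be illustrated on simulated correlated exposures, with sensitivity and FDP computed against the known truth. The simulation below is deliberately simpler (an exchangeable correlation structure) than the realistic exposome correlation structure used in the study, and is an illustration rather than a reproduction of the authors' simulations.

```python
# Sketch: elastic net variable selection on correlated simulated exposures,
# scored by sensitivity and false discovery proportion (FDP).
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(1)
n, p, n_true = 500, 237, 10

# Correlated exposure matrix (simple exchangeable correlation, rho = 0.5).
rho = 0.5
cov = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)

# Health outcome linearly related to a subset of the exposures.
true_idx = rng.choice(p, size=n_true, replace=False)
beta = np.zeros(p)
beta[true_idx] = 0.5
y = X @ beta + rng.normal(size=n)

enet = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(enet.coef_ != 0)

tp = len(set(true_idx) & set(selected))
sensitivity = tp / n_true
fdp = (len(selected) - tp) / max(len(selected), 1)
print(f"selected {len(selected)} exposures, sensitivity = {sensitivity:.2f}, FDP = {fdp:.2f}")
```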