
    The systematic guideline review: method, rationale, and test on chronic heart failure

    Background: Evidence-based guidelines have the potential to improve healthcare. However, their de novo development requires substantial resources, especially for complex conditions, and adaptation may be biased by contextually influenced recommendations in source guidelines. In this paper we describe a new approach to guideline development, the systematic guideline review (SGR) method, and its application in the development of an evidence-based guideline for family physicians on chronic heart failure (CHF). Methods: A systematic search for guidelines was carried out. Evidence-based guidelines on CHF management in adults in ambulatory care, published in English or German between 2000 and 2004, were included. Guidelines on acute or right heart failure were excluded. Eligibility was assessed by two reviewers, the methodological quality of selected guidelines was appraised using the AGREE instrument, and a framework of relevant clinical questions for diagnostics and treatment was derived. Data were extracted into evidence tables, systematically compared by means of a consistency analysis, and synthesized in a preliminary draft. The most relevant primary sources were re-assessed to verify the cited evidence. Evidence and recommendations were summarized in a draft guideline. Results: Of the 16 included guidelines, five were of good quality. A total of 35 recommendations were systematically compared: 25/35 were consistent, 9/35 inconsistent, and 1/35 un-rateable (derived from a single guideline). Of the 25 consistent recommendations, 14 were based on consensus, seven on evidence, and four differed in grading. Major inconsistencies were found in 3/9 of the inconsistent recommendations. We re-evaluated the evidence for 17 recommendations (evidence-based, with differing evidence levels or minor inconsistencies); most were congruent. Incongruity was found where the stated evidence could not be verified in the cited primary sources, or where the evaluation in the source guidelines focused on treatment benefits and underestimated the risks. The draft guideline was completed in 8.5 man-months. The main limitation of this study was the lack of a second reviewer. Conclusion: The systematic guideline review, including framework development, consistency analysis, and validation, is an effective, valid, and resource-saving approach to the development of evidence-based guidelines.
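
    The consistency analysis described above reduces to classifying each clinical question by cross-guideline agreement. A minimal Python sketch of that step follows; the data structure and the example recommendations are illustrative assumptions, not taken from the SGR publication.

    from collections import Counter

    def classify(recommendations):
        """Classify each clinical question by cross-guideline agreement.

        `recommendations` maps a clinical question to a list of
        (guideline_id, direction) pairs, where direction is e.g.
        'for' or 'against' the intervention.
        """
        results = {}
        for question, entries in recommendations.items():
            if len(entries) < 2:
                results[question] = "un-rateable"  # derived from a single guideline
                continue
            directions = Counter(d for _, d in entries)
            results[question] = "consistent" if len(directions) == 1 else "inconsistent"
        return results

    example = {
        "ACE inhibitor in systolic CHF": [("G1", "for"), ("G2", "for"), ("G3", "for")],
        "Routine anticoagulation": [("G1", "for"), ("G2", "against")],
        "Exercise training": [("G4", "for")],
    }
    print(classify(example))
    # {'ACE inhibitor in systolic CHF': 'consistent',
    #  'Routine anticoagulation': 'inconsistent',
    #  'Exercise training': 'un-rateable'}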

    Ancestral Components of Admixed Genomes in a Mexican Cohort

    For most of the world, human genome structure at a population level is shaped by interplay between ancient geographic isolation and more recent demographic shifts, factors that are captured by the concepts of biogeographic ancestry and admixture, respectively. The ancestry of non-admixed individuals can often be traced to a specific population in a precise region, but current approaches for studying admixed individuals generally yield coarse information in which genome ancestry proportions are identified according to continent of origin. Here we introduce a new analytic strategy for this problem that allows fine-grained characterization of admixed individuals with respect to both geographic and genomic coordinates. Ancestry segments from different continents, identified with a probabilistic model, are used to construct and study “virtual genomes” of admixed individuals. We apply this approach to a cohort of 492 parent–offspring trios from Mexico City. The relative contributions from the three continental-level ancestral populations—Africa, Europe, and America—vary substantially between individuals, and the distribution of haplotype block lengths suggests an admixture time of 10–15 generations. The European and Indigenous American virtual genomes of each Mexican individual can be traced to precise regions within each continent, and they reveal a gradient of Amerindian ancestry between indigenous people of southwestern Mexico and Mayans of the Yucatan Peninsula. This contrasts sharply with the African roots of African Americans, which have been characterized by a uniform mixing of multiple West African populations. We also use the virtual European and Indigenous American genomes to search for signatures of selection in the ancestral populations, and we identify previously known targets of selection in other populations, as well as new candidate loci. The ability to infer precise ancestral components of admixed genomes will facilitate studies of disease-related phenotypes and will allow new insight into the adaptive and demographic history of indigenous people.
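
    The dating argument above follows from a standard result: under a single admixture pulse T generations ago, tract lengths (in Morgans) of an ancestry contributing fraction m are approximately exponential with rate T(1 − m), so T ≈ 1 / (mean length × (1 − m)). A back-of-the-envelope Python sketch, with invented tract lengths:

    def generations_since_admixture(tract_lengths_morgans, ancestry_fraction):
        """Estimate generations since a single admixture pulse."""
        mean_len = sum(tract_lengths_morgans) / len(tract_lengths_morgans)
        return 1.0 / (mean_len * (1.0 - ancestry_fraction))

    # e.g. European tracts averaging ~0.15 Morgans at 45% genome-wide ancestry
    tracts = [0.12, 0.18, 0.09, 0.21, 0.15]
    print(round(generations_since_admixture(tracts, 0.45), 1))  # ~12.1 generations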

    Correction of Population Stratification in Large Multi-Ethnic Association Studies

    The vast majority of genetic risk factors for complex diseases have, taken individually, a small effect on the end phenotype. Population-based association studies therefore need very large sample sizes to detect significant differences between affected and non-affected individuals. Including thousands of affected individuals in a study requires recruitment in numerous centers, possibly from different geographic regions. Unfortunately, such a recruitment strategy is likely to complicate the study design and to generate concerns regarding population stratification. We analyzed 9,751 individuals representing three main ethnic groups - Europeans, Arabs and South Asians - that had been enrolled from 154 centers in 52 countries for a global case/control study of acute myocardial infarction. All individuals were genotyped at 103 candidate genes using 1,536 SNPs selected with a tagging strategy that captures most of the genetic diversity in different populations. We show that relying solely on self-reported ethnicity is not sufficient to exclude population stratification, and we present additional methods to identify and correct for stratification. Our results highlight the importance of carefully addressing population stratification and of “cleaning” the sample prior to analysis, to obtain stronger signals of association and to avoid spurious results.
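
    One standard way to identify and correct stratification, in the spirit of what the abstract describes, is to compute principal components of the genotype matrix and include them as regression covariates (as in EIGENSTRAT). A minimal sketch on synthetic genotypes; the study's actual pipeline may differ:

    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_snps = 200, 500
    G = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)  # 0/1/2 allele counts

    # Standardize each SNP, then take the top principal components of the samples.
    G_std = (G - G.mean(axis=0)) / (G.std(axis=0) + 1e-9)
    U, S, _ = np.linalg.svd(G_std, full_matrices=False)
    pcs = U[:, :10] * S[:10]  # axes of genetic variation, one row per sample

    # Including `pcs` alongside each tested SNP in the case/control regression
    # absorbs ancestry differences that would otherwise inflate the test statistic.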

    Elite Suppressor–Derived HIV-1 Envelope Glycoproteins Exhibit Reduced Entry Efficiency and Kinetics

    Elite suppressors (ES) are a rare subset of HIV-1–infected individuals who are able to maintain HIV-1 viral loads below the limit of detection of ultra-sensitive clinical assays in the absence of antiretroviral therapy. The mechanisms responsible for this elite control are poorly understood but likely involve both host and viral factors. This study assesses the fitness of ES plasma-derived envelope glycoproteins (envs), as a function of entry efficiency, as a possible contributor to viral suppression. Entry fitness was first evaluated using a novel inducible cell line with controlled surface expression levels of CD4 (receptor) and CCR5 (co-receptor). In the context of physiologic CCR5 and CD4 surface densities, ES envs exhibited significantly decreased entry efficiency relative to envs from chronically infected viremic progressors. ES envs also demonstrated slow entry kinetics, indicating the presence of virus with reduced entry fitness. Overall, ES env clones were less efficient at mediating entry than chronic progressor envs. Interestingly, acute infection envs exhibited an intermediate phenotype, not distinctly different from either ES or chronic progressor envs. These results imply that lower env fitness may be established early and may directly contribute to viral suppression in ES individuals.
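
    Entry kinetics of the kind compared here are often summarized by fitting a one-phase model f(t) = 1 − exp(−kt) to a time course and reporting the rate or half-time. A sketch in Python with invented data; the study's actual assay and model may differ:

    import numpy as np
    from scipy.optimize import curve_fit

    def one_phase(t, k):
        return 1.0 - np.exp(-k * t)

    t = np.array([0, 15, 30, 60, 90, 120])  # minutes after virus addition
    frac_entered = np.array([0.0, 0.22, 0.40, 0.63, 0.77, 0.85])

    (k,), _ = curve_fit(one_phase, t, frac_entered, p0=[0.01])
    print(f"entry rate k = {k:.4f}/min, half-time = {np.log(2) / k:.1f} min")
    # A slower-entering env (e.g. from an ES) would show a smaller k / longer half-time.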

    Protection in Macaques Immunized with HIV-1 Candidate Vaccines Can Be Predicted Using the Kinetics of Their Neutralizing Antibodies

    A vaccine is needed to control the spread of human immunodeficiency virus type 1 (HIV-1). An in vitro assay that can predict the protection induced by a vaccine would facilitate the development of such a vaccine; a potential candidate would be an assay that quantifies neutralization of HIV-1. We used sera from rhesus macaques that had been immunized with HIV candidate vaccines and subsequently challenged with simian human immunodeficiency virus (SHIV). We compared neutralization assays with different formats. In experiments with the standardized and validated TZMbl assay, neutralizing antibody titers against homologous SHIV(SF162P4) pseudovirus gave a variable correlation with reductions in plasma viremia levels. The target cells used in the assays are not just passive indicators of virus infection but are actively involved in the neutralization process. When replicating virus was used in GHOST cell assays, events during the absorption phase, as well as the incubation phase, determined the level of neutralization. Sera that are associated with protection have properties closest to the traditional concept of neutralization: the concentration of antibody present during the absorption phase has no effect on the inactivation rate. In GHOST assays, events during the absorption phase may inactivate a fixed number, rather than a proportion, of virions, so that although complete neutralization can be obtained, it is found only at low doses, particularly with isolates that are relatively resistant to neutralization. Two scenarios have the potential to predict protection by neutralizing antibodies at concentrations that can be induced by vaccination: antibodies with properties close to the traditional concept of neutralization may protect against a range of challenge doses of neutralization-sensitive HIV isolates; and a window of opportunity also exists for protection against isolates that are more resistant to neutralization, but only at low challenge doses.
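
    The contrast drawn above, between a fixed fraction and a fixed number of virions being inactivated, can be made concrete with two toy models: under proportional inactivation, percent neutralization is independent of challenge dose; under fixed-number inactivation, complete neutralization appears only at low doses. Parameter values below are purely illustrative:

    def pct_neutralized_proportional(dose, surviving_fraction=0.1):
        return 100.0 * (1.0 - surviving_fraction)  # same at every dose

    def pct_neutralized_fixed_number(dose, capacity=500.0):
        return 100.0 * min(1.0, capacity / dose)  # saturates as dose grows

    for dose in [100, 1_000, 10_000]:
        print(dose,
              pct_neutralized_proportional(dose),
              round(pct_neutralized_fixed_number(dose), 1))
    # dose 100:    90.0 vs 100.0  (complete neutralization only at low dose)
    # dose 10000:  90.0 vs   5.0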

    International Network for Comparison of HIV Neutralization Assays: The NeutNet Report

    BACKGROUND: Neutralizing antibody assessments play a central role in human immunodeficiency virus type 1 (HIV-1) vaccine development, but it is unclear which assay, or combination of assays, will provide reliable measures of correlates of protection. To address this, an international collaboration (NeutNet) involving 18 independent participants was organized to compare different assays. METHODS: Each laboratory evaluated four neutralizing reagents (TriMab, 447-52D, 4E10, sCD4) over a given range of concentrations against a panel of 11 viruses representing a wide range of genetic subtypes and phenotypes. A total of 16 different assays were compared. The assays utilized either uncloned virus produced in peripheral blood mononuclear cells (PBMCs) (virus infectivity, or VI, assays) or Env-pseudotyped (gp160) derivatives produced in 293T cells from molecular clones or uncloned virus (PSV assays). Target cells included PBMCs and genetically engineered cell lines in either a single- or multiple-cycle infection format. Infection was quantified using a range of assay read-outs that included extracellular or intracellular p24 antigen detection, RNA quantification, and luciferase and beta-galactosidase reporter gene expression. FINDINGS: PSV assays were generally more sensitive than VI assays, but there were important differences according to the virus and inhibitor used. For example, for TriMab, the mean IC50 was always lower in PSV than in VI assays. However, with 4E10 or sCD4, some viruses were neutralized with a lower IC50 in VI assays than in PSV assays. Inter-laboratory concordance was slightly better for PSV than for VI assays with some viruses, but for other viruses agreement between laboratories was limited and depended on both the virus and the neutralizing reagent. CONCLUSIONS: The NeutNet project demonstrated clear differences in assay sensitivity that were dependent on both the neutralizing reagent and the virus. No single assay was capable of detecting the entire spectrum of neutralizing activities. Since it is not known which in vitro assay correlates with in vivo protection, a range of neutralization assays is recommended for vaccine evaluation.
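
    Whatever the read-out, most of these assays reduce to a dose-response curve from which an IC50 is estimated, commonly with a four-parameter logistic fit. A minimal Python sketch with invented data points:

    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(conc, bottom, top, ic50, hill):
        return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

    conc = np.array([0.01, 0.05, 0.25, 1.25, 6.25, 31.25])  # antibody, µg/mL
    infectivity = np.array([98, 92, 70, 38, 12, 4])  # % of no-antibody control

    params, _ = curve_fit(four_pl, conc, infectivity, p0=[0, 100, 1.0, 1.0])
    print(f"IC50 ≈ {params[2]:.2f} µg/mL")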

    Genetic Ancestry, Social Classification, and Racial Inequalities in Blood Pressure in Southeastern Puerto Rico

    The role of race in human genetics and biomedical research is among the most contested issues in science. Much debate centers on the relative importance of genetic versus sociocultural factors in explaining racial inequalities in health. However, few studies integrate genetic and sociocultural data to test competing explanations directly. We draw on ethnographic, epidemiologic, and genetic data collected in Southeastern Puerto Rico to isolate two distinct variables for which race is often used as a proxy: genetic ancestry versus social classification. We show that color, an aspect of social classification based on the culturally defined meaning of race in Puerto Rico, better predicts blood pressure than does a genetically based estimate of continental ancestry. We also find that incorporating sociocultural variables reveals a new and significant association between a candidate gene polymorphism for hypertension (the α2C adrenergic receptor deletion) and blood pressure. This study addresses the recognized need to measure both genetic and sociocultural factors in research on racial inequalities in health. Our preliminary results provide the most direct evidence to date that previously reported associations between genetic ancestry and health may be attributable to sociocultural factors related to race and racism, rather than to functional genetic differences between racially defined groups. Our results also imply that including sociocultural variables in future research may improve our ability to detect significant allele-phenotype associations. Thus, measuring sociocultural factors related to race may both empower future genetic association studies and help to clarify the biological consequences of social inequalities.
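
    The core model comparison, social classification versus genetic ancestry as predictors of blood pressure, can be sketched as two adjusted regressions. Variable names and the synthetic data below are assumptions for illustration; the study's actual covariates and models may differ:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 300
    ancestry = rng.uniform(0, 1, n)  # estimated continental ancestry fraction
    color = (ancestry + rng.normal(0, 0.3, n) > 0.5).astype(float)  # social label
    age = rng.uniform(25, 65, n)
    sbp = 110 + 0.3 * age + 6.0 * color + rng.normal(0, 10, n)  # effect acts via color

    for name, predictor in [("color", color), ("ancestry", ancestry)]:
        X = sm.add_constant(np.column_stack([predictor, age]))
        fit = sm.OLS(sbp, X).fit()
        print(f"{name}: coef = {fit.params[1]:.2f}, p = {fit.pvalues[1]:.3g}")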

    The use of race, ethnicity and ancestry in human genetic research

    Post-Human Genome Project progress has enabled a new wave of population genetic research and intensified controversy over the use of race/ethnicity in this work. At the same time, the development of methods for inferring genetic ancestry offers more empirical means of assigning group labels. Here, we provide a systematic analysis of the use of race/ethnicity and ancestry in current genetic research. We base our analysis on key published recommendations for the use and reporting of race/ethnicity, which advise that researchers explain why the terms/categories were used and how they were measured, define them carefully, and apply them consistently. We studied 170 population genetic research articles from high-impact journals, published 2008–2009. A comparative perspective was obtained by aligning study metrics with similar research from articles published 2001–2004. Our analysis indicates a marked improvement over time in compliance with some of the recommendations/guidelines for the use of race/ethnicity, while showing that important shortfalls remain: no article using ‘race’, ‘ethnicity’ or ‘ancestry’ defined or discussed the meaning of these concepts in context; a third of articles still do not provide a rationale for their use, with those using ‘ancestry’ being the least likely to do so. Further, no article discussed the potential socio-ethical implications of the reported research. There thus remains a clear imperative to highlight, for the genetics/genomics community globally, the importance of consistent and comprehensive reporting on human populations, to generate explicit guidelines for the uses of ancestry and genetic ancestry, and, importantly, to ensure that such guidelines are followed.
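
    An audit like this amounts to coding each article against the recommendations and tallying compliance by the term used. A minimal sketch; the column names and toy records are illustrative assumptions, not the study's data:

    import pandas as pd

    articles = pd.DataFrame([
        {"term": "race", "rationale_given": False, "term_defined": False},
        {"term": "ethnicity", "rationale_given": True, "term_defined": False},
        {"term": "ancestry", "rationale_given": False, "term_defined": False},
        {"term": "ancestry", "rationale_given": True, "term_defined": False},
    ])

    # Share of articles meeting each recommendation, broken down by term used.
    print(articles.groupby("term")[["rationale_given", "term_defined"]].mean())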