    A novel approach to simulate gene-environment interactions in complex diseases

    Background: Complex diseases are multifactorial traits caused by both genetic and environmental factors. They represent the major part of human diseases and include those with the largest prevalence and mortality (cancer, heart disease, obesity, etc.). Despite the large amount of information that has been collected about both genetic and environmental risk factors, there are few examples of studies on their interactions in the epidemiological literature. One reason may be incomplete knowledge of the power of the statistical methods designed to search for risk factors and their interactions in these data sets. An improvement in this direction would lead to a better understanding and description of gene-environment interactions. To this aim, a possible strategy is to challenge the different statistical methods against data sets where the underlying phenomenon is completely known and fully controllable, for example simulated ones.

    Results: We present a mathematical approach that models gene-environment interactions. With this method it is possible to generate simulated populations having gene-environment interactions of any form, involving any number of genetic and environmental factors and also allowing non-linear interactions such as epistasis. In particular, we implemented a simple version of this model in the Gene-Environment iNteraction Simulator (GENS), a tool designed to simulate case-control data sets where a one gene-one environment interaction influences the disease risk. The main aim has been to allow the input of population characteristics by using standard epidemiological measures and to implement constraints that make the simulator's behaviour biologically meaningful.

    Conclusions: With the multi-logistic model implemented in GENS it is possible to simulate case-control samples of complex diseases where gene-environment interactions influence the disease risk. The user has full control of the main characteristics of the simulated population, and a Monte Carlo process allows for random variability. A knowledge-based approach reduces the complexity of the mathematical model by using reasonable biological constraints and makes the simulation more understandable in biological terms. Simulated data sets can be used for the assessment of novel statistical methods or for the evaluation of statistical power when designing a study.
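    A multi-logistic model of this kind lends itself to a compact implementation. The following is a minimal Python sketch of such a simulator, not GENS itself: the function name and all parameter values (baseline log-odds, effect sizes, allele frequency, exposure prevalence) are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate_case_control(n, maf=0.3, p_exposed=0.25,
                                  b0=-3.0, b_g=0.4, b_e=0.7, b_ge=1.2):
            # Genotype coded as 0/1/2 copies of the risk allele (Hardy-Weinberg).
            g = rng.binomial(2, maf, size=n)
            # Binary environmental exposure.
            e = rng.binomial(1, p_exposed, size=n)
            # Logistic disease model with a gene-environment interaction term.
            logit = b0 + b_g * g + b_e * e + b_ge * g * e
            # The Monte Carlo draw of disease status provides random variability.
            status = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
            return g, e, status

        g, e, status = simulate_case_control(100_000)
        # The exposure odds ratio differs across genotype strata, which is the
        # signature of the simulated interaction.
        for geno in (0, 1, 2):
            m = g == geno
            a = np.sum((status == 1) & (e == 1) & m)  # exposed cases
            b = np.sum((status == 0) & (e == 1) & m)  # exposed controls
            c = np.sum((status == 1) & (e == 0) & m)  # unexposed cases
            d = np.sum((status == 0) & (e == 0) & m)  # unexposed controls
            print(f"genotype {geno}: exposure OR = {a * d / (b * c):.2f}")

    Setting b_ge to 0 makes the stratum-specific odds ratios coincide, giving a convenient no-interaction control condition.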

    Illness perceptions and explanatory models of viral hepatitis B & C among immigrants and refugees: a narrative systematic review.

    BACKGROUND: Hepatitis B and C (HBV, HCV) infections are associated with high morbidity and mortality. Many countries with traditionally low prevalence (such as the UK) are now planning interventions (screening, vaccination, and treatment) for high-risk immigrants from countries with high prevalence. This review aimed to synthesise the evidence on immigrants' knowledge of HBV and HCV that might influence the uptake of clinical interventions. The review was also used to inform the design and successful delivery of a randomised controlled trial of targeted screening and treatment.

    METHODS: Five databases (PubMed, CINAHL, SOCIOFILE, PsycINFO & Web of Science) were systematically searched, supplemented by reference tracking and by searches of selected journals and relevant websites. We aimed to identify qualitative and quantitative studies that investigated knowledge of HBV and HCV among immigrants from high-endemic areas to low-endemic areas. Evidence, extracted according to a conceptual framework based on Kleinman's explanatory model, was subjected to narrative synthesis. We adapted the PEN-3 model to categorise and analyse themes and to recommend strategies for interventions to influence help-seeking behaviour.

    RESULTS: We identified 51 publications including quantitative (n = 39), qualitative (n = 11), and mixed-methods (n = 1) designs. Most of the quantitative studies included small samples and had heterogeneous methods and outcomes. The studies mainly concentrated on hepatitis B and on ethnic groups of South East Asian immigrants residing in the USA, Canada, and Australia. Many immigrants lacked adequate knowledge of the aetiology, symptoms, transmission risk factors, prevention strategies, and treatment of HBV and HCV. Ethnicity, gender, better education, higher income, and English proficiency influenced variations in levels and forms of knowledge.

    CONCLUSION: Immigrants are vulnerable to HBV and HCV and risk life-threatening complications from these infections because of poor knowledge and help-seeking behaviour. Primary studies in this area are extremely diverse and of variable quality, precluding meta-analysis. Further research is needed outside North America and Australia.

    The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases

    Genetic epidemiologists have taken up the challenge of identifying genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors and disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis; neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN); and several non-parametric methods, which include the set association approach, the combinatorial partitioning method (CPM), the restricted partitioning method (RPM), the multifactor dimensionality reduction (MDR) method, and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods for association studies with large numbers of predictor variables. GPNN, on the other hand, may be a useful approach to select and model important predictors, but its ability to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and the random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset with an important contribution to disease. The combinatorial methods give more insight into combination patterns of sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses, we conclude that for genetic association studies using the case-control design, applying a combination of several methods, including the set association approach, MDR, and the random forests approach, is likely to be a useful strategy for finding the important genes and interaction patterns involved in complex diseases.
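    As a concrete illustration of the dimensionality argument, the sketch below contrasts sparse logistic regression with a random forest on synthetic SNP data. It uses scikit-learn rather than the tools named in the commentary, and the sample size, effect sizes, and tuning constants are invented for illustration.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Synthetic case-control data: 500 subjects, 1,000 SNPs (0/1/2 coding);
        # only the first two SNPs affect disease risk.
        n, p = 500, 1000
        X = rng.binomial(2, 0.3, size=(n, p))
        logit = -1.0 + 0.8 * X[:, 0] + 0.8 * X[:, 1]
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        # Plain logistic regression is poorly determined when predictors
        # outnumber observations; an L1 penalty forces a sparse subset.
        lr = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
        print("SNPs kept by sparse logistic regression:", np.flatnonzero(lr.coef_))

        # A random forest handles all predictors directly and ranks them by
        # importance, screening the panel down to a candidate subset.
        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
        print("top SNPs by forest importance:",
              np.argsort(rf.feature_importances_)[::-1][:10])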

    Rates and risks for prolonged grief disorder in a sample of orphaned and widowed genocide survivors

    Background: The concept of Prolonged Grief Disorder (PGD) has been defined in recent years by Prigerson and co-workers, who have developed and empirically tested consensus and diagnostic criteria for PGD. Using these most recent criteria, the aim of this study was to determine rates of and risks for PGD in survivors of the 1994 Rwandan genocide who had lost a parent and/or their husband before, during, or after the 1994 events.

    Methods: The PG-13 was administered to 206 orphans or half-orphans and to 194 widows. A regression analysis was carried out to examine risk factors for PGD.

    Results: 8.0% (n = 32) of the sample met criteria for PGD an average of 12 years post-loss. All but one person had faced multiple losses, and the majority (70%) indicated that their grief-related loss was due to violent death. Grief was predicted mainly by time since the loss, the violent nature of the loss, the severity of symptoms of posttraumatic stress disorder (PTSD), and the importance given to religious/spiritual beliefs. By contrast, gender, age at the time of bereavement, bereavement status (widow versus orphan), the number of different types of losses reported, and participation in the funeral ceremony did not affect the severity of prolonged grief reactions.

    Conclusions: A significant portion of the interviewed sample continues to experience grief over interpersonal losses, and unresolved grief may endure over time if not addressed by clinical intervention. Severity of grief reactions may be associated with a set of distinct risk factors. Subjects who lose someone through violent death seem to be at special risk, as they have to deal both with the loss experience as such and with the traumatic aspects of the loss. Symptoms of PTSD may hinder completion of the mourning process. Religious beliefs may facilitate the mourning process and help to find meaning in the loss. These aspects need to be considered in the treatment of PGD.
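    For readers unfamiliar with this design, the following is a hedged sketch of the kind of regression reported above, fitted to fabricated data: the variable names mirror the predictors named in the abstract, but every value and coefficient is invented for illustration.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 400  # 206 orphans/half-orphans + 194 widows, as in the study

        # Fabricated stand-in data; only the variable names follow the abstract.
        df = pd.DataFrame({
            "years_since_loss": rng.uniform(10, 14, n),
            "violent_loss": rng.binomial(1, 0.7, n),
            "ptsd_severity": rng.normal(0.0, 1.0, n),
            "religiosity": rng.normal(0.0, 1.0, n),
            "widow": rng.binomial(1, 0.5, n),
        })
        # Grief severity (a PG-13-style score) generated from the predictors
        # the study found relevant, plus noise.
        df["grief"] = (20.0 - 0.5 * df.years_since_loss + 3.0 * df.violent_loss
                       + 2.0 * df.ptsd_severity + 1.5 * df.religiosity
                       + rng.normal(0.0, 2.0, n))

        model = smf.ols("grief ~ years_since_loss + violent_loss + ptsd_severity"
                        " + religiosity + widow", data=df).fit()
        print(model.summary().tables[1])  # widow shows no effect, as reported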

    Proteomic Interrogation of Androgen Action in Prostate Cancer Cells Reveals Roles of Aminoacyl tRNA Synthetases

    Prostate cancer remains the most common malignancy among men in the United States, and there is no remedy currently available for advanced-stage hormone-refractory cancer. This is partly due to an incomplete understanding of androgen-regulated proteins and their functions. Whole-cell proteomes of androgen-starved and androgen-treated LNCaP cells were analyzed on semi-quantitative MudPIT ESI ion-trap MS/MS and quantitative iTRAQ MALDI-TOF MS/MS platforms, identifying more than 1300 high-confidence proteins. Enrichment-based pathway mapping of the androgen-regulated proteomic data sets revealed a significant dysregulation of aminoacyl-tRNA synthetases, indicating an increase in protein biosynthesis, a hallmark of prostate cancer progression. This observation is supported by immunoblot and transcript data from LNCaP cells and prostate cancer tissue. Thus, data derived from multiple proteomics platforms and transcript data, coupled with informatics analysis, provide a deeper insight into the functional consequences of androgen action in prostate cancer.
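    Enrichment-based pathway mapping of this kind typically reduces to a one-sided hypergeometric (Fisher) test per pathway. The sketch below shows that calculation: the background of roughly 1300 proteins comes from the abstract, while the pathway size, regulated-set size, and overlap are hypothetical numbers.

        from scipy.stats import hypergeom

        def pathway_enrichment_p(n_background, n_pathway, n_regulated, n_overlap):
            # Probability of observing at least n_overlap pathway members among
            # the regulated proteins if pathway membership were random.
            return hypergeom.sf(n_overlap - 1, n_background, n_pathway, n_regulated)

        # Hypothetical counts: 1300 identified proteins, a 25-member aminoacyl-tRNA
        # synthetase set, 150 androgen-regulated proteins, 10 synthetases among them.
        p = pathway_enrichment_p(1300, 25, 150, 10)
        print(f"enrichment p-value: {p:.2e}")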

    Neural networks for modeling gene-gene interactions in association studies

    Background: Our aim is to investigate the ability of neural networks to model different two-locus disease models. We conduct a simulation study to compare neural networks with two standard methods, namely logistic regression models and multifactor dimensionality reduction. One hundred data sets are generated for each of six two-locus disease models, which are considered in a low-risk and in a high-risk scenario. Two models represent independence, one is a multiplicative model, and three models are epistatic. For each data set, six neural networks (with up to five hidden neurons) and five logistic regression models (the null model, three main-effect models, and the full model) with two different codings for the genotype information are fitted. Additionally, the multifactor dimensionality reduction approach is applied.

    Results: The results show that neural networks are more successful in modeling the structure of the underlying disease model than logistic regression models in most of the investigated situations. In our simulation study, neither logistic regression nor multifactor dimensionality reduction is able to correctly identify biological interaction.

    Conclusions: Neural networks are a promising tool for handling complex data situations. However, further research is necessary concerning the interpretation of their parameters.
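    The sketch below reproduces the flavour of this comparison using scikit-learn rather than the authors' implementations: one illustrative epistatic (XOR-like) penetrance model, a main-effects logistic regression, and a network with five hidden neurons. The penetrance values and sample size are assumptions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)

        # Epistatic two-locus model: risk is elevated only when exactly one of
        # the two loci carries a risk allele (an XOR-like penetrance pattern).
        n = 2000
        g1 = rng.binomial(2, 0.5, n)
        g2 = rng.binomial(2, 0.5, n)
        risk = ((g1 > 0) ^ (g2 > 0)).astype(float)
        y = rng.binomial(1, 0.1 + 0.3 * risk)
        X = np.column_stack([g1, g2])

        # A main-effects logistic model cannot represent the XOR structure,
        # while a small one-hidden-layer network can.
        lr = LogisticRegression().fit(X, y)
        nn = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                           random_state=0).fit(X, y)
        print(f"logistic regression AUC: {roc_auc_score(y, lr.predict_proba(X)[:, 1]):.3f}")
        print(f"neural network AUC:      {roc_auc_score(y, nn.predict_proba(X)[:, 1]):.3f}")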

    How Large Is the Metabolome? A Critical Analysis of Data Exchange Practices in Chemistry

    Calculating the metabolome size of a species by genome-guided reconstruction of metabolic pathways misses all products of orphan genes and of enzymes lacking annotated genes. Hence, metabolomes need to be determined experimentally. Annotation by mass spectrometry would benefit greatly if peer-reviewed public databases could be queried to compile target lists of structures that have already been reported for a given species. We detail current obstacles to compiling such a knowledge base of metabolites.

    As an example, results are presented for rice. Two rice (Oryza sativa) subspecies have been fully sequenced, O. sativa ssp. japonica and O. sativa ssp. indica. Several major small-molecule databases were compared for listings of known rice metabolites, comprising PubChem, Chemical Abstracts, Beilstein, patent databases, the Dictionary of Natural Products, SetupX/BinBase, KNApSAcK DB, and finally those databases that were obtained by computational approaches, i.e. RiceCyc, KEGG, and Reactome. More than 5,000 small molecules were retrieved when searching these databases. Unfortunately, genuine rice metabolites were most often retrieved together with non-metabolite database entries such as pesticides. Overlaps between database compound lists were very difficult to compare, either because structures were not encoded in machine-readable format or because compound identifiers were not cross-referenced between databases.

    We conclude that present databases are not capable of comprehensively retrieving all known metabolites. Metabolome lists are as yet mostly restricted to genome-reconstructed pathways. We suggest that providers of (bio)chemical databases link their database identifiers to PubChem IDs and InChIKeys to enable cross-database queries. In addition, peer-reviewed journal repositories need to mandate submission of structures and spectra in machine-readable format to allow automated semantic annotation of articles containing chemical structures. Such changes in publication standards and database architectures will enable researchers to compile current knowledge about the metabolome of a species, which may extend to derived information such as spectral libraries, organ-specific metabolites, and cross-study comparisons.
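    The suggested cross-referencing makes database comparison a simple key intersection. The toy sketch below joins two hypothetical compound lists on InChIKeys; the database contents are invented, although the keys shown are genuine InChIKeys for the named compounds.

        # Two hypothetical database extracts keyed by InChIKey.
        db_a = {
            "CZMRCDWAGMRECN-UGDNZRGBSA-N": "sucrose",
            "WQZGKKKJIJFFOK-GASJEMHNSA-N": "D-glucose",
            "JVTAAEKCZFNVCJ-UHFFFAOYSA-N": "lactic acid",
        }
        db_b = {  # a second source with its own local names
            "CZMRCDWAGMRECN-UGDNZRGBSA-N": "Sucrose",
            "JVTAAEKCZFNVCJ-UHFFFAOYSA-N": "2-hydroxypropanoic acid",
        }

        # With a shared identifier, "which compounds appear in both databases?"
        # becomes a set intersection instead of error-prone name matching.
        shared = db_a.keys() & db_b.keys()
        for key in sorted(shared):
            print(f"{key}: {db_a[key]} / {db_b[key]}")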

    Whole-body tissue stabilization and selective extractions via tissue-hydrogel hybrids for high-resolution intact circuit mapping and phenotyping

    To facilitate fine-scale phenotyping of whole specimens, we describe here a set of tissue fixation-embedding, detergent-clearing and staining protocols that can be used to transform excised organs and whole organisms into optically transparent samples within 1–2 weeks without compromising their cellular architecture or endogenous fluorescence. PACT (passive CLARITY technique) and PARS (perfusion-assisted agent release in situ) use tissue-hydrogel hybrids to stabilize tissue biomolecules during selective lipid extraction, resulting in enhanced clearing efficiency and sample integrity. Furthermore, the macromolecule permeability of PACT- and PARS-processed tissue hybrids supports the diffusion of immunolabels throughout intact tissue, whereas RIMS (refractive index matching solution) grants high-resolution imaging at depth by further reducing light scattering in cleared and uncleared samples alike. These methods are adaptable to difficult-to-image tissues, such as bone (PACT-deCAL), and to magnified single-cell visualization (ePACT). Together, these protocols and solutions enable phenotyping of subcellular components and tracing cellular connectivity in intact biological networks

    Systematic Review of Potential Health Risks Posed by Pharmaceutical, Occupational and Consumer Exposures to Metallic and Nanoscale Aluminum, Aluminum Oxides, Aluminum Hydroxide and Its Soluble Salts

    Aluminum (Al) is a ubiquitous substance encountered both naturally (as the third most abundant element) and intentionally (used in water, foods, pharmaceuticals, and vaccines); it is also present in ambient and occupational airborne particulates. Existing data underscore the importance of the physical and chemical forms of Al in relation to its uptake, accumulation, and systemic bioavailability. The present review represents a systematic examination of the peer-reviewed literature on the adverse health effects of Al materials published since a previous critical evaluation compiled by Krewski et al. (2007). Challenges encountered in carrying out the present review reflected the experimental use of different physical and chemical Al forms, different routes of administration, and different target organs in relation to the magnitude, frequency, and duration of exposure. Wide variations in diet can result in Al intakes that are often higher than the World Health Organization provisional tolerable weekly intake (PTWI), which is based on studies with Al citrate. Comparing daily dietary Al exposures on the basis of "total Al" assumes that gastrointestinal bioavailability for all dietary Al forms is equivalent to that for Al citrate, an approach that requires validation. Current occupational exposure limits (OELs) for identical Al substances vary as much as 15-fold. The toxicity of different Al forms depends in large measure on their physical behavior and relative solubility in water. The toxicity of soluble Al forms depends upon the delivered dose of Al³⁺ to target tissues. Trivalent Al reacts with water to produce bidentate superoxide coordination spheres ([Al(O₂)(H₂O)₄]²⁺ and [Al(H₂O)₆]³⁺) that, after complexation with O₂•⁻, generate Al superoxides ([Al(O₂•)(H₂O)₅]²⁺). Semireduced AlO₂• radicals deplete mitochondrial Fe and promote generation of H₂O₂, O₂•⁻, and OH•. Thus, it is the Al³⁺-induced formation of oxygen radicals that accounts for the oxidative damage that leads to intrinsic apoptosis. In contrast, the toxicity of the insoluble Al oxides depends primarily on their behavior as particulates. Aluminum has been held responsible for human morbidity and mortality, but there is no consistent and convincing evidence to associate the Al found in food and drinking water, at the doses and chemical forms presently consumed by people living in North America and Western Europe, with increased risk for Alzheimer's disease (AD). Neither is there clear evidence that use of Al-containing underarm antiperspirants or cosmetics increases the risk of AD or breast cancer. Metallic Al, its oxides, and common Al salts have not been shown to be either genotoxic or carcinogenic. Aluminum exposures during neonatal and pediatric parenteral nutrition (PN) can impair bone mineralization and delay neurological development. Adverse reactions to vaccines with Al adjuvants have occurred; however, recent controlled trials found that the immunologic response to certain vaccines with Al adjuvants was no greater, and in some cases less, than that after identical vaccination without Al adjuvants. The scientific literature on the adverse health effects of Al is extensive. Health risk assessments for Al must take into account individual co-factors (e.g., age, renal function, diet, gastric pH). Conclusions from the current review point to the need for refinement of the PTWI, reduction of Al contamination in PN solutions, justification for the routine addition of Al to vaccines, and harmonization of OELs for Al substances.
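    Since the review calls for a refined PTWI, a worked example of the underlying arithmetic may help. The sketch below expresses a daily dietary intake as a fraction of the tolerable weekly intake; the PTWI value used (2 mg Al per kg body weight per week, the JECFA figure) and the intake scenario are assumptions for illustration, not data from the review.

        # Assumed PTWI for aluminum: 2 mg per kg body weight per week (JECFA).
        PTWI_MG_PER_KG_BW = 2.0

        def fraction_of_ptwi(daily_intake_mg, body_weight_kg):
            # Express a daily dietary Al intake as a fraction of the PTWI.
            weekly_intake = 7.0 * daily_intake_mg
            tolerable = PTWI_MG_PER_KG_BW * body_weight_kg
            return weekly_intake / tolerable

        # Hypothetical upper-range diet: a 70 kg adult ingesting 25 mg Al/day.
        print(f"{fraction_of_ptwi(25.0, 70.0):.0%} of the PTWI")  # -> 125%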