
    Assessing the Health of Richibucto Estuary with the Latent Health Factor Index

    The ability to quantitatively assess the health of an ecosystem is often of great interest to those tasked with monitoring and conserving ecosystems. For decades, research in this area has relied upon multimetric indices of various forms. Although indices may be numbers, many are constructed based on procedures that are highly qualitative in nature, thus limiting the quantitative rigour of the practical interpretations made from these indices. The statistical modelling approach to construct the latent health factor index (LHFI) was recently developed to express ecological data, collected to construct conventional multimetric health indices, in a rigorous quantitative model that integrates qualitative features of ecosystem health and preconceived ecological relationships among such features. This hierarchical modelling approach allows (a) statistical inference of health for observed sites and (b) prediction of health for unobserved sites, all accompanied by formal uncertainty statements. Thus far, the LHFI approach has been demonstrated and validated on freshwater ecosystems. The goal of this paper is to adapt this approach to modelling estuarine ecosystem health, particularly that of the previously unassessed system in Richibucto in New Brunswick, Canada. Field data correspond to biotic health metrics that constitute the AZTI marine biotic index (AMBI) and abiotic predictors preconceived to influence biota. We also briefly discuss related LHFI research involving additional metrics that form the infaunal trophic index (ITI). Our paper is the first to construct a scientifically sensible model to rigorously identify the collective explanatory capacity of salinity, distance downstream, channel depth, and silt-clay content (all regarded a priori as qualitatively important abiotic drivers) towards site health in the Richibucto ecosystem. Comment: On 2013-05-01, a revised version of this article was accepted for publication in PLoS One. See journal reference and DOI below.
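
    The abstract describes a hierarchical formulation in which abiotic predictors drive a latent site-health variable that in turn generates the observed biotic metrics. Below is a minimal, hypothetical sketch of that structure in PyMC; the dimensions, priors, placeholder data, and variable names are illustrative assumptions, not the authors' actual LHFI specification.

    import numpy as np
    import pymc as pm

    # Hypothetical example: n sites, p abiotic predictors, m biotic metrics.
    # The predictors stand in for salinity, distance downstream, channel depth,
    # and silt-clay content; the data themselves are random placeholders.
    n, p, m = 30, 4, 5
    rng = np.random.default_rng(1)
    X = rng.normal(size=(n, p))   # standardized abiotic predictors
    Y = rng.normal(size=(n, m))   # standardized biotic health metrics (e.g. AMBI components)

    with pm.Model() as lhfi_sketch:
        # Latent health of each site, driven by the abiotic predictors
        beta = pm.Normal("beta", 0.0, 1.0, shape=p)
        sigma_h = pm.HalfNormal("sigma_h", 1.0)
        health = pm.Normal("health", mu=pm.math.dot(X, beta), sigma=sigma_h, shape=n)

        # Each observed metric loads on the latent health factor
        loading = pm.HalfNormal("loading", 1.0, shape=m)   # positive loadings for identifiability
        intercept = pm.Normal("intercept", 0.0, 1.0, shape=m)
        sigma_y = pm.HalfNormal("sigma_y", 1.0, shape=m)
        pm.Normal("Y_obs", mu=intercept + health[:, None] * loading,
                  sigma=sigma_y, observed=Y)

        # Posterior draws give site-health estimates with formal uncertainty statements
        idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)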

    Estimation of colorectal adenoma recurrence with dependent censoring

    Background: Due to early colonoscopy for some participants, interval-censored observations can be introduced into the data of a colorectal polyp prevention trial. The censoring could be dependent on the risk of recurrence if the reasons for having an early colonoscopy are associated with recurrence. This can complicate estimation of the recurrence rate.
    Methods: We propose to use midpoint imputation to convert interval-censored data problems to right-censored data problems. To adjust for potential dependent censoring, we use information from auxiliary variables to define risk groups and apply weighted Kaplan-Meier estimation to the midpoint-imputed data. The risk groups are defined using two risk scores derived from two working proportional hazards models with the auxiliary variables as covariates: one for the recurrence time and the other for the censoring time. The method is explored by simulation and illustrated with an example from a colorectal polyp prevention trial.
    Results: We first show that midpoint imputation under an assumption of independent censoring produces an unbiased estimate of the recurrence rate at the end of the trial, which is often the main interest of a colorectal polyp prevention trial. Simulations then show that, compared with conventional methods, the weighted Kaplan-Meier method applied to the midpoint-imputed data using information from auxiliary variables improves efficiency under independent censoring and reduces bias under dependent censoring when estimating the recurrence rate at the end of the trial.
    Conclusion: This approach uses midpoint imputation to handle interval-censored observations and then uses information from auxiliary variables to adjust for dependent censoring by incorporating them into weighted Kaplan-Meier estimation. It can handle multiple auxiliary variables by deriving two risk scores from two working PH models. Although the idea might appear simple, the results show that the weighted Kaplan-Meier approach can gain efficiency and reduce bias due to dependent censoring.
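
    To make the two-stage procedure concrete, here is a rough Python sketch (using lifelines) of the pipeline the abstract outlines: midpoint imputation, two working Cox PH models for risk scores, risk groups from those scores, and a group-weighted Kaplan-Meier average. The toy data, the median splits, and the simple size-weighted averaging of group-specific curves are illustrative assumptions, not the authors' exact estimator.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter, KaplanMeierFitter

    # Hypothetical interval-censored recurrence data: (left, right) interval bounds
    # in months, event = 1 if a recurrence was detected, plus two made-up auxiliary
    # covariates standing in for the trial's auxiliary variables.
    df = pd.DataFrame({
        "left":  [6, 12, 0, 24, 6, 12, 0, 24],
        "right": [12, 24, 6, 36, 12, 36, 12, 36],
        "event": [1, 1, 0, 1, 0, 1, 0, 1],
        "age":   [55, 62, 68, 70, 59, 46, 51, 73],
        "polyps_baseline": [2, 5, 3, 7, 4, 1, 6, 2],
    })

    # Step 1: midpoint imputation converts interval censoring to right censoring.
    df["time"] = np.where(df["event"] == 1, (df["left"] + df["right"]) / 2, df["right"])

    aux = ["age", "polyps_baseline"]

    # Step 2: two working proportional-hazards models give risk scores,
    # one for the recurrence time and one for the censoring time.
    cph_rec = CoxPHFitter().fit(df[["time", "event"] + aux], "time", "event")
    df["cens"] = 1 - df["event"]
    cph_cen = CoxPHFitter().fit(df[["time", "cens"] + aux], "time", "cens")
    df["score_rec"] = cph_rec.predict_partial_hazard(df)
    df["score_cen"] = cph_cen.predict_partial_hazard(df)

    # Step 3: cross median splits of the two scores to form risk groups, fit a
    # Kaplan-Meier curve in each group, and average the curves weighted by group size.
    df["group"] = 2 * (df["score_rec"] > df["score_rec"].median()).astype(int) \
                  + (df["score_cen"] > df["score_cen"].median()).astype(int)

    grid = np.linspace(0, df["time"].max(), 50)
    weighted_surv = np.zeros_like(grid)
    for _, g in df.groupby("group"):
        km = KaplanMeierFitter().fit(g["time"], g["event"], timeline=grid)
        weighted_surv += (len(g) / len(df)) * km.survival_function_.to_numpy().ravel()

    print("Estimated recurrence rate at end of follow-up:", round(1 - weighted_surv[-1], 2))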

    Evolution of the mammalian lysozyme gene family

    Background: Lysozyme c (chicken-type lysozyme) has an important role in host defense, and has been extensively studied as a model in molecular biology, enzymology, protein chemistry, and crystallography. Traditionally, lysozyme c has been considered to be part of a small family that includes genes for two other proteins: lactalbumin, which is found only in mammals, and calcium-binding lysozyme, which is found in only a few species of birds and mammals. More recently, additional testes-expressed members of this family have been identified in human and mouse, suggesting that the mammalian lysozyme gene family is larger than previously known.
    Results: Here we characterize the extent and diversity of the lysozyme gene family in the genomes of phylogenetically diverse mammals, and show that this family contains at least eight different genes that likely duplicated prior to the diversification of extant mammals. These duplicated genes have largely been maintained, both in intron-exon structure and in genomic context, throughout mammalian evolution.
    Conclusions: The mammalian lysozyme gene family is much larger than previously appreciated and consists of at least eight distinct genes scattered around the genome. Since the lysozyme c and lactalbumin proteins have acquired very different functions during evolution, it is likely that many of the other members of the lysozyme-like family will also have diverse and unexpected biological properties.

    Interleukin-1 polymorphisms associated with increased risk of gastric cancer

    Helicobacter pylori infection is associated with a variety of clinical outcomes including gastric cancer and duodenal ulcer disease. The reasons for this variation are not clear, but the gastric physiological response is influenced by the severity and anatomical distribution of gastritis induced by H. pylori. Thus, individuals with gastritis predominantly localized to the antrum retain normal (or even high) acid secretion, whereas individuals with extensive corpus gastritis develop hypochlorhydria and gastric atrophy, which are presumptive precursors of gastric cancer. Here we report that interleukin-1 gene cluster polymorphisms suspected of enhancing production of interleukin-1-beta are associated with an increased risk of both hypochlorhydria induced by H. pylori and gastric cancer. Two of these polymorphisms are in near-complete linkage disequilibrium, and one is a TATA-box polymorphism that markedly affects DNA-protein interactions in vitro. The association with disease may be explained by the biological properties of interleukin-1-beta, which is an important pro-inflammatory cytokine and a powerful inhibitor of gastric acid secretion. Host genetic factors that affect interleukin-1-beta may determine why some individuals infected with H. pylori develop gastric cancer while others do not.

    A methodology for automatic classification of breast cancer immunohistochemical data using semi-supervised fuzzy c-means

    Previously, a semi-manual method was used to identify six novel and clinically useful classes in the Nottingham Tenovus Breast Cancer dataset; 663 out of 1,076 patients were classified. The objectives of our work are threefold. Firstly, our primary objective is to use a single automatic method (post-initialisation) to reproduce the six classes for the 663 patients and to classify the remaining 413 patients. Secondly, we explore semi-supervised fuzzy c-means (ssFCM) with various distance metrics and initialisation techniques to achieve this. Thirdly, the clinical characteristics of the 413 patients are examined by comparison with the 663 patients. Our experiments use various amounts of labelled data and 10-fold cross-validation to reproduce and evaluate the classification. ssFCM with Euclidean distance and the initialisation technique of Katsavounidis et al. produced the best results and was then used to classify the 413 patients; a sketch of this combination is given below. Visual evaluation of the 413 patients' classifications revealed characteristics in common with those previously reported. Examination of clinical characteristics indicates significant associations between classification and clinical parameters. More importantly, an association between classification and survival, based on the survival curves, is shown.
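
    For reference, here is a compact Python sketch of semi-supervised fuzzy c-means with Euclidean distance and a Katsavounidis-style (farthest-point) initialisation. The clamped-membership treatment of labelled samples, the toy data, and the function names are simplifying assumptions for illustration, not the exact formulation used in the study.

    import numpy as np

    def katsavounidis_init(X, c):
        # Farthest-point initialisation: start from the point with the largest norm,
        # then repeatedly pick the point farthest from its nearest chosen centre.
        centres = [X[np.argmax(np.linalg.norm(X, axis=1))]]
        for _ in range(c - 1):
            d = np.min([np.linalg.norm(X - v, axis=1) for v in centres], axis=0)
            centres.append(X[np.argmax(d)])
        return np.array(centres)

    def ssfcm(X, c, labels, m=2.0, n_iter=100, tol=1e-5):
        # Simplified semi-supervised fuzzy c-means with Euclidean distance:
        # memberships of labelled samples (label >= 0) are clamped to their class;
        # unlabelled samples (label == -1) follow the usual FCM membership update.
        V = katsavounidis_init(X, c)
        U = np.full((len(X), c), 1.0 / c)
        for _ in range(n_iter):
            D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
            U_new = 1.0 / D ** (2.0 / (m - 1.0))
            U_new /= U_new.sum(axis=1, keepdims=True)
            labelled = labels >= 0
            U_new[labelled] = np.eye(c)[labels[labelled]]          # clamp labelled rows
            V = (U_new ** m).T @ X / (U_new ** m).sum(axis=0)[:, None]
            converged = np.abs(U_new - U).max() < tol
            U = U_new
            if converged:
                break
        return U, V

    # Toy usage: five 2-D samples, two of them labelled (-1 marks unlabelled).
    X = np.array([[0.1, 0.2], [0.0, 0.1], [0.9, 1.0], [1.1, 0.9], [0.5, 0.6]])
    labels = np.array([0, -1, 1, -1, -1])
    U, V = ssfcm(X, c=2, labels=labels)
    print(np.round(U, 2))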

    Dimensionality of Carbon Nanomaterials Determines the Binding and Dynamics of Amyloidogenic Peptides: Multiscale Theoretical Simulations

    Experimental studies have demonstrated that nanoparticles can affect the rate of protein self-assembly, possibly interfering with the development of protein misfolding diseases such as Alzheimer's, Parkinson's, and prion disease caused by aggregation and fibril formation of amyloid-prone proteins. We employ classical molecular dynamics simulations and large-scale density functional theory calculations to investigate the effects of nanomaterials on the structure, dynamics, and binding of an amyloidogenic peptide, apoC-II(60-70). We show that the binding affinity of this peptide to carbonaceous nanomaterials such as C60, nanotubes, and graphene decreases with increasing nanoparticle curvature. Strong binding is facilitated by the large contact area available for π-stacking between the aromatic residues of the peptide and the extended surfaces of graphene and the nanotube. The highly curved fullerene surface exhibits reduced efficiency for π-stacking but promotes increased peptide dynamics. We postulate that the increase in conformational dynamics of the amyloid peptide can be unfavorable for the formation of fibril-competent structures. In contrast, extended fibril-forming peptide conformations are promoted by the nanotube and graphene surfaces, which can provide a template for fibril growth.

    Photographed Rapid HIV Test Results Pilot Novel Quality Assessment and Training Schemes

    HIV rapid diagnostic tests (RDTs) are now used widely in non-laboratory settings by non-laboratory-trained operators. Quality assurance programmes are essential in ensuring the quality of HIV RDT outcomes. However, there is no cost-effective means of supplying the many operators of RDTs with suitable quality assurance schemes. Therefore, it was examined whether photographed RDT results could be interpreted correctly in the non-laboratory setting. It was further investigated whether a single training session improved the interpretation skills of RDT operators: the photographs were interpreted, a 10-minute tutorial was given, and then a second interpretation session was held. It was established that the results could be read with accuracy: participants (n = 75) with a range of skills interpreted results with >80% concordance with reference results for a panel of 10 samples (three negative and seven positive) across four RDTs. Differences in accuracy of interpretation before and after the tutorial were marked in some cases. Training was more effective for improving the accurate interpretation of complex results, e.g. results with faint test lines or multiple test lines, and especially for improving the interpretation skills of inexperienced participants. It was demonstrated that interpretation of RDTs was improved using photographed results allied to a 10-minute training session. It is anticipated that this method could be used for training and also for quality assessment of RDT operators without access to conventional quality assurance or training schemes requiring wet samples.