
    HALT (Hernia Active Living Trial): protocol for a feasibility study of a randomised controlled trial of a physical activity intervention to improve quality of life in people with bowel stoma with a bulge/parastomal hernia

    Background Parastomal hernia (PSH) can be repaired surgically, but results to date have been disappointing, with reported recurrence rates of 30 to 76%. Other types of intervention are therefore needed to improve the quality of life of people with PSH. One potential intervention is physical activity. We hypothesise that the intervention will increase core activation and control across the abdominal wall at a site of potential weakness and thus reduce the risk of PSH progression. Increases in physical activity will improve body image and quality of life (QoL). Methods Subjects and sample: approximately 20 adults with a bowel stoma and PSH will be recruited. People with a previous PSH repair will be excluded, as will people who already do core training. Study design: this is a feasibility study of a randomised controlled trial with 2 months of follow-up, conducted at 2 sites using mixed methods. Stage 1 involves intervention development; in stage 2, intervention and trial parameters will be assessed. Intervention: a theoretically informed physical activity intervention will be developed, targeting people with PSH. Main outcome of feasibility study: the main outcome is the decision by an independent Study Steering Committee on whether to proceed to a full randomised controlled trial of the intervention. Other outcomes: we will evaluate 4 intervention parameters (fidelity, adherence, acceptability and safety) and 3 trial parameters (eligible patients' consent rate, acceptability of the study design, and data availability rates for the following endpoints): I. Diagnosis and classification of PSH; II. Muscle activation; III. Body composition (BMI, waist circumference); IV. Patient-reported outcomes: QoL, body image and physical functioning; V. Physical activity; VI. Psychological determinants of physical activity. Other data: other data include interviews with all participants about the intervention and trial procedures. Data analysis and statistical power: as this is a feasibility study, the quantitative data will be analysed using descriptive statistics. Audio-recorded qualitative data from interviews will be transcribed verbatim and analysed thematically. Discussion The feasibility and acceptability of key intervention and trial parameters will be used to decide whether to proceed to a full trial of the intervention, which aims to improve body image, quality of life and PSH progression. Trial registration ISRCTN1520759

    PathEx: a novel multi factors based datasets selector web tool

    Background: Microarray experiments have become very popular in life science research. However, if such experiments are only considered independently, the possibilities for analysis and interpretation of many life science phenomena are reduced. The accumulation of publicly available data provides biomedical researchers with a valuable opportunity to either discover new phenomena or improve the interpretation and validation of phenomena that are partially understood or well known. This can only be achieved by intelligently exploiting this rich mine of information. Description: Considering that technologies like microarrays remain prohibitively expensive for researchers with limited means to order their own experimental chips, it would be beneficial to re-use previously published microarray data. For certain researchers interested in finding gene groups (requiring many replicates), there is a great need for tools to help them select appropriate datasets for analysis. These tools can be effective if, and only if, they are able to re-use previously deposited experiments or to create new experiments not initially envisioned by the depositors. However, the generation of new experiments requires that all published microarray data be completely annotated, which is not currently the case. We therefore propose the PathEx approach. Conclusion: This paper presents PathEx, a human-focused web solution built around a two-component system: a database component, enriched with relevant biological information (expression arrays, omics data, literature) from different sources, and a component comprising sophisticated web interfaces that allow users to perform complex dataset-building queries on the contents integrated into the PathEx database.

    Cinteny: flexible analysis and visualization of synteny and genome rearrangements in multiple organisms

    BACKGROUND: Identifying syntenic regions, i.e., blocks of genes or other markers with evolutionarily conserved order, and quantifying evolutionary relatedness between genomes in terms of chromosomal rearrangements is one of the central goals in comparative genomics. However, the analysis of synteny and the resulting assessment of genome rearrangements are sensitive to the choice of a number of arbitrary parameters that affect the detection of synteny blocks. In particular, the choice of a set of markers and the effect of different aggregation strategies, which enable coarse graining of synteny blocks and exclusion of micro-rearrangements, need to be assessed. Therefore, existing tools and resources that facilitate identification, visualization and analysis of synteny need to be further improved to provide a flexible platform for such analysis, especially in the context of multiple genomes. RESULTS: We present a new tool, Cinteny, for fast identification and analysis of synteny with different sets of markers and various levels of coarse graining of syntenic blocks. Using the Hannenhalli-Pevzner approach and its extensions, Cinteny also enables interactive determination of evolutionary relationships between genomes in terms of the number of rearrangements (the reversal distance). In particular, Cinteny provides: i) integration of synteny browsing with assessment of evolutionary distances for multiple genomes; ii) flexibility to adjust the parameters and re-compute the results on-the-fly; iii) the ability to work with user-provided data, such as orthologous genes, sequence tags or other conserved markers. In addition, Cinteny provides many annotated mammalian, invertebrate and fungal genomes that are pre-loaded and available for analysis at . CONCLUSION: Cinteny allows one to automatically compare multiple genomes and perform sensitivity analysis for synteny block detection and for the subsequent computation of reversal distances. Cinteny can also be used to interactively browse syntenic blocks conserved in multiple genomes, to facilitate genome annotation and validation of assemblies for newly sequenced genomes, and to construct and assess phylogenomic trees
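
    To make the coarse-graining idea concrete, here is a minimal sketch (not Cinteny's implementation) of how consecutive shared markers whose order is conserved between two genomes can be merged into synteny blocks, with a minimum block size acting as the cutoff that excludes micro-rearrangements. The marker names, the merging rule and the size threshold are illustrative assumptions.

```python
# Minimal sketch (not Cinteny itself): aggregate shared markers into synteny
# blocks by merging runs whose order is conserved between two genomes, and
# drop blocks below a minimum size to exclude micro-rearrangements.

def synteny_blocks(genome_a, genome_b, min_block_size=2):
    """genome_a, genome_b: ordered lists of marker identifiers.
    Returns (start_index_in_a, end_index_in_a) pairs for each block."""
    pos_in_b = {marker: i for i, marker in enumerate(genome_b)}
    shared = [(i, pos_in_b[m]) for i, m in enumerate(genome_a) if m in pos_in_b]

    blocks, current, direction = [], [], 0
    for a_idx, b_idx in shared:
        if current:
            step = b_idx - current[-1][1]
            # extend the block while the order in genome B stays consecutive
            # and the orientation (increasing or decreasing) is consistent
            if step in (1, -1) and (direction == 0 or step == direction):
                current.append((a_idx, b_idx))
                direction = step
                continue
            if len(current) >= min_block_size:
                blocks.append((current[0][0], current[-1][0]))
        current, direction = [(a_idx, b_idx)], 0
    if len(current) >= min_block_size:
        blocks.append((current[0][0], current[-1][0]))
    return blocks

# Toy example: markers g4..g6 appear in inverted order in the first genome.
print(synteny_blocks(["g1", "g2", "g3", "g6", "g5", "g4"],
                     ["g1", "g2", "g3", "g4", "g5", "g6"]))
# -> [(0, 2), (3, 5)]
```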

    Experiences of wearing support garments by people living with a urostomy

    BACKGROUND: Support garments are commonly worn by people with a urostomy, but there are no published data about their experiences of doing so. AIMS: To identify the views of people living with a urostomy on the role of support garments. METHODS: A cross-sectional survey of the stoma population's experiences of support garments was conducted in 2018. Recruitment was via social media. The free-text responses provided by a sub-sample of 58 of the 103 respondents with a urostomy were analysed. FINDINGS: Thematic analysis revealed four themes: physical self-management; psychosocial self-management; lifestyle; and healthcare advice and support. There were mixed feelings about the value of support garments. Many cited a sense of reassurance and confidence and being able to be more sociable and active; others reported discomfort and uncertainty about their value. CONCLUSION: These findings add new understanding of experiences of support garments and provide novel theoretical insights about life with a urostomy

    EVEREST: automatic identification and classification of protein domains in all protein sequences

    BACKGROUND: Proteins are composed of one or several building blocks, known as domains. Such domains can be classified into families according to their evolutionary origin. Whereas sequencing technologies have advanced immensely in recent years, there are no matching computational methodologies for large-scale determination of protein domains and their boundaries. We provide and rigorously evaluate a novel set of domain families that is automatically generated from sequence data. Our domain family identification process, called EVEREST (EVolutionary Ensembles of REcurrent SegmenTs), begins by constructing a library of protein segments that emerge in an all vs. all pairwise sequence comparison. It then proceeds to cluster these segments into putative domain families. The selection of the best putative families is done using machine learning techniques. A statistical model is then created for each of the chosen families. This procedure is then iterated: the aforementioned statistical models are used to scan all protein sequences, to recreate a library of segments and to cluster them again. RESULTS: Processing the Swiss-Prot section of the UniProt Knowledgebase, release 7.2, EVEREST defines 20,230 domains, covering 85% of the amino acids of the Swiss-Prot database. EVEREST annotates 11,852 proteins (6% of the database) that are not annotated by Pfam A. In addition, in 43,086 proteins (20% of the database), EVEREST annotates a part of the protein that is not annotated by Pfam A. Performance tests show that EVEREST recovers 56% of Pfam A families and 63% of SCOP families with high accuracy, and suggests previously unknown domain families with at least 51% fidelity. EVEREST domains are often a combination of domains as defined by Pfam or SCOP and are frequently sub-domains of such domains. CONCLUSION: The EVEREST process and its output domain families provide an exhaustive and validated view of the protein domain world that is automatically generated from sequence data. The EVEREST library of domain families, accessible for browsing and download at [1], provides a complementary view to that provided by other existing libraries. Furthermore, since it is automatic, the EVEREST process is scalable and we will run it in the future on larger databases as well. The EVEREST source files are available for download from the EVEREST web site
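
    As a rough illustration of the clustering step described above, the sketch below groups protein segments into putative families by single-linkage clustering over precomputed pairwise similarity scores. This is a simplified stand-in, not EVEREST's actual clustering, machine-learning selection or iterated statistical models; the segment names, scores and threshold are invented for the example.

```python
# Simplified sketch of one step in an EVEREST-like pipeline: single-linkage
# clustering of protein segments into putative families, given precomputed
# pairwise similarity scores (a toy dict of assumed values below).

def cluster_segments(segment_ids, pair_scores, threshold):
    """Union-find single-linkage clustering: link two segments whenever their
    similarity score meets the threshold, then return the resulting families."""
    parent = {s: s for s in segment_ids}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for (a, b), score in pair_scores.items():
        if score >= threshold:
            parent[find(a)] = find(b)       # merge the two clusters

    families = {}
    for s in segment_ids:
        families.setdefault(find(s), []).append(s)
    return list(families.values())

# Hypothetical segments and similarity scores, purely for illustration.
segments = ["seg1", "seg2", "seg3", "seg4"]
scores = {("seg1", "seg2"): 0.9, ("seg2", "seg3"): 0.8, ("seg3", "seg4"): 0.2}
print(cluster_segments(segments, scores, threshold=0.5))
# -> [['seg1', 'seg2', 'seg3'], ['seg4']] (order may vary)
```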

    Murasaki: A Fast, Parallelizable Algorithm to Find Anchors from Multiple Genomes

    BACKGROUND: With the number of available genome sequences increasing rapidly, the magnitude of sequence data required for multiple-genome analyses is a challenging problem. When large-scale rearrangements break the collinearity of gene orders among genomes, genome comparison algorithms must first identify sets of short well-conserved sequences present in each genome, termed anchors. Previously, anchor identification among multiple genomes has been achieved using tools ranging from pairwise alignment tools like BLASTZ to progressive alignment tools like TBA, but the computational requirements for sequence comparisons of multiple genomes quickly become a limiting factor as the number and scale of genomes grow. METHODOLOGY/PRINCIPAL FINDINGS: Our algorithm, named Murasaki, makes it possible to identify anchors within multiple large sequences on the scale of several hundred megabases in a few minutes using a single CPU. Two advanced features of Murasaki are (1) adaptive hash function generation, which enables efficient use of arbitrary mismatch patterns (spaced seeds) and therefore the comparison of multiple mammalian genomes in a practical amount of computation time, and (2) parallelizable execution that decreases the required wall-clock and CPU times. Murasaki can perform a sensitive anchoring of eight mammalian genomes (human, chimp, rhesus, orangutan, mouse, rat, dog, and cow) in 21 hours CPU time (42 minutes wall time). This is the first single-pass in-core anchoring of multiple mammalian genomes. We evaluated Murasaki by comparing it with the genome alignment programs BLASTZ and TBA. We show that Murasaki can anchor multiple genomes in near linear time, compared to the quadratic time requirements of BLASTZ and TBA, while improving overall accuracy. CONCLUSIONS/SIGNIFICANCE: Murasaki provides an open source platform to take advantage of long patterns, cluster computing, and novel hash algorithms to produce accurate anchors across multiple genomes with computational efficiency significantly greater than existing methods. Murasaki is available under GPL at http://murasaki.sourceforge.net
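
    The following toy sketch illustrates the spaced-seed idea that Murasaki exploits: only the positions marked '1' in a seed pattern contribute to the hash key, so substitutions at '0' positions do not break a match, and key collisions across sequences become candidate anchors. Murasaki's adaptive hash generation and parallel execution are not reproduced here; the seed pattern, input strings and "present in every sequence" criterion are assumptions made for the example.

```python
# Toy sketch of spaced-seed anchoring (not Murasaki's adaptive hashing or
# parallel machinery): only the '1' positions of the seed form the key,
# so mismatches at '0' positions are tolerated.
from collections import defaultdict

def spaced_seed_anchors(sequences, seed="1101011"):
    """sequences: list of DNA strings. Returns {key: [(seq_idx, pos), ...]}
    for keys that occur in every input sequence (candidate anchors)."""
    ones = [i for i, c in enumerate(seed) if c == "1"]
    span = len(seed)
    hits = defaultdict(list)
    for si, seq in enumerate(sequences):
        for pos in range(len(seq) - span + 1):
            key = "".join(seq[pos + i] for i in ones)
            hits[key].append((si, pos))
    n = len(sequences)
    # keep only keys that appear in all sequences
    return {k: v for k, v in hits.items() if len({s for s, _ in v}) == n}

# Example: the two regions differ by one base (A vs G) that falls on a '0'
# seed position, so they still collide and become a candidate anchor.
print(spaced_seed_anchors(["TTACGTAGGTT", "CCACGTGGGCC"]))
# -> {'ACTGG': [(0, 2), (1, 2)]}
```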

    Chronic Exposure of Corals to Fine Sediments: Lethal and Sub-Lethal Impacts

    Understanding the sedimentation and turbidity thresholds for corals is critical in assessing the potential impacts of dredging projects in tropical marine systems. In this study, we exposed two species of coral sampled from offshore locations to six levels of total suspended solids (TSS) for 16 weeks in the laboratory, including a 4-week recovery period. Dose-response relationships were developed to quantify the lethal and sub-lethal thresholds of sedimentation and turbidity for the corals. The sediment treatments affected the horizontal foliaceous species (Montipora aequituberculata) more than the upright branching species (Acropora millepora). The lowest sediment treatments that caused full colony mortality were 30 mg l⁻¹ TSS (25 mg cm⁻² day⁻¹) for M. aequituberculata and 100 mg l⁻¹ TSS (83 mg cm⁻² day⁻¹) for A. millepora after 12 weeks. Coral mortality generally took longer than 4 weeks and was closely related to sediment accumulation on the surface of the corals. While measurements of damage to photosystem II in the symbionts and reductions in lipid content and growth indicated sub-lethal responses in surviving corals, the most reliable predictor of coral mortality in this experiment was long-term sediment accumulation on coral tissue
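
    As a generic illustration of how a dose-response threshold of this kind (e.g., an LC50 for colony mortality versus TSS concentration) can be estimated, the sketch below fits a log-logistic curve with SciPy. The concentrations and mortality fractions are invented placeholders, not the study's measurements, and the log-logistic form is an assumption for the example.

```python
# Sketch of estimating a lethal threshold from dose-response data by fitting a
# log-logistic curve. The TSS doses and mortality fractions below are invented
# placeholders for illustration, NOT the study's data.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, lc50, slope):
    """Mortality fraction as a function of dose (mg/L TSS)."""
    return 1.0 / (1.0 + (lc50 / dose) ** slope)

tss = np.array([3.0, 10.0, 30.0, 60.0, 100.0])       # placeholder doses
mortality = np.array([0.0, 0.05, 0.4, 0.8, 0.95])    # placeholder responses

params, _ = curve_fit(log_logistic, tss, mortality, p0=[30.0, 2.0])
print(f"Estimated LC50 ~ {params[0]:.1f} mg/L TSS, slope ~ {params[1]:.2f}")
```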

    The size-brightness correspondence:evidence for crosstalk among aligned conceptual feature dimensions

    The same core set of cross-sensory correspondences connecting stimulus features across different sensory channels is observed regardless of the modality of the stimulus with which the correspondences are probed. This observation suggests that correspondences involve modality-independent representations of aligned conceptual feature dimensions, and predicts a size-brightness correspondence, in which smaller is aligned with brighter. This suggestion accommodates cross-sensory congruity effects where contrasting feature values are specified verbally rather than perceptually (e.g., where the words WHITE and BLACK interact with the classification of high and low pitch sounds). Experiment 1 brings these two issues together in assessing a conceptual basis for correspondences. The names of bright/white and dark/black substances were presented in a speeded brightness classification task in which the two alternative response keys differed in size. A size-brightness congruity effect was confirmed, with substance names classified more quickly when the relative size of the response key needing to be pressed was congruent with the brightness of the named substance (e.g., when yoghurt was classified as a bright substance by pressing the smaller of two keys). Experiment 2 assesses the proposed conceptual basis for this congruity effect by requiring the same named substances to be classified according to their edibility (with all of the bright/dark substances having been selected for their edibility/inedibility, respectively). The predicted absence of a size-brightness congruity effect, along with other aspects of the results, supports the proposed conceptual basis for correspondences and speaks against accounts in which modality-specific perceptuomotor representations are entirely responsible for correspondence-induced congruity effects

    Fast Homozygosity Mapping and Identification of a Zebrafish ENU-Induced Mutation by Whole-Genome Sequencing

    Forward genetics using zebrafish is a powerful tool for studying vertebrate development through large-scale mutagenesis. Nonetheless, the identification of the molecular lesion is still laborious and involves time-consuming genetic mapping. Here, we show that high-throughput sequencing of the whole zebrafish genome can directly locate the interval carrying the causative mutation and at the same time pinpoint the molecular lesion. The feasibility of this approach was validated by sequencing the m1045 mutant line, which displays a severe hypoplasia of the exocrine pancreas. We generated 13 Gb of sequence, equivalent to eightfold genome coverage, from a pool of 50 mutant embryos obtained from a map-cross between the AB mutant carrier and the WIK polymorphic strain. The chromosomal region carrying the causal mutation was localized based on its unique property of displaying high levels of homozygosity among sequence reads, as it derives exclusively from the initial mutated AB allele. We developed an algorithm identifying such a region by calculating a homozygosity score along all chromosomes. This highlighted an 8-Mb window on chromosome 5 with a score close to 1 in the m1045 mutants. The sequence analysis of all genes within this interval revealed a nonsense mutation in the snapc4 gene. Knockdown experiments confirmed that snapc4 is the gene whose mutation leads to exocrine pancreas hypoplasia. In conclusion, this study constitutes a proof-of-concept that whole-genome sequencing is a fast and effective alternative to the classical positional cloning strategies in zebrafish
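
    A simplified sketch of the homozygosity-scoring idea follows: for each SNP, take the fraction of reads supporting the dominant allele and average it over a sliding window along the chromosome, so that windows with scores near 1 flag candidate regions linked to the causative mutation. This illustrates the concept only; it is not the authors' exact algorithm, and the read counts and window size are hypothetical.

```python
# Simplified sketch of a sliding-window homozygosity score (not the authors'
# exact algorithm): per-SNP fraction of reads carrying the dominant allele,
# averaged over a window along the chromosome.
def homozygosity_scores(snp_allele_counts, window=50):
    """snp_allele_counts: list of (ref_reads, alt_reads) per SNP, ordered by
    chromosomal position. Returns one windowed score per window start."""
    per_snp = []
    for ref, alt in snp_allele_counts:
        total = ref + alt
        per_snp.append(max(ref, alt) / total if total else 0.0)

    scores = []
    for start in range(len(per_snp) - window + 1):
        scores.append(sum(per_snp[start:start + window]) / window)
    return scores

# Hypothetical example: a stretch of SNPs nearly fixed for one allele
# (homozygous region) flanked by SNPs with mixed read support.
counts = [(10, 9)] * 60 + [(20, 0)] * 60 + [(11, 10)] * 60
scores = homozygosity_scores(counts, window=50)
peak = max(range(len(scores)), key=scores.__getitem__)
print(f"highest windowed score {scores[peak]:.2f} at window starting SNP {peak}")
```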

    How reliably can we predict the reliability of protein structure predictions?

    Background: Comparative methods have been the standard techniques for in silico protein structure prediction. The prediction is based on a multiple alignment that contains both reference sequences with known structures and the sequence whose unknown structure is predicted. Intensive research has been carried out to improve the quality of multiple alignments, since misaligned parts of the multiple alignment yield misleading predictions. However, sometimes all methods fail to predict the correct alignment, because the evolutionary signal is too weak to find the homologous parts due to the large number of mutations that separate the sequences. Results: Stochastic sequence alignment methods define a posterior distribution of possible multiple alignments. They can highlight the most likely alignment and, beyond that, they can give posterior probabilities for each alignment column. We carried out a comprehensive study on the HOMSTRAD database of structural alignments, predicting secondary structures in four different ways. We showed that alignment posterior probabilities correlate with the reliability of secondary structure predictions, though the strength of the correlation differs between protocols. The correspondence between the reliability of secondary structure predictions and alignment posterior probabilities is closest to the identity function when the secondary structure posterior probabilities are calculated from the posterior distribution of multiple alignments. The largest deviation from the identity function was obtained when predicting secondary structures from a single optimal pairwise alignment. We also showed that alignment posterior probabilities correlate with the 3D distances between Cα atoms of amino acids in superimposed tertiary structures. Conclusion: Alignment posterior probabilities can be used to a priori detect errors in comparative models at the sequence alignment level.
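
    To illustrate the kind of comparison described (how closely prediction reliability tracks alignment posterior probability, i.e., how near the relationship is to the identity function), the sketch below bins alignment columns by posterior probability and reports observed accuracy per bin. The arrays are hypothetical placeholders, not HOMSTRAD results, and the binning scheme is an assumption for the example.

```python
# Sketch of relating alignment posterior probabilities to prediction
# reliability: bin columns by posterior probability and compare each bin's
# observed accuracy with its mean posterior (identity = perfect calibration).
import numpy as np

def reliability_curve(posteriors, correct, n_bins=5):
    """posteriors: per-column alignment posterior probabilities in [0, 1].
    correct: 1 if the secondary-structure prediction at that column was right.
    Returns (mean posterior, observed accuracy) per occupied bin."""
    posteriors, correct = np.asarray(posteriors), np.asarray(correct)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(posteriors, edges) - 1, 0, n_bins - 1)
    curve = []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            curve.append((posteriors[mask].mean(), correct[mask].mean()))
    return curve

# Placeholder data: posteriors drawn at random, with correctness simulated so
# the toy data are well calibrated.
rng = np.random.default_rng(0)
post = rng.uniform(0.2, 1.0, size=1000)
corr = (rng.uniform(size=1000) < post).astype(int)
for mean_post, acc in reliability_curve(post, corr):
    print(f"posterior ~ {mean_post:.2f} -> accuracy {acc:.2f}")
```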