758 research outputs found

    Bullous pemphigoid and pemphigus vulgaris – incidence and mortality in the UK: population based cohort study.

    OBJECTIVE: To determine the incidence of and mortality from bullous pemphigoid and pemphigus vulgaris in the United Kingdom. DESIGN: Retrospective historical cohort study. SETTING: Computerised medical records from The Health Improvement Network, a large population based UK general practice database. PARTICIPANTS: Patients with pemphigus vulgaris and bullous pemphigoid diagnostic codes and age, sex, and practice matched controls. MAIN OUTCOME MEASURES: Incidence and mortality compared with the control population by calendar period, age group, sex, geographical region, and degree of social deprivation. RESULTS: 869 people with bullous pemphigoid and 138 people with pemphigus vulgaris were identified. The median age at presentation for bullous pemphigoid was 80 (range 23-102) years, and 534 (61%) patients were female. The median age at presentation for pemphigus vulgaris was 71 (21-102) years, and 91 (66%) patients were female. Incidences of bullous pemphigoid and pemphigus vulgaris were 4.3 (95% confidence interval 4.0 to 4.6) and 0.7 (0.6 to 0.8) per 100 000 person years. The incidence of bullous pemphigoid increased over time; the average yearly increase was 17% (incidence rate ratio=1.2, 95% confidence interval 1.1 to 1.2). An average yearly increase in incidence of pemphigus vulgaris of 11% (incidence rate ratio=1.1, 1.0 to 1.2) occurred. The risk of death for patients with bullous pemphigoid was twice as great as for controls (adjusted hazard ratio=2.3, 95% confidence interval 2.0 to 2.7). For pemphigus vulgaris, the risk of death was three times greater than for controls (adjusted hazard ratio=3.3, 2.2 to 5.2). CONCLUSIONS: Incidences of bullous pemphigoid and pemphigus vulgaris are increasing. The reasons for the changes in incidence are not clearly understood but have implications for identifying causative factors. Both disorders are associated with a high risk of death. Previous estimates may have underestimated the risk of death associated with these diseases.
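The crude incidence figures quoted above can be reproduced from the case counts alone: for Poisson-distributed event counts, the rate is events divided by person-years, and a 95% confidence interval follows from a log-scale standard error of 1/sqrt(events). A minimal Python sketch; the person-years denominator is back-calculated from the reported rate, purely as an assumption for illustration:

```python
import math

def incidence_rate_ci(events, person_years, per=100_000, z=1.96):
    """Crude incidence rate with a log-scale (Poisson) confidence interval."""
    rate = events / person_years * per
    se_log = 1.0 / math.sqrt(events)  # SE of log(rate) for a Poisson count
    return rate, rate * math.exp(-z * se_log), rate * math.exp(z * se_log)

# Bullous pemphigoid: 869 cases; the person-years value is implied by the
# reported rate of 4.3 per 100,000 (hypothetical, for illustration only).
rate, lo, hi = incidence_rate_ci(869, 869 / 4.3 * 100_000)
print(f"{rate:.1f} ({lo:.1f} to {hi:.1f}) per 100,000 person-years")
# -> 4.3 (4.0 to 4.6) per 100,000 person-years
```

The recovered interval matches the one reported in the abstract, which is expected since the log-scale Poisson interval is the standard choice for crude rates.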

    The use of a bayesian hierarchy to develop and validate a co-morbidity score to predict mortality for linked primary and secondary care data from the NHS in England

    Background: We have assessed whether the linkage between routine primary and secondary care records provided an opportunity to develop an improved population based co-morbidity score with the combined information on co-morbidities from both health care settings. Methods: We extracted all people older than 20 years at the start of 2005 within the linkage between the Hospital Episodes Statistics, Clinical Practice Research Datalink, and Office for National Statistics death register in England. A random 50% sample was used to identify relevant diagnostic codes using a Bayesian hierarchy to share information between similar Read and ICD-10 code groupings. Internal validation of the score was performed in the remaining 50% and discrimination was assessed using Harrell's C statistic. Comparisons were made over time, age, and consultation rate with the Charlson and Elixhauser indexes. Results: 657,264 people were followed up from 1 January 2005. 98 groupings of codes were derived from the Bayesian hierarchy, and 37 had an adjusted weighting of greater than zero in the Cox proportional hazards model. 11 of these groupings had a different weighting dependent on whether they were coded from hospital or primary care. The C statistic reduced from 0.88 (95% confidence interval 0.88–0.88) in the first year of follow up, to 0.85 (0.85–0.85) including all 5 years. When we stratified the linked score by consultation rate the association with mortality remained consistent, but there was a significant interaction with age, with improved discrimination and fit in those under 50 years old (C=0.85, 0.83–0.87) compared to the Charlson (C=0.79, 0.77–0.82) or Elixhauser index (C=0.81, 0.79–0.83). Conclusions: The use of linked population based primary and secondary care data developed a co-morbidity score that had improved discrimination, particularly in younger age groups, and had a greater effect when adjusting for co-morbidity than existing scores.
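Harrell's C statistic, used above to assess discrimination, is the fraction of comparable subject pairs in which the higher predicted risk score belongs to the subject who died earlier (censored subjects can only serve as the surviving member of a pair). A minimal O(n²) sketch on toy survival data; the `times`, `events`, and `scores` values are hypothetical:

```python
def harrells_c(times, events, scores):
    """Harrell's C: among comparable pairs, how often does the higher
    risk score belong to the subject with the earlier observed death?
    events[i] is True if subject i's death was observed (not censored)."""
    num = den = 0.0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # a censored subject cannot be the earlier member of a pair
        for j in range(n):
            if times[i] < times[j]:  # j outlived i, so the pair is comparable
                den += 1
                if scores[i] > scores[j]:
                    num += 1            # concordant
                elif scores[i] == scores[j]:
                    num += 0.5          # tied scores count half
    return num / den

# Toy data: higher score means earlier death, so discrimination is perfect.
times  = [2, 5, 7, 9]
events = [True, True, False, True]
scores = [0.9, 0.6, 0.4, 0.1]
print(harrells_c(times, events, scores))  # -> 1.0
```

In practice a library implementation (e.g. from a survival-analysis package) would be used; the pairwise form above is only to make the definition concrete.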

    Microsomal epoxide hydrolase gene polymorphism and susceptibility to colon cancer

    We examined polymorphisms in exons 3 and 4 of microsomal epoxide hydrolase in 101 patients with colon cancer and compared the results with 203 control samples. The frequency of the exon 3 T to C mutation was higher in cancer patients than in controls (odds ratio 3.8; 95% confidence interval 1.8–8.0). This sequence alteration changes tyrosine residue 113 to histidine and is associated with lower enzyme activity when expressed in vitro. This suggests that putative slow epoxide hydrolase activity may be a risk factor for colon cancer. This appears to be true for both right- and left-sided tumours, but was more apparent for tumours arising distally (odds ratio 4.1; 95% confidence limits 1.9–9.2). By contrast, there was no difference in prevalence of the exon 4 A to G transition mutation in cancer vs controls. This mutation changes histidine residue 139 to arginine and produces increased enzyme activity. There was no association between epoxide hydrolase genotype and abnormalities of p53 or Ki-Ras. © 1999 Cancer Research Campaign
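The odds ratios and confidence intervals reported in case-control studies like this one are standard 2×2-table quantities; the interval is usually Woolf's log-scale approximation. A minimal sketch; the cell counts below are hypothetical, since the abstract reports only the ratios, not the underlying table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for the 2x2 table
        exposed cases = a,  unexposed cases = b,
        exposed controls = c, unexposed controls = d,
    with a Woolf (log-scale) confidence interval."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    return or_, or_ * math.exp(-z * se_log), or_ * math.exp(z * se_log)

# Hypothetical counts chosen only to illustrate the calculation.
or_, lo, hi = odds_ratio_ci(30, 71, 20, 183)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The small-cell correction (adding 0.5 to every cell when a count is zero) is a common refinement but is omitted here for clarity.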

    Critical research gaps and translational priorities for the successful prevention and treatment of breast cancer

    INTRODUCTION Breast cancer remains a significant scientific, clinical and societal challenge. This gap analysis has reviewed and critically assessed enduring issues and new challenges emerging from recent research, and proposes strategies for translating solutions into practice. METHODS More than 100 internationally recognised specialist breast cancer scientists, clinicians and healthcare professionals collaborated to address nine thematic areas: genetics, epigenetics and epidemiology; molecular pathology and cell biology; hormonal influences and endocrine therapy; imaging, detection and screening; current/novel therapies and biomarkers; drug resistance; metastasis, angiogenesis, circulating tumour cells, cancer 'stem' cells; risk and prevention; living with and managing breast cancer and its treatment. The groups developed summary papers through an iterative process which, following further appraisal from experts and patients, were melded into this summary account. RESULTS The 10 major gaps identified were: (1) understanding the functions and contextual interactions of genetic and epigenetic changes in normal breast development and during malignant transformation; (2) how to implement sustainable lifestyle changes (diet, exercise and weight) and chemopreventive strategies; (3) the need for tailored screening approaches including clinically actionable tests; (4) enhancing knowledge of molecular drivers behind breast cancer subtypes, progression and metastasis; (5) understanding the molecular mechanisms of tumour heterogeneity, dormancy, de novo or acquired resistance and how to target key nodes in these dynamic processes; (6) developing validated markers for chemosensitivity and radiosensitivity; (7) understanding the optimal duration, sequencing and rational combinations of treatment for improved personalised therapy; (8) validating multimodality imaging biomarkers for minimally invasive diagnosis and monitoring of responses in primary and metastatic disease; 
(9) developing interventions and support to improve the survivorship experience; (10) a continuing need for clinical material for translational research derived from normal breast, blood, primary, relapsed, metastatic and drug-resistant cancers with expert bioinformatics support to maximise its utility. The proposed infrastructural enablers include enhanced resources to support clinically relevant in vitro and in vivo tumour models; improved access to appropriate, fully annotated clinical samples; extended biomarker discovery, validation and standardisation; and facilitated cross-discipline working. CONCLUSIONS With resources to conduct further high-quality targeted research focusing on the gaps identified, increased knowledge translating into improved clinical care should be achievable within five years.

    Formalization of taxon-based constraints to detect inconsistencies in annotation and ontology development

    Background: The Gene Ontology project supports categorization of gene products according to their location of action, the molecular functions that they carry out, and the processes that they are involved in. Although the ontologies are intentionally developed to be taxon neutral, and to cover all species, there are inherent taxon specificities in some branches. For example, the process 'lactation' is specific to mammals and the location 'mitochondrion' is specific to eukaryotes. The lack of an explicit formalization of these constraints can lead to errors and inconsistencies in automated and manual annotation. Results: We have formalized the taxonomic constraints implicit in some GO classes, and specified these at various levels in the ontology. We have also developed an inference system that can be used to check for violations of these constraints in annotations. Using the constraints in conjunction with the inference system, we have detected and removed errors in annotations and improved the structure of the ontology. Conclusions: Detection of inconsistencies in taxon-specificity enables gradual improvement of the ontologies, the annotations, and the formalized constraints. This is progressively improving the quality of our data. The full system is available for download, and new constraints or proposed changes to constraints can be submitted online at https://sourceforge.net/tracker/?atid=605890&group_id=36855.
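The kind of check described here can be sketched as a lookup of "only_in_taxon" / "never_in_taxon" rules against a species' taxonomic lineage. The table below is a hypothetical miniature, not the project's actual representation (GO uses OWL axioms and a reasoner); the GO IDs are believed correct but should be verified against the ontology:

```python
# Hypothetical mini-model of GO taxon constraints: each rule ties a GO
# class to a taxon, either "only_in_taxon" (annotations must fall inside
# that lineage) or "never_in_taxon" (annotations must fall outside it).
CONSTRAINTS = {
    "GO:0007595": ("only_in_taxon", "Mammalia"),    # lactation
    "GO:0005739": ("only_in_taxon", "Eukaryota"),   # mitochondrion
    "GO:0015979": ("never_in_taxon", "Metazoa"),    # photosynthesis (illustrative)
}

def violates(go_id, lineage):
    """True if annotating a gene from a species with this lineage
    (a set of taxon names, root to species) to go_id breaks a rule."""
    rule = CONSTRAINTS.get(go_id)
    if rule is None:
        return False  # unconstrained class: any taxon is allowed
    kind, taxon = rule
    in_taxon = taxon in lineage
    return (kind == "only_in_taxon" and not in_taxon) or \
           (kind == "never_in_taxon" and in_taxon)

ecoli = {"Bacteria"}
human = {"Eukaryota", "Metazoa", "Mammalia"}
print(violates("GO:0007595", ecoli))   # -> True: lactation in a bacterium
print(violates("GO:0005739", human))   # -> False
```

Running such a check over a full annotation set flags exactly the class of errors the paper reports detecting and removing.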

    Two-Particle-Self-Consistent Approach for the Hubbard Model

    Even at weak to intermediate coupling, the Hubbard model poses a formidable challenge. In two dimensions in particular, standard methods such as the Random Phase Approximation are no longer valid since they predict a finite temperature antiferromagnetic phase transition prohibited by the Mermin-Wagner theorem. The Two-Particle-Self-Consistent (TPSC) approach satisfies that theorem as well as particle conservation, the Pauli principle, the local moment and local charge sum rules. The self-energy formula does not assume a Migdal theorem. There is consistency between one- and two-particle quantities. Internal accuracy checks allow one to test the limits of validity of TPSC. Here I present a pedagogical review of TPSC along with a short summary of existing results and two case studies: a) the opening of a pseudogap in two dimensions when the correlation length is larger than the thermal de Broglie wavelength, and b) the conditions for the appearance of d-wave superconductivity in the two-dimensional Hubbard model. Comment: Chapter in "Theoretical methods for Strongly Correlated Systems", Edited by A. Avella and F. Mancini, Springer Verlag, (2011) 55 pages. Misprint in Eq.(23) corrected (thanks D. Bergeron).

    Layered convection as the origin of Saturn's luminosity anomaly

    As they keep cooling and contracting, Solar System giant planets radiate more energy than they receive from the Sun. Applying the first and second principles of thermodynamics, one can determine their cooling rate, luminosity, and temperature at a given age. Measurements of Saturn's infrared intrinsic luminosity, however, reveal that this planet is significantly brighter than predicted for its age. This excess luminosity is usually attributed to the immiscibility of helium in the hydrogen-rich envelope, leading to "rains" of helium-rich droplets. Existing evolution calculations, however, suggest that the energy released by this sedimentation process may not be sufficient to resolve the puzzle. Here, we demonstrate using planetary evolution models that the presence of layered convection in Saturn's interior, generated, as in some parts of Earth's oceans, by the presence of a compositional gradient, significantly reduces its cooling. It can explain the planet's present luminosity for a wide range of configurations without invoking any additional source of energy. This suggests a revision of the conventional homogeneous adiabatic interior paradigm for giant planets, and questions our ability to assess their heavy element content. This reinforces the possibility that layered convection helps explain the anomalously large observed radii of extrasolar giant planets. Comment: Published in Nature Geoscience. Online publication date: April 21st, 2013. Accepted version before journal editing and with Supplementary Information.

    Functional Annotation and Identification of Candidate Disease Genes by Computational Analysis of Normal Tissue Gene Expression Data

    Background: High-throughput gene expression data can predict gene function through the "guilt by association" principle: coexpressed genes are likely to be functionally associated. Methodology/Principal Findings: We analyzed publicly available expression data on normal human tissues. The analysis is based on the integration of data obtained with two experimental platforms (microarrays and SAGE) and of various measures of dissimilarity between expression profiles. The building blocks of the procedure are the Ranked Coexpression Groups (RCG), small sets of tightly coexpressed genes which are analyzed in terms of functional annotation. Functionally characterized RCGs are selected by means of the majority rule and used to predict new functional annotations. Functionally characterized RCGs are enriched in groups of genes associated with similar phenotypes. We exploit this fact to find new candidate disease genes for many OMIM phenotypes of unknown molecular origin. Conclusions/Significance: We predict new functional annotations for many human genes, showing that the integration of different data sets and coexpression measures significantly improves the scope of the results. Combining gene expression data, functional annotation and known phenotype-gene associations we provide candidate genes for several genetic diseases.

    Using Workflows to Explore and Optimise Named Entity Recognition for Chemistry

    Chemistry text mining tools should be interoperable and adaptable regardless of system-level implementation, installation or even programming issues. We aim to abstract the functionality of these tools from the underlying implementation via reconfigurable workflows for automatically identifying chemical names. To achieve this, we refactored an established named entity recogniser in the chemistry domain, OSCAR, and studied the impact of each component on the net performance. We developed two reconfigurable workflows from OSCAR using an interoperable text mining framework, U-Compare. These workflows can be altered using the drag-&-drop mechanism of the graphical user interface of U-Compare. These workflows also provide a platform to study the relationship between text mining components such as tokenisation and named entity recognition (using maximum entropy Markov model (MEMM) and pattern recognition based classifiers). Results indicate that, for chemistry in particular, tokenisation techniques that generate less noise lead to slightly better performance than others in terms of named entity recognition (NER) accuracy. Poor tokenisation translates into poorer input to the classifier components, which in turn leads to an increase in Type I or Type II errors, thus lowering the overall performance. On the Sciborg corpus, the workflow based system, which uses a new tokeniser whilst retaining the same MEMM component, increases the F-score from 82.35% to 84.44%. On the PubMed corpus, it recorded an F-score of 84.84% compared with 84.23% by OSCAR.
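The F-scores quoted for NER systems like this are conventionally the harmonic mean of precision and recall computed over exact-match entity spans. A minimal sketch of that evaluation; the span tuples and type labels ("CM" for chemical name, etc.) are hypothetical:

```python
def ner_prf(gold, predicted):
    """Exact-match precision, recall and F1 over sets of
    (start, end, entity_type) spans."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)                       # spans matched exactly
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

# Hypothetical gold and predicted annotations over one sentence.
gold = {(0, 7, "CM"), (12, 19, "CM"), (25, 31, "RN")}
pred = {(0, 7, "CM"), (12, 19, "CM"), (40, 44, "CM")}
p, r, f = ner_prf(gold, pred)
print(f"P={p:.2f} R={r:.2f} F={f:.2f}")  # -> P=0.67 R=0.67 F=0.67
```

Exact-match scoring is the strictest variant; shared-evaluation campaigns sometimes also report relaxed (overlap-based) matching, which would credit partially correct boundaries.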

    Structural Similarity and Classification of Protein Interaction Interfaces

    Interactions between proteins play a key role in many cellular processes. Studying protein-protein interactions that share similar interaction interfaces may shed light on their evolution and could be helpful in elucidating the mechanisms behind stability and dynamics of the protein complexes. When two complexes share structurally similar subunits, the similarity of the interaction interfaces can be found through a structural superposition of the subunits. However, an accurate detection of similarity between protein complexes containing subunits of unrelated structure remains an open problem.