
    Reduction in the risk of human breast cancer by selective cyclooxygenase-2 (COX-2) inhibitors

    BACKGROUND: Epidemiologic and laboratory investigations suggest that nonsteroidal anti-inflammatory drugs (NSAIDs) have chemopreventive effects against breast cancer due to their activity against cyclooxygenase-2 (COX-2), the rate-limiting enzyme of the prostaglandin cascade. METHODS: We conducted a case-control study of breast cancer designed to compare the effects of selective and non-selective COX-2 inhibitors. A total of 323 incident breast cancer patients were ascertained from the James Cancer Hospital, Columbus, Ohio, during 2003–2004 and compared with 649 cancer-free controls matched to the cases at a 2:1 ratio on age, race, and county of residence. Data on past and current use of prescription and over-the-counter medications and on breast cancer risk factors were collected using a standardized risk factor questionnaire. Effects of COX-2-inhibiting agents were quantified by calculating odds ratios (OR) and 95% confidence intervals (CI). RESULTS: Significant risk reductions were observed for selective COX-2 inhibitors as a group (OR = 0.29, 95% CI = 0.14–0.59), regular aspirin (OR = 0.49, 95% CI = 0.26–0.94), and ibuprofen or naproxen (OR = 0.36, 95% CI = 0.18–0.72). Acetaminophen, a compound with negligible COX-2 activity, and low-dose aspirin (81 mg) produced no significant change in the risk of breast cancer. CONCLUSION: Selective COX-2 inhibitors (celecoxib and rofecoxib) were approved for use only in 1999, and rofecoxib (Vioxx) was withdrawn from the market in 2004. Nevertheless, even within this short window of exposure, the selective COX-2 inhibitors produced a significant (71%) reduction in the risk of breast cancer, underscoring their strong potential for breast cancer chemoprevention.
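    As a concrete illustration of the reported measure, the sketch below computes an odds ratio with a Woolf (log-OR) 95% confidence interval from a 2x2 exposure table. The counts are hypothetical, and the study's matched design would more properly be analysed with conditional logistic regression; this is only a minimal sketch of the underlying calculation.

```python
import math

def odds_ratio_ci(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio with a Woolf (log-OR) 95% confidence interval from a 2x2 table."""
    a, b = exposed_cases, unexposed_cases
    c, d = exposed_controls, unexposed_controls
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = 1.96  # ~97.5th percentile of the standard normal, giving a 95% CI
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts, not the study's data: 15 of 323 cases and 97 of 649
# controls reporting use of a selective COX-2 inhibitor.
print(odds_ratio_ci(15, 308, 97, 552))
```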

    Introduced Pathogens and Native Freshwater Biodiversity: A Case Study of Sphaerothecum destruens

    A recent threat to European fish diversity has been attributed to the association between an intracellular parasite, Sphaerothecum destruens, and a healthy freshwater fish carrier, the invasive Pseudorasbora parva, which originates from China. The pathogen has been found to be responsible for the decline and local extinction of the endangered European cyprinid Leucaspius delineatus and for high mortalities in stocks of Chinook and Atlantic salmon in the USA. Here, we show that the emerging S. destruens is also a threat to a wider range of freshwater fish than originally suspected, including bream, common carp, and roach. S. destruens is a true generalist, as an analysis of susceptible hosts shows that it is not limited to a phylogenetically narrow host spectrum. This disease agent is a threat to fish biodiversity because it can amplify within multiple hosts and cause high mortalities.

    Modelling human performance within manufacturing systems design: from a theoretical towards a practical framework

    Computer-based simulation is frequently used to evaluate the capabilities of proposed manufacturing system designs. Unfortunately, the real systems often perform quite differently from simulation predictions, and one possible reason for this is an over-simplistic representation of workers' behaviour within current simulation techniques. The accuracy of design predictions could be improved through a modelling tool that integrates with computer-based simulation and incorporates the factors and relationships that determine workers' performance. This paper explores the viability of developing such a tool based on our previously published theoretical modelling framework. It focuses on evolving this purely theoretical framework towards a practical modelling tool that can actually be used to expand the capabilities of current simulation techniques. Based on an industrial study, the paper investigates how the theoretical framework works in practice, analyses strengths and weaknesses in its formulation, and proposes developments that can contribute towards enabling human performance modelling in a practical way.
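    To make the proposed integration more tangible, the following is a minimal, purely hypothetical sketch of how a worker-performance factor might scale task processing times inside a simulation run. The factors, weights and noise term are illustrative assumptions, not the framework described in the paper.

```python
import random

def performance_factor(skill, fatigue, motivation):
    """Illustrative multiplier on nominal cycle time (lower is faster).

    The factors and weights are assumptions for demonstration only; a real
    tool would derive them from the framework's identified determinants.
    """
    base = 1.0
    base *= 1.0 - 0.2 * skill        # skill in [0, 1] speeds work up
    base *= 1.0 + 0.3 * fatigue      # fatigue in [0, 1] slows work down
    base *= 1.0 - 0.1 * motivation   # motivation in [0, 1] speeds work up
    return max(base, 0.5)

def simulate_task(nominal_cycle_time, worker):
    """Effective processing time = nominal time x performance factor + small noise."""
    factor = performance_factor(**worker)
    return nominal_cycle_time * factor * random.uniform(0.95, 1.05)

worker = {"skill": 0.7, "fatigue": 0.4, "motivation": 0.6}
times = [simulate_task(60.0, worker) for _ in range(1000)]
print(sum(times) / len(times))  # mean effective cycle time in seconds
```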

    Standard Colonic Lavage Alters the Natural State of Mucosal-Associated Microbiota in the Human Colon

    Past studies of the human intestinal microbiota are potentially confounded by the common practice of using bowel-cleansing preparations. We examined whether colonic lavage changes the natural state of enteric mucosal-adherent microbes in healthy human subjects. Twelve healthy individuals were divided into three groups: an experimental group, control group one, and control group two. Subjects in the experimental group underwent an un-prepped flexible sigmoidoscopy with biopsies. Within two weeks, subjects were given a standard polyethylene glycol-based bowel-cleansing preparation followed by a second flexible sigmoidoscopy. Subjects in control group one underwent two un-prepped flexible sigmoidoscopies within one week. Subjects in the second control group underwent an un-prepped flexible sigmoidoscopy followed, within one week, by a second flexible sigmoidoscopy after a 24-hour clear liquid diet. The mucosa-associated microbial communities from the two procedures in each subject were compared using 16S rRNA gene-based terminal restriction fragment length polymorphism (T-RFLP) and clone library sequencing. Clone library sequencing analysis showed changes in the composition of the mucosa-associated microbiota in subjects after colonic lavage. These changes were not observed in our control groups. Standard bowel preparation altered the diversity of the mucosa-associated microbiota. Taxonomic classification did not reveal significant changes at the phylum level, but differences were observed at the genus level. Standard bowel-cleansing preparation altered the mucosal-adherent microbiota in all of our subjects, although the degree of change was variable. These findings underscore the importance of considering the confounding effects of bowel preparation when designing experiments exploring the gut microbiota.
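    One common way to quantify the kind of before/after community shift reported here is a dissimilarity index over genus-level abundance profiles. The sketch below computes Bray-Curtis dissimilarity on hypothetical clone counts; it is not the T-RFLP or clone-library pipeline used in the study, just an assumed, simplified illustration.

```python
def bray_curtis(counts_a, counts_b):
    """Bray-Curtis dissimilarity between two abundance profiles (0 = identical, 1 = disjoint)."""
    taxa = set(counts_a) | set(counts_b)
    shared = sum(min(counts_a.get(t, 0), counts_b.get(t, 0)) for t in taxa)
    total = sum(counts_a.values()) + sum(counts_b.values())
    return 1.0 - 2.0 * shared / total

# Hypothetical genus-level clone counts for one subject, before and after lavage.
pre  = {"Bacteroides": 40, "Faecalibacterium": 25, "Roseburia": 15, "Escherichia": 5}
post = {"Bacteroides": 20, "Faecalibacterium": 10, "Roseburia": 5,  "Escherichia": 30}
print(bray_curtis(pre, post))
```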

    An Administrative Claims Model for Profiling Hospital 30-Day Mortality Rates for Pneumonia Patients

    Outcome measures for patients hospitalized with pneumonia may complement process measures in characterizing quality of care. We sought to develop and validate a hierarchical regression model, using Medicare claims data, that produces hospital-level, risk-standardized 30-day mortality rates suitable for public reporting for patients hospitalized with pneumonia. We performed a retrospective study of fee-for-service Medicare beneficiaries aged 66 years and older with a principal discharge diagnosis of pneumonia. Candidate risk-adjustment variables included patient demographics, administrative diagnosis codes from the index hospitalization, and all inpatient and outpatient encounters from the year before admission. The model derivation cohort included 224,608 pneumonia cases admitted to 4,664 hospitals in 2000, and validation cohorts included cases from each of the years 1998-2003. We compared model-derived state-level standardized mortality estimates with medical record-derived state-level standardized mortality estimates using data from the Medicare National Pneumonia Project on 50,858 patients hospitalized from 1998-2001. The final model included 31 variables and had an area under the receiver operating characteristic curve of 0.72. In each administrative claims validation cohort, model fit was similar to that of the derivation cohort. The distribution of standardized mortality rates among hospitals ranged from 13.0% to 23.7%, with 25th, 50th, and 75th percentiles of 16.5%, 17.4%, and 18.3%, respectively. Comparing model-derived risk-standardized state mortality rates with medical record-derived estimates, the correlation coefficient was 0.86 (standard error = 0.032). An administrative claims-based model for profiling hospitals for pneumonia mortality performs consistently over several years and produces hospital estimates close to those obtained using a medical record model.
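    For orientation, the sketch below shows the general shape of risk standardization on synthetic data: fit a patient-level risk model, then compare each hospital's observed deaths with its expected deaths. The paper fit a hierarchical model to Medicare claims; the plain logistic regression and simple indirect standardization used here are assumed stand-ins for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical data: 5,000 admissions, 10 risk-adjustment covariates, 20 hospitals.
n, p, n_hosp = 5000, 10, 20
X = rng.normal(size=(n, p))
hospital = rng.integers(0, n_hosp, size=n)
true_beta = rng.normal(scale=0.4, size=p)
death = rng.random(n) < 1 / (1 + np.exp(-(X @ true_beta - 1.6)))

# Patient-level risk model; the paper used a hierarchical model, so this
# plain logistic regression only illustrates the standardization step.
model = LogisticRegression(max_iter=1000).fit(X, death)
expected = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(death, expected))

# Indirect standardization: (observed / expected deaths) x overall rate, per hospital.
overall_rate = death.mean()
for h in range(n_hosp):
    mask = hospital == h
    rsmr = death[mask].sum() / expected[mask].sum() * overall_rate
    print(f"hospital {h}: risk-standardized mortality {rsmr:.3f}")
```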

    Factors Influencing the Statistical Power of Complex Data Analysis Protocols for Molecular Signature Development from Microarray Data

    Critical to the development of molecular signatures from microarray and other high-throughput data is testing the statistical significance of the produced signature in order to ensure its statistical reproducibility. While current best practices emphasize sufficiently powered univariate tests of differential expression, little is known about the factors that affect the statistical power of complex multivariate analysis protocols for high-dimensional molecular signature development. We show that choices of specific components of the analysis (i.e., error metric, classifier, error estimator and event balancing) have large and compounding effects on statistical power. The effects are demonstrated empirically by an analysis of 7 of the largest microarray cancer outcome prediction datasets and supplementary simulations, and by contrasting them with prior analyses of the same data. The findings of the present study have two important practical implications. First, by avoiding under-powered data analysis protocols, high-throughput studies can achieve substantial economies in the sample size required to demonstrate statistical significance of a predictive signal. The factors that affect power are identified and studied, and much less sample than previously thought may be sufficient for exploratory studies as long as these factors are taken into consideration when designing and executing the analysis. Second, previous highly cited claims that microarray assays may not be able to predict disease outcomes better than chance are shown by our experiments to be due to under-powered data analysis combined with inappropriate statistical tests.
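    A typical way to test whether a multivariate signature predicts outcomes better than chance, and the setting in which the power issues described here arise, is a label-permutation test around a cross-validated performance estimate. The sketch below uses synthetic data, an L2-regularized logistic classifier and AUC as the error metric; these specific choices are assumptions for illustration, not the protocols analysed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for a microarray outcome-prediction dataset:
# 80 samples x 500 features, with a weak signal in the first 20 features.
X = rng.normal(size=(80, 500))
y = rng.integers(0, 2, size=80)
X[y == 1, :20] += 0.4

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = LogisticRegression(penalty="l2", C=0.1, max_iter=1000)
observed = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()

# Label-permutation null: re-estimate the cross-validated AUC on shuffled labels.
n_perm = 100
null = np.array([
    cross_val_score(clf, X, rng.permutation(y), cv=cv, scoring="roc_auc").mean()
    for _ in range(n_perm)
])
p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
print(f"cross-validated AUC = {observed:.3f}, permutation p = {p_value:.3f}")
```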

    Data-driven approach for creating synthetic electronic medical records

    Background: New algorithms for disease outbreak detection are being developed to take advantage of full electronic medical records (EMRs) that contain a wealth of patient information. However, due to privacy concerns, even anonymized EMRs cannot be shared among researchers, resulting in great difficulty in comparing the effectiveness of these algorithms. To bridge the gap between novel bio-surveillance algorithms operating on full EMRs and the lack of non-identifiable EMR data, a method for generating complete and synthetic EMRs was developed. Methods: This paper describes a novel methodology for generating complete synthetic EMRs both for an outbreak illness of interest (tularemia) and for background records. The method has three major steps: 1) synthetic patient identity and basic information generation; 2) identification of care patterns that the synthetic patients would receive based on the information present in real EMR data for similar health problems; 3) adaptation of these care patterns to the synthetic patient population. Results: We generated EMRs, including visit records, clinical activity, laboratory orders/results and radiology orders/results, for 203 synthetic tularemia outbreak patients. Validation of the records by a medical expert revealed problems in 19% of the records; these were subsequently corrected. We also generated background EMRs for over 3,000 patients in the 4-11 year age group. Validation of those records by a medical expert revealed problems in fewer than 3% of these background patient EMRs, and the errors were subsequently rectified. Conclusions: A data-driven method was developed for generating fully synthetic EMRs. The method is general and can be applied to any data set that has similar data elements (such as laboratory and radiology orders and results, clinical activity, and prescription orders). The pilot synthetic outbreak records were for tularemia, but our approach may be adapted to other infectious diseases. The pilot synthetic background records were for the 4-11 year old age group; the adaptations that must be made to the algorithms to produce synthetic background EMRs for other age groups are indicated.
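    The sketch below illustrates the general shape of the three-step method on made-up data: generate a synthetic identity, pick a care-pattern template, and adapt its timing to the synthetic patient. The template, field names and dates are all fabricated placeholders, not the care patterns mined from real EMRs in the study.

```python
import random
from datetime import date, timedelta

random.seed(42)

# Step 1: synthetic patient identity and basic information (all values fabricated).
FIRST = ["Ada", "Ben", "Cleo", "Dev"]
LAST = ["Ngo", "Silva", "Okafor", "Marsh"]

def make_patient(pid):
    return {
        "id": pid,
        "name": f"{random.choice(FIRST)} {random.choice(LAST)}",
        "sex": random.choice(["F", "M"]),
        "birth_date": date(2015, 1, 1) - timedelta(days=random.randint(4 * 365, 11 * 365)),
    }

# Step 2: care-pattern template. In the real method these are identified from
# actual EMRs for similar health problems; this one is a made-up stand-in.
TULAREMIA_TEMPLATE = [
    (0, "visit", "clinic visit: fever, ulcerated skin lesion"),
    (0, "lab_order", "blood culture"),
    (1, "radiology", "chest x-ray"),
    (3, "lab_result", "Francisella tularensis confirmed"),
]

# Step 3: adapt the pattern to each synthetic patient by anchoring it to a
# randomly chosen onset date and jittering the day offsets slightly.
def adapt_pattern(patient, template, onset):
    records = []
    for day_offset, kind, detail in template:
        day = onset + timedelta(days=day_offset + random.choice([0, 0, 1]))
        records.append({"patient_id": patient["id"], "date": day, "type": kind, "detail": detail})
    return records

outbreak_start = date(2011, 6, 1)
cohort = [make_patient(i) for i in range(203)]
emrs = [adapt_pattern(p, TULAREMIA_TEMPLATE, outbreak_start + timedelta(days=random.randint(0, 14)))
        for p in cohort]
print(emrs[0][:2])
```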