
    Local biodiversity is higher inside than outside terrestrial protected areas worldwide

    Protected areas are widely considered essential for biodiversity conservation. However, few global studies have demonstrated that protection benefits a broad range of species. Using a new global biodiversity database with unprecedented geographic and taxonomic coverage, we compare four biodiversity measures at sites sampled in multiple land uses inside and outside protected areas. Globally, species richness is 10.7% higher and abundance 14.5% higher in samples taken inside protected areas than in samples taken outside, but neither rarefaction-based richness nor endemicity differs significantly. Importantly, we show that the positive effects of protection are mostly attributable to differences in land use between protected and unprotected sites. Nonetheless, even within some human-dominated land uses, species richness and abundance are higher in protected sites. Our results reinforce the global importance of protected areas but suggest that protection does not consistently benefit species with small ranges or increase the variety of ecological niches.
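    As a concrete illustration of the rarefaction-based richness measure mentioned above, the sketch below computes individual-based rarefied richness, i.e. the expected number of species in a random subsample of n individuals. The abundance vectors and subsample size are invented for illustration only and are not taken from the study.

```python
# Minimal sketch (not the authors' code): individual-based rarefaction.
# P(species i absent from a subsample of n) = C(N - N_i, n) / C(N, n),
# so expected richness is the sum over species of 1 minus that probability.
from math import comb

def rarefied_richness(abundances, n):
    """Expected number of species in a random draw of n individuals."""
    N = sum(abundances)
    if n > N:
        raise ValueError("subsample size n cannot exceed total abundance N")
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in abundances)

# Hypothetical site samples: counts per species inside vs. outside a protected area
inside = [34, 12, 9, 7, 5, 3, 2, 1, 1]
outside = [50, 20, 4, 2, 1]

n = 40  # rarefy both samples to a common number of individuals
print(rarefied_richness(inside, n), rarefied_richness(outside, n))
```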

    Comorbidity, age, race and stage at diagnosis in colorectal cancer: a retrospective, parallel analysis of two health systems

    Background: Stage at diagnosis plays a significant role in colorectal cancer (CRC) survival. Understanding which factors contribute to a more advanced stage at diagnosis is vital to improving overall survival. Comorbidity, race, and age are known to impact receipt of cancer therapy and survival, but the relationship of these factors to stage at diagnosis of CRC is less clear. The objective of this study is to investigate how comorbidity, race, and age influence stage of CRC at diagnosis. Methods: Two distinct healthcare populations in the United States (US) were retrospectively studied. Using the Cancer Care Outcomes Research and Surveillance Consortium database, we identified CRC patients treated at 15 Veterans Administration (VA) hospitals from 2003–2007. We assessed metastatic CRC patients treated from 2003–2006 at 10 non-VA, fee-for-service (FFS) practices. Stage at diagnosis was dichotomized (non-metastatic, metastatic). Race was dichotomized (white, non-white). Charlson comorbidity index and age at diagnosis were calculated. Associations between stage, comorbidity, race, and age were determined by logistic regression. Results: 342 VA and 340 FFS patients were included. The populations differed in the proportion of patients with metastatic CRC at diagnosis (VA 27%, FFS 77%), reflecting differences in eligibility criteria for inclusion. VA patients had a mean (standard deviation; SD) age of 67 (11) and Charlson index of 2.0 (1.0), and were 63% white. FFS patients had a mean age of 61 (13) and Charlson index of 1.6 (1.0), and were 73% white. In the VA cohort, higher comorbidity was associated with earlier stage at diagnosis after adjusting for age and race (odds ratio (OR) 0.76, 95% confidence interval (CI) 0.58–1.00; p = 0.045); no such significant relationship was identified in the FFS cohort (OR 1.09, 95% CI 0.82–1.44; p = 0.57). In neither cohort was an association found between stage at diagnosis and either age or race. Conclusion: Higher comorbidity may lead to earlier stage of CRC diagnosis. Multiple factors, perhaps including increased interactions with the healthcare system due to comorbidity, might contribute to this finding. Such increased interactions are more likely within an integrated healthcare system such as the US VA than with the sporadic interactions typical of FFS healthcare.
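    The analysis described above is a logistic regression of dichotomized stage on comorbidity, age, and race. The sketch below shows one way such a model could be fitted and its odds ratios extracted; the variable names and data are synthetic and purely illustrative, not drawn from the study.

```python
# Minimal sketch (assumed variable names, synthetic data) of a logistic
# regression of metastatic stage at diagnosis on Charlson comorbidity index,
# age, and dichotomised race.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 342  # size of the illustrative cohort only
df = pd.DataFrame({
    "metastatic": rng.integers(0, 2, n),          # 1 = metastatic at diagnosis
    "charlson":   rng.poisson(2.0, n),            # Charlson comorbidity index
    "age":        rng.normal(67, 11, n).round(),  # age at diagnosis
    "white":      rng.integers(0, 2, n),          # 1 = white, 0 = non-white
})

model = smf.logit("metastatic ~ charlson + age + white", data=df).fit(disp=False)
odds_ratios = np.exp(model.params)    # exponentiated coefficients = ORs
ci = np.exp(model.conf_int())         # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```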

    Management control systems in innovation companies: A literature based framework

    Past research has traditionally argued that management control systems (MCSs) may hinder the creativity of innovation companies. This theoretical paper surveys the literature to investigate the MCSs of innovation companies. Within the object-of-control paradigm, the paper develops and presents a theoretical model of the impact of eleven external, organisational, and innovation-related contingency factors on the MCSs of companies that engage in innovation activities. We also suggest measures for further empirical research. By formulating hypotheses on 43 potential interactions, the model predicts contradictory influences on the two direct control categories, results and action control, but stresses the importance of the two indirect categories, personnel and cultural control. More specifically, the high levels of technological complexity and innovation capability in this type of company are expected to be negatively associated with the application of results and action control, whereas personnel and cultural control appear more appropriate. Furthermore, two important sources of finance, venture capital and public funding, are both hypothesised to be positively associated with the application of results, action, and personnel control, whereas only public funding is predicted to be positively related to the application of cultural control. The principal contribution of this paper lies in synthesising the literature to provide a model of the impact of a unique set of eleven contingency factors for innovation companies on a broad scope of controls. In addition, the contingency model, if empirically validated, would add value by identifying the particular forms of management control that would be beneficial in innovative company settings.

    Modifying effect of dual antiplatelet therapy on incidence of stent thrombosis according to implanted drug-eluting stent type

    Aim: To investigate the putative modifying effect of dual antiplatelet therapy (DAPT) use on the incidence of stent thrombosis at 3 years in patients randomized to the Endeavor zotarolimus-eluting stent (E-ZES) or the Cypher sirolimus-eluting stent (C-SES). Methods and results: Of 8709 patients in PROTECT, 4357 were randomized to E-ZES and 4352 to C-SES. Aspirin was to be given indefinitely, and clopidogrel/ticlopidine for ≥3 months or up to 12 months after implantation. The main outcome measure was definite or probable stent thrombosis at 3 years. Multivariable Cox regression analysis was applied, with stent type, DAPT, and their interaction as the main outcome determinants. DAPT adherence was the same in the E-ZES and C-SES groups (79.6% at 1 year, 32.8% at 2 years, and 21.6% at 3 years). We observed statistically significant (P = 0.0052) heterogeneity in the treatment effect of stent type in relation to DAPT. In the absence of DAPT, stent thrombosis was lower with E-ZES vs. C-SES (adjusted hazard ratio 0.38, 95% confidence interval 0.19, 0.75; P = 0.0056). In the presence of DAPT, no difference was found (1.18; 0.79, 1.77; P = 0.43). Conclusion: A strong interaction was observed between drug-eluting stent type and DAPT use, most likely driven by the vascular healing response induced by the implanted DES system. These results suggest that the incidence of stent thrombosis in DES trials should not be evaluated independently of DAPT use, and that the optimal duration of DAPT will likely depend upon stent type (Clinicaltrials.gov number NCT00476957).
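    The core of the analysis above is a Cox model containing stent type, DAPT, and their interaction. The sketch below shows the general form of such a model using the lifelines library; the data are synthetic, the column names are assumptions, and DAPT is treated as a fixed rather than time-dependent covariate purely for illustration.

```python
# Minimal sketch (synthetic data, simplified covariates) of a Cox regression
# with a stent-type x DAPT interaction term. In the trial DAPT status varies
# over time; here it is a fixed 0/1 covariate for brevity.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "time":     rng.exponential(36, n).clip(max=36),  # months of follow-up
    "st_event": rng.binomial(1, 0.1, n),              # definite/probable stent thrombosis
    "ezes":     rng.integers(0, 2, n),                # 1 = E-ZES, 0 = C-SES
    "dapt":     rng.integers(0, 2, n),                # 1 = on DAPT
})
df["ezes_x_dapt"] = df["ezes"] * df["dapt"]           # interaction term

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="st_event")
cph.print_summary()  # hazard ratios with 95% CIs, including the interaction
```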

    Drug-drug interactions and QT prolongation as a commonly assessed cardiac effect - comprehensive overview of clinical trials


    Co-infection of cattle with Fasciola hepatica or F. gigantica and Mycobacterium bovis: A systematic review

    The liver flukes Fasciola hepatica and F. gigantica are common trematode parasites of livestock. F. hepatica is known to modulate the immune response, including altering the response to co-infecting pathogens. Bovine tuberculosis (bTB), caused by Mycobacterium bovis, is a chronic disease that is difficult to control and is of both animal welfare and public health concern. Previous research has suggested that infection with liver fluke may affect the accuracy of the bTB skin test, but the direction of the effect differs between studies. In a systematic review of the literature, all experimental and observational studies concerning co-infection with these two pathogens were sought. Data were extracted on the association between fluke infection and four measures of bTB diagnosis or pathology, namely the bTB skin test, the interferon-γ test, lesion detection, and culture/bacterial recovery. Of a large body of literature dating from 1950 to 2019, only thirteen studies met the inclusion criteria. These included studies of experimentally infected calves, case-control studies on adult cows, cross-sectional abattoir studies, and a herd-level study. All of the studies had a medium or high risk of bias. The balance of evidence from the 13 studies included in the review suggests that liver fluke exposure was associated with either no effect or a decreased response in all four aspects of bTB diagnosis assessed: the skin test, the interferon-γ test, lesion detection, and mycobacterial culture or recovery. Most studies showed a small and/or non-significant effect, so the clinical and practical importance of the observed effect is likely to be modest, although it could be greater in particular groups of animals, such as dairy cattle.

    Suboptimal herd performance amplifies the spread of infectious disease in the cattle industry

    Farms that purchase replacement breeding cattle are at increased risk of introducing many economically important diseases. The objectives of this analysis were to determine whether the total number of replacement breeding cattle purchased by individual farms could be reduced by improving herd performance, and to quantify the effects of such reductions on the industry-level transmission dynamics of infectious cattle diseases. Detailed information on the performance and contact patterns of British cattle herds was extracted from the national cattle movement database as a case example. Approximately 69% of beef herds and 59% of dairy herds with an average of at least 20 recorded calvings per year purchased at least one replacement breeding animal. Results from zero-inflated negative binomial regression models revealed that herds with high average ages at first calving, prolonged calving intervals, abnormally high or low culling rates, and high calf mortality rates were generally more likely to be open herds and to purchase greater numbers of replacement breeding cattle. If all herds achieved the same level of performance as the top 20% of herds, the total number of replacement beef and dairy cattle purchased could be reduced by an estimated 34% and 51%, respectively. Although these purchases accounted for only 13% of between-herd contacts in the industry trade network, they were found to have a disproportionately strong influence on disease transmission dynamics. These findings suggest that targeting extension services at herds with suboptimal performance may be an effective strategy for controlling endemic cattle diseases while simultaneously improving industry productivity.
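    The count model named above, a zero-inflated negative binomial regression, can be sketched as below. The column names, covariate subset, and data are invented for illustration; the actual analysis used herd performance measures derived from the national cattle movement database.

```python
# Minimal sketch (synthetic data, assumed column names) of a zero-inflated
# negative binomial regression: number of replacement breeding cattle purchased
# modelled on a few herd performance measures, with excess zeros from closed herds.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "age_first_calving": rng.normal(30, 4, n),    # months
    "calving_interval":  rng.normal(400, 40, n),  # days
    "calf_mortality":    rng.uniform(0, 0.15, n), # proportion of calves lost
})
# Synthetic count outcome with excess zeros (closed herds buy no cattle)
lam = np.exp(-2 + 0.05 * (df["age_first_calving"] - 30))
df["n_purchased"] = np.where(rng.random(n) < 0.4, 0, rng.poisson(lam * 20))

X = sm.add_constant(df[["age_first_calving", "calving_interval", "calf_mortality"]])
zinb = ZeroInflatedNegativeBinomialP(df["n_purchased"], X, exog_infl=X, p=2)
result = zinb.fit(method="bfgs", maxiter=500, disp=False)
print(result.summary())  # count-model and zero-inflation coefficients
```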